Normally I wouldn't watch this as it is too advanced for me BUT, who can resist this guy?
His spirit is strong.
I once programmed a computer with a hammer.
On the IBM 360 mod 30, boot code was stored in read-only TROS strips (transformer read-only storage). A one or zero was encoded by which side of a magnet the strip had a hole on. One night we were doing engineering changes on a mod 30 and I was tasked with adding new holes to a TROS strip. This was done using a hammer and a 3/8-inch punch.
It was 1968 and we were doing ECs on the IBM 360 equipment. Interestingly, the 360 mod 40 did use punched cards for microcode and boot code. Rather than TROS, the mod 40 used CROS (capacitive read-only storage). Ones and zeros were encoded by holes in a standard punch card. The cards were sandwiched between a flat plate and an array of addressable electrodes; a hole in the card provided a different dielectric than no hole. CROS was bulkier than TROS, but much faster to read. Instead of a hammer, I used an 029 to program the microcode. Higher-speed CPUs used still other methods for read-only storage, including diode arrays. The first floppy disk was invented to store microcode for the 360 channel controller. Oh yeah, I was there even before the 7094, back to the 650.
Now that's what I call coding close to the metal
@@amicloud_yt Coding right *through* the metal, it sounds like.
@@VoteScientist It's probably worth mentioning that the IBM 029 was a card punch.
@@VoteScientist brilliant
Finally I understand where the BIOS comes from. After 15 years of programming at a high level, watching this video actually closed some gaps in my thinking about computers.
How wonderful to see an explanation of the difference between 'operating code' and 'programming code'! Lordy, I just learned so much!
Wonderful video. I've used a Data General Nova 1220 with front-panel toggle switches (1978 in High School), and also booted a Univac 1100/42 from drum (1981 in the USAF). Also later used a bunch of PDP-11's as front end to Bell & Howell COM recorders, making microfiche in a service bureau.
I still remember a lot of the PDP11 boot loader. Can't remember where I put my car keys, but part of my memory is tied up with that.
This is my favorite computer-related channel on YouTube. Thank you so much, Professor Brailsford, I always learn a lot from you.
This is one of the best computer science introductions I have found on YouTube. And @ProfDaveB is really great at explaining this stuff.
Brilliant stuff! Great to get a glimpse into the early days of computing. Thanks a million for it.
I think Computerphile should do a video on the Pilot ACE computer; it was designed by Alan Turing (design started in 1945, and it was finished in the 50s).
Brings back memories of writing bit-slice code on the AM2900. Our system used 3 in parallel. Fun times.
this series is absolutely precious!
i love every video with Professor Brailsford in it. idk why. he's my favourite.
Awesome! I was taught by someone who worked with the lead female coder in the Navy in the late 60s. She's in the history books. He was a tall, bent, cigarette-smoking, hard-drinking, cussing programmer, who talked about her with such reverence.
Grace Hopper?
6:37 "Oh Yeeah!" :D I love that reaction
I absolutely love this channel, thank you guys.
12:25 That's a bit confusing. I take that to mean 5 locations from the instruction pointer, when in fact it's 5 locations from the base address.
Yes, I also expected the program to be like: O5@, O5@, O5@
Exactly what I came here to say!
Yes -- many apologies for that slip-up! I'll try and indicate in the sub-titles that the offsets are relative to the base address.
Thanks for this. Yes the character P is 00000 in EDSAC encoding and these 5 bits occupy the LH end of the word (see Fig. 5 of the Tutorial).
It's a no-op as an op-code but is very useful in a machine where integer constants can only be inserted by making them look like pseudo-instructions!
So, in P 23 F, the F resolves into a 0 at the far RH end of the word, to show it's a single-length instruction. The 23, or 0000010111 in 10-bit binary, occupies the 10-bit "Address" field, but this is pushed one place to the left by the trailing 0 bit we already inserted for `F'. This shift left doubles the 23 into looking like 46 - if the word is regarded just as a bit-pattern for an integer. A trailing `D' in the instruction format plants a 1 at the far RH end. Combined with the shifted-left doubling already referred to, this explains why P 16 D is 16*2 + 1 = 33
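The arithmetic described above can be sketched in a couple of lines (a toy model of the word layout, not EDSAC's actual hardware; the function name is mine):

```python
def p_order_value(address, long_word=False):
    """Integer value of an EDSAC 'P address F/D' pseudo-instruction.

    P is op-code 00000, so it contributes nothing; the 10-bit address
    field sits one bit to the left of the final length bit, so the word
    reads as address*2, plus 1 if the trailing letter is D.
    """
    return (address << 1) | (1 if long_word else 0)

print(p_order_value(23))        # P 23 F -> 46
print(p_order_value(16, True))  # P 16 D -> 33
```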
Can you do an episode dedicated to emulators. I really want to learn about them, and others would like to learn too.
. ^^^ yes please ^^^
Wonderful video, great professor, nice lamp. ok curtain.
If you wanted to print out "WORLD", i.e. O5@ O6@ O7@ O8@ O9@, and then later you decided you wanted to do something else before printing WORLD (e.g. jump/branch, print another word "HELLO", etc.), wouldn't you still have to update all your offsets, because you had inserted more instructions between the program origin in memory and the location of your data (WF, OF, RF, LF, DF)?
Yes, you still have to get your offsets correct. But the much bigger pain (of having to change almost everything, if you moved a chunk of code to a new position) has now been overcome.
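A toy illustration of that distinction (the block contents and names here are hypothetical, not the video's actual program): base-relative offsets survive relocating the whole block, but inserting instructions between the origin and the data still shifts the data's offset.

```python
def layout(base, block):
    """Load a block of words at a base address; map each word to its address."""
    return {word: base + i for i, word in enumerate(block)}

block = ["print-op", "pad", "pad", "pad", "pad", "W"]  # data 'W' at offset 5

# Relocating the whole block only changes the base; the offset (5) is intact,
# so nothing inside the program needs rewriting.
print(layout(64, block)["W"] - 64)    # 5
print(layout(100, block)["W"] - 100)  # 5

# But inserting an instruction before the data shifts the data's offset, so a
# base-relative reference like O5@ would indeed have to become O6@.
print(layout(64, ["new-op"] + block)["W"] - 64)  # 6
```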
It's very convenient, and impressive, that Wheeler managed to achieve this in forty-something words 😳
Yes, if you download the EDSAC simulator and read the documentation that comes with it, there is a listing of Initial Orders II somewhere in there. The more you study them, and work out exactly what they are doing, the more impressive David Wheeler's achievement becomes.
*@Computerphile* Would humans be able to hear the beeping from the mercury tank, or is it outside human hearing? Maybe because of the weight of the mercury or something?
It is way above human hearing, over 100 kHz, I think. Incidentally, Maurice Wilkes told me they wanted to use alcohol instead of mercury, as it was more stable with temperature, but no one could guarantee the security of that much alcohol in the presence of students.
Wow...I'm amazed they were able to get the entire "operating system" of the computer into 42 WORDS. More memory was used to type this comment.
A LOT, LOT, LOT more memory was used to write it. XDXD
- Your comment alone is about 8 18-bit words. All the JS and browser stuff used to write this... I don't even want to think how much more that is. XD
Yes imagine if the people back in 1950 knew we would be wasting MEGABYTES of memory simply to display comments.
As true as that is, even worse is how many PB we are wasting storing and transmitting rubbish videos on YT XD.
- I'm BY NO MEANS saying this video is rubbish, btw! - It's great, but most stuff uploaded to YT actually is (and most people never see it)
... The world is moving fast. I wonder if in a few years people will laugh at us for worrying about a few GB. XD
--- "What do you mean you measured your storage capacity in GB? My *pen* alone has 128GB of persistent RAM-SSD-XYZ memory-battery, and I buy a new one every week and it can do nothing but write*!" XDXD ..... yeah..... (... and they don't _need_ the new one, they are just tired of the color of the old one or something....)
(* on every AR board around, synced to a global network connected to everyone's mind or..... well, they probably don't need a pen, but that's beside the point XD)
Martin Verrisin The speed at which memory and storage grow always seems to be the thing that is most apparent to me. When I got my previous laptop, it was the most advanced computer I'd ever owned, and I was questioning whether to bother getting 6 GB of RAM rather than 4, because why would I need more than 4? Then I built a desktop 2 years later with 16 GB, and my new laptop has 32 GB. Within 6 or 7 years, I went from thinking 6 GB was overkill to getting a laptop with 32 GB of RAM.
For storage, Windows’s File Explorer makes it really easy to see how quickly what we consider big changes. This is its size classification:
0 - Empty
0-10 KB - Tiny
10-100 KB - Small
100 KB - 1 MB - Medium
1-16 MB - Large
16-128 MB - Huge
128 MB+ - Gigantic
Which seems ridiculous to me because when freeing up space, I consider anything under 1 GB completely insignificant. If I was asked to come up with the numbers to match those names today, I’d come up with this:
0 - Empty
0-1 MB - Tiny
1-128 MB - Small
128 MB - 1 GB - Medium
1-8 GB - Large
8-128 GB - Huge
128 GB+ - Gigantic
It's clear from the vast difference between what Windows named the sizes when they started doing that and what I think of now, that the concept of what's big is changing rapidly.
Yeah I just bought the Apple iPen 128GB for $1000 and I'm already tired of the color. First world problems lol
My first (and so far only) program didn't say "Hello world" but converted units of measure with fractions and multiple units.
I think I missed a step or two.
So Initial Orders 2 is a sort of very simple version of ld.so(8) or ld-linux.so(8), for loading and dynamic linking in Linux?
There is one thing I don't understand: if the address given by @ is relative to the current instruction, should it not always be O5@, since each instruction would be the base address? Or is the @ relative to the beginning of the program?
I think he may have misspoken at some point if he implied that. Reasoning backward, if you get to address 69 by adding 05 to something, then that something is 64, but that instruction was from address 66. So it's not relative to the current instruction. Scrolling up, the "T64K" instruction is probably setting the base offset shared by all the subsequent instructions (and that might be what the video was about).
I made a quick reply to another similar comment, and I think this has to do with the limited size of the initial orders.
To make the memory reference relative to the current line rather than relative to the start of the program would take more memory and code: you would have to keep track of the current line as well as the relative reference itself, and then add both of them to the initialized start of the program.
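A minimal sketch of the behaviour worked out in this thread (the constants come from the example quoted above; the function name is mine):

```python
BASE = 64  # set once, by the "T64K" directive at the top of the program

def resolve(offset):
    """Resolve an '@' operand: an offset from the program's base address,
    not from the current instruction's own address."""
    return BASE + offset

# An O5@ at location 66 and an O5@ at location 70 both point at 64 + 5 = 69.
print(resolve(5))  # 69
```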
Necessity may be the mother of invention, but laziness is its father
Brailsford? Press play and thumbs up immediately.
I was looking for a video called "Initial Orders I" in vain.
It'd be really cool if you interviewed youtuber Sethbling, the guy who programmed a hex editor into Super Mario World using SNES controllers. Lots of interesting low-level stuff there.
The name "Initial Orders" always seemed quaint and faintly militaristic to me - fitting for the post-war period the EDSAC was developed in. But I'm glad other terms (instruction, statement, command, etc.) became popular, avoiding confusion with "order" in the sense of a sequence, or "order" as in a class or rank within a hierarchy.
I find these wretched things fascinating
That is some matrix level wizardry right there at 10:39
You say wizardry, others might say "mere" competent engineering.
Doesn't seem all that different from learning to read hex dumps of some binary format. Surprising and impressive if someone just does it unexpectedly, but very do-able if you sit down and make yourself learn it.
I don't even see the code, I just see blonde, brunette, redhead
Positive Vibrations from Brazil!
I wonder when this video was shot. Couldn't help but notice Prof. Brailsford is still using Windows 7.
When I do use Windows, I still use Windows 7. I refuse to use a newer version.
I was keying in instructions up until 1993, only for diagnostics though: a Harris 100, 24 bits, with twinkling lights.
14:45 Now you know why linkers used to be called link-loaders, and the Unix linker is still named _ld._
Didn't code "bugs" get their name from the insects that'd get stuck in the rotors and memory and prevent intended function?
No. The term "bug" has been used in engineering circles since the 19th century.
Indeed. It's an interesting etymological mystery.
But the most fruitful avenue appears to be that it's actually related to "bug" in the sense of illness: when you've caught a cold or the flu, you might say "I've caught a bug". This actually takes its etymological journey past Shakespeare, who talked of a "bug" being upon someone, meaning that an illness had befallen them.
Tracing it further, this sense of "bug" actually comes from the pre-scientific belief that illness was caused by "evil spirits". That if someone had gone mad from illness, this was because a malicious ghost had "possessed" them. And in Medieval times they had a wonderful means of "curing" someone, called "trepanning", where they drilled a hole in a person's skull to let out or release this "spirit".
So this brings us to the conclusion that when we say a program has a bug, the underlying semantics are that there is a "ghost in the machine", as it were, that's behaving maliciously and driving the thing to a bit of "temporary insanity" and a bout of the sniffles, so to speak.
That it's feeling a bit under the weather because it's "caught a bug", which is itself the notion that you've been possessed by a malicious spirit, passing through.
So, really, a debugging session could alternatively be called "an exorcism".
**cue Mike Oldfield's "The Exorcist" theme**
@Evi1M4chine I made two comments. Which are you challenging?
I love those curtains.
I see someone was able to produce the T-shirt the professor wanted.
I'm relatively new at this. Are O and F "APIs" or "system calls"? Just starting to play with asm.
I love this prof.
Holy crud it did address relocation?
The ultimate cruel irony would be to write the edsac emulator in lua running over java.
In fact, scratch that. It would be to write it in Haskell.
I love lua!!! :D gives me a warm feeling in my heart.
This is so cool.
The Rubik's cube in the back has been in this exact position and state for what is now 60 years xD.
And then 44 years ago Rubik actually invented the thing. Man, time travel is complicated.
The mitochondria is the powerhouse of the cell
Looks like someone posted in the wrong tab. This is a computer science video, not a biology one.
Roxor128 Oh jeez, you're right, I'll correct that comment for you.
The mitochondria is the powerhouse of the CPU.
Oh my god, it's like program and data segment???
Locations 66-68 are program and 69-71 are data.
Or not...
Correct. Von Neumann architecture.
Beautiful...
Great story!
How do you print a 'Z' if ZF is stop?
Life is good!!!
He so happy :)
and some people complain about assembly language
How do you output an asterisk?
With an escape char, of course! :)
Good video
but what are the 42 words tho
For that, the mice will need a computer the exact size and shape of Earth...
John Hunter - not sure what comes after life, the universe, and everything...
hello world level: expert
Hi
salut
Hey
Hejsa
Γεια σου
I'm getting distracted by the Rubik's cube in the recent videos. ^_^
>>>THIS
A runtime-linker-like thing.
ICAR
jit compiling asm
Why does he have a log(2,10) shirt?
Alessandro Zanardi - in a previous video I think he said that a base-2 computer needs log(2,10) times the circuitry of a base-10 computer
In an old video he said if someone makes one he'll buy it. The formula itself describes something like how many more binary digits it takes to encode a decimal number, but I'm not sure if I remember right.
It's in the "why use binary" video, and the prof even said that if someone made a shirt saying "log2(10) = 3.32193" he'd buy one
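For anyone curious, the number on the shirt is easy to reproduce (a quick sketch; the helper function is mine):

```python
import math

# Bits of information carried by one decimal digit: the shirt's number.
print(round(math.log2(10), 5))  # 3.32193

def bits_for_decimal_digits(n):
    """Upper bound on the binary digits needed for any n-digit decimal number."""
    return math.ceil(n * math.log2(10))

print(bits_for_decimal_digits(3))  # 10 (999 < 2**10 = 1024)
```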
eggsac
Hi again
>not using discord
>using skype
kek
hi 3rd!
Clearly some people find this historical information interesting, but I would much prefer to hear more modern stuff, such as the admirable talk on Bitcoin scaling, or how to set up a mesh network, or using the Raspberry Pi GPU.
I used the machine at the Maths Lab in 1970. I remember the Flexowriters, where you could feed the 5-hole paper tape output back into the input and it would copy it until stopped. It had letter shift or similar. Then the PDP-8 and PDP-11 had switch registers with Load Address and Deposit keys. The PDP-8, I think, had a RIM and a BIN loader, and the PDP-11 had an absolute loader, I believe. The PDP-11 maintained its core memory after power-down, and one could leave a program in memory which would start automatically on power-up.
Later generations will have entered BASIC or Z80 machine code into their ZX80 or Spectrum computers, and nowadays there is information on how to write in assembler for the Raspberry Pi.
I find this early stuff much more interesting than the stuff from my lifetime. When I visited the museum of computing at Bletchley Park in 2016, I spent most of my time with the machines from before I was born, and only had an hour before closing time to skim over the stuff from the 1980s and onwards.
There's something fundamentally more interesting about punching a program on paper tape than typing into GW-BASIC.
Roxor128 A stint at maintaining a suite of COBOL programs held on punched cards would probably cure you of this ;)
Richard Mullens - Heh. Probably. I've looked up COBOL on Wikipedia and from the source code samples I've seen there, I don't fancy having to write it.
Richard Mullens They... They still use those?!
Onno Inada - rarely, but I've heard of systems in use at banks that are specced to perform some ops exactly as "the original", which is written in COBOL on punched cards and stored in a vault.