Yes, I just wanted to show that "Computerphile" presenters other than Dr Bagley are also allowed to wear Hawaiian shirts! This particular one was bought about 8 years ago at Macy's, in Stanford University's Shopping Mall.
It’s a nice looking shirt! I have quite a few Hawaiians myself. I have always had a passion for electronics and computers and I have learned a lot through your videos on Computerphile. I love microcomputers, homebrew CPUs, and computer builds.
I am going to be studying computer engineering at the University of Cincinnati next year. I hope that one day I will be able to go to the UK to check out the Centre for Computing History and maybe sneak into one of your classes at Nottingham. Thanks for all you have done.
Best wishes for your computer engineering studies. I'm pleased that our "Computerphile" videos helped you on the way.
I hope you don't mind me saying this, Professor Dave B, but your likeness to Professor Donald Ervin Knuth is quite uncanny. I wonder if he joined your stroll through Stanford University's mall. 😄
...and I learned the meaning of a new word! to nibble - "knabbern" in German :D
Prof. Brailsford, please only wear Hawaiian shirts from now on. I thought you were cool before but now you've proven how much cooler you are!
Professor Brailsford, NEVER STOP telling us stories about computers and computing. I could listen all day.
I could not. But it seems that I cannot resist listening all night.
The ASCII equivalent of 42 is asterisk. The asterisk is used in computing to represent 'anything'. So the answer to life, the universe and everything is anything ;)
+Klapaucius Fitzpatrick
Yep, that's how writing works, lol.
SQL uses both, actually. The asterisk is used in SELECT statements and means all columns; % and _ are used with the LIKE operator. _ works like DOS ?, and matches exactly one character. % works like DOS *, and matches zero or more characters.
Thank you! As a developer for a number of years, I've had difficulty multiplying numbers.
Well, now you know. :D
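Illustrating the point above with sqlite3, a quick sketch (the table and file names are invented for the demo):

```python
import sqlite3

# Throwaway in-memory table; the rows are made up for this demo.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE files (name TEXT)")
con.executemany("INSERT INTO files VALUES (?)",
                [("readme.txt",), ("report.txt",), ("r.txt",)])

# '_' matches exactly one character, like DOS '?': here it matches the dot.
print(con.execute("SELECT name FROM files WHERE name LIKE 'r_txt'").fetchall())
# [('r.txt',)]

# '%' matches zero or more characters, like DOS '*'.
print(con.execute("SELECT name FROM files WHERE name LIKE 'r%.txt'").fetchall())
# [('readme.txt',), ('report.txt',), ('r.txt',)]

# '*' in a SELECT is a different wildcard entirely: it means "all columns".
print(con.execute("SELECT * FROM files").fetchall())
```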
In regular expressions, the asterisk is used to mean "zero or more of whatever came previously". Since an asterisk at the beginning of a regex is nonsense, we must conclude that there was a supercomputer previous to Deep Thought which computed part of the answer, which Deep Thought is saying must be repeated indefinitely. In summary, the meaning of life is that it is something which goes on, and on, and on.
The exact wording is important. Deep Thought was specifically asked "What is the answer to the ultimate question of life, the universe and everything?"
The answer it eventually gave was 42. It was not, however, able to compute the ultimate question itself, to which 42 was the answer.
And the Earth was created by mice to compute the ultimate question.
@chucku00 They only LOOKED like mice in this dimension.
Of course it was not able, there was a spatial detour to be constructed.
Why weren't my teachers more like this guy?! Hats off to the knowledge and experience.
Top choice of shirt Prof. B!
Every time I see one of these videos, I get nostalgic for when I was an undergrad, and a wee bit jealous of all those people about to go to university to learn all sorts of amazing new things. Good times!
You, Sir Dave, bring the internet to a very different level of usefulness. Thank you for such high quality yet free content.
I was programming some stuff with binary coded decimal the other day. It's still used for real-time clock chips! If you want the bare-metal low level time from these chips they give you BCD, presumably because it's easier to print BCD out to cheap digital displays than to convert an integer to a string.
EDIT: Yep you mentioned digital clocks at the end of the video! Very excellent.
That, and as it has been the standard since "the beginning of time" there's really no reason to change. (Yes, many do provide a non-BCD interface. I'd not be overly surprised to find one using the 64-bit UNIX epoch.)
Cool
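Reading one of those BCD registers boils down to nibble arithmetic; a minimal Python sketch (the register value here is invented):

```python
def bcd_to_int(b):
    """One BCD byte (e.g. 0x42) to its integer value (42)."""
    return (b >> 4) * 10 + (b & 0x0F)

def int_to_bcd(n):
    """0-99 to one BCD byte: 42 -> 0x42."""
    return ((n // 10) << 4) | (n % 10)

seconds_reg = 0x59              # hypothetical RTC seconds register: 59 seconds
print(bcd_to_int(seconds_reg))  # 59
print(hex(int_to_bcd(42)))      # 0x42, each nibble already one display digit
```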
mwalsher: as and nasm are standard assembler tools. I worked with them on a modern computer to write a rudimentary OS, just to learn how OSes work at the lowest level. It had a mix of C compiled to assembly and raw assembly to do a handoff for memory management purposes, using PC memory management instructions. Pretty much anywhere I wanted to access the underlying machine from C, I had to write a two-part C-and-assembly pair of functions to create a binding to the instruction and inject repeated setup stuff like stuffing args into registers and memory locations. It's great what you can do with a modern computer. My rudimentary OS is complex enough that it could handle a program, and it exposed an OpenGL 3.3 binding that had to be watched carefully, because keeping those alive without drivers is a real challenge. Originally I was going to try making a game on it, but the binding management is so heavy that modern OSes with drivers handle it so much better it's crazy.
Xilefian - A 'fun' thing I discovered with the PC CMOS RTC when investigating Y2K was that at least one design had a flaw for leap years, very relevant to all this. If you've got a value in pure binary, you can easily check whether it is divisible by four by checking for zero in the least significant two bits, but this won't work for BCD. That fact didn't stop them trying! So consider the year 1992 (the century part, 19, is just stored elsewhere as a static value, not part of the clock), so we have just 92, in BCD. The least significant bits are not zero, so the defective chips did not have a leap year in 92, but for 94 and 98 they did! So if you have a vintage PC that gets the date wrong at the end of February, this is the little-known reason! Thankfully Y2K itself was a leap year, because it was divisible by 400 and so an exception to the 100-year rule.
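The flawed test is easy to reproduce in a few lines of Python (a sketch of the logic as described above, not the actual chip circuitry):

```python
def low_bits_say_leap(bcd_year):
    # The flawed test: "divisible by 4" means the low two bits are zero.
    # True for pure binary values, wrong for BCD.
    return (bcd_year & 0b11) == 0

for year, bcd in [(92, 0x92), (94, 0x94), (96, 0x96), (98, 0x98)]:
    print(year, "chip says leap:", low_bits_say_leap(bcd),
          "actually leap:", year % 4 == 0)
# 92: chip says False, actually True; 94 and 98: chip says True, actually False
```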
@Gerben van Straaten Hours: duodecimal (or tetravigesimal on a 24-hour basis)
Minutes: sexagesimal
These stories are pure gold.
HP scientific calculators all used BCD, so they avoided the rounding problems inherent in a true binary calculator. In 2004 Thomas Okken released his excellent HP-42 simulator, Free42. It comes in both BCD and the slightly faster binary floating point versions. SwissMicros brought it full circle in 2017 when they released the DM-42, a physical calculator based on the HP-42 but using a modified version of Free42 for its firmware. And yes... it uses BCD.
re: rounding problems inherent in a true binary calculator.
Should be rounding problems inherent in a floating point calculator.
42 in binary is (as mentioned in the video) 101010. When the Parliament of the Czech Republic (where I am from) passed a law on digital broadcasting or digital telephony or whatever digital (I can't remember), they made it effective from 10 October 2010, or 10.10.10. I believe they proved their sense of humour...
Every video of yours is like a journey through the time and brains of engineers & scientists!
An episode on FUNDAMENTAL differences between AI and classical programming would be GRAND! Basically, for the lay person, why AI isn't just a giga-complex classical programming effort that is pre-loaded with all possible outcomes...
In ML/AI, the code you write sets up statistical models which process input data for patterns to determine a response, instead of programming the behavior directly. You don't pre-load all outcomes, but the models are flexible.
What a beautiful explanation and historical context of BCD.
I love these journeys into history. It's actually very important to have some knowledge of these topics so people who want to be Computer Scientists and not just programmers (although there is nothing wrong with the profession) know how we got to where we are and where we might be heading in the future. Is there some chance that Professor Brailsford or someone can do a session on packed decimal? That is the technology on which so much of the financial world was built and still lives today.
The double dabble algorithm is a pretty great way to convert between binary and BCD. It's how you can output to an LCD screen at the end of an arithmetic unit.
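For reference, a small Python version of double dabble (shift left, adding 3 to any BCD digit that is 5 or more before each shift):

```python
def double_dabble(n, digits=3):
    """Convert a binary integer to packed BCD via shift-and-add-3."""
    bcd = 0
    for i in range(n.bit_length() - 1, -1, -1):
        for d in range(digits):
            # A digit of 5+ would overflow its nibble when doubled: add 3 first.
            if ((bcd >> (4 * d)) & 0xF) >= 5:
                bcd += 3 << (4 * d)
        bcd = (bcd << 1) | ((n >> i) & 1)   # shift in the next binary bit
    return bcd

print(hex(double_dabble(42)))    # 0x42: digits 4 and 2 in separate nibbles
print(hex(double_dabble(255)))   # 0x255
```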
"Heavyweight macho calculations". I love it.
Thank you sir.
I always appreciate your brilliant way to express your vast knowledge.
8-bit arcade machines use BCD for displaying scores, etc. on the screen. The Z80 CPU has the DAA instruction for _fixing_ BCD values after ADD, SUB, etc.
DAA - decimal adjust accumulator. An instruction on the Z80. Fixes decimals overflowing into hexadecimal.
SED - SEt to Decimal. An instruction on the 65xx. Makes its math instructions operate on BCD values.
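The fix-up DAA performs can be sketched in Python, ignoring the CPU's half-carry and carry flags for simplicity (so this is an approximation, not a cycle-accurate model):

```python
def bcd_add(a, b):
    """Add two packed-BCD bytes: binary ADD, then a DAA-style adjust.
    Any nibble that went past 9 gets 6 added to skip over 0xA-0xF."""
    s = a + b
    if (s & 0x0F) > 9 or (a & 0x0F) + (b & 0x0F) > 0x0F:
        s += 0x06                # fix the low digit
    if (s & 0x1F0) > 0x90:
        s += 0x60                # fix the high digit
    return s & 0xFF              # carry out of the byte is dropped here

print(hex(bcd_add(0x19, 0x28)))  # 0x47, i.e. 19 + 28 = 47
print(hex(bcd_add(0x35, 0x48)))  # 0x83, i.e. 35 + 48 = 83
```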
I'm not from the computing profession, but I learn a lot here just by listening.
BCD seems to be so common that you don't really think about it on a daily basis. Great to see there is a story behind it :)
Great video on the importance and use of Binary Coded Decimal (BCD). Thank you !
Clicks LIKE before the video starts because Brailsford... respect your elders, they have a lot they can teach you.
Some 40 years ago, I used to work on a computer that worked in BCD (not IBM). You typed in a decimal digit, it processed it in decimal, and then output the result in decimal.
Very entertaining piece of history from the professor
4:52 "Because... the average person wants their answers out in decimal, not hexadecimal"
Like!
At this point we should really teach hexadecimal to our children and ditch decimal x)
I read hexadecimal for a living. I have to review register dumps in order to verify a design before we approve it for manufacturing (display and camera controllers for an ARM system-on-chip). Each 8-digit hexadecimal value represents a 32-bit register, and the registers can have multiple fields packed into them, often weird sizes like 5- or 6-bit fields. The horizontal pixel count for display is 13 bits, to represent 0 to 8191, and it's packed into the same 32-bit register as the vertical line count. I have scripts that decode the fields for me, but sometimes if something is weird I have to make sure the scripts are working correctly, and I must work it out by hand.
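Decoding those packed fields is just shifts and masks; a sketch (the field layout below is invented, not the real controller's):

```python
def field(reg, lo, width):
    """Extract a bit-field from a register value."""
    return (reg >> lo) & ((1 << width) - 1)

# Hypothetical layout: horizontal size in bits 0-12, vertical in bits 16-28.
reg = 0x04380780
print(field(reg, 0, 13))    # 1920
print(field(reg, 16, 13))   # 1080
```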
One more step from BCD is "packed decimal" where each byte is split into 2 BCD nibbles - then you have the fun of dealing with signs as well. As for converting binary to/from text or BCD - yes done that, got the scars. Real performance hit on a VAX. Accountants really didn't like floating point and rounding. Actuaries much more relaxed, the end result of an Actuarial calc always needs rounding so go for the better performance of floating point for compound interest and mortality rate calcs.
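A sketch of that packing in Python, using the S/360-style sign nibbles (0xC for positive, 0xD for negative) in the last position:

```python
def pack_decimal(n):
    """Pack an int into packed-decimal bytes: two digits per byte,
    sign nibble last (0xC positive, 0xD negative)."""
    sign = 0xD if n < 0 else 0xC
    nibbles = [int(c) for c in str(abs(n))] + [sign]
    if len(nibbles) % 2:
        nibbles.insert(0, 0)          # pad to a whole number of bytes
    return bytes(16 * hi + lo for hi, lo in zip(nibbles[::2], nibbles[1::2]))

print(pack_decimal(-42).hex())   # '042d': pad, 4, 2, then the sign nibble D
```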
Now here is a fun fact: the BASIC interpreters in most of the 8-bit computers of the 70s and 80s used binary for their floating-point representation, but some (like the MSX and the Tandy 100 series) used BCD.
Having worked in the IBM mainframe arena since the mid 70s and also being a Douglas Adams fan, I recall an unverified and less technical explanation for the answer "42". Assume each of your hands is a nibble with the fingers the bits. Extend your fists in front of you, then raise the longest finger on each hand. You will see "42" -- the answer to life, the universe, and everything.
When I was young and naive, I wanted to work with huge integers in QBasic (to calculate large Lucas numbers) and inadvertently reinvented a highly inefficient BCD system. It was ugly and slow, but it worked goddammit. Still one of my proudest moments in a strange way. I still have a printout of a 20000 digit Lucas number I generated with that nasty bit of work.
oh wow! great achievement!
whoa!! excellent sunny shirt!!
There are 10 types of people in the world
1. those who understand binary
2. those who don’t
3. those who didn’t expect this joke to be base 3
FYI, The correct way to type this would be:
```
There are 10 types of people in the world
01. those who understand binary
02. those who don’t
10. those who didn’t expect this joke to be base 10
```
Actually, by that logic it would be:
There are 11 types of people in the world
01. those who understand binary
10. those who don’t
11. those who didn’t expect this joke to be base 11
I think you mean 1 2 10
There are 10 types of people in the world:
Those, who understand base two jokes;
those, who understand base three jokes;
those, who understand base four jokes;
those, ....
There are 10 types of people in the world:
People who understand hexadecimal,
And f the rest
In the NES port of Tetris, a mistake in BCD arithmetic causes levels not to update properly if you start above level 9. The player is always supposed to reach level L after clearing 10*L lines, then level up every ten lines thereafter. That way, by starting at a higher level (and therefore faster speed but more points per line), you have a higher scoring potential. For instance, if you start on level 5, you will hit level 6 at 60 lines, then 7 at 70, and so on. But if you start at level 10-15, it takes only 100 lines to reach the next level (same as a level 9 start), 110 for a level 16 start, 120 for 17, 130 for 18, and 140 for 19.
The theory is that to compare L
If only they had continued and made 4bits = nibble, 8bits = byte, 16bit= munch, 32bit = gobble, 64bit = meal.
So Windows XP is a gobble-based operating system. And Windows 7 is a meal-based operating system.
RaymondHng
Sure if you like. But we never called 8-bit operating systems 'byte-based' in that context.
And what about 36 bits? 72 bits?
36bit = Mushrooms and 72bit = Peyote 😂
128bit = buffet?
IEEE 754-2008 standardized a decimal floating point format, which is related to BCD. Many programming languages (Java, C#) support decimal floats as well, but often they have to do pure software emulation as much of the mainstream hardware is limited to only the older binary floating point format from an earlier IEEE 754 standard, with all of its weird rounding issues.
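Python's decimal module is one of those software implementations, and it shows the difference in two lines:

```python
from decimal import Decimal

print(0.1 + 0.2)                        # 0.30000000000000004 (binary float)
print(Decimal("0.1") + Decimal("0.2"))  # 0.3 (decimal arithmetic, exact)
```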
I had NO idea that the ascii digit characters corresponded to BCD in their least significant nybble!
That’s so cool!!!!
Would've loved to see more on actual arithmetic in BCD. How much of a performance hit does it take when done in hardware? How much more complicated do the circuits become?
Most early high speed scientific computers stored entire numeric (binary) words of data or an entire instruction in each addressable unit of memory (IBM 700 and 7000 series with 36-bit words, for example), so a single arithmetic operation required six memory cycles: fetch the first instruction, load the first operand into the accumulator, fetch the second instruction, add/subtract/whatever the second operand against the accumulator, fetch the third instruction, and store the accumulator in the desired memory word. These machines had the fastest arithmetic-logic units of the era, and wide parallel logic circuits that required the minimum number of CPU cycles per memory cycle to get the job done.
But commercial computers, like the IBM-1401, generally had variable word length arithmetic and stored each character in its own memory location. Arithmetic and text operations were done with each character from one area of memory matched against its corresponding character in another, replacing one of them with the result. Their logic and memory cycles were also longer, making them much slower in raw computing power than the big number crunchers. But they were more than fast enough to keep up with high speed card and print devices and tape decks. Extra processing power to compute an invoice, payroll, or grade point average would be wasted if the data files could not be read or written fast enough. I once used the entire 132-character print line as an accumulator in a Fibonacci series program on a 1401, and even the 100+ digit to 100+ digit addition didn't slow down the high speed printer (one line in 100 ms)!
Only when the System/360 architecture was introduced, allowing a mix of binary and decimal (BCD) operations and memory accesses, did a single machine become practical that could do both kinds of processing efficiently.
The Ultimate Answer is 42. The Ultimate Question is "What do you get when you multiply nine by six?"
And you would think that something is fundamentally wrong with the universe, but the calculation actually works in base 13!
I was looking to create a 10-bit bin2bcd converter and this man really solved my issue😭👍👍👍🙌🙌🙌🙌.
The Z80 had a special instruction to adjust the A register for BCD: DAA (Decimal Adjust Accumulator).
x86 has similar instructions. I recently coded FizzBuzz directly in assembly and used exactly this to represent the counter.
Not any more it doesn't - long mode/x64 - reuses those opcodes for something else.
I did not know that, thanks for the addition. Although to be entirely correct, modern 64 bit CPUs still have these instructions, but only when running in any mode other than long mode.
How efficient is multiplication with BCD? I ask because the Intel x86 line allows for BCD addition and subtraction, with "adjustments" and a flag to handle the overflow into the 0xA-0xF space. But for multiplication and division, the operands have to be converted to binary first, and the result back into BCD afterwards. (The FPU can load and store BCD, but internally arithmetic is done in binary.)
Is it just the manual method performed in hardware?
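For illustration, a Python sketch of the convert-multiply-convert round trip that the x86 design effectively forces (multi-digit packed BCD, operand values chosen arbitrarily):

```python
def bcd_to_int(b):
    n, place = 0, 1
    while b:
        n += (b & 0xF) * place        # one decimal digit per nibble
        b >>= 4
        place *= 10
    return n

def int_to_bcd(n):
    b, shift = 0, 0
    while True:
        b |= (n % 10) << shift
        n //= 10
        shift += 4
        if n == 0:
            return b

a, b = 0x12, 0x34                     # BCD encodings of 12 and 34
print(hex(int_to_bcd(bcd_to_int(a) * bcd_to_int(b))))   # 0x408 (12*34 = 408)
```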
Your quote of the dialogue was incomplete. The sentence uttered by Deep Thought before the answer was slightly longer. But the recipients of the output didn't know what the question was, so they commissioned the building of planet Earth to compute the corresponding question.
Yes, this is so often misquoted. They wanted the answer to the *ultimate question* of life, the universe and everything.
Thank you
wow, what happened at 7:20?
Ok, it's a simple matrix transform, but it looks so cool
We used BCD to give a stock code to a blank PCB when we could only use drilled holes.
Using two drill sizes, and, as a "magic" marker of a pair of vertical holes, to write 42 it would look like : .o.. oo.o
Wonderful conclusion!
BCD makes multi-digit 7 segment displays easier :D
This spawned the double dabble. Wish I'd watched this last semester.
Did I already mention this in another video? Motorola's 68000 CPU had the assembler instruction ABCD - Add Binary Coded Decimal.
Fun fact: all x86 processors have instructions for BCD, and they have one-byte opcodes because someone thought they would be used very often.
All 65xx processors have a single one-byte opcode that sets them to BCD mode (no other special BCD instructions needed).
Another brilliant video from this amazing man. Also reminded me of the old TO BE thing. T=20, O=15, B=2 and E=5, adds up to 42. Even Shakespeare was in on the joke.
13:24 - could have sworn he was about to say "digital watches"...
On a more thoughtful note, I appreciate how Deep Thought stalled for time to let her kid "Earth" grow up - job security that enabled the great thinker to watch TV while "working".
At 4:50 -- the "hexadecimal range" (0xa to 0xf) is 10 to 15, not 10 to 16.
For comparison, there's also FIO-DEC that didn't quite make it this easy (1-9 were precisely those values, but the digit 0 was code 20 octal, or 10000 binary). Decimal types nowadays frequently use word cutoffs rather than digit cutoffs, which can fit e.g. 9 digits in 32 bits or 19 digits in 64 bits.
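A quick sanity check of that word-cutoff arithmetic:

```python
# 9 decimal digits fit in 32 bits, 19 in 64 bits (unsigned):
print((10**9 - 1).bit_length())    # 30, so 999,999,999 fits in 32 bits
print((10**19 - 1).bit_length())   # 64, right at the edge of a 64-bit word
```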
Decimal floating point next?
Penny Lane: The TI-99/4A had base-100 decimal floating point.
IEEE754: 0.1 + 0.2
The IBM-1620 had BCD integer instructions, but to save logic circuits, they used a table in the first few hundred digit pairs to get the answers. The FORTRAN and other language compilers used library subroutines to implement decimal floating point.
Some microprocessors had rudimentary BCD opcodes, such as the Intel X86.
Sort of:
Deep Thought: The ultimate "answer" to TLTU&E *is* 42. The reason why humans couldn't understand it is that they didn't know the ultimate "question" of TLTU&E. Since Deep Thought couldn't calculate the ultimate question, Earth was created to calculate that.
Earth: [4.5 billion years later] the ultimate "question" of TLTU&E is... (just nanoseconds from Earth completing its calculations, the Vogon space fleet destroyed it to make room for the intergalactic highway)
Magrathea (Earth 2.0): since Arthur Dent is the only surviving earthling and an element of the ultimate "question", his brain could be used to reboot the program.
Why not use an integer number of cents to represent money? Then there's no problem with incorrect rounding.
Mister Hat The fixed decimal format is used in several circumstances, to represent both cents and percentages.
because some things, like small parts like nuts and bolts for example, cost less than a penny.
Yes, I ran into that at some point.
I was looking at zero ohm surface mount resistors on an electronics website...
(why would you want a zero ohm resistor? Well, obviously it's a jumper. It lets you build a single circuit board then bridge relevant traces to create various different options without needing an entirely new circuit board design.)
And the way they're priced is... Interesting.
I believe thanks to volume pricing it was roughly the same price to buy 5000 of the things as it was to buy 10?
Can't remember the exact rates, but, yeah, cheap doesn't begin to cover it. XD
Actually, looking it up....
A single unit is 13.8 cents. Already you see a fraction of a cent involved.
But...
Volume pricing at 1000 units is 1.9 cents
At 10,000 it's 0.6 cents
and at 50,000 it's 0.4 cents
So in all cases you're dealing in 10ths of a cent.
But also consider that the volume price is 1/34.5 of the unit price.
And those '50,000 unit' volume prices...
Yeah, that's going to set you back all of $200
But if you look at a slightly different version of the product with the same price for a single unit but that reaches a low of 0.3 cents at just 100 units...
Well, those 100 units collectively cost 30 cents.
While individually they cost 13.8 cents.
Anyway, that's off topic really. But fractions of a cent do get used in some context, regardless.
Midrange and mainframe systems do... they use decimals (BCD - packed, signed, unsigned; along with other numerical formats) with the numerical length(s) and the decimal point(s) defined (both the input fields and the result field independently defined), and with the ability to do rounding (half adjust) that works as expected, without the imprecision of floating point calculations. It makes working with accounting or manufacturing data and transactions so much simpler (except where the result is ill defined and an overflow occurs, or a lack of precision causes the result to be zero when the answer is less than the fractional part of the number). It's great for its intended use but falls down at the most annoying moments, as the initial programming has to make some assumptions that don't always hold up over time.
Assuming you start with dollars and cents, that works for addition, subtraction, and multiplication, but it doesn't necessarily work for division, since you can get a part of a cent. And sometimes you even do more complex calculations involving higher maths.
That said, using fixed decimal does exist, instead of using floating point. But it's going to be more than 2 decimal places.
And, even then, it will ultimately have to round.
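Python's Decimal makes a decent stand-in for that fixed-decimal, half-adjust style, and shows exactly where division bites:

```python
from decimal import Decimal, ROUND_HALF_UP

total = Decimal("100.00")
share = (total / 3).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(share)        # 33.33
print(share * 3)    # 99.99: a cent vanishes unless you track the remainder
```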
That actually reminds me a little bit of how PostNET codes used to work.
I liked those. I miss 'em.
What about Excess-3? Isn’t that the more common way to do bcd in low level hardware like calculators?
My favorite microprocessor, 6502, has BCD built in
Computer nerds in 1959:
Hmm, if we base our character set on BCD Plus a set number as the most significant nybble, conversions will be super fast and can remain in the registers.
Computer nerds in 2019:
HURR DUR DURRR DOES IT RUN KRYSIS?!?
Answer to the Ultimate Question of Life, the Universe, and Everything !
A note on EBCDIC being E-BCD-IC (extended BCD interchange code) would have been great in this context. Or did I miss it. ;-)
I don't get it. What is the need for prepending 4 bit before printing?
Classic printers expect a 7 or 8-bit value for every character they print. Since the printer's capabilities will probably include things like 26 upper case, 26 lower case, 10 digits, lots of punctuation and special characters, you begin to see why the values need to be made larger than just the 4 bits being used by a single BCD digit. So, it was decided that the number characters would reside in the ASCII table from 00110000 to 00111001 (characters 48 to 57) so that the conversion could be done quickly by simply slapping 0011 onto the front of the BCD digit. It may be helpful to pull up an ASCII table online and look at how it's all arranged.
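The trick is easy to verify:

```python
# BCD digit -> ASCII: prepend 0011, i.e. OR with 0x30.
for d in range(10):
    assert chr(0x30 | d) == str(d)

# And the reverse: the low nibble of an ASCII digit IS the BCD digit.
print(ord('4') & 0x0F, ord('2') & 0x0F)   # 4 2
```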
Great lecture, I always wondered about 42. ASCII is symbols represented as bits; that's the physical level. I love Boolean algebra. I am a security tech. I am from Downey, CA.
I'm an absolute gutter coder with no formal training. I just started learning some 6502 assembly as an interest/hobby, and suddenly a lot of this stuff is beginning to make sense to me. There's a BCD register flag and that was literally the first I'd ever heard of it. While I appreciated that there was a need to represent decimal numbers it wasn't until I watched this that I understood why that would be a hardware feature.
Interesting!😎 Saving the video in the computer and network file.
Every MineCraft Calc guy out there:
I understand BCD, but what is Binary?
I'd like to point out that IBM's predecessor to Deep Blue _was_ in fact named "Deep Thought". Not sure why Prof. Brailsford forgot or didn't know about that one.
Is there something like a number format with variable bit length? There are so many formats (integer, word, signed, unsigned, float, double, quad), but there are always numbers that cannot be stored exactly. A BCD format with variable bit length (up to millions of decimals) would be really helpful.
You can't really make it entirely dynamic. It's up to the programmer to decide what a sensible scale is when he uses something like Java's BigDecimal class. Otherwise the code has no way of knowing how far it needs to run a calculation. For example: the result of 1/3 is 0.33333333333... etc. If there was no fixed limit, it could theoretically use up all available space just to store that result, because it never ends. Even if you made a special case for periodic parts of numbers, you'd still have to deal with all the irrational numbers.
Yes there is. There are "infinite" precision integers (limited by system memory) and arbitrary precision floating point numbers.
It's called 'arbitrary precision arithmetic' and it does indeed exist. It's just really slow to compute with.
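Python ships both flavours, for anyone who wants to play with them; as noted above, you still have to pick a precision for the decimal case:

```python
from decimal import Decimal, getcontext
from fractions import Fraction

getcontext().prec = 50                # you choose the cutoff
print(Decimal(1) / Decimal(3))        # 0.333... to 50 significant digits

third = Fraction(1, 3)                # exact rational: no cutoff at all...
print(third + third + third)          # 1 ...until you need a decimal out
```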
42 || !42
- William Shakespeare
Surely that should be 43||!43?
@Jason You mean because 0x2B == 43 in decimal?
True?
To A, or not to A? That isn't the question.
printf("0x2b | ~0x2b, that is 0xff (%#x)
", 0x2b | ~0x2b);
So why is there this thing called floating point error? :S Shouldn't it be very easy to overcome if we're just using ASCII or BCD characters?
Do modern hand calculators use BCD? What about digital clocks on things like desktop computers and smartphones?
Has Professor Brailsford retired now? None of these appear to be in his office.
See reply to Neil Roy (above)
Something I've always wondered is, why on Earth did IBM embed punctuation in the middle of the EBCDIC alphabet?
I'm wondering why his go-to example of where you might find a digital clock is a shopping mall. Is it a UK cultural thing? Are malls there known for having digital clocks on the wall or something? (Not trying to poke fun here; genuinely curious.) In any case, another great video from my favorite Computerphile presenter.
At some point I always end up going with analog signals instead of digital because of interface problems like this. It has its own list of problems, like data corruption, but for me things like that feel more like a change.
0:18 He who controls the spice controls the universe
Can't see this mentioned among the comments? For me the best Douglas Adams joke (which he denied, of course) was when Deep Thought's minions declared that what they were answering was "What do you get if you multiply six by nine?" and the answer was "42" ... well, d'oh, but try that in base 13...
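The base-13 arithmetic does check out:

```python
def to_base(n, base):
    """Render a non-negative integer in the given base."""
    digits = ""
    while n:
        digits = "0123456789ABC"[n % base] + digits
        n //= base
    return digits or "0"

print(6 * 9)            # 54
print(to_base(54, 13))  # '42', since 4*13 + 2 == 54
```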
What does this have to do with 42?
also 42 on the ascii table is '*' which is used (among other things) to reference everything
therefore, the answer to the universe and everything . . . is everything
want a new universe with a new answer? just find the root directory of the universe and execute "cp * version2.universe".
alternatively if you don't like the answer, delete it. i'm sure nothing bad will happen when "rm *" is run on the root of the universe
BCD and the exposure of its internals was a huge mistake on IBM's part. Sure, decimal arithmetic is necessary for many applications. However, the internals of the format should have been opaque, with a number of instructions available to manipulate that opaque data type. Exposing the details in languages like Cobol, PL/I and RPG is disastrous, especially if one wants to change operations from memory-to-memory to register-to-register, as large mainframes do today. One could emulate decimal arithmetic in the floating point registers if necessary, or special instructions could operate on the FP regs to perform decimal arithmetic.
The point on .1 in binary is super important. I would like to see a video on C# decimals or Java BigDecimals.
Or, to sound fancy, you could call it a decimal power float. Scientific notation joys.
I can't understand anything but I love his voice
It's kind of interesting how this kind of knowledge is largely removed from the everyday programmer's required knowledge set and experience, apart from assembly and embedded systems writers.
"controlling elections from punched cards.." - priceless!!
b.. b.. but the 42nd ASCII character (ASCII decimal 42) is the wildcard character *
That means everything is the answer to life, the universe and everything!
Interesting. So that is the reason why digital clocks and calculators are so power efficient. That is, as long as there is no tick, the digits just stay fixed on the display.
In the clock there is a part where it counts quartz vibrations, and then that part sends a tick to another part to update the time.
Usually inverting gates that have a special reset condition that happens to match the frequency of the quartz.
I wish he went into the math of binary decimal expansions before floating point.
I fancy a nybble
Needless nitpicking: at 4:50 it should be "10 to 15" not 16 :)
*
*thank you*
If we just adopted hexadecimal as the number base of choice for everyday life, this would not be an issue at all. Just pad the number to the next nybble with zeros, and you have your printable number ready.
Unfortunately, very few people have eight fingers per hand.
so that's where gluon comes from!
At last! I was seeking the answer to this question on the internet after watching floating point numbers and thinking about binary a lot.
2:37 that's some scary warped fingers
Deep Thought was at least in part a parody of an earlier science fiction short story, "The Last Question" by Isaac Asimov, which took itself much more seriously (for better or for worse).
I really enjoyed that story, The last question, one of my favourites. Thanks for the memory jog.
Fiat lux ("Let there be light")
Does anything change if one "transistor" has 16 states instead of just 0-1? Does it make the CPU faster? Fewer transistors per CPU - is it possible? Which math formula would calculate it? Thanks in advance.
Depth of knowledge is wonderful but when you have depth breadth and width it’s awesomely interesting.
Perhaps we could further examine developments in American vote counting equipment and what is entailed when you don't have to count votes. Is it just a random number generator, or does it work backwards from the pre-programmed outcome? 😉 I'll get my coat... sounds of tumbleweed!