Well said! I’m about half way through building Ben’s computer and so look forward to making it Turing complete. Having a universal computer sitting on my desk, that is a simple version of the much more complex and powerful ones that surround us, will be a fantastic way to get into the deep aspects of computation. My mind has been blown by reading David Deutsch, the “father of quantum computation”, and watching his you tube videos (rec Closer to Truth interviews) and explanations of the connections between computation, physics/physical reality, and knowledge. I hope Ben Eater can one day make the “how to build an 8-bit quantum computer at home” series!
@@randalljam2000 don't stop now, continue onto the nand2tetris lecture and project series on TH-cam and their web! Using completely free tools, you learn how to build a 16-bit alu, cpu, 32k ram all with hardware simulators and writing chip logic code using, say, Notepad. Then you use a high level language, and learn how to write an assembler, virtual machine translator, and compiler while also coding up a simple game in the computer's own simple high level language, then run everything on your machine! You can also simulate your work on their vm emulator and cou emulator. You also write a small os for your computer and learn how to map ram words to the 256x512 pixel screen, including text and lines drawing. How to build strings. Memory allocation. It's a great hands-on challenge! They teach you and hold your hand just enough where you are then left on your own to figure out how to finish the various projects. It's phenomenal
Been looking forward to this. This series completely opened my eyes to fundamentals of a CPU and EE. (Software engineer by trade, never really dabbled past building my own PCs). Thank you Ben for making this series
I did start building it :) It's moving forward slowly. I need a better multimeter/oscilloscope. Troubleshooting is very difficult with the one I have now
On a related note, the x86 mov instruction is Turing complete (yes, that single instruction). For anyone who's interested, Chris Domas made a mov-only compiler called "Movfuscator" He has a presentation on TH-cam somewhere
This! I just posted a link to his presentation up-thread, where Chris demonstrated his obfuscator, MOV-only gcc, and a floating-point-based 3D library exlusively using MOVs.
Ben's computer's instruction set includes lda and sta along with a perfectly good jmp instruction to put at the end. So if it had more memory the concept could easily be implemented without the conditional jump.
I was thinking about how to add more memory to that thing. Maybe adding a instruction that replaces the memory with another piece of memory from an eeprom. If i need to multiply, load the multiply function into memory, and when done, load where the program originally was. It still has only 8 Bits to know which memory chunk to load. So maybe if we swap memory, always jump to the first address? Then we could have 255 different memory contents to load...
Entscheidungsproblem is pronounced a bit like Entshydeongsproblehm or entshydoungsproblehm with r spelled harder than in english and emphasised on "shy", ou like in "you". 😊 It really means "decision problem". It's german, because Kurt Gödel was the first person writing a paper 1931 about the Entscheidungsproblem which was reformatted by Alan Turing 1936. Btw as a computer scientist I love your videos. Best chilling learning and educational videos about this for everyone who wants to get in touch with these computer stuff even when you may be a beginner. Very well done 😊
Ben, I don't know how i would thank you. You completely changed my perception of computers(/computing) and your 8-bit computer series got me in electronics and in how does a computer work (the elemental stuff no one looks at nowdays). Stay AWESOME, cheers from Czech Republic! PS: do you think it is possible to simulate a basic quantum computer with hardware? (just like your 8-bit computer)
It *is* possible to simulate a quantum computer with classical computing hardware. You just have to create enough separate components to represent all of the possible states that each qubit can assume simultaneously, as well as the needed quantum logic "gates" (several of which perform different operations than classical logic gates). The issue is that the classical simulation will have exponentially more components than the quantum computer (i.e., the number of classical components increases exponentially as you simulate more qubits).
What im wondering is : Would it be possible to build (program) a basic interpreter for the CPU of Ben Eater if we just use an lcd screen in stead of 7 segment displays ? And if it is possible, how hard would it be to program it ?
I think I have a program that can multiply two numbers using only the instructions that we have right now (so no conditional jumps). The idea is that we can edit the RAM, which means we can both change the storage AND the actual program itself. This means that the data can actually change how the program behaves, and with this we should be able to make a multiplication program. This is the program that I came up with: 0: LDA 3 1: ADD 17 2: STA 3 3: JMP 31 4: LDA 3 5: SUB 17 6: STA 3 7: LDA 18 8: ADD 16 9: STA 18 10: LDI 1 11: STA 19 12: LDA 17 13: SUB 19 14: STA 17 15: JMP 0 16: X 17: Y 18: 0 19: 0 . . . 28: LDA 18 29: OUT 30: HLT 31: JMP 28 I actually needed 5 address bits, or 32 memory locations to make this work, but I still only used the instructions we have available. Lines 0-6 are essentially what are doing the conditional jump that we need for multiplication. What they do is jump to line 31 if the number stored at address 17 is 0. With this it's relatively easy to do the rest of the program, which just adds the number in position 16 to the number in position 18, and stores the sum back in position 18. At the same time, it subtracts 1 from the number in position 17, and loops back to the beginning of the program where again, it checks if position 17 is 0 and if so, goes to the last line, line 31. Then, on line 31, I just jump back a bit so I can output the answer, which is in position 18. So, basically, positions 16 and 17 are storing the two numbers we want to multiply, and position 18 is storing the result of that multiplication. The way the conditional jump works is that I am editing the jump statement right before I reach it. In memory, the actual jump statement will be represented as ABCD11111, where ABCD is just the code for the jump statement. Ben used 0110, so I'll also use that. The other digits are 11111 because that is the position we are jumping to, 31. Right before the program reaches the jump statement, we add the number in position 17 (which I'll call Y from now on) to the number in position 3. But, the number in position 3 is the jump statement. If Y is anything other than 0, we will be editing this jump statement. The jump statement before was 011011111. If we add anything to this, we will be changing the first four digits of the statement. As long as we add a number that is less than 5 bits we will only change the 4th bit, which means the new code will be 0111. If we just make this code do nothing, then if we add anything other than 0 to the jump statement, the line will then do nothing. Only if we add 0, which means Y = 0, will we actually jump to line 31, as then, the jump statement won't be edited. So, this works as a conditional jump statement that jumps to the last line if Y = 0 (and Y is less than 5 bits). I'm pretty sure that this means the computer is Turing complete, even without adding a conditional jump statement, but it's just much harder to reason this out, and it's probably easier to just add a conditional jump.
Chaitanya Ravuri pretty sure it’s people with brains like yours that got us to the moon with the AGC and then hacked it mid flight even with the bulk of its programming in read only memory. The limit here is the address size (more bits needed) and the ability to modify its own memory. Here’s a crazy thought. You could write a program that uses pseudo random number generation pulling out different predefined “gibberish” ram memory allocations that changes after every power on and use it to print out random ASCII characters to hear from the universe or create random music generation. 🤪
No need for all that if we can assume both inputs are positive Check this out 0: LDA 10 1: ADD 11 2: STR 10 3: OUT 4: LDA 7 5: SUB 9 6: STR 7 7: (Y-2) 8: JMP 0 9: 1 10: (RESULT) 11: (X) Since 0000 is NOP and 1111 is HLT, this will add X to itself Y times and then halt when address 7 rolls over to the maximum. I suppose we also have to assume Y-2
4 ปีที่แล้ว +3
What you wrote is an impure program. People generally hate impure programs because of reasons. Good job nonetheless.
@@ajreukgjdi94 Nice one! however I think these tricks only works because the op-code and argument is part of the same address. I do not think you could do it like this if the OP-codes and arguments were in separate RAM addresses (which you would need if you wanted 8-bit addressing and more than 16 OP-codes (but with more than 16 opcodes you surely would add a Jump-carry or Jump-zero or similar, making this argument a moot point :-)
Indeed. You don't need a conditional jump. Your program is just data that you can rewrite, just like the data/program that is on the tape of a Turing machine. The question is: what is the minimal instruction set for a computer like the one in this series to be Turing complete? Lambda calculus doesn't have a conditional jump either. On the other hand, a conditional jump is very useful, like many other "non essential" instructions, to allow for a smaller program and using less RAM for calculations.
I love this series, I could never quite “get” how you go from binary or on/off in the hardware of a computer to running software. This has been an amazing introduction into how that’s possible, deep enough into the subject to be informative, but light enough to not be overwhelming! Thank you!
I've never built my own computer, and probably will never find the time to do so.... But I almost feel like I've had the experience of doing so, just by watching your videos. Your presentation is clear and compelling, and has given me a much better understanding of what's actually going-on "down-deep" in a computer's circuitry... things that I've mostly already understood at an abstract level but always wanted to understand all the way down, in detail, to the physical level. Thank you for making these videos, and I definitely look forward to more.
@@wolfgangsanyer3544 I used to think the computer should’ve said something earlier and was being needlessly picky with the “well you didn’t ask me a proper question” stuff, but now it makes total sense - leave your input lines floating and the computer will spit out some random answer!
Absolutely fantastic video! The question of what a computer is, on a fundamental level, was never thoroughly discussed in my Computer Architecture class. This video truly opened my eyes. Thank you so very much!
your classes are incredible, I am an economist trying to learn networks, eletronics, software engineering, etc. and found your videos, the best ones in internet, you are an excellent professor
I teach programming to teens, and am always hunting for videos that are young learner friendly. This week I was searching for something regarding Church Turing thesis, and this is the most accessible one I have found. Bravo!
Great intuitive introduction to the fundamentals of assembly language. As always I am excited to see your hardware implementation to add the branch instruction. Amazing work Ben, you sir are a teacher and this series should be mandatory for any Elec/CompEng/CompSci student.
It is possible to use those instructions to create a multiply command with values below a certain bit length. You can permute the stored instructions to create a conditional jump. It's not necessarily with the limitations of this computers memory but with the same instruction set it is indeed possible. In this case I'll do multiplication with repeated addition. You start with two arguments somewhere in memory, these are the numbers to be multiplied together. The first of these arguments we'll call the iterator since it will count down each time we execute the loop. As well designate an address in memory we'll call the product because it will store the product of multiplication. ADD the second argument and the product together and store the result over the product. Load the iterator, subtract 1 and store it again then add (0110 ) to the result. Store it to The next command in the fetch is now equivalent to jump with the offset being the iterator. All commands in memory addressable by a number the bit length of the largest argument you accept should be jumps, which will jump the program back to where we loaded from memory and then added the product and the second argument together. The only exception to this is the command immediately after which should jump out of that loop to the commands LDA , OUT, HLT Our program execution then for lets say the values 4 and 3 should be. loop1: iterator = 3 product = 3 JMP offset 4 loop2: iterator = 2 product = 6 JMP offset 3 loop3: iterator = 1 product = 9 JMP offset 2 loop4: iterator = 0 product = 12 JMP offset 1 Additionally such a command would be possible using ADD as a bitshift. Adding a value to itself is the same as bitshifting it left once. So by bit shifting one argument by the bit length of the other, adding the two together then using that conditional jump trick you could create a situation where you have a unique jump address for every combination of arguments. Given this I do think the original instruction set is Turing complete given an arbitrarily long instruction bitlength and an arbitrarily large amount of memory in which to compute.
The same can be done with subtraction to compute a division, given an arbitrarily amount of memory. There are probably better hardware to be had though to do multiplication and division.
I absolutely love this series. It starts with the chemistry of transistors and ends up organically teaching assembly language and Turing-completeness in a way that's completely organic. Each step logically follows from the last.
TL;DR Thank you so much for this video! The long version: I'm a CompSci student that is currently taking a class based on the book "Introduction to Computing Systems: from bits & gates to C/C++ & Beyond". In the book there is this computer, the LC-3 (little computer 3) that is essentially just a very simplified computer that could be built (but doesn't need to be) for the purpose of teaching. After our class moved past Finite State Machines and moved into Microarchitecture and Instruction sets (and the LC-3), I noticed a transition in the curriculum, from machines that used the concept of bits and gates to do very specific things, to machines that were supposedly able to do anything that a computer could do. Me being myself, this transition got into my brain and wouldn't let go. "What differentiates a machine built for a specific purpose and a machine built for a general purpose?" "What does it mean for a computer to be Turing Complete?". This video has clarified my questions in a way that I really cannot thank you enough for. So thank you! And thank you again. Keep up the great videos! P.S. is there any more literature on continuing to answer the question of what it means to be computable? Based on what you said in the video, it seems almost like for the sake of practicality and usefulness, a definition (the Church-Turing Thesis) was created so that useful machines could be created without needing to definitively answer the question. But have there been any more attempts to make a definition that can be proven to be true? That sounds like interesting stuff to me
I was wondering how you were going to end this series. I thought it might be similar to just saying "tadah! I'm done!'. But no. You lit the metaphorical dynamite and said: "here, hold this, it'll blow your mind"
Had written a comment out asking you to make a video explaining how to allow your machine to perform conditional jumps and how to program those individual conditions. Needless to say I can't wait for the next video.
3:10 about minimal instruction sets: Christopher Domas famously wrote a code obfuscator and even a complete gcc plugin. He showed that Intel’s MOV instruction is Turing complete. Presentation is here: th-cam.com/video/R7EEoWg6Ekk/w-d-xo.html (At 39:13 in that talk Christopher even shows a 3D-floating-point framework driven by MOV instructions exclusively - an incredible feat of concept proving, in my opinion!)
I wrote a multiplication and division program in my assembly class. The language we were using(SRC) didn't even have those operations built in. Cool to learn about the correlation between Landa calculus and turring machines. Another great video!
I'm so happy to see Ben come back with his 8-bit! The comments of the last video blew up with requests for an if-statement, that last crucial bit of necessary CPU architecture, and it's finally coming! My christmas came late it seems!
At first glance, I thought "drat, no actual hardware got built" but then I realized that this is exactly the kind of practical theory that is desperately needed in the world. Great explanation of computability!
MY GOD MAN! You just made me realize the Architecture im working on... it can expand its own instruction set. Its like a Virtualized Wetware... but not virtual in the common way of thinking. Meaning no sacrifice in speed to do this.... i mean it by hardware design doesn't have an ALU (at all) in the physical components level, its all done by code. Maybe just maybe one way to think of my architecture is one large mass of control unit s, but still not exactly. Its not Harvard or Von Newman.
I like this series of videos. It takes me back to my college days when we built a 4-bit computer on a breadboard and studied computational complexity, NP Complete, and other such esoteric computer science topics. Even though I know this material (but it's been 25 years ago) I am enjoying your presentation style and the review.
You can literally translate Entscheidungsproblem to "decission problem" and as a German I wasn't even aware that "Entscheidungsproblem" is used without translation in the English papers :D
Entscheidunsproblem is not a word or phrase, but a name. The problem was obviously proposed by a German scientist, that's why it holds a German name. Similarly we call 'sudoku' by its japanese name, instead of saying something like 'number place'.
As I saw in many comments about the SAP(-ish) video series, and probably already commented myself, I'm astonished by the clarity and simplicity of your explanations.. I'm absolutely glad that you started this series, Ben, I managed to simulate the computer in Logisim and I wrote some scripts to help me with microcode and programs assembly, all thanks to your wonderful videos. Keep it up!
Ben, I built my 8 bit computer starting with your step by step guide. I used another rom to be a "conditional controller" which takes instructions from the main control rom along with flags (zero via or gates from ALU, carry, and subtract) as inputs for address lines, and wired the program counter increment, and parellel enable to the output/data lines of this conditional control rom. It works very well and gives me the ability to easily jump on less than, less than or equal, equal, greater than, greater than or equal, carry, not carry, zero, and not zero. The next step I suppose would be fpga, which I am now working to implement for my ALU (since building a real ALU for xor, not, shift, etc. Would require a lot of wires and breadboards).
pcred, You're in luck! I happen to manufacture infinitely long strips of paper ! They're not cheap though. Being infinitely long, we manufacture them to also be infinitely narrow - we find this saves on shipping costs.
All you need is a self-replenishing forest providing feedstock to an automated solar-powered paper mill that puts out a continuous strip of paper. Access time is a little slow, though.
Ben, you are just a wonderful gift to many of us. I love your videos and Thank you so much for making them. Your content is rich yet easy to understand; the mark of an excellent teacher.
Say we have two unsigned numbers in the memory cells 14 and 15 which we want to multiply. This program will then print the result of the multiplication (to the out register). This assumes JC jumps if and only if the result of the SUB before it was
I tested your solution, it works. I was a little confused because of the `STR` instruction though, until I realized that it is the instruction for storing the value (he called it `STA`). You don't have to clear A, since you load something into it anyway. And you can remove instructions 4 and 5 (STR 15 and LDA 13) and insert them before the conditional jump. This way, you don't have to load the value in instruction 9 again. This makes you use one byte less memory, but there will be one additional unneeded step (STR 15) executed before the result is shown.
Yes, instruction 0 is absolutely unnessecary, I derped there. I did not pull the STR 15 and LDA 13 in front of the conditional jump since the conditional jump maybe only works if it is directly behind a SUB. (depending on how Ben will wire it up). Otherwise, that would surely be better. Have you actually build his CPU, or simulated it in logisim? How did you test it?
JoJoModding I built a little simulation in python. And the carry bit is only changed by the `ADD` and `SUB` instructions, as you can see in the assembly he wrote for displaying Fibonacci numbers (in an older video).
I'm learning assembly atm and the first thing that comes to mind is a loop (jmp)! it's crazy that to just make a computer turing complete, you just had to add an if statement. i tried learning about turning machines when i was younger and i think i watched this video when it was uploaded 4 years ago but i now i got it and it's this simple!
I want to say thank you for such a complete and informative history and explanation. This was eye opening and gave a lot of purpose to words and phrases I use regularly.
3:36, ahem, the answer to life the universe and everything has already been computed by Deep Thought and that answer is.... 42 One of the things that bugs me when people talk about the early years of computers is they always forget about Tommy Flowers who built Colossus, the world's first programmable electronic computer... Everyone just bangs on about Turing...
Colossus was a mighty work of wizardry and deserves better renown than wartime secrecy allowed it. But OTOH, for many years the same was true of the bombe. In the end, both were purpose-built cryptanalytic devices, not general-purpose computers (I do wish _The Imitation Game_ had name-checked Turing machines a little more cautiously), and both were programmed mechanically even if Colossus did _operate_ electronically. Mechanical programming kind of limits the payoff for electronic operation. IIRC someone eventually proved that you could build a Turing-complete computer out of ten Colossi. I'm not confident you could do that with bombes. So there's that. And cryptanalysis always was a first-rate example of why you'd want to systematically burn through a lot of grunt-work with some sort of a machine that could repeat sequences of operations and switch from one sequence to another. I used to teach a class about classical cryptography and I often found myself saying, "And *that's* another reason it was cryptanalysts that invented computers."
Got the notification in Biology. Had to put it off until I got home because that class requires my full attention. Love your videos man. Keep putting these out!
First of all great work! I'm a assembly programmer my self and to see that home made hardware can run opcodes, is really cool thing to see! I'm more to the x86 syntax, but can surely understand 6502 like syntax as well. Are you going someday add I/O portability to your'e computer? And for last question: Will you give it a name?
Your content is great. Your style of editing/ narration makes the subject matter extremely understandable. I wish I had the motivation to tactical a project like this.
true :D *and @Hugohopser: I would say the comment above seems better, and if you were to translate it word-for-word I would go for "Schlachtung" /"Schlachterei" instead of "Metzgerei" :)
I made the multiply program using the jc the other computer has, which, if I got it correct, jumps when the carry bit on the ALU is set. This works due to the possibly wrong observation that subtracting one from every positive number except 0 will trigger the carry bit. I unfortunately do not have the money or time to spend at the moment to make a breadboard computer, so I have no easy way to test this. But here's the (revised) program: 0: JMP 4 1: LDA d // start of loop, add first input to the result 2: ADD e 3: STA d 4: LDA f // subtract one from the second input 5: SUB c 6: STA f 7: JC 1 // jump if second input was greater than 0 8: LDA d // load the result and output 9: OUT a: HLT b: c: 1 // needed for quickly subtracting one d: 0 // the result, needs to be initialized to 0 e: // first input f: // second input My first incorrect attempt: 0: LDA e // move e to d, essentially preserving the input 1: STA d 2: LDA e // start of loop, add first input to stored number 3: ADD d 4: STA e 5: LDA f // subtract one from the second input 6: SUB c 7: STA f 8: JC 2 // jump if second input was greater than 0 9: SUB d // I think this is needed to correct an off by one error a: OUT // output b: HLT // and halt to make sure the data isn't interpreted as code or whatever c: 1 // needed for quickly subtracting one d: // reserved for use by program e: // first input f: // second input
I suggest you to check Logisim: it's a digital logic simulator. From basic logic gates, to registers and memory, you can create pretty much any circuit. It's freeware, too! I've almost finished to simulate the computer discussed in the series by Ben
I tried simulating an ALU built out of logic gates years ago with Logisim. It really didn't like it, eventually it just stopped working properly. It may have improved by now, though.
JoJoModding, You're right. Not only that, but I actually had two off by one errors. The first one was initializing the result to the first input rather than 0, the second one was hard to explain, because off by one errors suck, but was fixed by the jump on line 0. So it's fixed, with one byte to spare this time.
Yees, really hyped that you are going to cover computability in the future videos! Last year finished course on Theoretical Computer Science, which made me really interested in what computers are (in)capable off. Now starting to delve into Turing's papers with help off Charles Petzold's book "Annotated Turing". Thank you for your work!
While it isn't do able in practice when it come to your machine there is a way of doing branching statements. As reference I point to Movfuscator ( github.com/xoreaxeaxeax/movfuscator ) which is a turning complete compiler based solely on the mov instruction. So if your computer can emulate a mov instruction(which I believe it can) then your computer is in fact turning complete.
No the computer would need indirect move instructions for this to work. Also I don't think you could do anything with this approach and only 16 memory locations.
Nope it's not. the X86 MOV instruction can be used as and equivalent to conditional jump if your creative. The instructions on this computer can't emulate a conditional jump in it's current state.
Splendid take on Turing completeness. This series of videos is a gem. I would recommend this as a starting point for anyone interested in computer architecture. Kudos!
Philosophically speaking, "computer" is in the eye of the beholder. John Searle points out that a falling pencil is a computer. It computes the time it takes to cross a distance in constant acceleration. If you use it for that purpose, you use it as a computer. So, "computer" is anything you know how to use to acquire answers to your numerical questions. Of course, a Turing machine can be used in limitless ways to compute limitless answers. (Well... the pencil too can be used to compute limitless things, by dropping it from different heights. But it's not obvious how to use it to compute the length of a vector. On the other hand, maybe we are not smart enough at using the pencil. Maybe, if we are smart enough, we can use the pencil for that purpose, e.g. by writing the norm down and doing all the calculations by hand, using some paper and the graphite of the pencil to keep track of all the numbers. So, a pencil is probably a general computer after all, provided you have enough paper and enough time to write things out.)
@@Handlessuck1 well yes, everything about thhgttg is jokes What do you get when you multiply six by nine?" is the ultimate question of life, the universe and everything. As explained in Douglas Adams' book, The Hitchhiker's Guide to the Galaxy , the answer to this question explains the purpose of life…. incidentally, the answer is 42. This explains why a lot of things are very weird in our universe
I would suggest looking into a language called brainf**k, which is literally designed specifically to run on a turing machine. It only has 8 instructions, is turing complete, and doesn't even have a jump function. It has < and >, which mean to look at the next or previous memory positions (Same as checking specific memory address), + and - which add or subtract 1 from the current memory position (equivalent to 'i = i + 1' and 'i = i - 1'), '.' which is the same as your OUT command, ',' which takes user input (Not actually in your computer but not actually needed), and the final two command "[" and "]", which are the only logic of the language, meaning "Loop whatever is between these two symbols until the current memory position we are looking at does not equal zero". That is turing complete. Infact, there are compilers from C# or java to brainf**k, and people have written everything all the way to a functional version of "The game of life" on it. For example: "[-]" means to subtract 1 from the current memory position until it equals zero. AKA, set the current memory position to 0. E.g. 2. "[-]++++>[-]++++++++" means to set the current memory position to the number 4, then move over to the next memory position and set it to 8.
Hello ! Im from Poland, sorry for my English. I think Your computer is very interesting ( My hobby is AVR ), but i have one qestion to You: If You have memory adressable on 4 bits, instructions are easy: KKKKDDDD K-command D-mem. adress, but what if You have memory adressable on 8 bits ? one command is two bytes? Can you answer this qestion?
Yes, that is exactly what I am building at the moment, an 8-bit computer with 6 bit opcodes and 8-bit address space. Most commands is still only one byte long, but all the ram IO and jump instructions are 2 bytes long, one for the op-code and one for the address. By using 6-bit opcodes i also have the 'luxury' of having 2 user operated inputs, 4 registers, special inputs (0 and one) and in addition to add and subtract I have also included logical AND as well as NOT (that is a true ALU :-) and increment and decrement value in A register (by loading the special value 1 in B register and doing sum or subtract). I have also used 74LS194's for the A and B registers as they allow to do bitshifts. I am now working on giving it a persistent storage (an 28C16A EEPROM like the ones Ben and I use for the OP-code decoding). I'm also planning to do a 4 byte hardware stack (using 9 74LS194's) for subroutine calls.
man its cool you test your computer for turing hardness, barely anyone does that anymore in the hardware scene. I think its the entire dogma of if A is turing complete then another product that uses A -> A+B there is no need to test for turing completeness since A being turing complete suffices to say B is also turing complete. Tensorflow is a great example of this.....
Love you ben!!!, there's a paper out there explaining that the mov instruction its by itself turing complete. Keep doing your thing man, you're an inspiration
Ben, this is some of the most illuminating video on system design I've ever seen. Please don't stop, it gets me so inspired to understand the machines I use and get paid for to use everyday.
3:36 "We don't really expect a computer to be able to compute the meaning of life? right?" said the man who built a computer that outputs 42 every time
So... I FINALLY got done binge watching this series. What a ride! Now, without more to watch, I have nothing to do but give unsolicited commentary. I actually have a much larger wall of things to say, but I'm going to cut it down to just the two things I'm most focused on right now. I know there is limited merit in talking about optimization, but a thought occurred to me early on that I think adds substantial value. With almost no effort, you could replace the current automatic reset of the microcode clock with that unused bit of microcode EEPROM. Do this and you gain two simultaneous advantages. First, you open up the fullest possible stride (number of micro-ops) for instructions. Second, any instruction will terminate itself as soon as it's able. This last part gives a (probably massive) speed improvement because the vast majority of instructions don't use most of the stride so the next instruction can begin immediately instead of waiting. Ultimately this produces a very clean dynamic instruction timing. The only trade-offs are every instruction must terminate with an "end" micro-op, and that there is increased complexity when quantifying execution timing, which brings me to my next point. I'd love to see a run-time input segment. There are several good reasons to do this, I'll list a few. 1) You have an extensible bus, so why not demonstrate that? 2) Practical computers have external input, it's fairly essential. 3) Potential to tack-on interrupt, something practical systems need. Interesting things happen with this, more so than any other possible addition I think. Foremost is that it might be able to do real world work, making this a truly useful system at that point. It would be very reasonable to use it as a microcontroller, maybe a PID controller for something like a thermostat. This might even shut up people that like to insinuate that this project is useless. Yes, computational speed is great, but ACTUALLY the larger use of computers, their greatest value, is endlessly applying optimal solutions to real problems without making mistakes. These are typically "Embedded Systems" and it turns out that 9 out of 10 "computers" are made for this reason actually. The computer you have here is rapidly approaching that level of completeness, which I think is a critical milestone you owe yourself to cross.
@@Vlad-1986 They didn't "run" quake on an oscilloscope, they used the scope screen to output a representation of the output video - this is much different.
5 ปีที่แล้ว
Wow, very well explained! I just understood what "Turing completeness" means. I mean intuitively, really, it just all clicked in my head. Thank you!
Can't wait for the next episode, in fact I'm doing a cpu with transistors and I am following this tutorials as a guide but with a diferent alu with more options! Thanks for all!
+Huntracony No, not really. Gigabyte1337 is saying that the computer is safe (from the spectre attack) because it does not have the security features which spectre would break. Spectre can't be performed on this computer -> It's safe from spectre.
Oh my god, never been so early. Amazing work, I got heavily inspired by you with my classmate and we are building our 16b computer now. I would love to see you do more of this stuff, like try to do pipelined CPU (based on this one), or extending RAM (and thus capabilities) by choosing active bank (similar to 6502) or making UI to it with LCD display and HEX keyboard to program it in assembly. I really love your work and everything you do for us. Huge support from Slovakia and Czech republic ❤
I saw a YT video somewhere that showed how to program in brainfuck without hurting yourself. The trick is to use macros to create useful instructions first.
Nice video! I'll have to check out the rest of the series as I just jumped to this one. Also (not really that important) there is a model of computing that is like a turing machine except with a finite tape, and that's an LBA (Linear bounded automaton) -- but I never really hear anyone talk about those because they're literally the same as turing machines but with markers on the left and right side of the tape that stop you from moving beyond them, essentially making the tape finitely sized.
You guys seriously underestimate the power of macOS and its subsystems: *BSD, GNU tools, gcc, Ruby/Python/Samba/SSH preinstalled, a proper firewall, OpenGL, OpenCL etc, all open-source and industry standard, unlike proprietory Microsoft crap.
a multiplication with these instructions is possible, it all depends on dynamically overwriting the address following the JMP instruction. you can just jump into a series of "ADD" instructions. You just have to calculate the entry point correctly and overwrite the address (self-modifying code, writeable text-segment) following the JMP opcode. Here's a C-code that multiplies two 4 bit numbers. With the exact knowledge about your CPU's machine code (which I don't have, but I'm sure it's explained in your previous videos), you can easily generate the correct machine code. The "trick" is that you don't "break" after each "case", but just "fallthrough" to the next case statement. int imult4(int i,int j) { int n=0; switch(i) { case 15: n+=j; case 14: n+=j; case 13: n+=j; case 12: n+=j; case 11: n+=j; case 10: n+=j; case 9: n+=j; case 8: n+=j; case 7: n+=j; case 6: n+=j; case 5: n+=j; case 4: n+=j; case 3: n+=j; case 2: n+=j; case 1: n+=j; case 0: /* nothing */ ; } return n; } Unfortunately, I don't exactly know how the design of the machine code. I assume that LDI is a 'load immediate', STA / LDA / ADD operarte on memory addresses. An assembly code doing a multiplication without a MUL instruction could look like this: ; assumption: ; i and j are memory locations, ; they contain the numbers ; to be multiplied imult4: ; ; calculate entry point: ; ldi @_mul0 ; load immediate sub i ; calculate the offset backwards sub i ; accu = addr(mul0) - 2 x i sta _jmp+1 ; overwrite jmp addr with entry to add table ldi 0 ; sets accu to 0 _jmp: jmp 0x0000 ; 3 bytes; JMP, LSB, MSB (yes?) ; ; an add takes two bytes ; add j ; x 15 add j ; x 14 add j ; x 13 add j add j add j ; x 10 add j add j add j add j add j ; x 5 add j add j add j add j ; x 1 _mul0: out ; x 0 and output the accu hlt for an 8bit multiplication, the list of ADD instructions is times longer (255 instead of 15 ADDs) do you see any issues with this approach?
In Technicality if you can jump and you can move memory around your machine is Turing complete. Take a look at a project called the Movfuscator. In short the Movfuscator was a project to show that any program can be written with only a series of mov instructions (or in this computer's case loadA/StoreA instructions) With the caveat that there was but one jmp instruction to bring the execution back to the beginning. And if you at first aren't convinced, consider for a second that this computer already utilizes the Turring completeness of memory in using it's eeprom as a hex decoder. Memory can emulate any discrete logical element and so if you can move that memory around and have the right program loaded you can build a Turring machine with it. So no you don't technically need a conditional jump instruction but it would cut down on the execution time of general programs by an exponential amount.
Haven't watched the next video yet because I wanted to try my hand at the MUL program given a JC instruction. Got it in exactly 16 instructions, if you load the operands a part of the program (here I'm multiplying 3 by 5). This assumes 0-1 sets the carry bit. If it doesn't you'd just have to set the second operand to 256-5+1=251, and ADD 0 instead of SUB 0. Also, because ram is only 16b I cannibalize the first bytes of the program to store the operands/temp values meaning you can't rerun it without rewriting it. I also output the value as it's calculated to save a crucial instruction... 0: LDI 1 1: STA 0 //we've read instruction 0 by now 2: LDI 3 3: STA 1 //we've read instruction 1 by now 4: LDI 4 //multiply by 5, load 4 to get JC to work 5: STA 2 6: LDA 2 //Start of loop 7: SUB 0 8: JC 15 9: STA 2 10: LDA 1 11: ADD 1 12: OUT 13: STA 1 14: JMP 6 15: HLT Gonna watch the next video now to see how close I got :) (if you just initialize ram directly instead of what I've done you save 3 instructions, and you won't have to cannibalize your program, but where's the fun in that?
By watching this I feel almost like I'm back in my high school (it was a weird one that taught us stuff like truth tables, rll encoding, how to calculate carry for each bit without waiting for actual results from the register, and tons of other related things). I'm glad people are still playing with such internals as if they were some toys, not just a set of skills required by a big industry. Thank you.
Another great video! I have already learned so much by watching some of your 8-bit computer videos. You explain things in a way I can understand and the videos really reinforce the concepts I am learning in school (going for Computer Systems Engineering). I can't wait until I have learned enough to build my own computer. Looking forward to your next video!
2:30 you can, actually. But it uses a technique despised by most. (Drumroll) self-modifying code! Example: (Goto $9 if $7 is zero, $d if it is 1) --code-- 0 lda $7 ; a is value to be checked 1 add $7 ; a is 0 or 2 2 sta $7 3 add $7 ; a is 0 or 4 4 add $7 ; a is $9 or $d 5 sta 6 6 halt ; this will be modified by the previous instruction 7: check value 8: #$69 ; opcode for 'jmp 9' 9-c: some code d-f: some other code This is a conditional instruction. With it it is Turing complete and can do anything.
+Ben Eater I like your videos. It's just awesome. You explain the whole stuff really good. Please go on with computer videos. I'm really looking foreward vor the next Video. 👍
I love the little joke at the end .. the distance between 16 bytes and 16GB from infinity is about the same.
Well infinity - 16 = infinity, as does infinity - 65536 so it's funny and true :D
65536? 16000000 iirc
@ACAB\\ Mela BAKAta 2^15 is 32768, (2^15)*2 or 2^16 is 65536.
Not just about the same, it's exactly the same.
@@clusterfork not exactly the same. they are both infinitely far away from infinity, but 16 is simply not the same number as 65536
Mind blown! I watched the whole 8-bit computer video start to finish, and this video just completely changed my perspective on how I view computers.
Edward Wray His content is awesome. I learned a lot . Thanks man I really hope you get more subscribers 😊
I was just about to write the same thing: 'Mind Blown'
now imagine 16bit 286 how complex it was LOL
Well said! I’m about half way through building Ben’s computer and so look forward to making it Turing complete. Having a universal computer sitting on my desk, that is a simple version of the much more complex and powerful ones that surround us, will be a fantastic way to get into the deep aspects of computation. My mind has been blown by reading David Deutsch, the “father of quantum computation”, and watching his you tube videos (rec Closer to Truth interviews) and explanations of the connections between computation, physics/physical reality, and knowledge. I hope Ben Eater can one day make the “how to build an 8-bit quantum computer at home” series!
@@randalljam2000 don't stop now, continue onto the nand2tetris lecture and project series on TH-cam and their web! Using completely free tools, you learn how to build a 16-bit alu, cpu, 32k ram all with hardware simulators and writing chip logic code using, say, Notepad. Then you use a high level language, and learn how to write an assembler, virtual machine translator, and compiler while also coding up a simple game in the computer's own simple high level language, then run everything on your machine! You can also simulate your work on their vm emulator and cou emulator. You also write a small os for your computer and learn how to map ram words to the 256x512 pixel screen, including text and lines drawing. How to build strings. Memory allocation.
It's a great hands-on challenge! They teach you and hold your hand just enough where you are then left on your own to figure out how to finish the various projects. It's phenomenal
Been looking forward to this. This series completely opened my eyes to fundamentals of a CPU and EE. (Software engineer by trade, never really dabbled past building my own PCs). Thank you Ben for making this series
I did start building it :) It's moving forward slowly. I need a better multimeter/oscilloscope. Troubleshooting is very difficult with the one I have now
*dabb*
On a related note, the x86 mov instruction is Turing complete (yes, that single instruction).
For anyone who's interested, Chris Domas made a mov-only compiler called "Movfuscator"
He has a presentation on TH-cam somewhere
This! I just posted a link to his presentation up-thread, where Chris demonstrated his obfuscator, MOV-only gcc, and a floating-point-based 3D library exlusively using MOVs.
XOR also is Turing complete. :D
cgerman5 i
th-cam.com/video/R7EEoWg6Ekk/w-d-xo.html
Ben's computer's instruction set includes lda and sta along with a perfectly good jmp instruction to put at the end. So if it had more memory the concept could easily be implemented without the conditional jump.
Loved that last argument! ;D
I was thinking about how to add more memory to that thing.
Maybe adding a instruction that replaces the memory with another piece of memory from an eeprom. If i need to multiply, load the multiply function into memory, and when done, load where the program originally was.
It still has only 8 Bits to know which memory chunk to load. So maybe if we swap memory, always jump to the first address? Then we could have 255 different memory contents to load...
Michi Lo, wrong thread?
It happens from time to time...
the comment about the distance to infinity? Yeah I really liked that, I had to stare at the screen for a moment and just absorb how freeing that felt
You just invented paging and segmented memory.
Entscheidungsproblem is pronounced a bit like Entshydeongsproblehm or entshydoungsproblehm with r spelled harder than in english and emphasised on "shy", ou like in "you". 😊 It really means "decision problem". It's german, because Kurt Gödel was the first person writing a paper 1931 about the Entscheidungsproblem which was reformatted by Alan Turing 1936.
Btw as a computer scientist I love your videos. Best chilling learning and educational videos about this for everyone who wants to get in touch with these computer stuff even when you may be a beginner. Very well done 😊
Ben, I don't know how i would thank you. You completely changed my perception of computers(/computing) and your 8-bit computer series got me in electronics and in how does a computer work (the elemental stuff no one looks at nowdays). Stay AWESOME, cheers from Czech Republic!
PS: do you think it is possible to simulate a basic quantum computer with hardware? (just like your 8-bit computer)
It *is* possible to simulate a quantum computer with classical computing hardware. You just have to create enough separate components to represent all of the possible states that each qubit can assume simultaneously, as well as the needed quantum logic "gates" (several of which perform different operations than classical logic gates). The issue is that the classical simulation will have exponentially more components than the quantum computer (i.e., the number of classical components increases exponentially as you simulate more qubits).
What im wondering is : Would it be possible to build (program) a basic interpreter for the CPU of Ben Eater if we just use an lcd screen in stead of 7 segment displays ? And if it is possible, how hard would it be to program it ?
@alysdexia What do you mean ?
@alysdexia do you speak English?
@alysdexia wow crazy languages blend and evolve
I think I have a program that can multiply two numbers using only the instructions that we have right now (so no conditional jumps). The idea is that we can edit the RAM, which means we can both change the storage AND the actual program itself. This means that the data can actually change how the program behaves, and with this we should be able to make a multiplication program.
This is the program that I came up with:
0: LDA 3
1: ADD 17
2: STA 3
3: JMP 31
4: LDA 3
5: SUB 17
6: STA 3
7: LDA 18
8: ADD 16
9: STA 18
10: LDI 1
11: STA 19
12: LDA 17
13: SUB 19
14: STA 17
15: JMP 0
16: X
17: Y
18: 0
19: 0
.
.
.
28: LDA 18
29: OUT
30: HLT
31: JMP 28
I actually needed 5 address bits, or 32 memory locations to make this work, but I still only used the instructions we have available. Lines 0-6 are essentially what are doing the conditional jump that we need for multiplication. What they do is jump to line 31 if the number stored at address 17 is 0. With this it's relatively easy to do the rest of the program, which just adds the number in position 16 to the number in position 18, and stores the sum back in position 18. At the same time, it subtracts 1 from the number in position 17, and loops back to the beginning of the program where again, it checks if position 17 is 0 and if so, goes to the last line, line 31. Then, on line 31, I just jump back a bit so I can output the answer, which is in position 18.
So, basically, positions 16 and 17 are storing the two numbers we want to multiply, and position 18 is storing the result of that multiplication.
The way the conditional jump works is that I am editing the jump statement right before I reach it. In memory, the actual jump statement will be represented as ABCD11111, where ABCD is just the code for the jump statement. Ben used 0110, so I'll also use that. The other digits are 11111 because that is the position we are jumping to, 31. Right before the program reaches the jump statement, we add the number in position 17 (which I'll call Y from now on) to the number in position 3. But, the number in position 3 is the jump statement. If Y is anything other than 0, we will be editing this jump statement. The jump statement before was 011011111. If we add anything to this, we will be changing the first four digits of the statement. As long as we add a number that is less than 5 bits we will only change the 4th bit, which means the new code will be 0111. If we just make this code do nothing, then if we add anything other than 0 to the jump statement, the line will then do nothing. Only if we add 0, which means Y = 0, will we actually jump to line 31, as then, the jump statement won't be edited. So, this works as a conditional jump statement that jumps to the last line if Y = 0 (and Y is less than 5 bits).
I'm pretty sure that this means the computer is Turing complete, even without adding a conditional jump statement, but it's just much harder to reason this out, and it's probably easier to just add a conditional jump.
Chaitanya Ravuri pretty sure it’s people with brains like yours that got us to the moon with the AGC and then hacked it mid flight even with the bulk of its programming in read only memory. The limit here is the address size (more bits needed) and the ability to modify its own memory. Here’s a crazy thought. You could write a program that uses pseudo random number generation pulling out different predefined “gibberish” ram memory allocations that changes after every power on and use it to print out random ASCII characters to hear from the universe or create random music generation. 🤪
No need for all that if we can assume both inputs are positive Check this out
0: LDA 10
1: ADD 11
2: STR 10
3: OUT
4: LDA 7
5: SUB 9
6: STR 7
7: (Y-2)
8: JMP 0
9: 1
10: (RESULT)
11: (X)
Since 0000 is NOP and 1111 is HLT, this will add X to itself Y times and then halt when address 7 rolls over to the maximum.
I suppose we also have to assume Y-2
What you wrote is an impure program. People generally hate impure programs because of reasons. Good job nonetheless.
@@ajreukgjdi94 Nice one!
however I think these tricks only works because the op-code and argument is part of the same address.
I do not think you could do it like this if the OP-codes and arguments were in separate RAM addresses (which you would need if you wanted 8-bit addressing and more than 16 OP-codes (but with more than 16 opcodes you surely would add a Jump-carry or Jump-zero or similar, making this argument a moot point :-)
Indeed. You don't need a conditional jump. Your program is just data that you can rewrite, just like the data/program that is on the tape of a Turing machine.
The question is: what is the minimal instruction set for a computer like the one in this series to be Turing complete?
Lambda calculus doesn't have a conditional jump either.
On the other hand, a conditional jump is very useful, like many other "non essential" instructions, to allow for a smaller program and using less RAM for calculations.
JNZ, jump if not zero, is my vote for your conditional jump instruction. Was my number 1 decision maker when I was an 8051 assembly programmer.
I love this series, I could never quite “get” how you go from binary or on/off in the hardware of a computer to running software. This has been an amazing introduction into how that’s possible, deep enough into the subject to be informative, but light enough to not be overwhelming! Thank you!
I've never built my own computer, and probably will never find the time to do so.... But I almost feel like I've had the experience of doing so, just by watching your videos. Your presentation is clear and compelling, and has given me a much better understanding of what's actually going-on "down-deep" in a computer's circuitry... things that I've mostly already understood at an abstract level but always wanted to understand all the way down, in detail, to the physical level. Thank you for making these videos, and I definitely look forward to more.
“We wouldnt expect a computer to be able to compute the meaning of life”
*Asimov starts sweating nervously*
It's very simple. LDI 15; STA 15; ADD 15; STA 15; LDI 12; ADD 15; OUT; HLT.
Asimov/Shmasimov: Douglas Adams is rolling in his grave!
42
@@wolfgangsanyer3544 I used to think the computer should’ve said something earlier and was being needlessly picky with the “well you didn’t ask me a proper question” stuff, but now it makes total sense - leave your input lines floating and the computer will spit out some random answer!
Absolutely fantastic video! The question of what a computer is, on a fundamental level, was never thoroughly discussed in my Computer Architecture class. This video truly opened my eyes. Thank you so very much!
your classes are incredible, I am an economist trying to learn networks, eletronics, software engineering, etc. and found your videos, the best ones in internet, you are an excellent professor
Ben, you are very practical and inspiring. Know many of us appreciate you beyond the breadboard.
I teach programming to teens, and am always hunting for videos that are young learner friendly. This week I was searching for something regarding Church Turing thesis, and this is the most accessible one I have found. Bravo!
At 17:30 I half-expected him to say "Here's an infinitely long tape I ordered from ebay..."
ahahah
Tried that once. Delivery takes forever.
it is unbounded
you can just add more tape as you go
Great intuitive introduction to the fundamentals of assembly language. As always I am excited to see your hardware implementation to add the branch instruction. Amazing work Ben, you sir are a teacher and this series should be mandatory for any Elec/CompEng/CompSci student.
It is possible to use those instructions to create a multiply command with values below a certain bit length. You can permute the stored instructions to create a conditional jump. It's not necessarily with the limitations of this computers memory but with the same instruction set it is indeed possible. In this case I'll do multiplication with repeated addition.
You start with two arguments somewhere in memory, these are the numbers to be multiplied together.
The first of these arguments we'll call the iterator since it will count down each time we execute the loop. As well designate an address in memory we'll call the product because it will store the product of multiplication.
ADD the second argument and the product together and store the result over the product.
Load the iterator, subtract 1 and store it again then add (0110 ) to the result.
Store it to
The next command in the fetch is now equivalent to jump with the offset being the iterator.
All commands in memory addressable by a number the bit length of the largest argument you accept should be jumps, which will jump the program back to where we loaded from memory and then added the product and the second argument together. The only exception to this is the command immediately after which should jump out of that loop to the commands LDA , OUT, HLT
Our program execution then for lets say the values 4 and 3 should be.
loop1:
iterator = 3
product = 3
JMP offset 4
loop2:
iterator = 2
product = 6
JMP offset 3
loop3:
iterator = 1
product = 9
JMP offset 2
loop4:
iterator = 0
product = 12
JMP offset 1
Additionally such a command would be possible using ADD as a bitshift.
Adding a value to itself is the same as bitshifting it left once. So by bit shifting one argument by the bit length of the other, adding the two together then using that conditional jump trick you could create a situation where you have a unique jump address for every combination of arguments. Given this I do think the original instruction set is Turing complete given an arbitrarily long instruction bitlength and an arbitrarily large amount of memory in which to compute.
Thank you for pointing that out i had an idea of how to multiply but this has realy helped
Glad I wasn’t the only one thinking this is definitely possible!
The same can be done with subtraction to compute a division, given an arbitrarily amount of memory. There are probably better hardware to be had though to do multiplication and division.
Yep, many basic instructions are Turing complete when an arbitrary amount of memory is considered.
I absolutely love this series. It starts with the chemistry of transistors and ends up organically teaching assembly language and Turing-completeness in a way that's completely organic. Each step logically follows from the last.
Very smart of you not to build in any speculative execution - there will be no Spectre attacks on the Ben Eater Machine.
there is no instruction pipeline here, how can there be any speculative execution?
didnt understand shit. you guys have a slang of your own.
Didn't matter, as soon as you use C to program it, you will have buffer overflows, that's enough to be insecure forever.
@@tvoipapa I think you missed the joke
@@tvoipapa r/wooosh :-)
TL;DR Thank you so much for this video!
The long version:
I'm a CompSci student that is currently taking a class based on the book "Introduction to Computing Systems: from bits & gates to C/C++ & Beyond". In the book there is this computer, the LC-3 (little computer 3) that is essentially just a very simplified computer that could be built (but doesn't need to be) for the purpose of teaching. After our class moved past Finite State Machines and moved into Microarchitecture and Instruction sets (and the LC-3), I noticed a transition in the curriculum, from machines that used the concept of bits and gates to do very specific things, to machines that were supposedly able to do anything that a computer could do. Me being myself, this transition got into my brain and wouldn't let go. "What differentiates a machine built for a specific purpose and a machine built for a general purpose?" "What does it mean for a computer to be Turing Complete?". This video has clarified my questions in a way that I really cannot thank you enough for. So thank you! And thank you again. Keep up the great videos!
P.S. is there any more literature on continuing to answer the question of what it means to be computable? Based on what you said in the video, it seems almost like for the sake of practicality and usefulness, a definition (the Church-Turing Thesis) was created so that useful machines could be created without needing to definitively answer the question. But have there been any more attempts to make a definition that can be proven to be true? That sounds like interesting stuff to me
I was wondering how you were going to end this series. I thought it might be similar to just saying "tadah! I'm done!'. But no. You lit the metaphorical dynamite and said: "here, hold this, it'll blow your mind"
Had written a comment out asking you to make a video explaining how to allow your machine to perform conditional jumps and how to program those individual conditions. Needless to say I can't wait for the next video.
3:10 about minimal instruction sets: Christopher Domas famously wrote a code obfuscator and even a complete gcc plugin. He showed that Intel’s MOV instruction is Turing complete.
Presentation is here: th-cam.com/video/R7EEoWg6Ekk/w-d-xo.html
(At 39:13 in that talk Christopher even shows a 3D-floating-point framework driven by MOV instructions exclusively - an incredible feat of concept proving, in my opinion!)
I wrote a multiplication and division program in my assembly class. The language we were using(SRC) didn't even have those operations built in.
Cool to learn about the correlation between Landa calculus and turring machines. Another great video!
Ben: Hey, I built a computer
Alan: *bUt CAn iT "If" ????*
I'm so happy to see Ben come back with his 8-bit! The comments of the last video blew up with requests for an if-statement, that last crucial bit of necessary CPU architecture, and it's finally coming! My christmas came late it seems!
yes, yes.but turing actually ment: will it run crysis?
Will it run ms dos?
Technically? yes.
@@Mostlyharmless1985 shaddup, bitch. How would you know? 😂
Pumpkin because That’s literally what Turing complete means. If it is a universal computer, it can be made to run any arbitrary code.
@@Mostlyharmless1985 at 1 frame per 10million years. 😂
Super video! I applauded for $5.00 👏👏
Wow I just watched all of these in 2 or 3 days. Damn. Nice series.
At first glance, I thought "drat, no actual hardware got built" but then I realized that this is exactly the kind of practical theory that is desperately needed in the world. Great explanation of computability!
MY GOD MAN! You just made me realize the Architecture im working on... it can expand its own instruction set. Its like a Virtualized Wetware... but not virtual in the common way of thinking. Meaning no sacrifice in speed to do this.... i mean it by hardware design doesn't have an ALU (at all) in the physical components level, its all done by code. Maybe just maybe one way to think of my architecture is one large mass of control unit s, but still not exactly. Its not Harvard or Von Newman.
I like this series of videos. It takes me back to my college days when we built a 4-bit computer on a breadboard and studied computational complexity, NP Complete, and other such esoteric computer science topics. Even though I know this material (but it's been 25 years ago) I am enjoying your presentation style and the review.
You can literally translate Entscheidungsproblem to "decission problem" and as a German I wasn't even aware that "Entscheidungsproblem" is used without translation in the English papers :D
Entscheidunsproblem is not a word or phrase, but a name. The problem was obviously proposed by a German scientist, that's why it holds a German name. Similarly we call 'sudoku' by its japanese name, instead of saying something like 'number place'.
As I saw in many comments about the SAP(-ish) video series, and probably already commented myself, I'm astonished by the clarity and simplicity of your explanations.. I'm absolutely glad that you started this series, Ben, I managed to simulate the computer in Logisim and I wrote some scripts to help me with microcode and programs assembly, all thanks to your wonderful videos. Keep it up!
16 bytes is just about as far from infinity as 16 GB, so it's just as good. Loved it!
Ben, I built my 8 bit computer starting with your step by step guide. I used another rom to be a "conditional controller" which takes instructions from the main control rom along with flags (zero via or gates from ALU, carry, and subtract) as inputs for address lines, and wired the program counter increment, and parellel enable to the output/data lines of this conditional control rom.
It works very well and gives me the ability to easily jump on less than, less than or equal, equal, greater than, greater than or equal, carry, not carry, zero, and not zero.
The next step I suppose would be fpga, which I am now working to implement for my ALU (since building a real ALU for xor, not, shift, etc. Would require a lot of wires and breadboards).
Woah, where did you find an infinite piece of paper like that? I want one.
:)
I ordered one at christmas, i'll see what answer i get from santa.
pcred, You're in luck! I happen to manufacture infinitely long strips of paper ! They're not cheap though.
Being infinitely long, we manufacture them to also be infinitely narrow - we find this saves on shipping costs.
@@garychap8384 you sell air? cool!
Moebius Inc. sells it.
All you need is a self-replenishing forest providing feedstock to an automated solar-powered paper mill that puts out a continuous strip of paper. Access time is a little slow, though.
Just found another easter egg in the thumbnail, it says: "How to compute everything" and the display shows the number 42. Very cool!
Interestingly enough, Church was Turing's PhD advisor.
Ben, you are just a wonderful gift to many of us. I love your videos and Thank you so much for making them. Your content is rich yet easy to understand; the mark of an excellent teacher.
Say we have two unsigned numbers in the memory cells 14 and 15 which we want to multiply. This program will then print the result of the multiplication (to the out register).
This assumes JC jumps if and only if the result of the SUB before it was
Hope this works
I tested your solution, it works. I was a little confused because of the `STR` instruction though, until I realized that it is the instruction for storing the value (he called it `STA`).
You don't have to clear A, since you load something into it anyway. And you can remove instructions 4 and 5 (STR 15 and LDA 13) and insert them before the conditional jump. This way, you don't have to load the value in instruction 9 again. This makes you use one byte less memory, but there will be one additional unneeded step (STR 15) executed before the result is shown.
Yes, instruction 0 is absolutely unnessecary, I derped there.
I did not pull the STR 15 and LDA 13 in front of the conditional jump since the conditional jump maybe only works if it is directly behind a SUB. (depending on how Ben will wire it up). Otherwise, that would surely be better.
Have you actually build his CPU, or simulated it in logisim? How did you test it?
JoJoModding I built a little simulation in python. And the carry bit is only changed by the `ADD` and `SUB` instructions, as you can see in the assembly he wrote for displaying Fibonacci numbers (in an older video).
CodenameLambda is it set when the result is
Thanks!
Ben, have you considered venturing into FPGA? I'm learning them now, and it would be awesome if there were some Ben Eater style tutorials out there ;)
I'm learning assembly atm and the first thing that comes to mind is a loop (jmp)! it's crazy that to just make a computer turing complete, you just had to add an if statement. i tried learning about turning machines when i was younger and i think i watched this video when it was uploaded 4 years ago but i now i got it and it's this simple!
"We don't really expect a computer to be able to computer the meaning of life" why shall we? We already know it's 42
I want to say thank you for such a complete and informative history and explanation. This was eye opening and gave a lot of purpose to words and phrases I use regularly.
3:36, ahem, the answer to life the universe and everything has already been computed by Deep Thought and that answer is.... 42
One of the things that bugs me when people talk about the early years of computers is they always forget about Tommy Flowers who built Colossus, the world's first programmable electronic computer... Everyone just bangs on about Turing...
Colossus was a mighty work of wizardry and deserves better renown than wartime secrecy allowed it. But OTOH, for many years the same was true of the bombe. In the end, both were purpose-built cryptanalytic devices, not general-purpose computers (I do wish _The Imitation Game_ had name-checked Turing machines a little more cautiously), and both were programmed mechanically even if Colossus did _operate_ electronically. Mechanical programming kind of limits the payoff for electronic operation.
IIRC someone eventually proved that you could build a Turing-complete computer out of ten Colossi. I'm not confident you could do that with bombes. So there's that. And cryptanalysis always was a first-rate example of why you'd want to systematically burn through a lot of grunt-work with some sort of a machine that could repeat sequences of operations and switch from one sequence to another. I used to teach a class about classical cryptography and I often found myself saying, "And *that's* another reason it was cryptanalysts that invented computers."
Got the notification in Biology. Had to put it off until I got home because that class requires my full attention. Love your videos man. Keep putting these out!
First of all great work!
I'm a assembly programmer my self and to see that home made hardware can run opcodes, is really cool thing to see!
I'm more to the x86 syntax, but can surely understand 6502 like syntax as well.
Are you going someday add I/O portability to your'e computer?
And for last question: Will you give it a name?
Your content is great. Your style of editing/ narration makes the subject matter extremely understandable. I wish I had the motivation to tactical a project like this.
grade A German word butchering :)
I bet there's a German word for that
erstklassige Wortzerstückelung
true :D *and @Hugohopser: I would say the comment above seems better, and if you were to translate it word-for-word I would go for "Schlachtung" /"Schlachterei" instead of "Metzgerei" :)
Of course "Gemetzel" would be more natural to say. I also like "Massaker" in this context.
allrighty ^^
I watched some of your earlier videos (but not the ones building a computer) and I'm looking forward to it! This is really great material.
I made the multiply program using the jc the other computer has, which, if I got it correct, jumps when the carry bit on the ALU is set.
This works due to the possibly wrong observation that subtracting one from every positive number except 0 will trigger the carry bit.
I unfortunately do not have the money or time to spend at the moment to make a breadboard computer, so I have no easy way to test this.
But here's the (revised) program:
0: JMP 4
1: LDA d // start of loop, add first input to the result
2: ADD e
3: STA d
4: LDA f // subtract one from the second input
5: SUB c
6: STA f
7: JC 1 // jump if second input was greater than 0
8: LDA d // load the result and output
9: OUT
a: HLT
b:
c: 1 // needed for quickly subtracting one
d: 0 // the result, needs to be initialized to 0
e: // first input
f: // second input
My first incorrect attempt:
0: LDA e // move e to d, essentially preserving the input
1: STA d
2: LDA e // start of loop, add first input to stored number
3: ADD d
4: STA e
5: LDA f // subtract one from the second input
6: SUB c
7: STA f
8: JC 2 // jump if second input was greater than 0
9: SUB d // I think this is needed to correct an off by one error
a: OUT // output
b: HLT // and halt to make sure the data isn't interpreted as code or whatever
c: 1 // needed for quickly subtracting one
d: // reserved for use by program
e: // first input
f: // second input
I suggest you to check Logisim: it's a digital logic simulator. From basic logic gates, to registers and memory, you can create pretty much any circuit. It's freeware, too! I've almost finished to simulate the computer discussed in the series by Ben
Russell Teapot, Thanks, I'll check it out.
I tried simulating an ALU built out of logic gates years ago with Logisim. It really didn't like it, eventually it just stopped working properly.
It may have improved by now, though.
before you SUB d to correct your 1-off error, you should LDA e, so that you actually subtract from your result and not from -1.
JoJoModding, You're right. Not only that, but I actually had two off by one errors. The first one was initializing the result to the first input rather than 0, the second one was hard to explain, because off by one errors suck, but was fixed by the jump on line 0. So it's fixed, with one byte to spare this time.
Yees, really hyped that you are going to cover computability in the future videos! Last year finished course on Theoretical Computer Science, which made me really interested in what computers are (in)capable off. Now starting to delve into Turing's papers with help off Charles Petzold's book "Annotated Turing". Thank you for your work!
While it isn't do able in practice when it come to your machine there is a way of doing branching statements. As reference I point to Movfuscator ( github.com/xoreaxeaxeax/movfuscator ) which is a turning complete compiler based solely on the mov instruction. So if your computer can emulate a mov instruction(which I believe it can) then your computer is in fact turning complete.
I think that this computer can't simulate a x86 mov instuction because is way more than a simple memory copy
No the computer would need indirect move instructions for this to work. Also I don't think you could do anything with this approach and only 16 memory locations.
Nope it's not. the X86 MOV instruction can be used as and equivalent to conditional jump if your creative. The instructions on this computer can't emulate a conditional jump in it's current state.
Reon Fourie, yes if you have a mov that can write directly on the PC(thing easily doable in this computer) then you probably created a Turing machine
Yes, you need to store a register at the adress hold in another register, because that's the way comparsions are made by the mobfuscator.
Splendid take on Turing completeness. This series of videos is a gem. I would recommend this as a starting point for anyone interested in computer architecture. Kudos!
Philosophically speaking, "computer" is in the eye of the beholder. John Searle points out that a falling pencil is a computer. It computes the time it takes to cross a distance in constant acceleration. If you use it for that purpose, you use it as a computer. So, "computer" is anything you know how to use to acquire answers to your numerical questions. Of course, a Turing machine can be used in limitless ways to compute limitless answers. (Well... the pencil too can be used to compute limitless things, by dropping it from different heights. But it's not obvious how to use it to compute the length of a vector. On the other hand, maybe we are not smart enough at using the pencil. Maybe, if we are smart enough, we can use the pencil for that purpose, e.g. by writing the norm down and doing all the calculations by hand, using some paper and the graphite of the pencil to keep track of all the numbers. So, a pencil is probably a general computer after all, provided you have enough paper and enough time to write things out.)
I love how Ben explains things so easily, back to the roots of meaning
I like what you got
Good job!
One of the best videos I‘ve ever seen. Just fantastic!
To compute the meaning of life, you multiply six by seven
22 and 11
She's.......
Actually it’s What you get if you multiply six by nine?
@@Keneo1 Are you joking?
@@Handlessuck1 well yes, everything about thhgttg is jokes
What do you get when you multiply six by nine?" is the ultimate question of life, the universe and everything. As explained in Douglas Adams' book, The Hitchhiker's Guide to the Galaxy , the answer to this question explains the purpose of life…. incidentally, the answer is 42.
This explains why a lot of things are very weird in our universe
@@Keneo1 I was talking about your conclusion for six by nine
I would suggest looking into a language called brainf**k, which is literally designed specifically to run on a turing machine. It only has 8 instructions, is turing complete, and doesn't even have a jump function. It has < and >, which mean to look at the next or previous memory positions (Same as checking specific memory address), + and - which add or subtract 1 from the current memory position (equivalent to 'i = i + 1' and 'i = i - 1'), '.' which is the same as your OUT command, ',' which takes user input (Not actually in your computer but not actually needed), and the final two command "[" and "]", which are the only logic of the language, meaning "Loop whatever is between these two symbols until the current memory position we are looking at does not equal zero". That is turing complete. Infact, there are compilers from C# or java to brainf**k, and people have written everything all the way to a functional version of "The game of life" on it.
For example: "[-]" means to subtract 1 from the current memory position until it equals zero. AKA, set the current memory position to 0.
E.g. 2. "[-]++++>[-]++++++++" means to set the current memory position to the number 4, then move over to the next memory position and set it to 8.
Hello ! Im from Poland, sorry for my English.
I think Your computer is very interesting ( My hobby is AVR ), but i have one qestion to You:
If You have memory adressable on 4 bits, instructions are easy: KKKKDDDD K-command D-mem. adress,
but what if You have memory adressable on 8 bits ? one command is two bytes? Can you answer this qestion?
That's exactly right. Or it could be even longer than 2 bytes. The PC has opcodes up to 15 bytes long.
Yes, that is exactly what I am building at the moment, an 8-bit computer with 6 bit opcodes and 8-bit address space.
Most commands is still only one byte long, but all the ram IO and jump instructions are 2 bytes long, one for the op-code and one for the address.
By using 6-bit opcodes i also have the 'luxury' of having 2 user operated inputs, 4 registers, special inputs (0 and one) and in addition to add and subtract I have also included logical AND as well as NOT (that is a true ALU :-) and increment and decrement value in A register (by loading the special value 1 in B register and doing sum or subtract).
I have also used 74LS194's for the A and B registers as they allow to do bitshifts.
I am now working on giving it a persistent storage (an 28C16A EEPROM like the ones Ben and I use for the OP-code decoding).
I'm also planning to do a 4 byte hardware stack (using 9 74LS194's) for subroutine calls.
man its cool you test your computer for turing hardness, barely anyone does that anymore in the hardware scene. I think its the entire dogma of if A is turing complete then another product that uses A -> A+B there is no need to test for turing completeness since A being turing complete suffices to say B is also turing complete. Tensorflow is a great example of this.....
A computer can compute the meaning of life. It's 42
So long, and thanks for all the fish.
Love you ben!!!, there's a paper out there explaining that the mov instruction its by itself turing complete. Keep doing your thing man, you're an inspiration
Oh, that's why Chrome consumes so much memory, because it was designed to be ran on a Turing machine!
Ben, this is some of the most illuminating video on system design I've ever seen. Please don't stop, it gets me so inspired to understand the machines I use and get paid for to use everyday.
3:36 "We don't really expect a computer to be able to compute the meaning of life? right?"
said the man who built a computer that outputs 42 every time
So... I FINALLY got done binge watching this series. What a ride! Now, without more to watch, I have nothing to do but give unsolicited commentary. I actually have a much larger wall of things to say, but I'm going to cut it down to just the two things I'm most focused on right now.
I know there is limited merit in talking about optimization, but a thought occurred to me early on that I think adds substantial value. With almost no effort, you could replace the current automatic reset of the microcode clock with that unused bit of microcode EEPROM. Do this and you gain two simultaneous advantages. First, you open up the fullest possible stride (number of micro-ops) for instructions. Second, any instruction will terminate itself as soon as it's able. This last part gives a (probably massive) speed improvement because the vast majority of instructions don't use most of the stride so the next instruction can begin immediately instead of waiting. Ultimately this produces a very clean dynamic instruction timing. The only trade-offs are every instruction must terminate with an "end" micro-op, and that there is increased complexity when quantifying execution timing, which brings me to my next point.
I'd love to see a run-time input segment. There are several good reasons to do this, I'll list a few. 1) You have an extensible bus, so why not demonstrate that? 2) Practical computers have external input, it's fairly essential. 3) Potential to tack-on interrupt, something practical systems need. Interesting things happen with this, more so than any other possible addition I think. Foremost is that it might be able to do real world work, making this a truly useful system at that point. It would be very reasonable to use it as a microcontroller, maybe a PID controller for something like a thermostat. This might even shut up people that like to insinuate that this project is useless. Yes, computational speed is great, but ACTUALLY the larger use of computers, their greatest value, is endlessly applying optimal solutions to real problems without making mistakes. These are typically "Embedded Systems" and it turns out that 9 out of 10 "computers" are made for this reason actually. The computer you have here is rapidly approaching that level of completeness, which I think is a critical milestone you owe yourself to cross.
It's Turing complete when you run Doom on it.
And you have a video card now, so...
If they managed to run Quake in an oscilloscope, Doom could have even lower system requeriments
@@Vlad-1986 They didn't "run" quake on an oscilloscope, they used the scope screen to output a representation of the output video - this is much different.
Wow, very well explained! I just understood what "Turing completeness" means. I mean intuitively, really, it just all clicked in my head. Thank you!
What a heartless creature gave this video a thumbs down.
A computer?
A turing complete machine?
WOW! Its great to see Ben's video after such long time. now we will be able to make it Turing complete with the JMP instruction. awesome stuff!
It's pronounced as entshydoongsproblem
Can't wait for the next episode, in fact I'm doing a cpu with transistors and I am following this tutorials as a guide but with a diferent alu with more options! Thanks for all!
is your computer susceptible to the spectre attack? xd
Gigabyte1337 shush :x
Just add 100-million transistors for speculative execution and yes!
+Gigabyte1337
It also doesn't have a cache or out-of-order execution, so.... :D
Gigabyte1337, So basically what you're saying is it's safe because it has no security.
+Huntracony
No, not really. Gigabyte1337 is saying that the computer is safe (from the spectre attack) because it does not have the security features which spectre would break. Spectre can't be performed on this computer -> It's safe from spectre.
Oh my god, never been so early. Amazing work, I got heavily inspired by you with my classmate and we are building our 16b computer now. I would love to see you do more of this stuff, like try to do pipelined CPU (based on this one), or extending RAM (and thus capabilities) by choosing active bank (similar to 6502) or making UI to it with LCD display and HEX keyboard to program it in assembly. I really love your work and everything you do for us. Huge support from Slovakia and Czech republic ❤
The meaning of life could be computed and I bet it is around 42.
Ben keep improveing this computer, i am still a student, your videos help me a lot to understand computer hardware. I love this series...
make a computer that uses Brainfuck instructions
Hunar Omar I got a headache just from programming the fibunacci sequence
Hunar Omar well basically brainfuck is turing machine language
So any turing machine will do
I saw a YT video somewhere that showed how to program in brainfuck without hurting yourself. The trick is to use macros to create useful instructions first.
Nice video! I'll have to check out the rest of the series as I just jumped to this one. Also (not really that important) there is a model of computing that is like a turing machine except with a finite tape, and that's an LBA (Linear bounded automaton) -- but I never really hear anyone talk about those because they're literally the same as turing machines but with markers on the left and right side of the tape that stop you from moving beyond them, essentially making the tape finitely sized.
If it can download porn, its a computer. End of story.
A TV and a VHS tape can download porn, but isn't a computer.
Sorry, but by the definition of computer, if it can download porn, it is a computer.
Oh, I see. OK.
Rozae Pareza most TVs have have a cpu though.
fakiirification What?
I just found this channel and I'm actually annoyed that I hadn't found it earlier. I love these videos.
A Mac?
The breadboard computer really is better...
You deserve to be up
underrated commnent
sigh there's always someone
You guys seriously underestimate the power of macOS and its subsystems: *BSD, GNU tools, gcc, Ruby/Python/Samba/SSH preinstalled, a proper firewall, OpenGL, OpenCL etc, all open-source and industry standard, unlike proprietory Microsoft crap.
Proprietory Microsoft crap. That's an Apple product.
If you want open-source, just use Linux; it, at least, is *actually* built on open-source.
a multiplication with these instructions is possible, it all depends on dynamically overwriting the address following the JMP instruction. you can just jump into a series of "ADD" instructions. You just have to calculate the entry point correctly and overwrite the address (self-modifying code, writeable text-segment) following the JMP opcode. Here's a C-code that multiplies two 4 bit numbers. With the exact knowledge about your CPU's machine code (which I don't have, but I'm sure it's explained in your previous videos), you can easily generate the correct machine code. The "trick" is that you don't "break" after each "case", but just "fallthrough" to the next case statement.
int imult4(int i,int j) {
int n=0;
switch(i) {
case 15: n+=j;
case 14: n+=j;
case 13: n+=j;
case 12: n+=j;
case 11: n+=j;
case 10: n+=j;
case 9: n+=j;
case 8: n+=j;
case 7: n+=j;
case 6: n+=j;
case 5: n+=j;
case 4: n+=j;
case 3: n+=j;
case 2: n+=j;
case 1: n+=j;
case 0: /* nothing */ ;
}
return n;
}
Unfortunately, I don't exactly know how the design of the machine code. I assume that LDI is a 'load immediate', STA / LDA / ADD operarte on memory addresses. An assembly code doing a multiplication without a MUL instruction could look like this:
; assumption:
; i and j are memory locations,
; they contain the numbers
; to be multiplied
imult4:
;
; calculate entry point:
;
ldi @_mul0 ; load immediate
sub i ; calculate the offset backwards
sub i ; accu = addr(mul0) - 2 x i
sta _jmp+1 ; overwrite jmp addr with entry to add table
ldi 0 ; sets accu to 0
_jmp: jmp 0x0000 ; 3 bytes; JMP, LSB, MSB (yes?)
;
; an add takes two bytes
;
add j ; x 15
add j ; x 14
add j ; x 13
add j
add j
add j ; x 10
add j
add j
add j
add j
add j ; x 5
add j
add j
add j
add j ; x 1
_mul0: out ; x 0 and output the accu
hlt
for an 8bit multiplication, the list of ADD instructions is times longer (255 instead of 15 ADDs)
do you see any issues with this approach?
In Technicality if you can jump and you can move memory around your machine is Turing complete. Take a look at a project called the Movfuscator.
In short the Movfuscator was a project to show that any program can be written with only a series of mov instructions (or in this computer's case loadA/StoreA instructions) With the caveat that there was but one jmp instruction to bring the execution back to the beginning.
And if you at first aren't convinced, consider for a second that this computer already utilizes the Turring completeness of memory in using it's eeprom as a hex decoder. Memory can emulate any discrete logical element and so if you can move that memory around and have the right program loaded you can build a Turring machine with it. So no you don't technically need a conditional jump instruction but it would cut down on the execution time of general programs by an exponential amount.
Haven't watched the next video yet because I wanted to try my hand at the MUL program given a JC instruction. Got it in exactly 16 instructions, if you load the operands a part of the program (here I'm multiplying 3 by 5). This assumes 0-1 sets the carry bit. If it doesn't you'd just have to set the second operand to 256-5+1=251, and ADD 0 instead of SUB 0. Also, because ram is only 16b I cannibalize the first bytes of the program to store the operands/temp values meaning you can't rerun it without rewriting it. I also output the value as it's calculated to save a crucial instruction...
0: LDI 1
1: STA 0 //we've read instruction 0 by now
2: LDI 3
3: STA 1 //we've read instruction 1 by now
4: LDI 4 //multiply by 5, load 4 to get JC to work
5: STA 2
6: LDA 2 //Start of loop
7: SUB 0
8: JC 15
9: STA 2
10: LDA 1
11: ADD 1
12: OUT
13: STA 1
14: JMP 6
15: HLT
Gonna watch the next video now to see how close I got :) (if you just initialize ram directly instead of what I've done you save 3 instructions, and you won't have to cannibalize your program, but where's the fun in that?
By watching this I feel almost like I'm back in my high school (it was a weird one that taught us stuff like truth tables, rll encoding, how to calculate carry for each bit without waiting for actual results from the register, and tons of other related things). I'm glad people are still playing with such internals as if they were some toys, not just a set of skills required by a big industry. Thank you.
I really liked how you integrated the math theory with the hardware and instruction set design here. Not many tutors could have done that so elegantly
WoW!!!!
I never ever in my whole life heard more good explanation about conditional jamp (branch)!
Another great video! I have already learned so much by watching some of your 8-bit computer videos. You explain things in a way I can understand and the videos really reinforce the concepts I am learning in school (going for Computer Systems Engineering). I can't wait until I have learned enough to build my own computer. Looking forward to your next video!
Hi Ben, Excellent explanation, thanks a lot for your time and efforts in making these wonderful videos.
Thanks for this video series. It’s been very enlightening. Can’t wait for the next video. Very inspiring!
You make it so clear, and you go from some hardware to explain fundamental concept: thank you
2:30 you can, actually. But it uses a technique despised by most. (Drumroll) self-modifying code!
Example:
(Goto $9 if $7 is zero, $d if it is 1)
--code--
0 lda $7 ; a is value to be checked
1 add $7 ; a is 0 or 2
2 sta $7
3 add $7 ; a is 0 or 4
4 add $7 ; a is $9 or $d
5 sta 6
6 halt ; this will be modified by the previous instruction
7: check value
8: #$69 ; opcode for 'jmp 9'
9-c: some code
d-f: some other code
This is a conditional instruction. With it it is Turing complete and can do anything.
This video was the one that made it make sense, you're invaluable to the CS community and so many students like myself
+Ben Eater I like your videos. It's just awesome. You explain the whole stuff really good. Please go on with computer videos. I'm really looking foreward vor the next Video. 👍