ChatGPT is great at sounding confident with terrible answers. A while ago I wrote a small program to test whether it was currently British Summer Time (UTC+1) based only on the current UTC datetime. A friend of mine asked if I was worried about ChatGPT taking my job, so as a test we asked it to come up with the same program. It made every conceivable mistake. It assumed the last day of each month was always a Sunday. It assumed that BST starts at midnight. It assumed all months were 30 days long. I'm not worried about ChatGPT replacing programmers. I'm worried for the companies that think it can and try it, though.
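For anyone curious what the program actually has to get right, here is a minimal sketch in Python of the UK rule: BST runs from 01:00 UTC on the last Sunday of March to 01:00 UTC on the last Sunday of October. The function names are mine, not from the comment above.

```python
from datetime import datetime, timedelta

def last_sunday(year, month):
    # Start at the last day of the month and walk back to a Sunday.
    # Note: the last day is NOT always a Sunday, and months vary in length.
    if month == 12:
        d = datetime(year, 12, 31)
    else:
        d = datetime(year, month + 1, 1) - timedelta(days=1)
    while d.weekday() != 6:  # Monday = 0 ... Sunday = 6
        d -= timedelta(days=1)
    return d

def is_bst(utc):
    # BST starts at 01:00 UTC (not midnight!) on the last Sunday of March
    # and ends at 01:00 UTC on the last Sunday of October.
    start = last_sunday(utc.year, 3).replace(hour=1)
    end = last_sunday(utc.year, 10).replace(hour=1)
    return start <= utc < end

print(is_bst(datetime(2023, 7, 15)))  # True
print(is_bst(datetime(2023, 1, 15)))  # False
```

Every one of the mistakes listed above (last-day-is-Sunday, midnight start, 30-day months) would break one of these few lines.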
AI really is the equivalent of the tube-capping machine from the Tim Burton _Charlie and the Chocolate Factory_ movie that's so unreliable they needed to hire the guy it replaced back as a full-time employee to keep it working.
"I'm not worried about ChatGPT replacing programmers. I'm worried for the companies that think it can, and try it though." --> THIS !!! To show just how much today's companies think chatgpt is the next best thing, Google search... "Levidow, Levidow & Oberman caught using chatgpt"... NYC lawyers who used ChatGPT for research in aviation injury claim were fined $5,000 in legal first after the judge found the AI chatbot cited legal cases that were FAKE and contained 'gibberish'. I mean FFS, what's next.
For all the new students of 6502 assembly/ML, keep in mind that you need to save all modified memory when writing a program in a monitor. When Robin shows how to initialize $C100 with "hello world", that was done manually -- your program does not initialize that memory. So when you save your assembly program, you need to save the memory it occupies AND the memory that it requires. Something like: S "my program",8,C000,C10B will do that. Keep in mind that this saves the program and all the junk between the program and data memory that isn't used. A symbolic assembler like Turbo Macro Pro will compact this for you since it handles the allocation of memory for text -- another reason to use TMP!
Good point, I like to initialize the entire memory space I'm using for my code with 0 so when looking through memory with a monitor, the garbage doesn't produce weird mnemonics. You could also manually move the "Hello world" data right at the end of the main code to prevent garbage and make it more compact when saving it to disk.
I learned ML by hand assembling - using the following format:

1000 DATA 000,000,000 : REM assem code
1001 DATA 000,000,000 : REM assem code

Then using a READ/POKE loop to put it in memory. Obviously, after I put actual code in the REM, I had to hand assemble (using the Complete Commodore Inner Space Anthology - with decimal opcodes) and replace the 000s with the code. The only problem was learning how branches worked and calculating JMPs. This technique worked for tiny fragments of code. I got BASSEM, a BASIC extension Assembler from Compute!'s Gazette - and never looked back!
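The READ/POKE pattern described above can be sketched in Python, with a bytearray standing in for the C64's RAM. The three bytes here are my own trivial example (decimal 169, 1, 96 = LDA #$01 / RTS), not the commenter's actual code.

```python
# Hand-assembled bytes, in decimal, as they'd appear in DATA statements.
# 169, 1 = LDA #$01 ; 96 = RTS (a trivial example routine)
data = [169, 1, 96]

ram = bytearray(65536)   # the C64's 64K address space
start = 49152            # $C000, a common spot to POKE machine code into

# The equivalent of BASIC's FOR I = 0 TO n : READ B : POKE start + I, B : NEXT
for offset, byte in enumerate(data):
    ram[start + offset] = byte

print(ram[49152:49155].hex())  # a90160
```

After the loop, a SYS 49152 on a real C64 would execute those bytes; hand assembly just means working out the decimal opcodes yourself.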
Thank you for the video. I wish I learned 6502 assembly language programming back in 1983. In those days, I mainly played games and did some BASIC programming.
This takes me back to my childhood (early teens)! I started with a machine code monitor and would write my assembly on paper, then hand assemble it to machine code to punch in to the monitor (slightly less tedious than writing a BASIC loader to POKE it into memory, but not much :)) Some short time later I got an action replay cartridge which had a built in simple assembler (very much like SuperMON). A little later still I got the luxury of a 2-pass assembler which allowed me to use labels. Glad to see different approaches to the same goal being shown! I'd have gone with the speedy "write it direct to screen memory" route, though it's worth noting that $0400 is only the address of screen memory if the VIC-II is looking at bank 0. Thanks for the memories!
Those Transactor publications (mag and compendiums) are the best references ever. When I was living in Ontario in the mid-80's those were still available on the shelf. Back then I didn't understand about 60% of what I was reading, but that is why I am watching you now.
I now regret not bringing this up in this video! For anyone curious, watch my video: "Jim Butterfield's Machine Language for the Commodore 64, 128.. Revised": th-cam.com/video/OirYXXnCVlo/w-d-xo.html
@@8_Bit spending real time with Jim’s Machine Language book made me better at my day job. I don’t do “coulda shoulda” but I wonder what going through that book in the 80s would have done for my early career. I didn’t find it until I was in my 40s.
My gentle introduction to assembly language programming on the C64 was a Basic assembler called Bassem that was published in Compute!'s Gazette. You could just type mnemonic instructions into a Basic program, using all the features of Basic such as variables and loops to generate the machine code. I really think you should cover this at some point.
Or the modern C64 BASIC Vision, which really looks interesting with all its niceties added, including using assembler in BASIC and full usage of the REU (a minimum of 128K is required). Actually, I'm surprised that Robin hasn't shown anything about it at all.
The last time I did ML coding was back in the 80's, but this video brought it all back ... guess my memory is still alright lol. I used a commercial assembler called Merlin Wizard to write some demos in the good ol' days. :)
I really appreciate the Turbo Macro Pro Hello World program walkthrough. Really clear and concise, like all your videos, top-tier and very well put-together. Thank-you so much for doing this, your videos always make my day!
I got chatGPT to write me complex Polymorphic code in C++ after convincing it that I was only going to use it for good. However, the more I asked it to iterate, the more complex the code got, the less sense it made. I gave up when it invented its own library.
I find it's much better to start new sessions regularly and show it the code you have at the current point. When you keep one session going for followup discussion the token buffer overflows which is sort of equivalent to having a bad short term memory. It's a separate issue from hallucinating libraries and functions that don't exist.
I learned 6502 machine language on the Apple ][e back in the 80s. I had to enter the opcodes, since I had no assembler at all. Then I could disassemble what I wrote and see if I made any mistakes. I don't think I even knew the indexed accumulator commands, so I'd use self-modifying code instead. I was just a kid, so I didn't really know this was bad. :) All of my knowledge came from examples in Nibble magazine, and I was really just winging it. But the other kids at school were still blown away by my skills.
Self-modifying code *isn't* bad, as long as you know for sure you are running from RAM. And with so much absolute addressing going on, the 6502 cares very much about where it's running from.
"Hello world" wasn't yet a thing outside of academia in the C64 days. The canonical first program was 10 PRINT [usually something off-color... in the User's Guide it was "Commodore 64"] 20 GOTO 10 This can be seen referenced in the Church of Robotology in Futurama.
Yeah, it was specifically a C thing; the creators of the language introduced their language with it. It's a good introduction to C with its include files and main() function on Unix with its text io, but some languages are so different that it makes little sense to give a "hello world" example at all. In some languages, the string "hello world" is literally the whole program; the default action on a string is to print it. :) At the other extreme, many GUI operating systems weren't designed with simple text output; you had to put it in a GUI element which would be no trouble in a complex program, but makes a hello world example look huge.
This video is really awesome! I never knew anyone that had a commodore 64 (We were an IBM family, literally. My dad worked the AS/400 and system 36 tech support queue) so this is really neat to see. For follow up videos, there's lots of videos dunking on ChatGPT and AI being clunky. You might consider showing how to write a Prime number sieve, how to calculate Pi. My favorite video so far has been the "one bad byte" video. It'd be really neat to see either how you search for and implement cheats by poking memory, or how you might bypass old copy protection mechanisms.
Thanks. High and low byte refer to the two halves of a 16-bit value, rather than a nibble (I prefer the spelling "nybble"), which is half of an 8-bit value.
I've written asm before for various architectures so none of this is challenging, but it is very enjoyable to watch, especially seeing what native development tools you have on the C64 and your very pleasant presentation style, you're an excellent teacher.
For me this is like a trip down memory lane (ha ha) to better days, to the Golden Age of computing. Today's computers lack all personality. I would gladly go back to the '80s and stay there if I could.
The problem with these so-called artificial intelligence programs is that they have no creative ability other than what they can mimic. They have no consciousness with which to reflect on what they're doing and be creative in a truly honest sense.
Have you tried creating a machine language program for the Commodore 64 in the Code::Blocks cross compiler? It uses the GCC-6502 toolchain, but you can use the GCC-6510 toolchain instead.
@@weedmanwestvancouverbc9266 It may have no consciousness, but you can get it to reflect on what it wrote, and this can actually lead to significant improvement in the results. At least with GPT-4; not so much with 3.5 (where it tends to go even further off the rails in many cases).
It does QBasic pretty well, but not flawlessly. You have to put in some effort at times and learn to recognize when it's better to start over instead of trying to argue with it to make it fix its mistakes. Oh, and sometimes the code it writes is fine but the explanatory text accompanying it does not match it. The more complex the code becomes, the more this happens. That's one of the weirdest aspects of this whole thing, in my experience. Again, don't argue with it. Just take the part that is useful and ignore the rest.
Go try it. I’ve gotten mixed results with pretty much every language I’ve used, Python, 6502 Assembly, Atari Basic, and C#. I can’t imagine being successful using it to create anything complex from scratch and it still requires you to know what you’re doing in order to fix or tweak issues, or know what you need to suggest to fix specifically.
It would absolutely be interesting to see a video about ChatGPT's attempts. Either just trying the code and explaining what it did wrong, or trying to correct it and see if you could finagle it into outputting anything reasonable.
It would be difficult to capture all of the mistakes it makes since it does make a lot of them. One common mistake is that it fails to use the appropriate memory map. e.g. The C64 and Apple II use address 0x0400 for the text display but these two machines are organized a little differently from one another.
I've been using it to write C64 assembly for a little while now. If you know the hardware you can very quickly write big hunks of code using ChatGPT that really works. Symbols are sometimes in the wrong place, but that's what symbol tables are for. And there are bugs in the 64's design that aren't shared by other 65xx machines, so sometimes you need to know a little bit more. It's especially powerful if you're able to describe what the code you want will look like. "How about if you organize this as a jump table instead?" for example. If you just say "give me assembly code for x," it will, and it might be kinda terribly organized.

One thing I've noticed about AI is that it makes just as many mistakes as all the hack tutorials out there make. And a lot of textbooks from the 80s and 90s were very poorly written too. That, and there's no editors checking up on all the blogs that give AI all of its facts.

It's absolutely worth a video, but hopefully from a perspective which understands what generative language model AI is and is not. We aren't quite to the point of Isaac Asimov's Galactic AC oracle yet. I think sometimes people tend to put more blame on the AI than the poorly-defined application of it.
@@jimmyzhao2673 that comes down to a couple of factors. 1) there's so much more example code targeting Arduino that it can be trained on. 2) Arduino code typically uses libraries, so one line of code might do one compartmentalized thing. As opposed to asm code which might require a more verbose approach and keeping track of register pressure among other things.
The fun is in not telling it exactly where it went wrong and watching it get deeper and deeper into merde. However, even telling it exactly where it went wrong almost certainly guarantees the next 'corrected' answer will also be wrong. In the end, bullsh#tting is bullsh#tting, and boy do we have the mother of all bullsh#tters in chatgpt (and now bard).
The (masochistic) method in "Example 3" was also the way I used to learn Assembly for quite some time. In fact I think most people did the same in the beginning. Then I found a BASIC program implementing a simple Assembler (which I had to type in; a long and error-prone process). But that was a huge upgrade already. When I finally got Turbo Macro Pro it was like Heaven on Earth. I don't like AI, but I tried it for such a "Hello World" assembly program. It actually gave me a correct program - not optimal, but working code. But I saw it failing in many cases, and in different programming languages. I very much want to see a video about AI failing to give correct assembly code. But no rush - whatever the next video is about, I will watch, enjoy, and like anyway. 🙂
oh man...you just reminded me of my childhood... 1984 and my first steps into assembly on my c64... I too went the very same steps as you did... then, in 1986 I got me a C compiler for the c64. I still have some of the C programs I wrote during 1987 and 1988. Believe it or not, the C language I learned when I was young, is still very much in use today (2023) !!!
You must have been very patient! If it's the C compiler I tried, it's a very long process from editing and saving your program, to compiling, testing, and going back to editing. But it was no doubt educational, and the results would have been much faster than BASIC.
I had a C64 back then but I only coded in BASIC. Now I was just curious what Hello World would look like in Assembly Language, and OMG, that's mind blasting. I've done Hello World in Perl, PHP, ColdFusion, Java, C, C++, C#, Kotlin, Rust, Javascript, CoffeeScript and surely some more I forgot about, but never in my wildest dreams could I have imagined how complicated Assembly Language is! Complicated, but at the same time also very intriguing to learn more about how all high-level languages work behind the scenes. Thanks a lot!
Great content, Robin. Sometimes the simplest ideas produce the best discussions. I can still remember in 1980/81 how impenetrable machine code on the 4032 PETs seemed. The teacher kept one lesson ahead of us showing BASIC commands so, sadly, they were no help answering questions about this comparatively occult topic.
@@mechaform - I still have my entire collection of 2x bread-bin C64's ,2x C64-C's, along with numerous tool cartridges and hundreds of floppy disks stored in the basement. I used to run BBS'es on the C64 and Amiga back in the 80's and early 90's, so I have accumulated quite a collection. One day when I retire, I will set everything up again to relive my youth. heh
I was a little confused at the end, but figured it out. I was wondering how the $AB1E routine knew just to print the string "hello world", and not continue on printing what was on line 20. Then I realized that $AB1E prints a string until it hits a $00, and the way BASIC lines are stored, each line ends in $00.
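The stop-at-$00 behaviour described above can be modelled in a few lines of Python (the memory layout here is a made-up example of mine, not the actual BASIC program storage):

```python
def print_until_zero(memory, start):
    """Collect characters from 'memory' starting at 'start' until a $00 byte,
    like the string-output routine described in the comment above."""
    out = []
    i = start
    while memory[i] != 0:
        out.append(chr(memory[i]))
        i += 1
    return "".join(out)

mem = bytearray(256)
# A zero-terminated string, followed by unrelated bytes that must NOT print.
mem[0x10:0x10 + 12] = b"HELLO WORLD\x00"
mem[0x1C:0x1F] = b"XYZ"

print(print_until_zero(mem, 0x10))  # HELLO WORLD
```

Since every tokenized BASIC line ends with a $00 byte, the routine conveniently stops right at the end of the line it started in.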
Not sure why, but I missed this one and I'm really diggin' it. Thanks for the shout-out and glad my exploration of AI piqued your interest. Looking forward to trying my first machine code "hello world" soon.
13:50 that's a smart way of copying and using screen codes at screen memory $0400! The input buffer with PETSCII codes at $0200 could have been used with $FFD2 also. Great video, highly didactic, Robin.
Robin I'm glad you made this video. I'm never going to be able to make something in assembly, but I definitely like learning HOW people do these simple things on the platforms I love to use.
Excellent video, as always, Robin. I'm half tempted to fire up VICE from nostalgia. I like the order you explained it in; I did things the same as you back in the day - starting with poking straight to memory, and (at least for me) puzzling for hours when a bit of machine code just crashed because of some misconception I had about an opcode or its parameters. Keep making these cool vids!
I still have my Commodore 64 and my Commodore 64C. The colors on my profile banner are Commodore colors also. Man, the time I spent on that coding small games, using BBS's, and so on. Was quite the rig.
It's really interesting that GPT can even write something resembling 6502 asm. I doubt it has a very large set of data for that particular problem, but you are VERY correct that it's a garbage in, garbage out system lol. Thanks for the awesome video; even though I doubt I'll need to program a C64 any time soon, it's very interesting to get to see some of the underlying design decisions that inform how we work with computers today.
It would also be interesting to show how ChatGPT handles simple code requests that could be packaged together to make a working program (e.g., "code a timer that counts 30 seconds and puts a 1 into memory in c64 assembly," "code a program in c64 assembly that if memory address [whatever] is 1, move the cursor to the next line"). Then review its errors, see if they can be corrected, look at the better version, and talk about your thoughts. Then examine various ChatGPT requests that increase in levels of sophistication.
I've seen some terrible sci-fi movies in my mis-spent youth. I wrote a PCB design program that runs on the BBC Micro, with just 32KB of RAM, less 10KB for the frame buffer, and the Machine Operating System and BASIC swallowing another 3KB or so. (It uses the MOS graphics routines. Unfortunately, that doesn't bode well for a C64 port .....) It takes in a wiring list representing the circuit schematic with comments indicating what PCB footprint to use for each part, has a graphical editor where you can position components and route tracks between connected pins, and can produce a suite of Gerber and drill files suitable to send to a PCB manufacturing service. And I had heard about a gorilla who kept a pet cat. So naturally, I tried chatting with ChatGPT about programming the 6502, in the hope that it might decide it wanted a pet 8-bit micro of its own ..... Anyway, not mincing words or anything, it sucked. I asked it about adding 2 numbers, and it gave me some code to add two *completely* *different* numbers than what I asked it. Then, as a test to see if it knew any common 6502 programming idioms, I asked it how to shift the accumulator left by one bit and set the least significant bit to 1. It wrote:

    CLC
    ROL A
    ORA #$01

taking up twice as much space as, and many more clock cycles than:

    SEC
    ROL A

I didn't bother testing whether it knew about JMP instead of JSR followed immediately by RTS. It's the sort of thing that impresses people who are not as smart as they think they are.
When I saw the name Jim Butterfield I let out a little gasp. He was a legend in the Commodore programming world. I wish I still had his books including the Assembler book with the characters on a beach.
Yes, I'm also a big Butterfield fan, and I've done a video about his book here: th-cam.com/video/OirYXXnCVlo/w-d-xo.html And my most recent video looked at his Commodore Diary pocket book.
Back in the old days, I would write a program that consisted mostly of BASIC, but you had a couple of routines that would be slower than a snail through jello, so you would write those routines in assembler, convert that to DATA statements, use BASIC to stuff that into memory, and then run it as needed. Today we can still do that kind of thing: write a program in Python, write the slow stuff in C, then call the C library from Python as required.
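The modern version of that pattern can be sketched with Python's ctypes module. This is a minimal example assuming a Unix-like system where the standard C library can be located; it calls a C routine (here just abs()) the same way a BASIC program would SYS into a POKEd ML routine.

```python
import ctypes
import ctypes.util

# Locate and load the system C library (the name varies by platform;
# this sketch assumes a Unix-like OS where find_library("c") succeeds).
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Call a compiled C function for the "fast path", just as a BASIC
# program would SYS into a machine language routine stuffed into RAM.
print(libc.abs(-42))  # 42
```

In practice you'd compile your own hot routine into a shared library and load that instead of libc, but the calling mechanism is the same.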
Loved this, very simple explanation of how to do this on C64. Whenever I write assembly code for a new system my 1st program is always to clear the screen to a certain color, then I make a hello world program. On most systems I do not have a font available to me in memory to use, so usually have to make my own custom font for a hello world program (which always teaches me how a video display stores pixel graphics on a given system). Thanks for your cool videos, always like seeing a new one turn up in my recommendations =D
I would love to see a follow on video where you compare the differences in the kernel based print routines in the other 6502-based computers of the era (Apple II, Atari 8 bit, etc.) - assuming that there might be interesting differences to talk about.
As much as I admire and respect the late Jim Butterfield, I prefer monitors that breakout the SR flags into separate columns and use an easy-to-remember SYS address. My recent favorite is StringMon because it does both, as well as allowing scrolling in both directions, like many convenience cartridges from back in the day.
I totally agree about the SR flags. The reason for the strange SYS addresses is actually pretty clever: SuperMon automatically figures out where the top of BASIC RAM is and installs itself right up there, correctly modifying BASIC pointers to protect itself so that BASIC can be used fully as well, with no wasted memory, and the largest possible contiguous block for BASIC. This feature was especially useful for the PET and VIC-20 versions with their varied RAM expansion capabilities, and it carried over to the C64. The downside is the strange SYS address.
For no particular good reason, I wanted to write a program in Microsoft BASIC II for the Atari 800 that evaluated a poker hand. I could find nothing online to use as a starter, so I tried ChatGPT just to see what it could do. I had the same experience that you did. It actually presented some helpful code. But there were entire sections of the code that did nothing. And all over the place there were errors. In retrospect, it probably would have taken less time to figure it out from scratch and write it myself. But it was an interesting exercise. It's funny how ChatGPT apologizes for its mistakes (sometimes actually correcting them, sometimes not). And often after the apology it will just keep making the same exact mistake. After several iterations such as this, I knew that I had gotten out of it all that I could and went on my way. So, yes, a video showing ChatGPT would be interesting, probably helpful, and almost certainly amusing.
I always enjoy your videos where you go into coding on the Commodore. I tried using ChatGPT to jumpstart a project that I was transitioning into Python. I broke the project down into smaller routines, as I knew there was no way it would be capable of doing everything at once. The code it generated was garbage and I wasted way too much time going back and forth with it and never got anything usable.
It might bear mentioning that, besides saving a byte of program code, another advantage of using BNE instead of JMP in the print loop is that it makes the resulting ML relocatable (excluding the printed string itself).
I learned 6502 assembly language on a SYM-1 shortly before the C64 was released. While this video was nostalgic, I could not sit through the entire thing. I used the PAL assembler for the C64. A *fantastic* assembler. I also received my copy of SUPERMON directly from Jim Butterfield back at a Commodore Expo in Valley Forge in 1986. He and I were both speakers at the expo. I enjoyed the C64 for several years and made a lot of money from it. Anyone remember LOADSTAR? 😉 From 1986 through 1990, I practically lived in the GEOS environment.
For anyone wondering what the point of a BASIC loader would be-given that it's just being used here to do something BASIC can already do way simpler-this sort of thing is usually used when people are writing a program mostly in BASIC but need to incorporate a machine language routine in order to make something run faster. Or if the whole program needs to be machine code but they don't have a monitor or assembler handy-this is what magazines would do for their type-in programs, as they couldn't assume their readers had anything besides the built-in BASIC editor.
I started programming BASIC in 1979 and started programming machine code around 1982/1983, but I didn't get my first assembler until at least 1993. I did all my machine code programming by hand assembling and poking decimal bytes into RAM. I was definitely not the only one. Though maybe I was the only one that went without any assembler for a decade (-:
Robin, per the book by Jim Butterfield "Machine Language For The Commodore 64", which I have a copy I bought in the early '90s, Jim states that if you want to restart the monitor from BASIC after the first launching, you can just type SYS 8 within the BASIC environment. This does work in VICE and on a real C64 for me. It's a LOT quicker and easier to remember than the SYS 38xxx number it generates at the start of the monitor.
Aha, that's an old PET trick. Location 8 usually contains a zero, which is the opcode for BRK. So a SYS 8 executes that BRK opcode, which jumps through the vector at $0316-$317 (locations 790-791 decimal) which SuperMon by default redirects to itself, I think. So yes, that should work as long as locations 8, 790, and 791 aren't changed after starting SuperMON.
@@8_Bit I feel I should also say here that the book does mention SYS 4 applies to some Commodore models, but the C64 is definitely SYS 8. Maybe SYS 4 for the 128? I'd have to go grab the book.
@@ericjumpelfeaturingjohnnyj4135 Yeah, it'll just be wherever a zero is normally located in low memory on a particular Commodore model. There's nothing particularly special about the SYS 4 or 8 or whatever; it's just a shorter number to memorize that has the effect of executing a BRK opcode.
Hello again, Robin. At the time I watched your video and commented (apparently 5 months ago), I was working on a SID 6581 project and was heavily immersed into Butterfield's book. I have, since that time, completed my code to a place that it's usable software, and have created a music video showing it in use. I wanted to say thank you for your video and the reply. I don't exactly remember what it was I needed to learn when your video surfaced in search results, but I really enjoyed watching your "hello world" demonstration. Here's my finished song and video (timestamped to the spoiler) of me playing my creation that began life just about six or seven months ago. Hope you enjoy the song and SID usage: th-cam.com/video/RJ4cpCT_-e8/w-d-xo.html
Very interesting. Back in the day when I had my first C64, I found the code for Supermon in the back of a magazine...I forget which one. I wanted to start learning machine code. Unfortunately the program had many pages of just data statements and your eyes go batty. To add insult to injury something killed a controller chip on the motherboard and I lost 2 full columns of keys. Needless to say, I never got it typed in...
First time I've heard Garbage Producing Technology - gold, at least in this instance. Nice work. Assembly is such a fundamental language. Everybody learning to code should learn at least some of this critical language.
Mighty funny how I came across this. I'm fortunate to have an emulator. I didn't know STYLE did that. WOW!! Oh, I'm WARNOCK...and I'm back. LOL I made snapshots of all the work I did for Style, Arson, RPG. I'll be releasing a solo soon. Look for PROGRESSIVE...the never released demo......until now. Everything is coming back to me. I'm using Vice and was wondering about Relaunch64 for the emulator. I use TMP and/or Turbo Assembler. Good Job!! 😁
There was no assembler in the ROM of Oric machines... But there was an opcode reference at the end of the manual; that's how I learned to assemble by hand. Fortunately, there's a button for cold reset under the machine, because negative relative jumps were a nightmare. I wrote a disassembler at the time... Using BNE/BEQ to jump is limited to a -128 to +127 byte relative offset, while JMP/JSR take full 16-bit addresses. Sometimes there's no choice... The low byte and the high byte of the address are stored low-first because the 6502 is "little endian". Intel processors do that too, while Motorola (e.g. 68000) is "big endian". There is no "good" choice. Now, for the program itself, the funniest one to do is to PLA the return address in your subroutine so you can JSR to your routine, followed immediately by your text (0-terminated) and then the rest of the code. Just for the sake of making self-modifying code full of binary arithmetic (A, X and Y are 8-bit, return addresses not so...). Something like:

        JSR $C000           ; print string subroutine
        .text "HELLO WORLD",0 ; the string with a zero at the end
        RTS                 ; return to BASIC

        *=$C000             ; arbitrary, can be anywhere
        PLA                 ; JSR pushes the high byte first, so PLA pulls the low byte first
        STA $C010           ; character's address, low byte
        STA $C020           ; jump address, low byte
        PLA
        STA $C011           ; character's address, high byte
        STA $C021           ; jump address, high byte
        ...
        *=$C00F             ; (or anything, to be calculated)
        LDA $0000           ; this address is self-modified
        PHA                 ; push the character code; counters get incremented first, evaluate later
        INC $C020           ; increase jump address, low byte
        BNE *+5             ; no wraparound, skip the high byte
        INC $C021           ; carry into high byte of jump address
        INC $C010           ; increase character address, low byte
        BNE *+5
        INC $C011           ; carry into high byte
        PLA                 ; get back the character
        BEQ $C018           ; go to exit code if null (address to be calculated)
        ...
        ; actual display routine, either calling the routine in ROM
        ; or writing directly to the screen
        ...
        *=$C01F             ; (or anything, to be calculated)
        JMP $0000           ; this address is self-modified

Enjoy that version of "Hello World"! ChatGPT will never suggest anything like that, and it's a nightmare to disassemble when text is mixed with code.
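The nightmare of hand-calculating those relative branch offsets can be captured in a short Python helper. This is a sketch of my own, using the standard 6502 rule: the offset is counted from the address just after the 2-byte branch instruction, and must fit in a signed byte.

```python
def branch_offset(branch_addr, target):
    """Compute the operand byte for a 6502 relative branch (BNE, BEQ, etc.).

    The offset is measured from the address AFTER the 2-byte instruction,
    and must fit in a signed 8-bit value (-128 to +127).
    """
    off = target - (branch_addr + 2)
    if not -128 <= off <= 127:
        raise ValueError("target out of branch range; use JMP instead")
    return off & 0xFF  # the two's-complement byte as actually assembled

# A forward branch and a backward branch:
print(hex(branch_offset(0xC000, 0xC010)))  # 0xe  (forward +14)
print(hex(branch_offset(0xC010, 0xC000)))  # 0xee (backward -18)
```

Negative offsets come out as bytes >= $80, which is exactly why backward branches were the nightmarish ones to work out by hand.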
The Sinclair ZX Spectrum also had an opcode reference at the end of the manual and was one of my main sources for learning machine code. I also learned from lots of magazine articles. I wrote several disassemblers too, in BASIC.
Fundamentally all GPT does is statistical analysis. It has a very large bank of data on which its generations are based, when the user prompts it for something it generates what this data suggests is the most likely response to the query given. Something exceptionally specific like writing "hello world" in 6502 assembly for use on the Commodore 64 is a perfect example of how to "break" a model like this because there's very little data in its bank that is relevant to such a specific query, but a lot of data to indicate how such a response should be formatted, so it generates a response that is well formatted but makes no sense.
No it doesn't, actually. It's a common pattern that people repeating how GPT doesn't "understand" and just repeats what it's read actually don't understand what GPT really does, but just repeat criticisms they read elsewhere. Understandable, since it's not easy to understand what GPT is really doing. It actually builds an abstract model out of its training data and draws from that model at inference time. People assume it has a gigantic text "bank of data" inside. It actually has a gigantic (but less gigantic) batch of "weights" and "biases" that all of the text was abstracted into. It's much more compressed than the original due to this abstraction. If you've heard of "deep learning" and "hidden layers", the abstracting is done in the hidden layers, and the more layers, the more it can abstract. But it can't do it as well as we can, and it only works as an approximation of the way our brains do. Since abstracting is also generalizing, that's where certain classes of mistakes come from, such as hallucinating library and API names that look plausible but don't exist.
22:23 confused me, because I had already forgotten that you'd earlier used JSR $FFD2 to clear the screen with character 147 ($93), so when you said _instead of calling JSR $FFD2,_ I went wait, that's right there. That took me a moment.
I did some videos on Zig recently. For the second video I wanted Zig to create NES Game Genie codes. So I wrote it up and then I asked ChatGPT if it could improve my code. It regenerated it and I saw syntax that made me go: "that isn't Zig, right?! But I've been doing Zig for 2 weeks, so what do I know." I compiled it and suddenly I had 12 compilation errors. I told it that this wasn't Zig syntax, and after doing that 3 times it only got worse. So I figured let's just give it my bitwise operations to improve… nope… OK then the print code, that should be easy. A Game Genie code is basically a hex code that replaces the hex digits with certain characters. So I removed my print code, asked it to print it out, and gave it the translation list. The piece of horse manure literally wrote a switch statement with 16 cases 😂 I tried to instruct it to avoid jumps when finding a character; it just didn't come to the conclusion that an array with index 0-15 and the matching character for each index is the cleanest and most efficient code. It is so stupid and useless that I truly don't get the Chat Jippity hype.
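The lookup-table approach described above is a one-liner in most languages. Here's a minimal Python sketch of the idea; the 16-letter alphabet below is the commonly documented NES Game Genie character set, included here as an assumption rather than something taken from the video:

```python
# The NES Game Genie maps each hex nibble (0-15) to one of 16 letters.
# Indexing a table replaces a 16-case switch entirely.
# Alphabet order here is the commonly documented one; treat it as an assumption.
GENIE_ALPHABET = "APZLGITYEOXUKSVN"

def nibble_to_letter(n: int) -> str:
    """Map a 4-bit value 0-15 to its Game Genie letter via table lookup."""
    if not 0 <= n <= 15:
        raise ValueError("nibble out of range")
    return GENIE_ALPHABET[n]

def encode_nibbles(nibbles):
    """Encode a sequence of nibbles as a Game Genie-style string."""
    return "".join(nibble_to_letter(n) for n in nibbles)
```

The same idea in 6502 assembly would just be an `LDA table,X` with the 16 letters in a data table, which is exactly the structure the commenter was hoping ChatGPT would produce.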
Robin, did you ever make a video of how Jim could have written his monitor? Will you please load that and show us the BASIC part, and then show us the assem and direct ML parts?
Just a video covering Jim Butterfields monitor and comparing it to machine language and their relation to assembly would be cool. Obviously we got a taste of that in this video, but I feel it could be an entire video on its own.
@HelloKittyFanMan. probably just brute force with a list of op codes. Write it all out by hand, then figure out addresses for jumps and offsets for branching. Then poke it all in.
What turned my brain to mush back in the day was accessing values on the stack. But on the Z80 we had enough registers to avoid that and the 6502 community had what I believe is called the "zero page" to avoid it.
When you next cover assembly language, is there any way you could supplement the discussion with some kind of flowchart? With BASIC, it's somewhat human-readable so you can follow what's on-screen. With assembly, if the viewer is unclear what's happening, reviewing the cryptic on-screen code doesn't enlighten at all.
Interesting video, and great to highlight the different ways to get the code into memory. I did laugh when you said you could modify the bytes directly before placing them at $C100 - my sense of humour automatically suggested JELLO WORLD instead of HELLO WORLD. A good example of the BASIC loader method of entering machine code is the cheat programs printed in UK games magazines. These POKEd values into memory and then changed the loader code so that you could cheat at games. And thank you for calling out ChatGPT - I have seen so many people say it is great for programming, but the evidence is overwhelming at present that it is still highly flawed and needs to learn more. (It's not artificial intelligence yet, it just has consumed huge datasets without the actual means of understanding it.)
Could you also cover the PAL assembler? I have been using it since it acts like BASIC, meaning I don't need to learn another environment while also learning assembly and BASIC. But it has some different syntax than Turbo Macro Pro or cl65.
This was a very interesting video. I really appreciate the assembly explanation. Also, I'm not sure if the YouTube algorithm overheard you criticizing its country cousin ChatGPT, but this video did not show up in my feed. I had to search for it. This is the first time your video hasn't shown up in my feed. Hopefully it's a one-time issue.
Nice one Robin, another excellent upload and one touching on an important topic, ChatGPT. I think a video on its use would make a nice reminder/awareness session for any programmer. I like to have a laugh with ChatGPT etc. with the 'accuracy' of their answers. I recently asked it to multiply 2 bytes together in 6502 assembly, and it miraculously went back in time and re-engineered the CPU to include a MUL instruction. When I questioned this it "apologised" and proceeded to give me another, "correct" answer... which also included that fabricated instruction. Today, when I asked the same question, it fabricated a "ROR NUM2,Y" instruction (Y indexing is not legal here, X indexing is), while blissfully proclaiming "Certainly! Here's an example of a 6502 assembly language program that multiplies two numbers together". There was a whole lot more wrong with the answer it gave, lol... or is that Rol :) The fun part is in telling it it's wrong, but not where it's wrong, and watching it get deeper and deeper into merde. Google Bard blissfully bullcraps also. The problem is, if you don't know the subject you might believe the answer. Lawyers in the USA were caught out recently when the judge noticed their case evidence included two 100% made-up 'historical examples'. When questioned they admitted they used ChatGPT. Truly shocking. These tools have their fun uses but that's all they should be used for. True AI is something entirely different and very far away... we really don't want to go there.
I agree, in its current form chat AIs are just fun tools to kill time, but not for serious work (at least not accurate work). While they can spit out skeleton code, it takes a great deal of time to fix it so that it actually runs. smh
In follow-up to what I just wrote below, you can use the one-pass assembler in "Monitor$C000" in the CBM "C64 Macro Assembler Development System (1982)" to add the text data directly into memory at $0360-$0364 as in the video. Then, counting characters to display (11 = $0B), the code becomes:

.A 033C LDX #$00
.A 033E LDA $0360,X
.A 0341 STA $0400,X
.A 0344 INX
.A 0345 CPX #$0B
.A 0347 BNE $033E
.A 0349 BRK
ChatGPT, gpt-4, and to some extent gpt-3.5-turbo WILL absolutely produce valid Commodore BASIC with some clever prompting and, most importantly, providing some few-shot examples. (I've been messing around with a Python and OpenAI based 'PETGPT' that writes PET BASIC 4.0 code for a few months now. Sometimes it even insists on generating assembly!) Another game changer besides providing few-shot examples is telling it not to use line numbers (use {l:semantically_relevant_labels} instead) and to ignore short variable name restrictions. I then use a post-processor to replace labels with line numbers, and OpenAI's text-davinci-edit-001 to intelligently replace long variable names with 2-character names.

edit: I should add, I don't know a bit of Python. ChatGPT and gpt-4 are writing the entire program! Here's some output just from prompting and examples, before I started implementing the {l:labels} and long-variable 'filter':

Hello! I'm PETGPT, a Commodore PET expert. I'd be happy to help you with any questions or programming tasks related to the PET. Just let me know what you need assistance with!

User: Can you write me a simple, concise program that takes an angle, distance, and character code from the user, and plots a line on screen starting from the origin (center of screen) at that angle with that distance using that character. Loop without clearing the screen.

Sure! We can do that with some simple trigonometry. Let's think through the program step-by-step before writing:
1. Declare necessary and useful variables: screen memory, screen width and height, radian conversion factor, screen center, character.
2. Clear the screen.
3. Move the cursor home and get user input: angle, distance, character.
4. Convert the angle to radians.
5. Loop through radius values from 0 to the given distance, calculating the new x and y coordinates based on the angle and radius, and POKEing the screen memory with the character value.
6. Loop to move the cursor home and get new input.
Here's the program: ```basic 10 sm=32768:sw=40:sh=25:rc={pi}/180:x0=sw/2:y0=sh/2:c=42 20 print "{clear}" 30 input "{home}ang, dis, char:"; a,d,c: a = a * rc 40 for r=0 to d 50 x=x0+int(r*cos(a)+0.5):y=y0-int(r*sin(a)+0.5) 60 poke sm+y*sw+x+x0, c 70 next r:goto 30 ```
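The step list above can be re-sketched in Python to sanity-check the trigonometry. This is my own illustrative translation, not PETGPT output; note, as a caveat for anyone typing the BASIC in, that line 60 appears to add x0 a second time even though x already includes it:

```python
import math

SW, SH = 40, 25               # PET text screen width/height
X0, Y0 = SW // 2, SH // 2     # origin at the screen centre

def plot_line(angle_deg: float, distance: float, char: str):
    """Return a {(x, y): char} dict of cells for a line drawn from the
    centre at the given angle and length, mirroring the BASIC loop."""
    cells = {}
    a = math.radians(angle_deg)           # step 4: degrees -> radians
    for r in range(int(distance) + 1):    # step 5: walk out along the radius
        x = X0 + round(r * math.cos(a))
        y = Y0 - round(r * math.sin(a))   # screen y grows downward
        if 0 <= x < SW and 0 <= y < SH:   # clip to the screen
            cells[(x, y)] = char          # BASIC line 60 would POKE here
    return cells
```

A 0-degree line of length 5 fills the six cells from the centre rightward, which is what the BASIC should draw too (minus the doubled x0 offset).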
@@wesleyswafford2462 Yes, prompts, context, and examples are everything. gpt-4 in particular has a lot more knowledge about Commodore computers hidden in its... matrices, latent space, whatever, than it will let on just from giving it a cold prompt. It's not uncommon for it to get memory addresses and petscii codes correct without an explicit reference!
That's all fine, but I've seen actual beginners go to ChatGPT looking for good examples of C64 code and get the garbage I showed on screen. And the thing is, the beginners don't know it's garbage, so it's a complete waste. As I tried to make clear, this video is for them.
@@8_Bit I wonder what would happen if you asked ChatGPT to teach you how to program rather than asking it to write a program for you. I tried it once to teach me a foreign language and it was surprisingly good, but I didn't stick with it for long to see if it stayed good or went off the rails.
Great video Robin! How about just collecting your various examples into a tiny game template? I’m planning on referring to your demonstrations for joystick controlled sprites, background interrupt stuff, BCD for scores etc to try to make a game. A terrible game, but only because of my lack of abilities. 😀
Great question. I was thinking the same. I think BASIC line 10, when stored in memory, has a byte of 0 to designate the end of that line of BASIC code. And then the print-string routine must be using a byte of 0 to terminate also, just like in the earlier example.
Yes, I'm sorry I didn't explain that in the video. Each line of BASIC already terminates with an unseen 0 byte, so it ends up marking the end of the string as well.
According to Mapping the C64, the AB1E routine prints the string until it runs into a zero byte. The way basic lines are stored, they end in a zero byte. I don't know it the developers of commodore basic planned it this way, but Robin was taking advantage of the stored zero byte at the end of the basic line to also be the zero byte to terminate the string print routine.
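The behaviour being described can be sketched in Python; this is an illustrative model of the $AB1E loop's stopping condition, not actual ROM code:

```python
def print_zero_terminated(memory: bytes, addr: int) -> str:
    """Mimic the BASIC ROM string-output loop: read bytes starting at
    addr and stop at the first $00, as $AB1E does."""
    out = []
    while memory[addr] != 0:
        out.append(chr(memory[addr]))
        addr += 1
    return "".join(out)

# A stored BASIC line ends with a $00 byte, so text placed at the end of
# the line gets a zero terminator "for free".
ram = b"HELLO WORLD\x00\x99\x20"   # string, line-end $00, next line's tokens
```

Scanning from the start of the string stops exactly at the line-terminating zero, so the next line's tokens are never printed.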
0:46 Generative AI reproduces the aggregated pattern of the content it's scanned, which is close enough for the purpose of "art", but not for something that needs to be rigidly correct. 1:25 I'd like to hear more about ChatGPT. "64-Bit Show And Tell". 14:20 Commodore should have used something closer to plain ASCII from the start, and they should have just used the character codes for the Screen Codes. I suppose Commodore wasn't the only one who assumed lowercase characters were unimportant. 16:42 There'll be a problem with LDA #$93 : JSR $FFD2 on the ROM version that fills the screen foreground color cells with the background color. 24:29 Oh yeah, there's an implicit $00 at the end of the string! The opening quotation mark is also important to avoid the tokenization of «OR». 28:42 If you use the SuperChart for decimal codes, you might have to search the whole table for the operation and addressing mode you want.
Yes, I believe the original Apple II didn't have lowercase at all. You're right re: the screen clear; it's KERNAL v2 that has that problem. D'oh, I meant to mention that implicit $00 that marks the end of each BASIC line, and serves as a handy string terminator for us here too. That's true about the SuperChart. Neither table (SuperChart nor the Programmer's Reference Guide's) is ideal for this "hand" assembly in BASIC idea.
@@8_Bit Yep neither the original Apple II nor the original TRS-80 model 1 had lowercase. The Apple IIe was the first model to get lowercase - in 1983! The TRS-80 model 3 got lowercase in 1980. Those were the machines I learned to program in BASIC. I didn't start machine code and assembly until I got my ZX Spectrum a couple years later.
Just a note about the version that writes directly to 1024: that code will not show anything on the original C64 (about the first 3 years of production), because the background colour and letter colour default to the same value.
Close, it's actually only KERNAL version 2 where the POKE 1024 won't show anything unless accompanied with a corresponding write to 55296. KERNAL v1 will display the character in white. As far as I know, KERNAL v2 was only in production for about a year before it was replaced with the final v3. I do a quick demonstration of all 3 revisions about 5 minutes into the video titled "43-Year-Old INPUT Bug Fixed: From C64 to VIC-20".
While this is a bit of a concern, and I have talked about it in a few videos before, it would be very uncommon nowadays to run into this and I didn't want to clutter up this video with all kinds of caveats. But I'm glad you know about it!
This is where BBC BASIC wins - you could incorporate assembly language into a BASIC program without having to access a separate assembler. It was very neat.
ChatGPT is great at sounding confident with terrible answers. A while ago I wrote a small program to test if it was currently British Summer Time (UTC + 1) based only on the current UTC datetime.
A friend of mine asked if I was worried about ChatGPT taking my job so as a test we asked it to come up with the same program.
It made every conceivable mistake. It assumed the last day of each month was always a Sunday. It assumed that BST starts at midnight. It assumed all months were 30 days long.
I'm not worried about ChatGPT replacing programmers. I'm worried for the companies that think it can, and try it though.
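For reference, the actual rule (harmonised in the UK/EU since 2002) is that BST runs from 01:00 UTC on the last Sunday of March to 01:00 UTC on the last Sunday of October — no 30-day months, no month-ends landing on Sundays. A minimal Python sketch of the correct logic:

```python
from datetime import datetime, timedelta

def last_sunday(year: int, month: int) -> datetime:
    """Find the last Sunday of a month by stepping back from month-end."""
    if month == 12:
        d = datetime(year, 12, 31)
    else:
        d = datetime(year, month + 1, 1) - timedelta(days=1)
    while d.weekday() != 6:      # Monday == 0 ... Sunday == 6
        d -= timedelta(days=1)
    return d

def is_bst(utc_now: datetime) -> bool:
    """True if British Summer Time (UTC+1) is in effect: from 01:00 UTC
    on the last Sunday of March to 01:00 UTC on the last Sunday of October."""
    start = last_sunday(utc_now.year, 3).replace(hour=1)
    end = last_sunday(utc_now.year, 10).replace(hour=1)
    return start <= utc_now < end
```

Both transitions happen at 01:00 UTC, not midnight, which is exactly the detail the chatbot got wrong.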
Then the crafty developer can charge a premium for going in and cleaning up all the garbage code created by the AI 😏
AI really is the equivalent of the tube-capping machine from the Tim Burton _Charlie and the Chocolate Factory_ movie that's so unreliable they needed to hire the guy it replaced back as a full-time employee to keep it working.
I'm sure the consulting companies that the British government uses for its never ending list of IT failures will try it.
@@janmichael1262 hahaha you're not wrong there.
"I'm not worried about ChatGPT replacing programmers. I'm worried for the companies that think it can, and try it though."
--> THIS !!!
To show just how much today's companies think chatgpt is the next best thing, Google search...
"Levidow, Levidow & Oberman caught using chatgpt"...
NYC lawyers who used ChatGPT for research in aviation injury claim were fined $5,000 in legal first after the judge found the AI chatbot cited legal cases that were FAKE and contained 'gibberish'. I mean FFS, what's next.
For all the new students of 6502 assembly/ML, keep in mind that you need to save all modified memory when writing a program in a monitor. When Robin shows how to initialize $C100 with "hello world", that was done manually -- your program does not initialize that memory. So when you save your assembly program, you need to save the memory it occupies AND the memory that it requires. Something like: S "my program",8,C000,C10B will do that. Keep in mind that this saves the program and all the junk between the program and data memory that isn't used. A symbolic assembler like Turbo Macro Pro will compact this for you since it handles the allocation of memory for text -- another reason to use TMP!
Good point, I like to initialize the entire memory space I'm using for my code with 0 so when looking through memory with a monitor, the garbage doesn't produce weird mnemonics. You could also manually move the "Hello world" data right at the end of the main code to prevent garbage and make it more compact when saving it to disk.
I learned to code in ML back in the 80's and I can tell that you teach very well, I understood everything. Love your video.
I learned ML by hand assembling - using the following format
1000 data000,000,000:rem assem code
1001 data000,000,000:rem assem code
Then using a READ/POKE to put it in memory. Obviously, after I put actual code in the REM, I had to hand assemble (using the Complete Commodore Inner Space Anthology - with decimal opcodes) and replace the 000s with the code. The only problem was learning how branches worked and calculating JMPs. This technique worked for tiny fragments of code. I got BASSEM, a BASIC extension Assembler from Compute! Gazette - and never looked back!
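Calculating branches by hand is the classic stumbling block mentioned above: a 6502 branch instruction takes a signed 8-bit offset counted from the address just after the two-byte instruction. A small Python helper showing the arithmetic:

```python
def branch_offset(branch_addr: int, target: int) -> int:
    """Compute the offset byte for a 6502 relative branch (BNE, BEQ, ...).
    The offset is measured from the address AFTER the 2-byte instruction."""
    delta = target - (branch_addr + 2)
    if not -128 <= delta <= 127:
        raise ValueError("branch out of range; use a JMP instead")
    return delta & 0xFF   # two's-complement byte, ready to POKE
```

For example, a BNE at $0347 that loops back to $033E (as in the monitor listing in another comment here) needs the byte $F5.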
Thank you for the video. I wish I learned 6502 assembly language programming back in 1983. In those days, I mainly played games and did some BASIC programming.
This takes me back to my childhood (early teens)!
I started with a machine code monitor and would write my assembly on paper, then hand assemble it to machine code to punch in to the monitor (slightly less tedious than writing a BASIC loader to POKE it into memory, but not much :))
Some short time later I got an action replay cartridge which had a built in simple assembler (very much like SuperMON).
A little later still I got the luxury of a 2-pass assembler which allowed me to use labels.
Glad to see different approaches to the same goal being shown! I'd have gone with the speedy "write it direct to screen memory" route, though it's worth noting that $0400 is only the address of screen memory if the VIC-II is looking at bank 0.
Thanks for the memories!
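The arithmetic behind that bank caveat, sketched in Python as I understand the registers (the active-low bank bits in $DD00 are the easy part to get backwards):

```python
def screen_base(cia2_port_a: int, d018: int) -> int:
    """Compute where the VIC-II reads screen memory from.
    Bits 0-1 of CIA 2 port A ($DD00) select the 16K bank (active-low),
    and the top nibble of $D018 selects a 1K screen within that bank."""
    bank = 3 - (cia2_port_a & 0x03)            # invert the active-low bits
    bank_base = bank * 0x4000                  # 16K per VIC bank
    screen_offset = ((d018 >> 4) & 0x0F) * 0x0400
    return bank_base + screen_offset
```

With the power-on defaults ($DD00 low bits = %11, $D018 = $15) this lands on the familiar $0400; switch to bank 3 and the same $D018 value puts the screen at $C400 instead.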
Oh yeah the "bank switching" trick, almost forgot. heh
Those Transactor publications (mag and compendiums) are the best references ever. When I was living in Ontario in the mid-80's those were still available on the shelf. Back then I didn't understand about 60% of what I was reading, but that is why I am watching you now.
Programming on a machine code monitor is how I learned assembly.
same.. bring up memories from.. i think this must have been around 1986-1988...
19:38 I think people who are serious about assembly should start here rather than TMP. First principles pay off in the long run.
I now regret not bringing this up in this video! For anyone curious, watch my video: "Jim Butterfield's Machine Language for the Commodore 64, 128.. Revised": th-cam.com/video/OirYXXnCVlo/w-d-xo.html
@@8_Bit spending real time with Jim’s Machine Language book made me better at my day job. I don’t do “coulda shoulda” but I wonder what going through that book in the 80s would have done for my early career. I didn’t find it until I was in my 40s.
@MichaelDoornbos another great book is the Assembly Language for kids book.
@@timpsensky5181 That counts as Feynman's "explain it to a kid" concept too.
My gentle introduction to assembly language programming on the C64 was a Basic assembler called Bassem that was published in Compute!'s Gazette. You could just type mnemonic instructions into a Basic program, using all the features of Basic such as variables and loops to generate the machine code. I really think you should cover this at some point.
Or the modern C64 BASIC Vision, which really looks interesting with all its niceties added, including using assembler in BASIC and full usage of the REU (a minimum of 128K required). Actually I'm surprised that Robin hasn't shown anything about it at all.
The last time I did ML coding was back in the 80's, but this video brought it all back... guess my memory is still alright lol
I used a commercial assembler called Merlin Wizard to write some demos in the good ol' days. :)
@@jan-erikkarlsson7875 Vision Basic rocks !
I really appreciate the Turbo Macro Pro Hello World program walkthrough. Really clear and concise, like all your videos, top-tier and very well put-together. Thank-you so much for doing this, your videos always make my day!
I got chatGPT to write me complex Polymorphic code in C++ after convincing it that I was only going to use it for good. However, the more I asked it to iterate, the more complex the code got, the less sense it made. I gave up when it invented its own library.
I find it's much better to start new sessions regularly and show it the code you have at the current point. When you keep one session going for followup discussion the token buffer overflows which is sort of equivalent to having a bad short term memory. It's a separate issue from hallucinating libraries and functions that don't exist.
I learned 6502 machine language on the Apple ][e back in the 80s. I had to enter the opcodes, since I had no assembler at all. Then I could disassemble what I wrote and see if I made any mistakes. I don't think I even knew the indexed accumulator commands, so I'd use self-modifying code instead. I was just a kid, so I didn't really know this was bad. :) All of my knowledge came from examples in Nibble magazine, and I was really just winging it. But the other kids at school were still blown away by my skills.
Self-modifying code *isn't* bad, as long as you know for sure you are running from RAM. And with so much absolute addressing going on, the 6502 cares very much about where it's running from.
"Hello world" wasn't yet a thing outside of academia in the C64 days. The canonical first program was
10 PRINT [usually something off-color... in the User's Guide it was "Commodore 64"]
20 GOTO 10
This can be seen referenced in the Church of Robotology in Futurama.
Men of science and wisdom liked to add a semicolon after the closing quotation mark.
Yeah, it was specifically a C thing; the creators of the language introduced their language with it. It's a good introduction to C with its include files and main() function on Unix with its text io, but some languages are so different that it makes little sense to give a "hello world" example at all. In some languages, the string "hello world" is literally the whole program; the default action on a string is to print it. :) At the other extreme, many GUI operating systems weren't designed with simple text output; you had to put it in a GUI element which would be no trouble in a complex program, but makes a hello world example look huge.
Great video! You reminded me of all the hours I spent using SuperMON back in the ‘80s. 😂
This video is really awesome! I never knew anyone that had a commodore 64 (We were an IBM family, literally. My dad worked the AS/400 and system 36 tech support queue) so this is really neat to see.
For follow up videos, there's lots of videos dunking on ChatGPT and AI being clunky. You might consider showing how to write a Prime number sieve, how to calculate Pi. My favorite video so far has been the "one bad byte" video. It'd be really neat to see either how you search for and implement cheats by poking memory, or how you might bypass old copy protection mechanisms.
What's crazy is I remember still how to write this in hex... Without the index of instructions.
I had an Atari 130XE and wrote my own BBS in BASIC back in the day, but your explanation is the first time I've really understood assembly code. Thanks.
GCC-6502 and GCC-6510 are both available and will work with Code::Blocks Cross Compiler.
Nicely done. Back in the 1960's we didn't use the term "High or Low Byte" we called half a byte a "Nibble". LoL
Thanks. High and low-byte are referring to the two halves of a 16-bit value, rather than a nibble (I prefer the spelling "nybble") which is half of an 8-bit value.
I forgot about the spelling thanks. I didn't realize you were referring to a Word and not a byte. Sorry😃@@8_Bit
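To make the distinction concrete, a tiny Python sketch of the two different splits:

```python
def split_word(value: int):
    """Split a 16-bit word into its low and high BYTES -- what 6502 code
    does constantly when storing addresses (low byte first)."""
    return value & 0xFF, (value >> 8) & 0xFF

def split_byte(value: int):
    """Split an 8-bit byte into its low and high NIBBLES (4 bits each)."""
    return value & 0x0F, (value >> 4) & 0x0F
```

So $C100 splits into low byte $00 and high byte $C1, while a single byte like $A5 splits into nibbles $5 and $A.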
I've written asm before for various architectures so none of this is challenging, but it is very enjoyable to watch, especially seeing what native development tools you have on the C64 and your very pleasant presentation style, you're an excellent teacher.
You are a very talented Lady Neodonkey.
For me this is like a trip down memory lane (ha ha) to better days, to the Golden Age of computing. Today's computers lack all personality. I would gladly go back to the '80s and stay there if I could.
I think it would be really interesting to see just how ChatGPT does at developing assembly or even basic code.
The problem with these so-called artificial intelligence programs is that they have no creative ability other than what they can mimic. They have no consciousness which they can use to reflect on what they're doing and be creative in a truly honest sense.
Have you tried creating a machine language program for the Commodore C64 in the Code::Blocks cross compiler using the GCC-6502 toolchain? You can also use the GCC-6510 toolchain.
@@weedmanwestvancouverbc9266 They may have no consciousness but you can get it to reflect on what they wrote and this can actually lead to significant improvement of the results. At least with GPT-4, not so much with 3.5 (where it tends to make it go off the rails even further in many cases)
It does QBasic pretty well, but not flawlessly. You have to put in some effort at times and learn to recognize when it's better to start over instead of trying to argue with it to make it fix its mistakes.
Oh, and sometimes the code it writes is fine but the explanatory text accompanying it does not match it. The more complex the code becomes, the more this happens. That's one of the weirdest aspects of this whole thing, in my experience. Again, don't argue with it. Just take the part that is useful and ignore the rest.
Go try it. I’ve gotten mixed results with pretty much every language I’ve used, Python, 6502 Assembly, Atari Basic, and C#. I can’t imagine being successful using it to create anything complex from scratch and it still requires you to know what you’re doing in order to fix or tweak issues, or know what you need to suggest to fix specifically.
It would absolutely be interesting to see a video about ChatGPT's attempts. Either just trying the code and explaining what it did wrong, or trying to correct it and see if you could finagle it into outputting anything reasonable.
It would be difficult to capture all of the mistakes it makes since it does make a lot of them. One common mistake is that it fails to use the appropriate memory map. e.g. The C64 and Apple II use address 0x0400 for the text display but these two machines are organized a little differently from one another.
I've been using it to write C64 assembly for a little while now. If you know the hardware you can very quickly write big hunks of code using chatgpt that really works. Symbols are sometimes in the wrong place but that's what symbol tables are for. And there are bugs in the 64's design that aren't shared by other 65xx machines so sometimes you need to know a little bit more.
It's especially powerful if you're able to describe what the code you want will look like. "How about if you organize this as a jump table instead?" for example. If you just say "give me assembly code for x," it will, and it might be kinda terribly organized.
One thing I've noticed about AI is that it makes just as many mistakes as all the hack tutorials out there make. And a lot of textbooks from the 80s and 90s were very poorly written too. That, and there's no editors checking up on all the blogs that give AI all of its facts.
It's absolutely worth a video, but hopefully from a perspective which understands what generative language model AI is and is not. We aren't quite to the point of Isaac Asimov's Galactic AC oracle yet. I think sometimes people tend to put more blame on the AI than the poorly-defined application of it.
I've seen demos where ChatGPT writes code for Arduino, and it does a mostly decent job.
@@jimmyzhao2673 that comes down to a couple of factors. 1) there's so much more example code targeting Arduino that it can be trained on. 2) Arduino code typically uses libraries, so one line of code might do one compartmentalized thing. As opposed to asm code which might require a more verbose approach and keeping track of register pressure among other things.
The fun is in not telling it exactly where it went wrong and watching it get deeper and deeper into merde. However, even telling it exactly where it went wrong almost certainly guarantees the next 'corrected' answer will also be wrong. In the end, bullsh#tting is bullsh#tting, and boy do we have the mother of all bullsh#tters in chatgpt (and now bard).
Good video Robin! Very clearly and thoroughly explained as always :) Cheers!
10 PRINT "REMEMBER TO INCLUDE THE COMMA BETWEEN 'HELLO' AND 'WORLD', AND THEN THE PERIOD OR EXCLAMATION POINT AT THE END."
RUN
Your line 10 is too long.
@@Okurka. Not on the C-128. 😁
"@@Okurka.": Oh, ha!
The (masochistic) method in "Example 3" was also the way I used to learn Assembly for quite some time. In fact I think most people did the same in the beginning. Then I found a Basic program implementing a simple Assembler (which I had to type, it was a long and error-prone process). But that was a huge upgrade already. When I finally got Turbo Macro Pro it was like Heaven on Earth.
I don't like AI but I tried it for such a "Hello World" assembly program. It actually gave me a correct program - not optimal, but working code. But I saw it failing in many cases, and in different programming languages.
I very much want to see a video about AI failing to give correct assembly code. But no rush - whatever the next video is about, I will watch, enjoy, and like it anyway. 🙂
Garbage Producing Technology 🤣
Great video!
I love how we never run out of topics for things to do on the C-64! 😎
oh man...you just reminded me of my childhood... 1984 and my first steps into assembly on my c64... I too went the very same steps as you did... then, in 1986 I got me a C compiler for the c64. I still have some of the C programs I wrote during 1987 and 1988. Believe it or not, the C language I learned when I was young, is still very much in use today (2023) !!!
You must have been very patient! If it's the C compiler I tried, it's a very long process from editing and saving your program, to compiling, testing, and going back to editing. But it was no doubt educational, and the results would have been much faster than BASIC.
I had a C64 back then but I only coded in BASIC. Now I was just curious what Hello World would look like in assembly language, and OMG, that's mind-blasting. I've done Hello World in Perl, PHP, ColdFusion, Java, C, C++, C#, Kotlin, Rust, JavaScript, CoffeeScript and surely some more I forgot about, but never in my wildest dreams could I have imagined how complicated assembly language is! Complicated, but at the same time also very intriguing for learning more about how all high-level languages work behind the scenes. Thanks a lot!
That 0400 trick was gold. I never knew to do that. Thanks!
Great content, Robin. Sometimes the simplest ideas produce the best discussions. I can still remember in 1980/81 how impenetrable machine code on the 4032 PETs seemed. The teacher kept one lesson ahead of us showing BASIC commands so, sadly, they were no help answering questions about this comparatively occult topic.
Nowadays, modern tools like Super Snapshot and other program freeze cartridges make writing and debugging ML code much easier and more fun.
@@BillAnt I’m (re)collecting the tools now! Wish I’d kept everything I had back in the day…
@@mechaform - I still have my entire collection of 2x bread-bin C64's ,2x C64-C's, along with numerous tool cartridges and hundreds of floppy disks stored in the basement. I used to run BBS'es on the C64 and Amiga back in the 80's and early 90's, so I have accumulated quite a collection. One day when I retire, I will set everything up again to relive my youth. heh
I was a little confused at the end, but figured it out. I was wondering how the AB1E routine knew just to print the string "hello world", and not continue on printing what was on line 20. then I realized that AB1E prints a string until it hits a $00, and the way basic lines are stored, each line ends in $00.
Yes, sorry, I should have pointed that out in the video.
@@8_Bit No worries at all. You make great videos.
Not sure why, but I missed this one and I'm really diggin' it. Thanks for the shout-out and glad my exploration of AI piqued your interest. Looking forward to trying my first machine code "hello world" soon.
I'm glad you found it, and sorry I didn't ping you when it was released. My AI assistant must have forgotten to remind me!
@@8_Bit Those blasted AI assistants. Can never rely on them when you need'em. 🤖
13:50: that's a smart way of copying and using screen codes at screen memory $0400! The input buffer with PETSCII codes at $0200 could have been used with $FFD2 as well. Great video, highly didactic, Robin.
My first monitor was Jim's Supermon as well, so your example was very familiar. I've never used Turbo Macro but, as you say, it seems like luxury.
Robin I'm glad you made this video. I'm never going to be able to make something in assembly, but I definitely like learning HOW people do these simple things on the platforms I love to use.
Robin, this helped me to understand the accumulator and the X register a bit better. Thanks!
Excellent video, as always, Robin. I'm half tempted to fire up VICE from nostalgia. I like the order you explained it in; I did things the same as you back in the day - starting with poking straight to memory, and (at least for me) puzzling for hours when a bit of machine code just crashed because of some misconception I had about an opcode or its parameters. Keep making these cool vids!
Great video. Love this channel. Thanks Robin. Greetings from Denmark.
I still have my Commodore 64 and my Commodore 64C. The colors on my profile banner are Commodore colors also. Man, the time I spent on that coding small games, using BBS's, and so on. Was quite the rig.
I like the use of -1 to terminate the DATA statements instead of having to count the bytes and set up a FOR/NEXT loop.
It's really interesting that GPT can even write something resembling 6502 asm. I doubt it has a very large set of data for that particular problem, but you are VERY correct that it's a garbage-in, garbage-out system lol.
Thanks for the awesome video. Even though I doubt I'll need to program a C64 any time soon, it's very interesting to get to see some of the underlying design decisions that inform how we work with computers today.
ChatGPT would be a great used car salesman
It would also be interesting to show how ChatGPT handles simple code requests that could be packaged together to make a working program (e.g., "code a timer that counts 30 seconds and puts a 1 into memory in C64 assembly," "code a program in C64 assembly that, if memory address [whatever] is 1, moves the cursor to the next line"). Then review its errors, see if they can be corrected, look at the better version, and talk about your thoughts. Then examine various ChatGPT requests that increase in levels of sophistication.
Make sure you include a JMP to $FCE2 while you're in there! 😉
Thank you for the great video!
I wish I had a teacher like you back in the day.
Best regards.
I've seen some terrible sci-fi movies in my mis-spent youth. I wrote a PCB design program that runs on the BBC Micro, with just 32KB of RAM less 10KB for the frame buffer and the Machine Operating System and BASIC swallowing another 3KB or so. (It uses the MOS graphics routines. Unfortunately, that doesn't bode well for a C64 port .....) It takes in a wiring list representing the circuit schematic with comments indicating what PCB footprint to use for each part, has a graphical editor where you can position components and route tracks between connected pins, and can produce a suite of Gerber and drill files suitable to send to a PCB manufacturing service. And I had heard about a gorilla who kept a pet cat. So naturally, I tried chatting with ChatGPT about programming the 6502, in the hope that it might decide it wanted a pet 8-bit micro of its own .....
Anyway, not mincing words or anything, it sucked.
I asked it about adding 2 numbers, and it gave me some code that added two *completely* *different* numbers than the ones I asked about. Then, as a test to see if it knew any common 6502 programming idioms, I asked it how to shift the accumulator left by one bit and set the least significant bit to 1. It wrote:
CLC
ROL A
ORA #$01
taking up twice as much space as, and many more clock cycles than
SEC
ROL A
I didn't bother testing whether it knew about JMP instead of JSR followed immediately by RTS.
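A quick way to convince yourself the two sequences really are equivalent is to model the 8-bit rotate in a few lines of Python (just a sketch of the accumulator behavior; the real carry-out flag, which both versions set identically from bit 7, is ignored here):

```python
def rol(a, carry_in):
    """Model 6502 ROL A: shift left one bit, carry_in enters bit 0."""
    return ((a << 1) | carry_in) & 0xFF

# CLC; ROL A; ORA #$01  versus  SEC; ROL A
for a in range(256):
    long_way = rol(a, 0) | 0x01   # CLC clears carry, ORA sets bit 0
    short_way = rol(a, 1)         # SEC sets carry, ROL shifts it into bit 0
    assert long_way == short_way

print("equivalent for all 256 accumulator values")
```

The short version wins because ROL already has a "set bit 0" input: the carry flag.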
It's the sort of thing that impresses people who are not as smart as they think they are.
When I saw the name Jim Butterfield I let out a little gasp. He was a legend in the Commodore programming world. I wish I still had his books including the Assembler book with the characters on a beach.
Yes, I'm also a big Butterfield fan, and I've done a video about his book here: th-cam.com/video/OirYXXnCVlo/w-d-xo.html
And my most recent video looked at his Commodore Diary pocket book.
Back in the old days, I would write a program that consisted mostly of BASIC, but you'd have a couple of routines that would be slower than a snail through jello, so you'd write those routines in assembler, convert them to DATA statements, use BASIC to stuff them into memory, and then run them as needed. Today we can still do that kind of thing: write a program in Python, write the slow stuff in C, then call the C library from Python as required.
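As a tiny modern analogue of that BASIC-plus-ML pattern, here's a hedged sketch using Python's `ctypes` to call a routine from a C library. It assumes a POSIX system where `CDLL(None)` exposes the already-linked C runtime; on Windows you'd load a specific DLL by name instead:

```python
import ctypes

# Load symbols from the C runtime already linked into this process
# (POSIX-specific convenience; a real project would load its own .so/.dll).
libc = ctypes.CDLL(None)

# Declare the signature of C's abs() so ctypes marshals arguments correctly
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

# The "fast C routine" called from the "BASIC" (Python) side
print(libc.abs(-42))  # prints 42
```

Same division of labor as the old DATA-statement trick: the high-level language drives, the low-level code does the heavy lifting.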
Loved this, very simple explanation of how to do this on C64. Whenever I write assembly code for a new system my 1st program is always to clear the screen to a certain color, then I make a hello world program. On most systems I do not have a font available to me in memory to use, so usually have to make my own custom font for a hello world program (which always teaches me how a video display stores pixel graphics on a given system). Thanks for your cool videos, always like seeing a new one turn up in my recommendations =D
I'd love to see that ChatGPT C64 video
I would love to see a follow on video where you compare the differences in the kernel based print routines in the other 6502-based computers of the era (Apple II, Atari 8 bit, etc.) - assuming that there might be interesting differences to talk about.
As much as I admire and respect the late Jim Butterfield, I prefer monitors that break out the SR flags into separate columns and use an easy-to-remember SYS address. My recent favorite is StringMon because it does both, as well as allowing scrolling in both directions, like many convenience cartridges from back in the day.
I totally agree about the SR flags. The reason for the strange SYS addresses is actually pretty clever: SuperMon automatically figures out where the top of BASIC RAM is and installs itself right up there, correctly modifying BASIC pointers to protect itself so that BASIC can be used fully as well, with no wasted memory, and the largest possible contiguous block for BASIC. This feature was especially useful for the PET and VIC-20 versions with their varied RAM expansion capabilities, and it carried over to the C64. The downside is the strange SYS address.
For no particular good reason, I wanted to write a program in Microsoft BASIC II for the Atari 800 that evaluated a poker hand. I could find nothing online to use as a starter so I tried ChatGPT just to see what it could do.
I had the same experience that you did. It actually presented some helpful code. But there were entire sections of the code that did nothing. And all over the place there were errors. In retrospect, it probably would have taken less time to figure it out from scratch and write it myself.
But it was an interesting exercise. It's funny how ChatGPT apologizes for its mistakes (sometimes actually correcting them, sometimes not). And often, after the apology, it will just keep making the same exact mistake. After several iterations of this, I knew that I had gotten all I could out of it and went on my way.
So, yes, a video showing ChatGPT would be interesting, probably helpful, and almost certainly amusing.
I always enjoy your videos where you go into coding on the Commodore.
I tried using ChatGPT to jumpstart a project that I was transitioning into Python. I broke the project down into smaller routines, as I knew there was no way it would be capable of doing everything at once. The code it generated was garbage, and I wasted way too much time going back and forth with it and never got anything usable.
I really like that you demonstrate multiple ways to do something
wow! How I /wish/ my EE ASSY class - MANY years ago - leveraged C64... This is so cool!
It might bear mentioning that, besides saving a byte of program code, another advantage of using BNE instead of JMP in the print loop is that it makes the resulting ML relocatable (excluding the printed string itself).
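The relocatability point comes from BNE encoding a signed 8-bit displacement rather than an absolute address. A small Python helper (illustrative only; the addresses below are made up, not taken from the video) shows why the encoded byte doesn't change when the code moves:

```python
def branch_operand(branch_addr, target_addr):
    """Compute the operand byte of a 6502 branch (BNE, BEQ, ...).

    The displacement is measured from the address *after* the 2-byte
    branch instruction, and must fit in a signed byte (-128..+127).
    """
    disp = target_addr - (branch_addr + 2)
    if not -128 <= disp <= 127:
        raise ValueError("branch target out of range")
    return disp & 0xFF

# Hypothetical loop: BNE at $C009 branching back to $C002...
print(hex(branch_operand(0xC009, 0xC002)))
# ...and the same loop relocated to $1000: identical operand byte
print(hex(branch_operand(0x1009, 0x1002)))
```

A JMP, by contrast, would bake in the absolute 16-bit target, so the code would need patching after a move.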
I learned 6502 assembly language on a SYM-1 shortly before the C64 was released. While this video was nostalgic, I could not sit through the entire thing.
I used the PAL assembler for the C64. A *fantastic* assembler. I also received my copy of SUPERMON directly from Jim Butterfield back at a Commodore Expo in Valley Forge in 1986. He and I were both speakers at the expo. I enjoyed the C64 for several years and made a lot of money from it. Anyone remember LOADSTAR? 😉 From 1986 through 1990, I practically lived in the GEOS environment.
For anyone wondering what the point of a BASIC loader would be (given that it's just being used here to do something BASIC can already do way more simply): this sort of thing is usually used when people are writing a program mostly in BASIC but need to incorporate a machine language routine in order to make something run faster. Or when the whole program needs to be machine code but they don't have a monitor or assembler handy; this is what magazines would do for their type-in programs, as they couldn't assume their readers had anything besides the built-in BASIC editor.
I started programming BASIC in 1979 and machine code around 1982/1983, but I didn't get my first assembler until at least 1993. I did all my machine code programming by hand-assembling and poking decimal bytes into RAM. I was definitely not the only one, though maybe I was the only one that went without any assembler for a decade (-:
Robin, per the book by Jim Butterfield "Machine Language For The Commodore 64", which I have a copy I bought in the early '90s, Jim states that if you want to restart the monitor from BASIC after the first launching, you can just type SYS 8 within the BASIC environment. This does work in VICE and on a real C64 for me. It's a LOT quicker and easier to remember than the SYS 38xxx number it generates at the start of the monitor.
Aha, that's an old PET trick. Location 8 usually contains a zero, which is the opcode for BRK. So a SYS 8 executes that BRK opcode, which jumps through the vector at $0316-$317 (locations 790-791 decimal) which SuperMon by default redirects to itself, I think. So yes, that should work as long as locations 8, 790, and 791 aren't changed after starting SuperMON.
@@8_Bit I feel I should also say here that the book does mention SYS 4 applies to some Commodore models, but the C64 is definitely SYS 8. Maybe SYS 4 for the 128? I'd have to go grab the book.
@@ericjumpelfeaturingjohnnyj4135 Yeah, it'll just be wherever a zero is normally located in low memory on a particular Commodore model. There's nothing particularly special about the SYS 4 or 8 or whatever; it's just a shorter number to memorize that has the effect of executing a BRK opcode.
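A toy Python model of why SYS 8 works, using the addresses from this thread (the $C000 monitor entry point is a made-up stand-in, since SuperMon installs itself wherever the top of BASIC RAM happens to be):

```python
mem = bytearray(65536)

BRK = 0x00          # the 6502 BRK opcode is zero
mem[8] = BRK        # location 8 normally already holds a zero

# SuperMon redirects the BRK vector at $0316/$0317 (790/791 decimal)
# to its own entry point; $C000 here is just an example address.
mem[790], mem[791] = 0x00, 0xC0   # low byte, high byte

def sys(addr):
    """Very rough model: executing a BRK jumps through the vector."""
    if mem[addr] == BRK:
        return mem[790] | (mem[791] << 8)   # little-endian vector
    raise NotImplementedError("only BRK is modeled here")

print(hex(sys(8)))  # lands at the monitor's re-entry point
```

So SYS 4, SYS 8, etc. are nothing magic: any low-memory address known to hold a zero will do the trick, as long as the BRK vector still points at the monitor.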
Hello again, Robin. At the time I watched your video and commented (apparently 5 months ago), I was working on a SID 6581 project and was heavily immersed into Butterfield's book. I have, since that time, completed my code to a place that it's usable software, and have created a music video showing it in use. I wanted to say thank you for your video and the reply. I don't exactly remember what it was I needed to learn when your video surfaced in search results, but I really enjoyed watching your "hello world" demonstration.
Here's my finished song and video (timestamped to the spoiler) of me playing my creation that began life just about six or seven months ago. Hope you enjoy the song and SID usage:
th-cam.com/video/RJ4cpCT_-e8/w-d-xo.html
this is a really great primer! thank you for making it and sharing your knowledge.
Really enjoyed this Robin, great video. Thanks 👍
Very interesting. Back in the day when I had my first C64, I found the code for Supermon in the back of a magazine...I forget which one. I wanted to start learning machine code. Unfortunately the program had many pages of just data statements and your eyes go batty. To add insult to injury something killed a controller chip on the motherboard and I lost 2 full columns of keys. Needless to say, I never got it typed in...
20:58: The appearance of C013 and C014 really threw me here. I guess that's just whatever was there in memory, because it prints 12 lines at a time?
Thanks for making this nice and helpful tutorial. It was useful.
First time I've heard Garbage Producing Technology - gold, at least in this instance. Nice work. Assembly is such a fundamental language. Everybody learning to code should learn at least some of this critical language.
Mighty funny how I came across this. I'm fortunate to have an emulator. I didn't know STYLE did that. WOW!! Oh, I'm WARNOCK...and I'm back. LOL I made snapshots of all the work I did for Style, Arson, RPG. I'll be releasing a solo soon. Look for PROGRESSIVE...the never released demo......until now. Everything is coming back to me. I'm using Vice and was wondering about Relaunch64 for the emulator. I use TMP and/or Turbo Assembler. Good Job!! 😁
There was no assembler in the ROM of Oric machines... But there was an opcode reference at the end of the manual; that's how I learned to assemble by hand (fortunately, there's a button for a cold reset under the machine). Negative relative jumps were a nightmare. I wrote a disassembler at the time...
Using BNE/BEQ to jump is limited to a relative range of -128 to +127 bytes, while JMP/JSR take full 16-bit addresses. Sometimes there's no choice...
The low byte and the high byte of the address are reversed because it's "big endian". Intel processors do that too, while Motorola (e.g. 68000) is "little endian". There is no "good" choice.
Now, for the program itself, the funniest one to do is to PLA the return address in your subroutine so you can JSR to your routine, followed immediately with your text (0-terminated) and then the rest of the code. Just for the sake of making self-modifying code full of binary arithmetic (A, X and Y are 8bit, return addresses not so...). Something like:
JSR $C000 ; print string subroutine
.text "HELLO WORLD",0 ; the string with a zero at the end
RTS ; return to BASIC
*=$C000 ; arbitrary, can be anywhere
PLA ; JSR pushes (return address - 1) high byte first, so PLA pulls the low byte first; the order of STAs below is correct
STA $C010 ; character's address, low byte
STA $C020 ; jump address, low byte
PLA
STA $C011 ; character's address, high byte
STA $C021 ; jump address, high byte
...
*=$C00F ; (or anything, to be calculated)
LDA $0000 ; this address is self-modified
PHA ; push the character code, need to increment counters first, will evaluate later
INC $C020 ; increment jump address, low byte
BNE $C0xx ; INC never sets carry, so skip the next INC unless the low byte wrapped to zero (address to be calculated)
INC $C021 ; carry into jump address, high byte
INC $C010 ; increment character address, low byte
BNE $C0xx ; (address to be calculated)
INC $C011 ; carry into character address, high byte
PLA ; get back the character
BEQ $C018 ; go to exit code if null (address to be calculated)
...
; actual display routine, either calling the routine in ROM or writing directly to the screen
...
*=$C01F ; (or anything, to be calculated)
JMP $0000 ; this address is self-modified
Enjoy that version of "Hello World"! ChatGPT will never suggest anything like that, and it's a nightmare to disassemble when text is mixed with code.
The Sinclair ZX Spectrum also had an opcode reference at the end of the manual and was one of my main sources for learning machine code. I also learned from lots of magazine articles. I wrote several disassemblers too, in BASIC.
The 6502 and Intel are little-endian; Motorola 68k and PPC are big-endian. You switched them around. :)
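To make the little-endian point concrete, here's how a two-byte 6502 operand lays out in memory, using the KERNAL CHROUT address $FFD2 as the example (Python sketch):

```python
import struct

addr = 0xFFD2  # the KERNAL CHROUT routine

# 6502 (like Intel) is little-endian: the low byte comes first in memory,
# so JSR $FFD2 assembles to the byte sequence 20 D2 FF.
low, high = addr & 0xFF, addr >> 8
print([hex(b) for b in (0x20, low, high)])

# struct's "<" (little-endian) format confirms the byte order
assert struct.pack("<H", addr) == bytes([low, high])
```

Neither order is "better"; it's simply a convention each CPU family picked, and it's why addresses look reversed when you read raw bytes in a monitor.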
Commodore's full-screen editor was so much better than the line editor on the TRS-80; however, I did get quite good at using that line editor.
Having turbo assembler built in to the snapshot cartridge “back in the day” would have been pretty awesome…!
Yes to chat gpt code vid❤ also part 2 of this??
'"Hello World" on Commodore 64 in Assembly Language' ... a 30 minute video. I think that says it all ;)
Fundamentally, all GPT does is statistical analysis. It has a very large bank of data on which its generations are based; when the user prompts it for something, it generates what that data suggests is the most likely response to the query. Something exceptionally specific, like writing "hello world" in 6502 assembly for use on the Commodore 64, is a perfect example of how to "break" a model like this: there's very little data in its bank relevant to such a specific query, but a lot of data indicating how such a response should be formatted, so it generates a response that is well formatted but makes no sense.
So it's basically a bullshit generator.
No it doesn't actually. It's a common pattern that people repeating how GPT doesn't "understand" and just repeats what it's read actually don't understand what GPT really does but just repeat criticisms they read elsewhere. Understandable since it's not easy to understand what GPT is really doing.
It actually builds an abstract model out of its training data and draws from that at inference time. People assume it has a gigantic text "bank of data" inside. It actually has a gigantic (but less gigantic) batch of "weights" and "biases" that all of the text was abstracted into. It's much more compressed than the original due to this abstraction. If you've heard of "deep learning" and "hidden layers", the abstracting is done in the hidden layers, and the more layers, the more it can abstract. But it can't do it as well as we can, and it only works in an approximation of the way our brain does. Since abstracting is also generalizing, that's where certain classes of mistakes come from, such as hallucinating library and API names that look plausible but don't exist.
22:23 confused me, because I had already forgotten that you'd earlier used JSR $FFD2 to clear the screen with character 147 ($93), so when you said _instead of calling JSR $FFD2,_ I went wait, that's right there. That took me a moment.
I did some videos on zig recently.
For the second video I wanted zig to create NES Game Genie codes.
So I wrote it up and then I asked ChatGPT if it could improve my code.
It regenerated it, and I saw syntax that made me go: "that isn't Zig, right?! But I've been doing Zig for 2 weeks, so what do I know."
I compiled it and suddenly had 12 compilation errors. I told it that this wasn't Zig syntax, and did that 3 times; it only got worse.
So I figured let’s just give it my bit wise operations to improve… nope…
Oh then the print code, that should be easy.
And a game genie code is basically a hex code that replaces the hex digits with some characters.
So I removed my print code and asked it to print it out and gave it the translation list.
The piece of horse manure literally wrote a switch statement with 16 cases 😂 I tried to instruct it to avoid jumps when finding a character; it just didn't come to the conclusion that an array indexed 0-15, with the matching character at each index, is the cleanest and most efficient code.
It is so stupid and useless that I truly don’t get the Chat Jippity hype
I really like this vid, thanks dude. One to save. (the gpt bit was hilarious as well! :p)
Robin, did you ever make a video of how Jim could have written his monitor? Will you please load that and show us the BASIC part, and then show us the assem and direct ML parts?
Just a video covering Jim Butterfields monitor and comparing it to machine language and their relation to assembly would be cool. Obviously we got a taste of that in this video, but I feel it could be an entire video on its own.
@@mattks1001: What I mean is: If Jim wrote the first assembler for the C64, then how did he do it? (Chicken-or-egg scenario.)
@HelloKittyFanMan. probably just brute force with a list of op codes. Write it all out by hand, then figure out addresses for jumps and offsets for branching. Then poke it all in.
Now show us how to use indirect addressing. My brain turns to mush every time I look at it, for some reason. Love your vids, Happy Canada Day, Robin.
What turned my brain to mush back in the day was accessing values on the stack. But on the Z80 we had enough registers to avoid that and the 6502 community had what I believe is called the "zero page" to avoid it.
When you next cover assembly language, is there any way you could supplement the discussion with some kind of flowchart? With BASIC, it's somewhat human-readable so you can follow what's on-screen. With assembly, if the viewer is unclear what's happening, reviewing the cryptic on-screen code doesn't enlighten at all.
I recently bought a copy of the programmers reference guide just so I can follow along better.
Interesting video, and great to highlight the different ways to get the code into memory. I did laugh when you said you could modify the bytes directly before placing them at $C100 - my sense of humour automatically suggested JELLO WORLD instead of HELLO WORLD.
A good example of the BASIC loader method of entering machine code is the cheat programs printed in UK games magazines. These POKEd values into memory and then changed the loader code so that you could cheat at games.
And thank you for calling out ChatGPT - I have seen so many people say it is great for programming, but the evidence is overwhelming at present that it is still highly flawed and needs to learn more. (It's not artificial intelligence yet, it just has consumed huge datasets without the actual means of understanding it.)
Could you also cover the PAL assembler? I have been using it since it acts like BASIC, meaning I don't need to learn another environment while also learning assembly and BASIC. But it has some different syntax than Turbo Macro Pro or cl65.
This was a very interesting video. I really appreciate the Assembly explanation. Also, I'm not sure if the YouTube algorithm overheard you criticizing its country cousin ChatGPT, but this video did not show up in my feed. I had to search for it. This is the first time your video hasn't shown up in my feed. Hopefully it's a one-time issue.
Nice one Robin, another excellent upload, and one touching on an important topic: ChatGPT. I think a video on its use would make a nice reminder/awareness session for any programmer.
I like to have a laugh with chatgpt etc with the 'accuracy' of their answers. I recently asked it to multiply 2 bytes together, in 6502 assembly, and it miraculously went back in time and re-engineered the CPU to include a MUL instruction. When I questioned this it "apologised" and proceeded to give me another, "correct" answer.....which also included that fabricated instruction. Today, when I asked the same question, it now fabricates a "ROR NUM2, Y" instruction (Y indexing here is not legal, X indexing is), while blissfully proclaiming "Certainly! Here's an example of a 6502 assembly language program that multiplies two numbers together". There was a whole lot more wrong with the answer it gave, lol...or is that Rol :)
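Since the 6502 really has no MUL instruction, real code does multiplication with shifts and adds. Here's a Python model of the classic 8-bit shift-and-add routine (a sketch of the algorithm, not any particular published listing):

```python
def mul8(a, b):
    """Shift-and-add multiply of two bytes, 6502-style:
    test b's low bit (LSR would put it into carry), add the
    shifted copy of a into the result if set, then shift."""
    result = 0
    a16 = a          # a gets shifted left up to 8 times, so track 16 bits
    for _ in range(8):
        if b & 1:                  # low bit set: add in the partial product
            result = (result + a16) & 0xFFFF
        a16 = (a16 << 1) & 0xFFFF  # on the 6502: ASL low byte, ROL high byte
        b >>= 1
    return result

print(mul8(200, 150))  # 30000: a 16-bit product from two 8-bit inputs
```

On the real chip this is 8 iterations of LSR/BCC/ADC/ASL/ROL; no fabricated opcodes required.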
The fun part is in telling it it's wrong, but not where it's wrong and watching it get more and more into deeper merde. Google Bard blissfully bullcraps also.
The problem is, if you don't know the subject you might believe the answer. Lawyers in the USA were caught out recently when the judge noticed their case evidence included two 100% made up 'historical examples'. When questioned they admitted they used chatgpt. Truly shocking.
They have their fun uses but that's all they should be used for. True AI is something entirely different and very far away...we really don't want to go there.
I agree, in their current form chat AIs are just fun tools to kill time, but not for serious work (at least not accurate work). While they can spit out skeleton code, it takes a great deal of time to fix it so that it actually runs. smh
The Plus/4, C16, and C116 can print a string using JSR $FF4F with the text placed right after that JSR. The routine sets the PC to the first byte after the $00 at the end of the text string :)
Kinda wish you could give some commentary on the ChatGPT output. Like, wtf is it trying to do? Can any of it remotely work? curious!
In follow-up to what I just wrote below: you can use the one-pass assembler in "Monitor$C000" from the CBM "C64 Macro Assembler Development System" (1982) to add the text data directly into memory at $0360-$0364 as in the video. Then, counting the characters to display (11 = $0B), the code becomes...
.A 033C LDX #$00
.A 033E LDA $0360,X
.A 0341 STA $0400,X
.A 0344 INX
.A 0345 CPX #$0B
.A 0347 BNE $033E
.A 0349 BRK
The only difference from the BASIC version is the lack of a carriage return ($0D) before the 0-terminator in the text string.
Hello Robin! ;)
ChatGPT , gpt-4, and to some extent gpt-3.5-turbo WILL absolutely produce valid Commodore BASIC with some clever prompting, and most importantly, providing some few-shot examples. (I've been messing around with a python and OpenAI based 'PETGPT' that writes PET BASIC 4.0 code for a few months now. Sometimes it even insists on generating assembly!) Another game changer besides providing few shot examples is telling it not use line numbers (use {l:semantically_relevant_labels} instead) and ignore short variable name restrictions. I then use a post-processor to replace labels with line numbers, and OpenAI's text-davinci-edit-001 to intelligently replace long variable names with 2 character names.
edit: I should add, I don't know a bit of python. ChatGPT and gpt-4 are writing the entire program!
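A post-processor along the lines described above might look something like this hypothetical sketch (the `{l:name}` syntax is from the comment; `resolve_labels` and the numbering scheme are my own illustration, not the actual PETGPT tooling):

```python
import re

def resolve_labels(src, start=10, step=10):
    """Turn {l:name} label definitions/references into BASIC line numbers."""
    lines = src.strip().splitlines()
    targets = {}
    numbered = []
    # Pass 1: assign a line number to every line; record label definitions
    for i, line in enumerate(lines):
        n = start + i * step
        m = re.match(r"\{l:(\w+)\}\s*", line)
        if m:                       # line starts with a label definition
            targets[m.group(1)] = n
            line = line[m.end():]   # strip the label from the emitted line
        numbered.append((n, line))
    # Pass 2: rewrite references like GOTO {l:name} to the assigned numbers
    out = []
    for n, line in numbered:
        line = re.sub(r"\{l:(\w+)\}", lambda m: str(targets[m.group(1)]), line)
        out.append(f"{n} {line}")
    return "\n".join(out)

print(resolve_labels('{l:top} print "hello"\ngoto {l:top}'))
# 10 print "hello"
# 20 goto 10
```

Two passes are needed for the same reason assemblers need them: a GOTO can reference a label defined later in the source.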
Here's some output just from prompting and examples, before I started implementing the {l:labels} and long variable 'filter':
Hello! I'm PETGPT, a Commodore PET expert. I'd be happy to help you with any questions or programming tasks related to the PET. Just let me know what you need assistance with!
User
Can you write me a simple, concise program that takes an angle, distance, character code from the user, and plots a line on screen starting from the origin (center of screen) at that angle with that distance using that character. Loop without clearing screen.
Sure! We can do that with some simple trigonometry. Let’s think through the program step-by-step before writing:
Declare necessary and useful variables. Screen memory, screen width and height, radian conversion factor, screen center, character.
Clear the screen.
Move cursor home and get user input angle, distance, character.
Convert the angle to radians
Loop through radius values from 0 to the given distance - calculating the new x and y coordinates based on the angle and radius, and pokeing the screen memory with character value.
Loop to move the cursor home and get new input.
Here's the program:
```basic
10 sm=32768:sw=40:sh=25:rc={pi}/180:x0=sw/2:y0=sh/2:c=42
20 print "{clear}"
30 input "{home}ang, dis, char:"; a,d,c: a = a * rc
40 for r=0 to d
50 x=x0+int(r*cos(a)+0.5):y=y0-int(r*sin(a)+0.5)
60 poke sm+y*sw+x+x0, c
70 next r:goto 30
```
@@wesleyswafford2462 Yes, prompts, context, and examples are everything. gpt-4 in particular has a lot more knowledge about Commodore computers hidden in its... matrices, latent space, whatever, than it will let on just from giving it a cold prompt. It's not uncommon for it to get memory addresses and petscii codes correct without an explicit reference!
That's all fine, but I've seen actual beginners go to ChatGPT looking for good examples of C64 code and get the garbage I showed on screen. And the thing is, the beginners don't know it's garbage, so it's a complete waste. As I tried to make clear, this video is for them.
@@8_Bit I wonder what would happen if you asked ChatGPT to teach you how to program rather than asking it to write a program for you. I tried it once to teach me a foreign language and it was surprisingly good, but I didn't stick with it for long to see if it stayed good or went off the rails.
28:50: They found decimals so nice, they printed them twice.
That was a great intro, thanks for that!
I sometimes let ChatGPT write simple javascript code for websites. It's pretty good at that.
Probably because the internet is full of other people's Javascript for it to plagiarize. Not quite as much 8-bit micro source code floating around.
Hey that’s me. :)
Awesome video and thanks for making it!
Great video Robin! How about just collecting your various examples into a tiny game template? I’m planning on referring to your demonstrations for joystick controlled sprites, background interrupt stuff, BCD for scores etc to try to make a game. A terrible game, but only because of my lack of abilities. 😀
ChatGPT output at 1:10: I almost wet myself when I saw a CMP #0 in the code. Rookie alert.
Great video. In the last example, How does the print string routine know where the end of the string is?
Great question, I was thinking the same. I think BASIC line 10, when stored in memory, has a byte of 0 to designate the end of that BASIC line of code, and the print-string routine must be using a byte of 0 to terminate as well, just like in the earlier example.
Yes, I'm sorry I didn't explain that in the video. Each line of BASIC already terminates with an unseen 0 byte, so it ends up marking the end of the string as well.
According to Mapping the C64, the AB1E routine prints the string until it runs into a zero byte. The way BASIC lines are stored, they end in a zero byte. I don't know if the developers of Commodore BASIC planned it this way, but Robin was taking advantage of the stored zero byte at the end of the BASIC line also serving as the zero byte that terminates the string print routine.
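The behavior being described, scanning memory from a start address and emitting characters until a zero byte, can be modeled in a few lines (an illustrative sketch of what a routine like $AB1E does, not its actual ROM implementation; `chr()` stands in for PETSCII output):

```python
def print_string(mem, addr):
    """Model of a zero-terminated print loop: emit bytes until $00."""
    out = []
    while mem[addr] != 0x00:
        out.append(chr(mem[addr]))
        addr += 1
    return "".join(out)

# "HELLO WORLD" followed by the implicit 0 byte that ends a BASIC line;
# that same 0 conveniently terminates the print loop.
mem = {0xC100 + i: b for i, b in enumerate(b"HELLO WORLD\x00")}
print(print_string(mem, 0xC100))  # HELLO WORLD
```

Without that terminator, the loop would happily march on through whatever bytes follow, which is exactly why the video's routine would otherwise have printed the rest of line 20.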
0:46 Generative AI reproduces the aggregated pattern of the content it's scanned, which is close enough for the purpose of "art", but not for something that needs to be rigidly correct.
1:25 I'd like to hear more about ChatGPT. "64-Bit Show And Tell".
14:20 Commodore should have used something closer to plain ASCII from the start, and they should have just used the character codes for the Screen Codes. I suppose Commodore wasn't the only one who assumed lowercase characters were unimportant.
16:42 There'll be a problem with LDA #$93 : JSR $FFD2 on the ROM version that fills the screen foreground color cells with the background color.
24:29 Oh yeah, there's an implicit $00 at the end of the string! The opening quotation mark is also important to avoid the tokenization of «OR».
28:42 If you use the SuperChart for decimal codes, you might have to search the whole table for the operation and addressing mode you want.
Yes, I believe the original Apple II didn't have lowercase at all.
You're right re: the screen clear; it's KERNAL v2 that has that problem.
D'oh, I meant to mention that implicit $00 that marks the end of each BASIC line, and serves as a handy string terminator for us here too.
That's true about the SuperChart. Neither table (SuperChart nor the Programmer's Reference Guide's) is ideal for this "hand" assembly in BASIC idea.
@@8_Bit Yep neither the original Apple II nor the original TRS-80 model 1 had lowercase. The Apple IIe was the first model to get lowercase - in 1983! The TRS-80 model 3 got lowercase in 1980. Those were the machines I learned to program in BASIC. I didn't start machine code and assembly until I got my ZX Spectrum a couple years later.
Just a note about the version that writes directly to 1024: that code will not show anything on the original C64 (about the first 3 years of production), because the background colour and letter colour default to the same value.
Close, it's actually only KERNAL version 2 where the POKE 1024 won't show anything unless accompanied with a corresponding write to 55296. KERNAL v1 will display the character in white. As far as I know, KERNAL v2 was only in production for about a year before it was replaced with the final v3. I do a quick demonstration of all 3 revisions about 5 minutes into the video titled "43-Year-Old INPUT Bug Fixed: From C64 to VIC-20".
While this is a bit of a concern, and I have talked about it in a few videos before, it would be very uncommon nowadays to run into this and I didn't want to clutter up this video with all kinds of caveats. But I'm glad you know about it!
I met Mr Butterfield at a World of Commodore show in the 1980s.
He was a really special guy. I was fortunate enough to meet him a number of times in the early 2000s before he passed away.
This is where BBC BASIC wins: you could incorporate assembly language into a BASIC program without having to access a separate assembler. It was very neat.