With the film database, the line 120 error was your fault: you had missed the closing quote on the INPUT line. The ChatGPT code had it. But the reset would have happened regardless, because of the CALL lines. They're effectively CALL 0 commands, which jump to the reset vector. The code is trying to define and call the subroutines using some other BASIC (or BASIC-like) dialect's syntax. The CALLs should be replaced with GOSUBs to the line numbers of the subroutines (where the SUB lines are; that SUB syntax is also wrong for the CPC).
I tried it on QB64, which is 100% Quick Basic 4.5 compatible, and I couldn't get it to run. QB64 appears to support the SUB and END SUB commands, but I kept getting an error. That program had far bigger concerns, though. There was no structure to the database at all. There was no effort to track the current record or the last record, or to save or open a file. Just a total fail.
Some of what the AI produced looked more like pseudocode than BASIC. And does that BASIC actually accept comments on the same line as code? If so, does it use the ; or the ' ?
@@javabeanz8549 In one of the old versions of BASIC I used to use, the ? character was a shortcut for PRINT. I know they had a shortcut for REM, possibly !, but I am not sure.
@@Robert08010 There were a lot of tokenized BASIC versions out there. I had a Tandy PC-1 pocket computer with tokenized BASIC, and I think it used the backtick for comments and the question mark for PRINT (it had a whole 1.2KB available for your BASIC program). The C-64 had some shortcuts as well. Seems like the Atari had some too, but it's only been 35 years since I last coded anything for one.
I used to write to files on the TRS-80's floppy drive via TRS-DOS and use it for a crude database way back then. I still have the floppies, lol!
Old, but not obsolete.
My dad never invested in a Model 3. He had Level II BASIC with tape, which is how I learned to code. I still have my tapes of my programs. Now that my dad has passed, I found the boxes with his TRaSh-80 and plan on setting it up to see if I can get anything to load. I typed in a lot of programs from SoftSide Magazine back in the day.
Usually, programs were typed in lowercase on the Amstrad CPC. That way, it was easier to notice some syntax errors: your code remained lowercase, whereas recognized tokens were uppercased by the BASIC when listed. For example, the endif would have remained lowercase, since it is not an available token.
Back in the day, I asked ChatGPT to create a MSX Basic program to draw a circle on the screen, and it worked fine. If you ask the exact same thing TODAY, it looks like ChatGPT has gone dumb for some unknown reason.
Maybe it had the touch of the Mondays. :)
The UK picture on the CPC clearly shows a map of the British Islands at night during a countrywide power outage. But I could be a bit mistaken, as my map of Moscow looks the same...
😆
This pretty much mirrors my own experience with ChatGPT and code. It makes simple syntax errors, struggles with basic geometric shapes, and will always agree with you if you point out an error - but will never actually understand the error you're describing.
It's a Large Language Model, not a General Artificial Intelligence, and I think more and more people understand it now that the initial novelty faded away.
have you even tried gpt4? if all you've used is 3.5 i'd expect nothing less...this video used 3.5 too and it sucked as much as i expected. gpt4 would likely do considerably better.
also, this tech is so young, to think this won't improve significantly over the next 10 years is pretty naive.
Most people who have never used ChatGPT before (or any of its rival versions) will use the free version before diving into monthly paid subscriptions.
I may do a less-sucky version of this video using 4.0 in the future.
@@Cara.314 Honestly I'd rather see other approaches developed - ones emphasising actual _understanding_ of concepts, rather than informational regurgitation. That is not an impossibility for machine learning - it just isn't something a Large Language Model as a concept is designed for.
@14:14, you need to close the quotes before the semicolon:
120 INPUT "Choose an option (1-4)"; choice
Just for clarity, "; choice" wasn't a comment. It was the variable that would receive the result of the INPUT statement. The interpreter didn't see it as a variable because you forgot the closing quote.
But the program is not likely to run for a very different reason: the design of the database has major flaws. When you DIM a new variable (in most versions of BASIC) you don't tell it how many characters you want; you tell it how many instances of that variable you intend to have. Seeing DIM Year$(4) was a dead giveaway that this was not designed properly. You would decide how many films you want your database to hold (say 50 or 100) and then DIM each variable with that number of film titles and years and genres, so your DIMs would be DIM Film$(100), Year$(100) and so on.
But that is not the worst issue. When it goes to add a film, it never checks for the next available location in the array. In fact it forgets to reference the array at all, so each time you add a film you are just overwriting the first entry. To have a valid database, you have to read in a file (or at least check for a blank file) and see how many entries are in the file before you offer to add a new entry or anything else for that matter. As you read in the file, you count how many entries it holds, and then you have to track your total number of records and which is your current record. It's not very complicated. But you're right; this program was a big fail, and not just because of your typo either.
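Not tested on a real CPC, but here is a minimal sketch of that record-keeping idea in Locomotive-style BASIC (the line numbers, the 100-film limit and the variable names are just made up for illustration):
10 maxfilms=100
20 DIM film$(maxfilms),year$(maxfilms),genre$(maxfilms)
30 nrecords=0 ' how many films are stored so far
40 INPUT "Add a film (y/n)";a$
50 IF a$<>"y" THEN END
60 IF nrecords=maxfilms THEN PRINT "Database full":END
70 nrecords=nrecords+1 ' next free slot in the arrays
80 INPUT "Title";film$(nrecords)
90 INPUT "Year";year$(nrecords)
100 INPUT "Genre";genre$(nrecords)
110 GOTO 40
A real program would also need to write the arrays out to a file and read them back in, and track the current record, but the record counter is the part the generated code was missing entirely.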
Thanks for your response. I think it demonstrates the importance of human input & knowledge, and of not being too reliant on A.I. such as ChatGPT.
@@TheRetroStuffGuy It also demonstrates the potential shortcoming of current AI tech.
IDK, but Artificial Intelligence (AI) always seems to come along with a massive hype train and then fall somewhat short of the promised outcome.
Advances have certainly been made in getting an AI that can handle natural language input, but I think rather old school algorithmic approaches probably produce better results when the knowledge domain is narrow and well defined.
ChatGPT is probably better at producing boilerplate-esque language like resumes, cover letters, and introductory text than it is at other tasks,
e.g. creating halfway decent code in a particular programming language that is both functionally and syntactically correct.
People might be better off asking it for pseudo-code for the task they want to complete and then go write the actual code themselves. Or maybe asking it for a particular construction in code (a six-case switch statement on integers/strings) if they find typing it too tedious.
@@jnharton That's the reason I never call these AI, even when nearly everyone else does and even though it can be argued they belong into the category of AI research.
The initialism AI in today's world comes with the connotation of some kind of human-like intelligence, whether one wants that connotation or not.
But large language models don't live up to this connotation, so I just call them LLMs, as that's what they are, without any implication of intelligence.
@@uNiels_Heart Yeah, I agree with you. It's more in the realm of a simulation of speech than actual AI. It does not think in any way. But it is amazingly useful anyway.
@@Robert08010 Yeah, I agree. I sometimes use them myself (through Gemini, LMSYS Chatbot Arena and auto completion plugins for VSCode) and I guess we're only at the beginning of finding out in which areas they can be positively utilized.
I quickly entered the Film Database into a CPC 464 ... using MAME.
You can paste code using Shift+Scroll Lock [on Windows]
But you will need to pad the beginning with some blank lines to give you time to release the Shift key so the pasted code isn't SHIFTed!
Recently, I tried a few online browser AIs that claimed to do coding.
"Produce Z80 code for unsigned multiply of a pair of 16-bit values producing a 32-bit result."
This should have been trivial for them; there are MANY Z80 coding forums with examples of multiply routines, some highly optimized.
But these AIs just couldn't manage to produce working code.
Their first attempt used MUL instructions! So I had to tell them: "The Z80 CPU does not have a MUL instruction."
So they produced code that called a subroutine 4 times, which on the surface looked like it was on the right track ...but ...
"You are changing HL in the subroutine when the value in HL is still needed."
Several iterations of this went on:
[
I tell it that a part wasn't working
It produces code that fixes that problem (though not every time), while breaking something else
]
until I ran out of free-use quota!
Weird - someone on a discord I'm on was experimenting with an AI (not sure which one) for generating 6502 code, and it also conjured a MUL instruction out of nowhere.
@@twoowls4829 This just shows how the AI doesn't UNDERSTAND anything. "Artificial INTELLIGENCE" is just the wrong term to use! ... "Deep Learning" is better; There is no intelligence, it just outputs what fits the patterns from its training.
My feelings exactly.
@@twoowls4829 Maybe proof that it simply doesn't know what the valid instruction set and parameters are.
@@twoowls4829 Yeah, that's horrible. Not exactly out of nowhere, though, as they have it in their training corpus, of course. Probably from x86, as this is ubiquitous. They're not smart enough to properly compartmentalize the different instruction set architectures. The same shit can happen for high level languages. I once had a really abysmal exchange with Google's Gemini about C# concepts where it made up untrue things.
whoa i've never seen or heard of amstrad -- i'm digging this.
critique: you're going in the right direction, and the editing's fine especially for starting out. but the jumps in volume are harsh. even considering that, keep at it, i'm subscribed now so i'll be here letting you know how you're doing on audio 😛
I was able to have Copilot (preview) write a basic Asteroids game in Lua (Love2D) and Python (PyGame) all by itself. First, I had it help me write a game design doc, then used that doc to tell it what I wanted the game to do and how it should look. I then simply copied and pasted back and forth between Visual Studio Code and Copilot (preview) until the game was running without syntax and logic errors. It took maybe 10-16 hours altogether. Also, I made sure to have it use the graphics drawing features of Love2D and PyGame to draw all the game assets (player's spaceship, asteroids of varying sizes, UFO, score, lives, title, Game Over, etc.). It did get to a point where it wasn't able to code the leaderboard right, but it got the physics and collisions right. Overall, it was a good learning experience for me.
An Amstrad CPC lover in the wild! Instant subscribe, sir!
Thank you! I've got some dedicated Amstrad CPC videos in the pipeline.
Yes, great. I once had one but it died; luckily I'd kept my VIC-20 as a backup.
Knight Rider can be "fixed" on the Amstrad CPC by:
1) Remove the semicolon on line 90, as otherwise you get a syntax error. If you want to keep the comment you can either stick a colon and then a REM statement on the end, or put the REM statement on its own line.
2) IF -> THEN -> ELSE is limited in Locomotive Basic: you can basically condense lines 270 and 280 into one line on line 260 and then get rid of 270 and 280, otherwise it will drop straight through to the following lines. Also you need to do distance=distance+1, as IIRC LET doesn't allow you to alter variables this way, it's just to define them (not that you need LET anyway to set them up, or any need to define any variables at all on the Amstrad).
3) Line 290 can simply become "if option > 3 then GOTO 300 ELSE 380". This will have an unintended consequence: if you enter 0 or more than 3, your car will crash. As a result you don't need lines 340 or 360 (or line 350 for that matter, as it'll never fall through that far).
4) The WEND loop has nothing to check against, so it just ends. The easiest solution is to just say "WHILE a$=0", as it'll loop until something else changes this variable. As there is nothing to do that, it will never end. There is no significance to a$ being the variable, it's just something to check.
All that happens with this code as it stands is that you can just keep going forward and rack up a score; there is nothing to generate the obstacles, nothing to check for obstacles, and all turning left/right does is cause the car to crash. But that's what was generated, so... The bones of a starting point, but it still needs human input to put the meat on, I feel.
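For what it's worth, here is a rough, untested sketch of what that condensed main loop could look like with single-line IF ... THEN ... ELSE (the line numbers and the crashed flag are just made up for illustration, and there is still no real obstacle logic):
140 score=0:distance=0:crashed=0
150 WHILE crashed=0
160 PRINT "Score:";score;"  Distance:";distance
170 INPUT "1=forward, 2=left, 3=right";choice
180 IF choice=1 THEN distance=distance+10:score=score+5 ELSE crashed=1
190 WEND
200 PRINT "Obstacle ahead! You crashed!"
210 PRINT "Final score:";score
220 END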
a crash from questionable code and retype... the true BASIC experience.
Yep... it brought back fond memories of typing out code from magazines.
What about the closing " on line 120? (Text strings should have a beginning and an end.)
Yes - ChatGPT had specified the closing quote.
It probably doesn't like the string literal and semicolon before the variable. I'd use a separate print statement for the prompt and then the input statement. The two separate statements can be combined with a colon.
120 PRINT "Choose an option (1-4): "; : INPUT Choice
The semicolon here will prevent the cursor from going to the next line for the input statement.
@@SlideRSB I'm pretty sure this works in a number of BASIC implementations:
10 INPUT "Choose an option (1-4): ";C
@@jnharton Some BASIC interpreters work that way, but not all.
I am an avid GFA-BASIC 32 programmer.. Love this retro stuff !
I was playing around with it, producing QuickBasic code for DOS.
Been a while since I did any BASIC. I used to code on the Amstrad with a lovely assembler pack that stuck in the back - instantly there when you switched on - lovely. I would have always ditched BASIC and the entire ROM and directly coded the hardware. Anyway, a quick glance at the code suggests that CALL executes code at an address, it's not "gosub" - thus whatever is in your variable, probably zero if undefined, is where it's jumping to, and thus doing a nice reset as it's jumping to 0. Also, off the top of my head, the separator for INPUT is a comma not a semicolon, and the demarcation for a comment would be REM not a semicolon, which I can only remember being used that way in AutoIt.
The semicolon in BASIC is to have the cursor remain on the same line. Without it, PRINT or INPUT would do a CR/LF.
Autoit 🧐
Thanks for the pointers! I'll have another look through soon & give it a go.
@@davemillan3360 But not after an input... I think
@@geoffphillips5293 An INPUT is basically a PRINT statement with an additional get. The semicolon works together with the print part. That way you can decide to accept the input on the same line or on the next line.
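A tiny illustration of that trailing-semicolon behaviour (standard Locomotive-style BASIC, not checked on real hardware):
10 PRINT "Score: "; ' trailing semicolon: no CR/LF, the cursor stays put
20 PRINT 100 ' so this prints on the same line
30 PRINT "Score: " ' no semicolon: PRINT moves to a new line
40 PRINT 100 ' so this prints on the line below
50 INPUT "Name";a$ ' prompt, question mark and the typed reply all share one line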
Good work keep the grind on👍🏽
Thank you, I will
ChatGPT won't replace any programmers any time soon. 🤣😂👍
Chat GPT's attempt at the UK Outline program was laughable! ... 'outline' --> here is a bounding box! :D
ChatGPT is probably an American!
I use ChatGPT for writing code to run on my Atari 800XL. Using Altirra, the Atari emulator, you can copy and paste directly from ChatGPT into Altirra. Got it to write loads of stuff to help me write a game.
AI chatbots only seem able to create incredibly scuffed ASCII art. So I wouldn't judge their coding capabilities too heavily based on that.
I asked it to write a Cosine routine in RCA1802 assembler. If it can actually *think* as opposed to pattern match it should be able to do this. No clue whatsoever.
@@paulscottrobson If you asked it for an implementation of the CORDIC algorithm, it probably would give you sincos code (which you can then use to make secant and cosecant and tangent).
2:17 For a moment there I thought it had actually given you a straight answer 😆 The thing that annoys me most about ChatGPT is responding with an essay when yes or no would suffice.
you can tune the models to respond with one-word answers when one word will do. just specify as much in "Custom Instructions" under "Customize GPT".
you can also build other GPTs where you specify such things.
but something tells me you didn't know that and just assumed you have no control over how it responds.
@@Cara.314 No I didn't know that - thank you!
A lot of the time, I find that if I tell it what it didn't do, it will make adjustments to the code; if you are patient about asking and re-asking, it can get things done. It is horrible at understanding art or graphic designs, though. So I stick to text-based utilities.
Pure guess as I don't know Amstrad, but my guess is that AddFilm is zero in the beginning, so it's CALL 0 which resets the program counter and reboots the CPC...?
That makes a lot of sense. It saw the subroutine name as a 0 value variable. Sounds reasonable although I am not knowledgeable about Amstrad either.
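If that is what happened, the fix would presumably be to swap the CALLs for GOSUBs to real line numbers. A minimal, untested sketch of the pattern (the line numbers and routine stubs are made up for illustration):
100 INPUT "Choose an option (1-3)";choice
110 IF choice=1 THEN GOSUB 1000
120 IF choice=2 THEN GOSUB 2000
130 IF choice=3 THEN END
140 GOTO 100
1000 PRINT "Add film code would go here":RETURN
2000 PRINT "List films code would go here":RETURN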
Think Tank?
It's obvious now looking at it with fresh eyes. 👍
Looking at the Knight Rider program, my old memory of CPC BASIC detected a problem: multi-line IF .. THEN ... ELSE.
I just tested it using MAME with a simple test:
10 IF 0 THEN
20 PRINT "TRUE"
30 ELSE
40 PRINT "FALSE"
both of these lines produce a syntax error:
50 endif ' LIST did not capitalize this -> BASIC doesn't know it
50 END IF ' treated as END with garbage [IF] following
running the program outputs:
TRUE
FALSE
CPC BASIC only knows about the single line IF .. THEN ... ELSE
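For comparison, the condensed single-line form that Locomotive BASIC does accept (same test as above):
10 IF 0 THEN PRINT "TRUE" ELSE PRINT "FALSE"
which prints FALSE, as you'd expect.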
Regarding the database source code... yeah, I'm not sure why GPT is using a memory call address instead of a BASIC command. Indeed, the call address 0 (&0), which is the start of the ROM, gets executed like machine code, so it restarts as if you hit reset (a warm boot), rebooting the interpreter and clearing the RAM addressing. Takes me back - I used to develop software for that micro, in fact databases, but mostly for the 6128 and PCW line... I wrote a disassembler once in machine code for the CPC, launched within the CP/M system; cannot remember if it got a boot ROM cart??? So far back now...
It did what you asked. You asked for an outline that represented the UK, and ChatGPT decided to use the outline of a box to represent the UK. Try asking for a program that displays the cartographic outline of the British Isles.
You're 100% right here. ChatGPT will churn out exactly what you ask. Work on your prompts and you'll get your desired results. 👍🏻
@@samclacton Or become a prompt engineer.
You don't suppose it was saying all Brits are squares do you??!?!?! LOL JK.
Save the program before running it.
It would save time!
You could try GPT-4, it's way more consistent with a lot of things.
You might get significantly better results and scripts to use.
You are probably right!
I'm not even remotely a programmer but have found that feeding the flawed output from ChatGPT into Perplexity to fix it can make the really simple stuff I'm trying actually work. Don't know if that might be any use for gamer type things though.
Just noticed your code listing, so I ran it through Perplexity and this is what it said. The main changes are:
Changed ELSE IF to ELSEIF to fix the syntax.
Removed the unnecessary semicolon after the CALL &BB18 statement.
Formatted the code for better readability.
This code should now run correctly and display the Knight Rider game as intended.
10 PRINT "WELCOME TO KNIGHT RIDER"
20 PRINT "-----------------------"
30 PRINT
40 PRINT "You are Michael Knight, driving K.I.T.T., the intelligent car."
50 PRINT "Your mission is to navigate through the city and reach the destination."
60 PRINT "Be careful, obstacles may appear on your way!"
70 PRINT
80 PRINT "Press a key to start..."
90 CALL &BB18 ; Wait for a key press
100 CLS
110 PRINT "MISSION STARTED"
120 PRINT "----------------"
130 REM Game variables
140 LET score = 0
150 LET distance = 0
160 REM Main game loop
170 WHILE TRUE
180 PRINT "Score: "; score
190 PRINT "Distance: "; distance
200 PRINT
210 PRINT "1. Drive forward"
220 PRINT "2. Turn left"
230 PRINT "3. Turn right"
240 PRINT
250 INPUT "Choose an option: "; option
260 IF option = 1 THEN
270 LET distance = distance + 10
280 LET score = score + 5
290 ELSEIF option = 2 OR option = 3 THEN
300 PRINT "Obstacle ahead! You crashed!"
310 PRINT "Mission Failed."
320 PRINT "Score: "; score
330 END
340 ELSE
350 PRINT "Invalid option. Try again."
360 ENDIF
370 PRINT
380 WEND
Is that correct?
Thanks for that. I've not come across Perplexity before, but i'll certainly check that out!
You could probably get the same results without using AI though, since the syntax and grammar of most programming languages are pretty well specified.
Most modern IDEs can (using analysis of some sort) tell you when the code you wrote isn't valid or doesn't actually execute.
E.g.
if (false) { /* do something */ }
is perfectly valid code, but it will never be executed.
Yep, my experience was the same when I tried to get it to do some simple Quick Basic 4.5 code - that is, it kept giving me gibberish and nonsensical code.
In this scenario, ChatGPT is very much just gathering info online, scooping it into a bowl & spitting it out. It's certainly not "thinking".
But with that said, I have had some helpful use from it for some Python coding. Most likely because there is a wealth of info on Python online compared to BASIC.
I've built a whole web site with the help of ChatGPT (I even surprised myself a bit), but yeah, it's not so good at BASIC. I am a former BASIC power user with QB64, which is QuickBasic 4.5 compatible, and after several fails I gave up on ChatGPT. I only managed to get somewhere after I gave up on it. I mean, it may be useful at providing an outline, but that's about it.
@@TheRetroStuffGuy From what I've read, it's not so much "gathering" as it was gathered from the internet a few years ago, so its info may not be current.
I have found that ChatGPT works quite well for Python coding. Possibly the limitations (and many varieties) of old BASIC languages are too much for the poor AI!
I have had success with Python coding too. Nothing complicated, but good enough for my small project.
Tried to get examples for programming the ESP32 co-processor. Not too many examples available. I got only garbage. It was on the right track here and there but everything was far from a correct program, which was obvious without knowing anything about the topic.
Despite the enthusiasm around the ESP SoCs, I think most people end up using an Arduino-esque approach rather than really learning to code for the specific chip/platform.
The question is, can it write a universal OpenGL driver for newer Nvidia or ATI/AMD cards on Windows 98SE/ME, or improve KernelEx and SMP capabilities? That would be a worthy task.
The Knight Rider game looks like it should work like that in Microsoft QBasic (the INPUT command will ask for 1, 2 or 3), but I assume the Amstrad did not support structured control flow (IF without GOTO, or WHILE), so it stopped on the WHILE line with an error.
I'll check that out!
@@TheRetroStuffGuy Seems that my own reply got filtered for containing a hyperlink. The real reason (as validated in an Amstrad emulator) is that TRUE is 0 (as is every undefined variable), so WHILE TRUE is actually WHILE 0 and skips the rest of the program. Try WHILE 1.
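A quick way to see that on the CPC (a minimal sketch, assuming the usual Locomotive BASIC behaviour where an unassigned numeric variable reads as 0):
10 PRINT TRUE ' prints 0 - TRUE is just an ordinary, unassigned variable here
20 WHILE TRUE
30 PRINT "never reached"
40 WEND
50 PRINT "the loop body was skipped"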
I would say that's a solid first approximation of the United Kingdom.
you need to ask more detailed questions. think of it like speaking to someone in a dark library (one with real books), with only a small torch to read the books by; the library has no windows or doors, and the person in it has never known anything that was not in the library.
@2:34 The words read "Thank Tank". That appears to be the name of a donations website. Why it thought that was the logo for ghost busters is beyond me.
ya, gpt 3.5 sucks, to the shock of nobody. maybe try the more powerful models?
I did a Pong game with ChatGPT, written in plain C (compiled with gcc), and it works in the terminal.
2:26 It doesn't look right because the backslashes are escaped with a backslash. Replace \\ with \.
ChatGPT is sending a subliminal message. The UK is a bunch of squares. America is where the hip and cool people are.
Dude. You are way behind the curve. I used ChatGPT about a year ago to try and write a bit of code for my Apple ][e in 8-bit assembler and machine language, and it was able to whip it out in 3 seconds. I asked it to write code to change the screen color and cycle through the 16 colors available as fast as possible. It was amazing. It actually used a lookup table to do the dirty work. Very cool.
You are either a troll or not particularly bright. ChatGPT doesn't actually know how to write a computer program. If you are lucky it will show you an answer it has already been trained on. For a trivial problem, such as you posited, there are many examples available on the internet that would have been used to train ChatGPT. Which is why it was able to give you a reasonable solution.
No. AI is advanced algorithm analysis. It can only repeat what is being stored in its database.
I had pretty much the same issues with Python for Blender. ChatGPT claims to know it but does not. I suppose you could say it's mimicking human behaviour by claiming it can do something it cannot. We are just catching it out by asking it to do simple programming, and it fails every time! Oftentimes it uses functions and commands that do not exist in the version of BASIC or Python you are asking it to program for.
I have always said ChatGPT is NOT a real AI; a real AI would be able to do these simple tasks. It is just a glorified search engine that uses machine learning to interpret your requests.
Agreed!
Looks like today's AI isn't good at coding, songs, etc., but some day it may be. AI may make major changes, but it won't replace creatives.
So if I ask it to create a compiler for the C language with some new features, it's not going to help? Well, according to the hype industry and Nvidia, I thought it was all over for engineers.
Now try for Amiga OCS/ECS/AGA in AMOS, Amiga Basic, GFA Basic and assembly 😊
I really wonder what would be the result (bad 😂)
Drawing the UK as a square is actually correct....since BREXIT ; )
you didn't give it a chance to correct its mistakes.. when i use GPT for code i have to tell it a few times what is going wrong and it will correct it (most of the time)
I have done that with other projects, with success. I didn't include it in the video, but for the Knight Rider game & UK map, when I told it the code didn't work, it just rehashed variations of the same BASIC code.
Computers are STILL stupid! No worries of a "Terminator" yet! Or ever IMHO!
It's only a matter of time. 😀
I LOVE MY C64 ❤❤❤❤ FOREVER ❤❤❤❤
Such old BASICs don't have things like "sub" or "function"; you had to "gosub" to a line number and "return" after the code... ChatGPT seems to have mixed up all the BASIC dialects, thinking they are all the same. Just stupid AI...
Actually, some do. I don't know Amstrad Basic, but I still use QB64, which is Quick Basic 4.5 compatible. It fully supports that command, but the program glitched for several other reasons.
A lot of them do look fairly similar until you start trying to use them.
In some cases the syntax might be nearly identical, but the actual behavior may be different.
I did that.. not long code.. but it can..
it can even write different dialects and variations of Basic,
or convert Basic code into C or Python.. or Perl,
or convert Python code into Basic..
it does need a strong review though - sometimes i see.. flaws.. that are hidden or go unseen.
Drawing a square when asked to draw the UK, and you give it a pass? No, no, no! Seriously! Just NO!
Haha!
8:41 Modifying the program, ignoring that the very next command is CLS, and being upset that it does what you tell it to is a bit disingenuous, innit? Especially when you fixed the syntax error in the previous attempt instead of erasing the line. 😒
I would die of embarrassment asking a bot to write BASIC code for me. Asking a bot about some syntax, or what hex address in ROM a routine is at, then OK. 😊
Why ask a bot when you could just look it up and write it down?
@@jnharton well if it was a voice-activated bot I could imagine using this while in the middle of typing code, and by not having to look it up by hand you would save a lot of concentration, especially when multitasking and troubleshooting :)
@@RichMye-wx1ob If they're fixed address subroutines from ROM, you might as well just print out a nice reference sheet.
Adding AI into this is overcomplicating things. Especially when we have standalone voice recognition tech that could probably be used to search a local computer reference.
Why are you asking ChatGPT to write a program, given that it can only synthesize an answer based on programs it has been trained on? ChatGPT doesn't actually know how to write programs. It simply matches your question to programs it has been trained on to synthesize a program that looks similar to those it has already seen (i.e., been trained on). It has no knowledge of the actual behavior of those programs or of its answer. These types of videos drive me nuts, which is why I downvoted it.
ChatGPT can generate usable code based on a written specification. I have used (the paid version of) ChatGPT to develop Typescript and Python code. Sure, it makes mistakes, but if you feed it the error messages it will correct the code (explaining what it got wrong and why). It’s not going to produce perfect code or even well-optimised code in the first instance, but I estimate that I can create sophisticated applications (e.g. React websites and React Native apps) at least 4 times faster with AI. I have discovered that different AI agents are better / worse than others at solving certain problems and find that using a combination of (e.g.) ChatGPT, Claude, Gemini, etc, gives best results. Once the prototype code is working, using appropriate prompts can generate very specific guidance for optimisations.
I suspect the main problem with the experiment in this video was that the prompting was too vague and did not include enough information about the specific variant of BASIC. About 6 months ago, I experimented with getting ChatGPT to write some code in Sinclair Basic, and after some appropriate prompting it was able to produce working code that I could copy and paste into a ZX Spectrum emulator. In the first instance, it was making errors such as naming variables with words rather than single letters, but it ultimately got that right.
why do videos like this always use gpt 3.5? it's notoriously bad compared to 4. please try again with the actually decent gpt model. how do i know he used 3.5? at the start of the video it shows 3.5 in the top left.
love videos with 1 view
Ah well you better subscribe as there will be plenty more this year 😄
900 views 1 day later, hater
I've tried these so-called AI programs a few times and found them to be completely useless.
Who gives a flying f***?