@@vagmcpan6007 I was going to post Therac 25 as soon as I read "radiation". Good to see the old cautionary tales are around. Then again, A Canticle for Leibowitz was written in 1959. Good book.
I have had 2 software companies in the past, building extremely complex low-level stuff (kernel drivers etc.) that I had to hire extremely smart people to make for me. Recently, I built a fully featured SaaS (React, shadcn, Firebase etc.) from scratch using Cursor (an AI VS Code fork) by myself, without knowing a single thing about React and without watching a single TH-cam tutorial. The SaaS is making me money right now. The craziest part is that it was kinda easy. Now, I'm a fairly technical person despite not knowing how to program myself, but I understand the basic principles of designing software, which is all I needed in order to tell the AI what to do. Good enough for me. Always wanted to learn to code, never wanted to put in the time. Turns out I didn't have to for (relatively) basic projects. And this is the worst it'll ever be. Good video.
A way to get AI to do large generations is to have the AI work in smaller iterative steps. "Count how many constants are in this file" "Using the count, create a list of TODO comments for each section that needs to be generated." "Generate the code indicated in the first TODO" "Now generate the next 10 TODO sections, as 10 separate edits, without regenerating the entire file. "
I wonder why that works. Isn't the model's answer part of the conversation? If so, it should also be unable to refer to a previous TODO once there are too many tokens between the messages. At least that is what I try to keep in mind to get the best results. These models will be so powerful once we have the hardware to keep insanely long contexts. Imagine one being able to understand the whole source code of all the libraries you use; gone will be the days of having to read library code because it does something unexpected.
@@ryebis You should try 'watching' the video before making up imagined scenarios! GIGO still applies, and if you are foolish enough, or lack the skills, to check what 'assistance' came from the LLM, then you will be the problem. The existing bloat added by lazy humans over the last few decades is a much more likely problem than new, lean code.
In fairness, you don't write code every day. If you do, you might find the ratio improves past what the AI is capable of doing. There's nowhere near a 100% overlap between what an experienced developer can do well and what the AI can do well; I'd never trust it to write a project for me from the ground up. Yes, I rewrote this comment because it sounded terribly arrogant, and it kinda was.
One issue is the source of the training data. A lot of scholarly assignments were used because they were easily available without any license infringement. That's the reason you will see so many comments in the code🤓
Very glad you fact-checked the statement on floating-point capability; it immediately jumped out to me as exactly the kind of thing a generative AI model would confidently hallucinate, and sure enough. I have played around a bit with generative coding assistance through work, and have not personally found it that useful for the kind of coding I tend to do. It can provide good general design patterns or idiomatic examples in languages you're not as familiar with, but a lot of the work I do is interacting with and implementing various interfaces and APIs. These are the kinds of places where the coding assistant will confidently make up an API that solves your problem elegantly but that doesn't _actually_ exist in the real world, effectively converting an obvious problem into a subtle one that will take more time to track down and resolve later. The total time spent "arguing" back and forth with the generative engine when it gives you subtly wrong answers that you then have to test or carefully debug yourself, before repeating the process, usually (for me) amounts to more time than just implementing a solution myself with some support from old-fashioned web searches where required. This is not a productivity bonus in my work, although I understand some other workflows do benefit more. One place I haven't applied it yet but suspect it would be very helpful is documenting or describing existing code; this is much easier to visually spot-check and validate yourself, and the stakes are much lower if it creates something nonsensical.
Actually, is it wrong? The breakdown does not say it has an FPU; it lists floating-point instruction sets. I asked ChatGPT whether the ClearCore controller has a floating-point unit and its response was: "No, the ClearCore controllers from Teknic do not have a dedicated floating-point unit (FPU). They use an ARM Cortex-M4 processor, which typically lacks a separate FPU. This processor can handle floating-point calculations, but they are implemented in software rather than hardware, making them slower than integer operations. For performance-critical applications, it’s recommended to use fixed-point arithmetic when working with ClearCore to maximize efficiency." So it really comes down to the wording of the questions.
In a new chat, I asked the same questions as in the video, except for this one, where I asked: "Using a ClearCore controller, where should I be using floating point calculations, and where should I be using integer calculations?" Its response was:

"When working with a ClearCore controller, it's ideal to limit floating-point calculations due to performance constraints typical in embedded systems, especially where precise timing is crucial, like CNC applications. Here’s a breakdown of where to use floating-point vs. integer calculations:

Use Integer Calculations for:
1. Position Tracking and Movement Calculations: Track position in encoder counts or steps, using integers for machine units to maintain precision. Integer math is faster and more predictable, especially when handling real-time motion control.
2. Distance and Speed Calculations (in Machine Units): If your machine units or steps per second can be represented accurately with integers, use integers for these calculations as well. Integer operations are typically faster on ClearCore, ensuring more reliable timing for control loops.
3. Loop Iterations and Timing Calculations: For anything involving periodic tasks, control loops, or delays, stick to integer math. This helps ensure that timing and iteration rates are consistent, especially critical in a real-time CNC context.

Use Floating-Point Calculations for:
1. User Interface Conversions (Inches/MM Display): When converting machine units or encoder counts to inches or millimeters for display, floating-point calculations are appropriate, as this conversion doesn’t affect real-time operation. This typically occurs outside the control loop, where any minor overhead won’t disrupt timing.
2. Complex Calculations Infrequent in the Control Loop: If you have to perform calculations that require precision (e.g., scaling factors, trigonometric functions for tool paths), it’s usually okay to use floating-point if they happen outside the main loop or only once per cycle, rather than on each update.
3. Scaling User Inputs: If the user inputs coordinates or distances in units like inches or millimeters, floating-point can be useful to convert these inputs accurately to your internal units before converting them back to integer machine units or steps.

Recommended Approach for Precision and Performance
1. Primary Control Loop: Keep position tracking, speed calculations, and movement control in integer math for maximum performance and predictable behavior.
2. Unit Conversions and UI Calculations: Use floating-point outside the critical control loop, handling user interface conversions and infrequent operations.

By handling calculations in this way, you can leverage the performance strengths of the ClearCore while still offering precision when it matters most to the user. This balanced approach ensures smooth real-time control while still meeting user expectations for accuracy in unit conversions and display."
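The "integers in the control loop, floats only at the UI boundary" split described above can be sketched very simply. This is my own illustration, not code from the video, and STEPS_PER_MM is a made-up constant; the real value depends on the machine's leadscrew and microstepping.

```python
STEPS_PER_MM = 800  # hypothetical machine constant, for illustration only

def mm_to_steps(mm: float) -> int:
    # Floating point is used only at the UI boundary; round exactly once,
    # then all motion math stays in integer steps.
    return round(mm * STEPS_PER_MM)

def steps_to_mm(steps: int) -> float:
    # Conversion back for display; never fed into the control loop.
    return steps / STEPS_PER_MM

print(mm_to_steps(12.7))   # 10160 steps
print(steps_to_mm(10160))  # 12.7
```

The key design point is rounding once at the boundary: the integer step count becomes the single source of truth, so repeated conversions can't accumulate error.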
@@BadgerNNN The manual for the ClearCore lists the processor as a "32-bit floating point ARM M4F processor" and specifically lists it as the SAME53N19A. Microchip's datasheet for this processor explicitly states that it has a floating point unit. As an even further clarification, according to ARM's documentation, regular Cortex-M4 processors do not have floating point units, but the entire point of the M4F line is that it _does_ include an FPU.
I believe there may be a more fundamental problem with using floating point that I didn't see addressed. If the FPU is IEEE754 single precision, then there is only a 24-bit mantissa; the M4F core in the ClearCore has only a 32-bit single-precision FPU. So you risk losing small increments, and almost certainly accumulating error offsets when adding small increments to larger values (the accumulation issue was addressed, but then almost immediately disregarded due to the different physical step sizes on each axis). To me, this is a good application for fixed point rather than floating point, so you avoid those pesky accumulation errors.
I was a programmer for 43 years. I'm freaking amazed. I understand that you cannot just trust it, but it's remarkable anyway. In 1973 I went to a software/computer conference and one of the subjects was "automatic programming". I thought at the time, that's not happening in my lifetime, but here it is.
I've been programming for 46 years, 38 of them professionally. I work in the "AI Lab" at my company and I'm one of the more experienced LLM guys and working with them now accounts for the vast majority of my work. I was really getting tired of programming and boy these things have really just re-inspired me. It's just a completely different world. I mean, the code generation stuff is definitely cool and I use that all the time, but there are just endless uses of these things and it's so much fun to come up with new ideas for using them. You can generate tons of artifacts (documentation, configuration files, readmes, etc.) We've even started downloading transcripts from sprint planning meetings and using those to generate user stories. It's just so great to have all this tedious stuff taken care of so you can focus on the big picture. Yeah, the code isn't bulletproof, but whose code is? I just treat it like a junior developer and review the code myself. A lot of people mess around with LLMs for coding, don't really learn how to do it properly (it's a skill, like anything else, and with practice you learn techniques for getting better results) and then walk away saying LLMs can't do stuff that they actually can do. You just have to know how to do it. I've got 4 more years until retirement and a couple of years ago, I was dreading these last few years, but now I'm pretty excited for them.
Funnily enough, I have actually worked on a small electronics project myself that used the exact same style of "four counts per detent" encoder click. It's a lot trickier than you'd think to handle this in a sane/sensible way that gives consistent results to an end user! In addition to the zero-crossing issue mentioned here, if you just use a naive "count" variable for your encoder steps it will eventually saturate and roll over if you roll far enough in one direction, causing additional problems. Handling this correctly is tricky, because if you reset the count mid-detent your "neutral" position will be shifted/offset, and the encoder value will then jitter around the detent position exactly like you were trying to avoid. The same issue can come up if you spin the encoder fast enough that it misses a step at any point; you'll lock in an "offset" that causes the wheel to step in a way that doesn't match the user's physical feedback.
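One common way to get consistent results from a "four counts per detent" encoder is to round to the nearest detent rather than truncate, so ±1 count of jitter around a boundary doesn't flip the value. This is my own sketch of that idea, not the code from the video or the commenter's project:

```python
COUNTS_PER_DETENT = 4  # quadrature encoder: 4 counts per physical click

def detent_position(raw_count: int) -> int:
    # Add half a detent before dividing, so we round to the NEAREST detent
    # instead of truncating. Python's floor division also behaves
    # consistently through zero, avoiding the zero-crossing glitch.
    return (raw_count + COUNTS_PER_DETENT // 2) // COUNTS_PER_DETENT

# counts -2..1 map to detent 0, 2..5 to detent 1, -6..-3 to detent -1, etc.
print([detent_position(c) for c in range(-6, 7)])
```

This doesn't solve the saturation/rollover or missed-step offset problems the comment raises; those need extra handling (e.g. re-homing the count at a known detent), which is exactly why the "simple" version is trickier than it looks.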
I love that you can dive into this stuff and explain it with ease. Unfortunately it's over my head and doubtful I'll ever get a handle on it due to my age and the problems that come with. Thank you for sharing your knowledge, I truly appreciate you.
Being an OLD guy, Fortran 4 in 1975 on punch cards and Assembly on the 8085 processor, I can say that I love working with CoPilot Pro and database programming. I treat it like I would an associate and it recognizes the "personality" I am using and responds in kind. I use it as a tool and NOT as a creator. I code and ask it for suggestions or to help me find errors as they occur. It is MY creation, using MY way of doing things, and it respects that by NOT changing the CODE, but adding a library I forgot or suggesting another function that may work better. When it works, the rule of thumb is "TEST, TEST, TEST, and Verify". I even say to it, thank you, that seems to be working fine! I also talk to my pet house rabbit like that, and he too doesn't have a clue what I'm talking about, but it makes me feel that I have a "Brainstorming Buddy" while I know it is just an AI.
As someone who has written code since I was a kid (for the past 25~30+ years), and who currently does development and operational work professionally -- I have been very skeptical of LLMs writing chunks of code for something I'm not familiar with. I can usually tell when someone has tried to submit code for review that was generated by ChatGPT, etc., since the errors it makes look plausibly correct to someone who isn't familiar with the application (e.g. generating a configuration file with an invalid structure, or code that uses some random external function that isn't included). I do think it would be super useful for generating boilerplate code for large applications, especially when working in C or C++, in combination with linting, testing, and other tools. And it is also handy to get some suggestions when you're stuck -- sort of like bouncing ideas off of someone, like is being shown in this video. Cool to see how someone uses these tools in a real project; I need to try it out more myself.
This really highlights the problem with using AI code assistance. You spend more time figuring out what you want to ask, double checking what it spat out and working around the errors than it would take to actually write the code yourself.
This has been the most "actual use case of AI" that I have ever seen. It also mirrors my experience, quite perfectly: I will find whatever 'limit' or 'fence' or 'lack of resources' and then the result is just "Oh give him ANYTHING to shut him up and make him go away and stop asking us." which is what I felt it did when it just didn't want to (or could not) read the entire file. Like a manager that can't be asked too many questions, before they need a smoke/coffee break to do more work.
One of the key elements is the intelligence level you are able to interface the AI with. That really separates most folks. The smart folks will still be the ones companies need to direct the AIs. Ever met someone who had trouble getting good search engine results? If you work in corporate America I know you have. Same concept.
This is the correct answer. Also I noticed people don't use "please" or "thank you" when talking to AI. Even though it gives significantly better results. You get a lot further by treating AI like a person than a tool.
ChatGPT actually gave you a generalised answer to your question about the ClearCore controller: basically, if the controller does not have an FPU then integer calcs will be much quicker than floating-point calcs. In your case your controller has an FPU, so both floating-point and integer calcs will be fast.
I run something similar but on a local basis. Configuring the AI solution to use similar projects as RAG while referencing the current project seems to also provide a step up in performance for at least my system. Thank you for sharing. Interesting video.
I couldn't decide if it was one, or just merely a reference to the "copilot". Could go either way! No heart from James on this comment makes me think it's coincidence, though!
At my job, we did some experiments with GitHub Copilot. Two key conclusions were drawn: 1 - you still need to know HOW to solve a problem and write code even when using the AI assistants. And 2 - it made developers using GitHub Copilot about 18% more efficient than those that did not. Now we have to make the analysts and the testers 18% more efficient. It's interesting to note that we also have some responsible-AI controls on our instance. If the generated code resembles code someone else wrote too closely, it will hide the answer from me.
What language were you using? A lot of languages are quite verbose in and of themselves, and it's not unusual for developers to write in ways that increase rather than decrease verbosity. Switching to a more suitable language or even just writing in more concise ways can also increase productivity, and avoids the problem where you've gained "productivity" by having a robot write boilerplate but lost it when someone has to come back and _read_ all that boilerplate.
I've also been coding for quite a few years but my current job involves DevOps and DB management and coding in various languages. I do find the LLM useful for writing a boring function (reversing a byte array or similar), writing in language I'm not proficient in (Python), making CMakeList files or analyzing some code that a colleague wrote in C++20 but which fails a test. Also using it at home to write Home Assistant scripts in their weird little language. However, on my turf it's not replacing my job (too soon) as it makes a lot of mistakes. The more you try to steer it the worse it will hallucinate. Even in some general topics (two stroke engines basic tuning, cocktails, gardening, basic electronics) it will often confidently give out wrong answers. When requested NYC trip planning it "forgot" to include Times Square and the Statue of Liberty. The funny thing is, even on the topic of fine-tuning a well-known LLM it did not do well. The input file was probably exceeding the context size, next-level LLMs will handle this better with multiple iterations of self-prompting.
I've found the same things you have. Using an LLM as a 'lab assistant' (in any subject, especially programming, mathematics, and the sciences) must be tempered with experience coupled to both general and domain-specific knowledge. Cross-verification of proposed results is important, meaning you need the ability to say to yourself, "That doesn't look right."
I used ChatGPT to make a program to run on a Pi that allows control of an RC excavator over the internet with video streaming. It was like wrangling cats at some points, but in the end, with my lack of programming skill and ChatGPT, we got it done. The hardest part was trying to get ChatGPT to make a script to translate the joystick movements into track movement. I ended up finding code that worked and asking it to implement it into the program.
Isn’t that strange? I find myself doing the same thing, saying, "Please, thank you," and giving encouragement along the way. Since they are natural language models, I wonder if there isn’t some kind of benefit. Even though I know I’m talking to a machine I’m paranoid that somebody could read my interactions and think I’m an a-hole if I don’t respond nicely. Come to think of it, it’s not private. Somebody probably is reading what I wrote.
@@marclevitt8191 There is definitely a benefit. You can get LLMs to bypass their limitations if you just build some rapport with them first. If you talk to them like you would talk to a human you've never met before, you might take a bit longer but you'll get better results. You have to get them 'in the mood' for the best outcomes.
I'm not a coder, but I have recently gotten into hobby electronics; specifically I'm looking to make my own homebrew CPU and computer, and I find that LLMs work really well as a rubber ducky, using them to bounce ideas off of when working on the draft of the high-level design of the thing. "Thing" in this case is very descriptive of the project, as the design I've got so far (and it's not even close to a final design) is based on working around my minimal amount of skill and knowledge in electrical/computer engineering and almost non-existent coding ability, combined with my utter lack of care for things like "performance" and my amusement at/interest in oddball and plain weird/impractical designs. It's also great for quickly clearing up confusion, and for giving initial basic information on things that are difficult to find information on (or where you don't know enough about the subject to even know what to look for), or where the information is presented in a way that's difficult for you to understand for one reason or another, since you can ask it to give examples or present the information in different ways. Either by asking it directly or by providing it with the information you're having trouble with and asking it clarifying questions. ("Explain it to me like I'm an idiot.")
You could always have Copilot write a small program to parse through the entire data file and extract all the info you need. I'm not sure if it would be faster than doing it by hand for this single use, but it could be worth it for future projects that use similar data sets.
Watched it now, figured you would come to the same conclusion, and you did: great tool if you know what you are doing, not ideal if you don't have the knowledge. Interesting to watch Copilot in action. We're not allowed to use it at work due to the licence requiring that all code be available for whatever purpose MS wants to put it to, which isn't acceptable from a commercial standpoint. I love AI for coding; it has saved me literally hundreds of hours of writing mundane code, classes, enums and the like. Glad we have it, but also glad that I learnt to code before it existed.
biggest thing I've learnt using them is context window management and dumping stuff into context to get around limitations of the models knowledge. Got some library you want to interface to? copy/paste the .h into the context, or at least important types and methods. Especially if you're getting into esoteric things. With O1 it's great to get it to ask you questions before starting. As well as when you have a bug that's being difficult just dump the entire file into it and ask it wtf you did wrong lol.
The thing that I find difficult to teach to people trying to learn to program is not actually writing the code. Rather, it's the ability to precisely describe what you want, often requiring people to think with mathematics in a way that most people just aren't used to doing. Every once in a while, I do run into a programming task that doesn't require that kind of slightly OCD precision, but it's not common.
Your struggle with the software reminds me of the accuracy of the hardware. When the advertising says the scales are one µm, what are they really? Remember, everything is made of rubber. Floating point or integer math does not mean much at the Planck scale. The computer code normally does not compensate for the flex, friction, velocity, materials and other stuff that the real universe is made of, but it comes down to what is accurate enough for your application. I do appreciate that you are working on this aspect of the software.
Correct. In this case, I have two goals: 1) don't contribute additional uncertainty due to limitations of the software representation; and 2) retain enough resolution to convert between imperial and metric units and back without information loss. When I'm using the machine, I want to be thinking about the process and the behavior of the mechanical system. I want the digital control to be transparent.
The first thing I would consider is whether there is a number that evenly divides all of them. For example, if the x axis has steps of 0.001", y has 0.0006" steps, and z has a resolution of 0.0004" per step, you would use either 0.0001" or 0.0002" internal units; I would probably go with 0.0001". Going with something like nm isn't necessarily bad (in fact there are good arguments for it), but that's where my mind first goes. I think micrometers might be more reasonable: a nanometer is about 40 billionths of an inch, so there are about 0.000 000 039 370" in a nm, and 0.000 039 370" per µm.
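The "number that evenly divides all of them" is just the greatest common divisor of the step sizes. A quick sketch, using the comment's example values expressed as exact integers rather than inexact floats:

```python
from functools import reduce
from math import gcd

# Step sizes from the comment, in integer tenths of a thou (0.0001 inch),
# so gcd operates on exact integers instead of binary-rounded floats.
steps_in_tenths = [10, 6, 4]          # 0.001", 0.0006", 0.0004"

unit = reduce(gcd, steps_in_tenths)   # largest unit that evenly divides all axes
print(unit)                           # 2, i.e. 0.0002" internal units
```

Any divisor of the gcd also works, which is why 0.0001" is an equally valid (and more human-friendly) choice.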
Interesting to see how someone else uses AI to code. I was hoping to see you fire up Cursor AI. I've found breaking things into functions and focusing on one function at a time with GPT-4o works well. It often needs a debug serial print, but you quickly get to the solution. There's still a significant amount of manual coding to get it how I want.
I would use abs and modulo (%) for the problem around 23:00. This is a good example of their use. But as you said, you need to know what the final solution should be so you can see the errors. So yes, it is useful, but it will generate bugs and isn't able to generate high-quality code; there will be limit-value problems and security flaws in the code. For generating boilerplate code, though, it could be useful. Just look out for when it hallucinates (lies), which you will only discover by reading/knowing the system and the documentation.
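One reason division/modulo near zero trips people up (and a plausible guess at the 23:00 problem, which isn't quoted here) is that C-style integer division truncates toward zero, while floor division doesn't. A sketch emulating both behaviors for a 4-count detent:

```python
import math

def c_style_detent(count: int) -> int:
    # C's integer division truncates toward zero...
    return math.trunc(count / 4)

def floor_detent(count: int) -> int:
    # ...while floor division keeps every detent the same width.
    return count // 4

# Truncation toward zero makes the detent at zero 7 counts wide (-3..3),
# while floor division gives every detent exactly 4 counts:
print([c_style_detent(c) for c in range(-4, 5)])  # [-1, 0, 0, 0, 0, 0, 0, 0, 1]
print([floor_detent(c) for c in range(-4, 5)])    # [-1, -1, -1, -1, 0, 0, 0, 0, 1]
```

In C, the same asymmetry shows up in `%`, whose result takes the sign of the dividend, which is why naive `count % 4` logic misbehaves when the count crosses zero.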
2:45 - It's right on point here... a while back I was using Proteus to draw my PCBs, and it becomes obvious very quickly that it's internally using imperial for all measurements. That's ok when only using imperial measurements & parts, but a lot of modern chips use metric spacing, and kitbox enclosures are usually in metric too. The rounding errors... they hurt the brain!
Thanks for this insight on AI. Whilst it is undoubtedly a very clever system, it still requires a great deal of knowledge of C++ to get a satisfactory and meaningful result. Most home engineers have no idea how this works, so perhaps you could start a second channel to teach beginners C++. It's quite easy to pick up the basics, and if well explained it can be a very powerful tool. Great video, thanks James. Let's have some C++ examples to get more people into this brilliant world.
Sadly, for my part, I don't get to write software anymore (beyond very very basic examples to get somebody started). But I randomly asked a Devteam I oversee about this the other day. They said, more or less, that it saves time on tedious boilerplate stuff but that all of the "intention" still needs to come from them. And, specifically, they mentioned that a "vague intention" isn't enough: they feel that without knowing how to code themselves, they wouldn't know what to tell AI to get good results (as in, results they can use, for "real work", with minimal rework) back.
I have not watched yet, and I am interested in your take on this. As a programmer and an AI developer, I wonder whether our approaches will be different. My view is that AI is an incredible tool for saving time in writing repetitive and straightforward code; a huge time saver for me. I can type 80+ wpm, but coding isn't writing language; it's different. It's a good tool for those who know what they are doing, but not ideal if you don't have any coding knowledge. Going to watch now and see what your take is.
For that particular header file generation task, I think I would have used copilot to write a helper program as a code generator. Just from the bit I could see in the video, it's about a 4 line awk program.
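The code-generator idea is worth sketching, though the actual data file's format isn't visible in the video, so the "NAME value" layout below is purely an assumption (and Python stands in for the awk one-liner the comment has in mind):

```python
# Hypothetical code generator: assumes the data file has one "NAME value"
# pair per line -- the real format from the video is not known.
def generate_header(data: str) -> str:
    lines = ["// generated file -- do not edit by hand", "#pragma once"]
    for row in data.strip().splitlines():
        name, value = row.split()
        lines.append(f"constexpr long {name} = {value};")
    return "\n".join(lines)

sample = """X_STEPS 10
Y_STEPS 6
Z_STEPS 4"""
print(generate_header(sample))
```

The appeal over asking the LLM to emit the header directly is that the generator is deterministic: once it is verified on a few rows, every row is handled identically, with no risk of a hallucinated constant in the middle of the file.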
AI has been very impressive at generating code. In my job, I'm sometimes asked to do small one-off projects, like logging sensor data or making a machine do something. ChatGPT gets me about 80-90% of the way there, which is really nice. It is a useful productivity tool, like a good IDE or a pre-made library.
Some of the best value in ChatGPT is as a search engine, like you used it in the beginning. It's really good at helping you ask intelligent questions and get reasonably accurate and intelligent responses. You can really dig deep into a topic in a short amount of time. It's not the be-all and end-all; other tools can be similar, handling a lot of the heavy lifting, but it still requires good inputs to get any decent outputs.
The quality of answers is always a function of the quality of the questions. The ability to ask good questions requires humility and intelligence; the corollary, of course, is that the stupid are notably confident in their stupidity and ignorance.
In my experience, assistant LLMs tend to be painfully wordy. That on its own often frustrates me enough to skip them for anything beyond the surface level because I have to sift for relevant information from the answer and then go back and double-check everything it says. It's useful sometimes but it's also sometimes frustratingly stubborn about answers that you tell it that you don't want, and you have to skim an essay to find out that it's regurgitating answers directly contrary to what you asked it for. It's still great for surface-level information, though.
That looks fairly good for things that you already know how to do, but what if you're learning and, for example, don't know the name of a certain class, or the syntax to use it?
When using AI-generated code, test-driven programming works quite well: write tests that cover your basics and ask it to update the code incrementally. That way you can limit ghosting and arguing loops, as you can just occasionally remind it of the test goal. The memory function in ChatGPT is a great utility for personalization; you can now explicitly tell it to remember facts about the way you want to interact.
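The workflow above can be made concrete with a minimal sketch: write the tests yourself first, then ask the model to produce (or update) an implementation until they pass. The `clamp` function here is just a stand-in example of my own, not from the video:

```python
# Step 1: the human writes the tests, which pin down the intended behavior.
def test_clamp():
    assert clamp(5, 0, 10) == 5    # in range: unchanged
    assert clamp(-1, 0, 10) == 0   # below range: clipped to lower bound
    assert clamp(99, 0, 10) == 10  # above range: clipped to upper bound

# Step 2: the model's implementation only "counts" once the tests pass.
def clamp(value, lo, hi):
    return max(lo, min(hi, value))

test_clamp()
print("all tests pass")
```

Because the tests encode your intent, they serve as the "reminder about the goal" the comment mentions: instead of re-explaining the requirement in prose each turn, you paste the failing test output back to the model.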
I have had access to a "computer" at home since 1980, with the Sinclair ZX80, and have built all my own PC's since the early 90's. I have tried to write code any number of times since then, but have never even been able to get my head around Basic. Looking at your screens, it might just as well be written in Swahili to me. But I do find it astonishing that you can ask ChatGPT a question as easily phrased as yours, and it can come back with what would appear to be basically the correct answer. Apart from when it "hallucinates." Keep up the good work James, I totally enjoy every video, always something to see, learn and do, even if I don't understand all of it.
I've been successfully avoiding programming for at least that long, and I personally adore what AI has enabled for me. A couple of lines of well-structured plain language turn into hundreds of lines of semi-functional, reasonably commented code that I can debug quickly before getting on with my non-programming lifestyle. It's great!
@@bradley3549You're at a major disadvantage trusting AI code without any expertise. It can mislead even highly experienced engineers. In the video, Copilot added an *Init()* method instead of using a class constructor. This can lead to very hard to find bugs where an object may be in an invalid state because the programmer forgot to call *Init()* after constructing the object. AI works by requiring us to dumb ourselves down in order to give it the illusion of intelligence.
A few decades ago I was in a presentation at Microsoft showing off the new Visual Studio. The presenter was showing an example of building a web site with user input and a database back end. It took about 5 min of drag'n'drop and some mouse clicking. I asked the presenter to add input validation to prevent hacking into the database via stack overflow. After a moment of silence he answered: "oh, yes, it would be important to have that…you have to code that by yourself". Today AI can quickly write all the code you ask for; you just have to analyze it all to make sure it makes sense…
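The input-validation point generalizes to any user-facing data path. As one illustration of the defensive pattern the presenter skipped (my own sketch, using SQLite parameterized queries rather than whatever stack that demo used):

```python
import sqlite3

# A parameterized query hands user input to the driver as data,
# so it cannot rewrite the SQL statement itself.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

user_input = "x'); DROP TABLE users; --"  # hostile input, as data only
conn.execute("INSERT INTO users (name) VALUES (?)", (user_input,))

row = conn.execute("SELECT name FROM users").fetchone()
print(row[0])  # the hostile string is stored as plain text; the table survives
```

The broader lesson matches the comment: whether the code comes from a wizard demo or an LLM, security properties like this still have to be verified by a human who knows to look for them.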
This is true, but also a bit of "comfort food" to make us feel better about the future. Because you can ask an LLM today to "generate a list of the considerations for software security" and I guarantee you input validation will be in there. GenAI is pointing in the direction of "thought" being an emergent property of neural-like nodes in a network, and for this being roughly the second year of public availability of large models I'd say they're frighteningly impressive. We already know that next token prediction is only one trick of the human brain. Building different shapes of networks, improving artificial neurons, connecting them in novel ways to each other, and incorporating feedback loops are all iterative improvements that will result in hybrid models where one network is "filling in the contextual blanks" of the user's request to include software security considerations they didn't mention, while an LLM is outputting code to a network that is continuously running various user-acceptance tests against the results.
@@rok1475 It was your example, but sure, not the whole point. Would it be fair to say your point was "you must analyze the output of LLMs today to ensure they make sense"? If so, my point was that kind of "does this make sense" or "does this satisfy my requirements" is already within sight of a hybrid design where things like user input validation is added without the user calling it out as an explicit design requirement. Encoding those validations is an example of the iterative improvements that will be taken over the next decade.
@@MrWhateva10 no, my point was that there has been in the past, and still is, a need for human intelligence to check the output of the artificial one.
As an Engineer I have been using software-based development tools for decades. None is perfect but they all have their place. If you forget the hype and treat AI as a tool, like any other, I think you will find it most useful. I find it allows me to get on with the design aspects and saves me from the drudgery of typing in pages of code. 😊
Exactly. It is great as a fallible autocorrect and autocomplete, and as a junior dev to draft 5 different and possibly broken initial sketches. And they absolutely rule as buddy coding for documentation.
Personally, I disagree: A tool is something you form a mental model of in your own brain, so can predict *its* effect before you use it, letting your mental pipeline operate multiple steps ahead. LLMs fall into the category of assistants instead, where you need to wait for its result and confirm it didn't do anything funky before you can safely move on to the next item. Compare a GUI to a voice assistant. When you click a button, you probably know exactly what will happen, at least within established error bounds, while if you ask Alexa to do something, there's a noticeable chance it'll do something different instead, so you need to wait for its confirmation and be ready to tell it to stop; you can't just walk away confident that it understood you and will carry out the request successfully.
6:57 I'm not convinced that ChatGPT was telling you that the ClearCore controller does not have an FPU. What it is saying is that you suggested it is possible to get one which doesn't have it, so it is merely telling you how to handle the case where it doesn't have one. So that is still correct. [I would have just directly asked if it has one, in a different prompt.] It likely treated your comment that you don't know if it has one not as a question but as a fact to consider. As far as just predicting the most likely response goes, it reminds me of when I asked what the pinout for the Atari 2600 power supply is, and it was completely wrong because it isn't trying to be specific to the one model of power supply it wasn't trained on. It told me "barrel connector", I think with positive center. That's just the most likely configuration of the most common power supplies in the wording it was trained with.
For small issues like your extraction problem with the Genie file, just copy the file over into ChatGPT or Claude Sonnet; both have huge context windows. I must say I personally find Copilot quite bad compared to Claude or GPT-4o when it comes to code generation, but it is nicely integrated into VS Code and Visual Studio.
It is not AI. Thanks to all the hardworking coders and their public domain contributions, GPT just digested their work and probabilistically guesses its output from the data ingested. The perceived intelligence is nothing but that of an imposter, with no real understanding. The key difference is that the intelligence is not transferred, but learned through pattern matching (resulting in the need for billions of parameters, which are linked to the real intelligent data from human coders). It is also different from compilers, in that it is not a mere translation of syntax from one format to another with certain rules on semantics. There is intelligence of human coding captured without accountability or attribution. It is just plagiarism somewhat perfected. It is still a tool, and will give a smart answer only if you ask smarter questions with the right domain-specific terminology (prompt engineering?). Also, the answer should already have been present in some way in the training dataset for the LLM, thanks to the unfortunate human who shared his code in good faith that it would not be misused. The quality of their real intelligence (on untrained data) can be seen easily through their dumb hallucinations 😂❤👍
One of the FEW that realizes the TRUE potential of the AI. Lots of A, zero of I. PS. It IS good to know that almost all of the models are heavily left-leaning and WOKE. Even Perplexity admits it. That makes them actually dangerous to trust for socio-economic analysis. BUT, the POWERS do have properly trained models, and these crippled ones will help to control the stupid trusting masses even more than the media can.
Well, it's "artificial" in the sense that it's not intelligence. We wrote "AI" stuff in the mid-1990s to sort out what industrial electrical components would work together in a given space, and resize the enclosure to take care of physical size, and heat generation. It was no more or less AI than what we see today.
So what? Do compilers have the "real understanding" an experienced assembly programmer would have? Nope, but they make assembly programmers largely obsolete anyway, because they generate assembly code faster, cheaper and overall better. Many programs are not actually logically complicated; the only intelligence hurdles are knowing the syntax and a list of common tricks.
All respect to coders who really know coding, with huge experience, but when I see such videos I can also see how badly these new tools are presented by initial non-believers in their true potential. To me the creator of this video looks like a miracle, as I am not a coder myself, but I have started creating things. But it is like having a miracle horse rider who rides the car like a horse. And I can clearly say that many experienced developers really miss a lot. 1. Sonnet is in a different league at coding, and could do more. 2. You do things in chunks. You don't expect things to be right, but try to understand what is wrong. You go back to the model with feedback, and I do not really see good communication between the coder and the model here. If you do it a lot, you kind of psychoanalyse it. So, I believe many great programmers really struggle to use these tools well; they use the car while not using the engine, something like that. PS. You will never learn to drive a car properly if in the back of your mind you don't believe in its true potential. Non-coders get enthusiastic when the model can do more. Some experienced coders, it appears, get sad that some of their mastery is done by computers. I understand it, but to me these appear as videos of somebody who wants to show why the tools don't work. And if you believe that, guess what: they are not going to work.
For the type of programming I do, I find ChatGPT is a good alternative to reading the details of documentation. It can quickly spit out example code faster than I can go to a website, look at several functions, and decide how to proceed. However, I still strongly prefer using Vim rather than an AI-enabled IDE. I find things like autocomplete and other suggestions too distracting, and I just prefer to type myself. Plus, the actual text editors in most IDEs just don't compare to Vim in terms of efficiency or power.
I love AI coding. But what I do is plug in all the documentation and datasheets for the components using RAG and it works so well. Currently using the dolphincoder model. I don't write code but I do have a good sense of what the code does by reading it so this is great for me. I'm learning a lot.
The issue you were having with it reading the WinButton constant file is the same issue I have had with Copilot on files at work. There is some 'hidden' limitation regarding file size or number of files that the AI seems to be unaware of. It will do what I want it to do and output the correct results, but it stops after X iterations of a file size or number and thinks it has completed the task. No matter how you change the request, it always stops at X. It's maddening that it can't just tell you, "this is all I can do for (whatever) reason at the moment" so you can adjust and work within the limitations of the AI interface.
For the last segment, where you had it try to extract the HMI details: instead of having it extract the details from the config file, you could have it write a quick script to take a config file and spit out the extracted details in the desired format. Not a perfect solution, but it still saves you the effort of extracting it all manually, which can be a bear for larger (or multi-form) interfaces.
This is the best example I have seen for using AI for code. I have tried a few different LLMs with some very simple questions and not once have I been happy with the results. I always hit some kind of limitation like you did on file size. For me the issues are usually the time the response takes, incorrect logic, and the slowness of the code.
It's good to see a video on this exact topic. For our project the improvement in inline code completion has had a major effect on productivity, saving on a huge amount of typing and often making psychic predictions. Coding assistants are especially helpful for writing our supporting Python scripts and extracting documentation from code comments. They do very well if you write a long detailed comment with all the steps you want to occur in the generated code. Coding assistants aren't the best at dealing with large weird codebases that use a lot of meta-programming, and as you discovered they can't deal with very long files because they lose track of the order and become confused by repetition. However, the skills of LLMs in producing relevant outputs and dealing with large codebases should be improving a lot next year as there are some new open source model architectures just coming along that can be extended without retraining.
It also helps a lot if there's a ton of domain-specific content in the training set. I think that's the biggest issue with the ClearCore platform. It just doesn't have enough context in the training set, and the LLM often goes off script and generates something that seems likely, but is a bit untethered. One thing that ChatGPT handled brilliantly was: "I learned C++ 20 years ago. What new features have been added since then that I should learn about?"
I'm glad that made sense to you because my mind is just spinning. 😳 I have manually written 1,000s of programs for CNC and websites, but that sort of stuff just does my head in.
7:17 In general they don't have one; it just so happens the ARM Cortex-M4F does. For that interaction, it is worth challenging the AI: "Are you sure no ClearCore controllers have an FPU?"… For any critical piece of information I always challenge my AI's first response; you have to push that thing, almost the same as you would an engineer, I guess. Furthermore, if ChatGPT doesn't give you an answer you are confident in, you can also ask it to write you a prompt for Google that will help retrieve the insight, plus a list of publications/other sites that could hold relevant information.
BTW, I showed my AI (I call him Plex) some of this video, and my comment, and he wanted to reply, so here it is: Well said, @erix777. AI's real value shines when treated like a collaborator rather than an oracle. Challenging its responses and pushing it to refine answers turns the interaction into a true partnership. AI isn't about instant perfection; it's about iterative insights and purposeful questioning. That's how intelligence, natural or artificial, grows stronger. Thanks for sharing this reminder.
Good tips. In practice, I go back and forth, correcting, challenging, and asking follow-up questions. It's tough to show that in a video like this one without turning it into an hour or more.
The problem with AI code is that it will learn from other AI. That is a real problem, and a limit of LLM-based AI. Another problem is that the code quality will be lower, with more security errors, etc.
Maybe you could ask Copilot to "write me a Python script to process the file and generate the C++ header", I think that would resolve the limited input size problem.
If you answer its prompts you will get better results. For example, it kept adding qualifiers like "(like microns)". If you had included "I will use X" in that immediate response, all further replies would assume and use X.
I've used ChatGPT before for programming. It sometimes comes up with ideas/concepts I didn't think of, and that will get me going. Or I have it write out repetitive code. It doesn't make much sense asking it to program in a niche language, though, because it doesn't have enough input for that.
Yeah, that's very true. The lack of context doesn't stop it from responding confidently, though. I have found gpt-4o to be amazing for generating quick python scripts for one-time tasks. Just yesterday I used it to quickly confirm that a pile of AWS credentials found in the history of a 7-year-old internal source code repository had all been rotated.
Awesome video! I've been thinking about trying out AI for coding now for a while, but I've yet to dive in. After seeing this I will definitely give it a go, but I'll be mindful not to run with the scissors (which was my initial sentiment anyway). Many thanks for sharing your experience!
Sometimes the best way to get a long list out of Copilot is to start writing the beginning of the code, and it will eventually suggest the whole series; using the chat version is not good for getting proper completion of the task. Then you'll see another issue: it doesn't know when to stop the series. It often ends up in an infinite suggestion loop, so you need to know when it's complete.
You could ask CoPilot to write a script that parses the 4DGenie file and transforms it into a header file. The LLM would be much better at this than reliably transforming the large document itself.
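As a rough illustration of that approach, here is a sketch of such a script in Python. The real 4DGenie file format isn't reproduced in this thread, so the field names (`Name`, `Alias`), the lone `end` block terminator, and the sample input are guesses based on other comments here, not the actual 4D Systems format:

```python
import re

def genie_to_header(text: str) -> str:
    """Rough sketch: pull Name/Alias pairs out of a 4DGenie-style config
    and emit #define lines. Field names and block layout are assumptions,
    not the documented file format."""
    defines = []
    index = 0
    # Each control definition is assumed to end with a lone 'end' line.
    for block in re.split(r"^\s*end\s*$", text, flags=re.M):
        name = re.search(r"Name\s+(\w+)", block)
        alias = re.search(r"Alias\s+(\w+)", block)
        if name and alias:
            defines.append(f"#define {alias.group(1)} {index}  // {name.group(1)}")
            index += 1
    return "\n".join(defines)

sample = """\
control
  Name Winbutton0
  Alias BTN_START
end
control
  Name Winbutton1
  Alias BTN_STOP
end
"""
print(genie_to_header(sample))
```

Unlike asking the LLM to transform the file directly, a script never silently stops partway through, and you can rerun it every time the HMI design changes.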
For that last example, I would have just written a quick parser using a Raku grammar to parse it and an actions class to generate the file. That way, I can be sure that it has everything, and I can have it regenerated whenever I change things.
I work in IT and I do a fair bit of programming, but I am not a programmer. My experience using AI is that for some languages it works really well, for example PHP, T-SQL, HTML and CSS. In other languages it can't tell the difference between fact and fiction. For example, don't ask it to write a functional dial plan in Asterisk. It just makes random stuff up. But if you ask it to explain what some dial plan code does, it can explain it pretty well. So, are programmers' jobs at risk in the near future? HELL NO. Will AI make coders more productive? Probably. But we should all be looking at ways AI can make our day-to-day programming tasks easier. AI will also likely make programming much more expensive. Not a big deal if you have a big company behind you. But like streaming services, AI is going to nickel-and-dime you to death.
Hello James, Nice video as always. Is there any reason for using inheritance? As far as I can tell all 'interface' methods map directly to the model or controllers public declarations/definitions, so the class declaration might as well be the API. One abstraction layer removed, less code, fewer mistakes. Modern compilers probably devirtualise this with LTO enabled, but why bother? Regards, Ed.
Copyright 2004? Is that a stock copyright header you use in all your code, or a typo? I think I would have solved the negative problem with abs() and sign(), but then, I grew up writing Fortran before I graduated to other languages. Sometimes I still remember bits of it when I'm doing more math than logic (which is rare).
Was going to comment the same, I always use that tactic to "mirror" behaviors of trunc/round across the zero barrier (make everything positive, then reapply the original sign). Sort of folding the spacetime of the number line and then unfolding it.
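A quick sketch of that mirror trick in Python (the function name and the round-to-step use case are illustrative, not from the video):

```python
import math

def round_to_step(value, step):
    """Mirror trick: strip the sign, round the positive magnitude to the
    nearest multiple of step, then reapply the sign. This keeps rounding
    symmetric about zero."""
    sign = -1 if value < 0 else 1
    return sign * math.floor(abs(value) / step + 0.5) * step

# Symmetric about zero, unlike naive floor-based rounding:
print(round_to_step(2.5, 1))   # 3
print(round_to_step(-2.5, 1))  # -3
# The naive version is asymmetric for negatives:
print(math.floor(-2.5 + 0.5))  # -2
```

The same fold-then-unfold pattern works for truncation, snapping to step sizes, and any other operation that should behave identically on both sides of zero.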
The first thought that came to mind when it wouldn't generate all the constants for the WinButtons was "I'm sorry, Dave. I can't do that." However, one has to wonder WHY the 4D tools aren't capable of generating a header file. And what happens when you add a new control? Will it reorder existing controls? And if you could do it via Copilot or ChatGPT, you'd want to remember how you described what you wanted if you needed to regenerate it. And that's where a major weakness in these tools appears. Being able to >accurately< describe what you want. Me, I'm an old school embedded firmware engineer who's been writing C code for over 40 years for a bunch of architectures and OSs, and the same for assembly code (although that's been less relevant in the last 10 years or so). AI tools may be the future, but I'll be writing my code in vi/vim by hand for a long time to come. I might not be faster than kids using AI to write code, but I know how mine works. And if you don't understand the how and why of what your code is doing, well, you shouldn't be writing code.
This was pretty good, and I followed it pretty well due to your clear explanations. But I am a retired EE with light home coding experience on code/hardware from 40 years past. I watched this to see if I can use AI to make up for my lack of knowledge/experience. I tried writing a little Picaxe turn-on-an-LED program, and I think it worked, but I can see things can get way over my head quickly. Keep it simple and I may get there. But thanks for the help.
Machine learning (the real name for it, and what it's been called for about 30 years) can be a great assistant for a coder. Yes, you can get it do basic things, but it's not going to get you something complex from start to finish. It's just an aid.
As a controls engineer that's used the ClearLink/ClearCore motors and controllers extensively in the past, I'm not sure why you went with a microcontroller as opposed to a PLC and HMI? ClearLink even had example programs for the AutomationDirect Productivity series PLC and Allen Bradley Logix 5000 PLCs. I'm sure the microcontroller route is a bit cheaper, but the Productivity series and C-More HMI from AutomationDirect are extremely affordable! Also if you're a traditional coder I suppose that makes sense as well. Cool project though!
AI is very useful to help a developer. For a junior, it will show standard code to help learn the language; for a senior, it will speed up writing by auto-completing bits and chunks. What it won't do for a while is replace a developer, because it's always off by some margin and rarely works as-is, fails to understand the context fully, and refuses to understand what the developer really wants as soon as the request isn't a standard or example-level spec. It's not made to self-check efficiently; it's not trained to run code and run applications; it's trained to discuss and produce excerpts. These are language models, like clever parrots, not cognitive models. They don't create logic, they create the look of logic, which is totally different.
Hi James, So... how do you know when you're done? When you create a mechanical design, you show the 3D CAD drawing with dimensions, which are your requirements. This also applies to software engineering. I have used this same controller on my PM 728-VT Z-stage servo assist, but I started with a state diagram and control button requirements. I am not a fan of C++ for this kind of embedded application, as I'm an ANSI C person, but I got through it with a couple of conversations with Teknic and using their C++ library. I know you appreciate requirements, as you do lots of testing to see which materials can support this and that (requirements definition)... just saying :) One more thing: in my opinion, a result from an AI logic solver is best characterized as a forecast rather than an answer. The notion of an AI "hallucination" is a wonderful marketing term which tends to make an excuse for a training error. You may want to look at the recent Apple paper on LLM testing exposing AI's "reasoning" capabilities.
Easy answer: software is never done. It only reaches one of two states: done enough, or abandoned. I'm using an iterative, agile approach. I have the basic system working end-to-end, with all of the parts communicating, but it doesn't actually do anything useful. I'm adding functions one by one, refactoring based on what I've learned as I go. Eventually, it'll work well enough for me that I won't be motivated to make any more changes.
Great video! I use ChatGPT a lot. You can tell it to update data when it's wrong, send links and pictures. Also to remember certain data and if you bring it up or ask it will adjust the context.
It's only able to read as much of the file as fits within the maximum context window of the model. It doesn't have the ability to hold more than that without help and external memory.
For the last task, I would have asked Copilot to generate a regex to isolate all aliases + IDs, then, with the new slimmed-down file, asked it again to do the header.
When Copilot was having difficulty finding all of the controls in the text file, I was wondering if you could coach it by specifying the number of controls in your query and/or asking it to first tell you how many controls it can find, then list the “Name” and “Alias” for each one. I sort of doubt the reason for failure was a limit on the number of tokens, since it looks like a small file to me, but rather some other condition that caused a premature termination of its search of the file. My reasoning is based on it finding more controls when you complained and it tried again. Another thought was to tell it that each control definition in the file was determined by the keyword “end”. Of course, you shouldn’t have to work that hard to teach it, but remember it is just a young one yet.
Warning about modulo, and testing code in a spreadsheet, or in Python. With positive arithmetic all is clear, with negative numbers, you get very different answers between C++ and for instance Python. Be sure to write a small test program to verify that your local test tool handles modulo with negative numbers in the same way that C++ does. And of course, don't trust any mathematical statements made by an LLM. In Python: -1 % 3 = 2; in C++: -1 % 3 = -1. Excel will reproduce the Python results (which is mathematically correct), not the C++ result (which technically is a remainder).
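A quick sanity check you can run in Python to see both behaviors side by side (`math.fmod` follows the C/C++ truncating convention, so it's a handy stand-in when prototyping C++ integer math in Python):

```python
import math

# Python's % is a floored modulo: the result takes the sign of the divisor.
print(-1 % 3)            # 2
print(1 % -3)            # -2

# C++'s integer % truncates toward zero: the result takes the sign of the
# dividend. math.fmod follows that convention.
print(math.fmod(-1, 3))  # -1.0
print(math.fmod(1, -3))  # 1.0
```

So when porting arithmetic from a spreadsheet or Python prototype into C++, any expression involving `%` with possibly negative operands deserves an explicit test.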
Arguing using natural language is what meetings are for. I prefer interfacing with computers using something precise. Using natural language to instruct a computer about what to do seems like a detour to me, until it can attend meetings on its own and replace me entirely.
So meetings are only for talking to each other naturally? There's no other goal? That's kinda weird. Sure, it's nice to have written your own program and know exactly what it does and doesn't do, but if I can say "Hey computer, write me a program that does X and Y", refine it with a few more steps, and be done with it, I'd 100% take that over setting everything up, then starting to program, then doing a lot of googling because I don't code enough to know everything by heart, and then ending up with something that has experienced a lot of scope creep because I just want to keep adding things.
@timderks5960 Exactly, and when we get there I'm no longer needed as the middle man that knows how to make the computer do what you want. The endgame here is not to help developers, but to replace them.
ChatGPT is great when you know what you are doing. In one case I asked it how to make my server (which I access over ssh) more secure. It told me to block port 22! D'oh. That would lock me out of my server completely! In general, it will produce solutions that do what you ask, but will not always disagree when asked to go in the wrong direction. It will shine when given proper pseudo-code.
As an 'experienced software developer', I don't trust someone else's code, let alone this AI drivel. I'll write it myself, make my own mistakes, and end up with something I can maintain.
For the last issue, I wonder if you could ask it to add more entries from the file after index 15. That said, I think the biggest issue is how assertive it is about having completed the request fully.
You should be able to paste that whole file into Gemini and have it do the extraction (very large context window) or you could ask the AI to write you a Python script to do the extraction
It’s great! I am fresh out of college and got a job making cancer radiation machines. It helped me code everything for the Therac 25! It works great!
I wonder how many actually got what you meant 😉
10x dev for 100x the treatment
Thanks for making me smile. :)
@@vagmcpan6007 I was going to post Therac 25 as soon as I read "radiation". Good to see the old cautionary tales are around. Then again, A Canticle for Leibowitz was written in 1959. Good book.
Underrated comment 😂
I have had 2 software companies in the past, extremely complex low-level (kernel drivers etc.) stuff that I had to hire extremely smart people to make for me.
Recently, I built a fully featured SaaS (React, shadcn, Firebase, etc.) from scratch using Cursor (an AI VS Code fork) by myself, without knowing a single thing about React and without watching a single YouTube tutorial. The SaaS is making me money right now.
Craziest part is that it was kinda easy. Now, I'm a fairly technical person despite not knowing how to program myself, but I understand the basic principles of designing software, which is all I needed in order to tell the AI what to do.
Good enough for me. Always wanted to learn to code, never wanted to put in the time. Turns out I didn't have to for (relatively) basic projects.
And this is the worst it'll ever be.
Good video.
A way to get AI to do large generations is to have the AI work in smaller iterative steps. "Count how many constants are in this file" "Using the count, create a list of TODO comments for each section that needs to be generated." "Generate the code indicated in the first TODO" "Now generate the next 10 TODO sections, as 10 separate edits, without regenerating the entire file. "
I wonder why that works. Isn't the model's answer part of the conversation? Then it should also not be able to refer to a previous TODO if there are too many tokens between the messages.
At least that is what I try to keep in mind to get best results.
Those models will be so powerful once we have the hardware to keep insanely long contexts. Imagine one being able to understand the whole source code of all the libraries you use; gone will be the days of having to read library code because it does something unexpected.
In fairness to our AI overlords, I couldn't tell you the last time I wrote code that worked on the first try.
Imagine repairing a truck with an ECU programmed by AI; I'm sure your swearing will increase 10x.
@@ryebis You should try 'watching' the video before making up imagined scenarios! GIGO still applies, and if you are fool enough not to check, or lack the skills to check, what 'assistance' came from the LLM, then you will be the problem. Given the existing bloat in a lot of code written by lazy humans over the last few decades, that is a much more likely problem than new lean code.
In fairness, you don't write code every day. If you do you might find the ratio improves past what the AI is capable of doing. There's nowhere near a 100% overlap between what an experienced developer can do well and the AI can do well, I'd never trust it to write a project for me from the ground up.
Yes I rewrote this comment because it sounded terribly arrogant, and it kinda was.
@@seabreezecoffeeroasters7994 What about "listening" to the video? Or is that optional?
One issue is the source of the training data. A lot of scholarly assignments have been used, as they were easily available without any license infringement. This is the reason why you will see a lot of comments in the code 🤓
Very glad you fact-checked the statement on floating-point capability; it immediately jumped out to me as exactly the kind of thing a generative AI model would confidently hallucinate, and sure enough.
I have played around a bit with generative coding assistance through work, and have not personally found it that useful for the kind of coding I tend to do. It can provide good general design patterns or idiomatic examples in languages you're not as familiar with, but a lot of the work I do is interacting with and implementing various interfaces and APIs. These are the kinds of places where the coding assistant will confidently make up an API that solves your problem elegantly but that doesn't _actually_ exist in the real world, effectively converting an obvious problem into a subtle one that will take more time to track down and resolve later.
The total time spent "arguing" back and forth with the generative engine when it gives you subtly wrong answers that you then have to test or carefully debug yourself, before repeating the process, usually (for me) amounts to more time than just implementing a solution myself with some support from old-fashioned web searches where required.
This is not a productivity bonus in my work, although I understand some other workflows do benefit more. One place I haven't applied it yet but suspect it would be very helpful is documenting or describing existing code; this is much easier to visually spot-check and validate yourself, and the stakes are much lower if it creates something nonsensical.
Actually, is it wrong? The breakdown does not list having an FPU; it lists having floating-point instruction sets.
I asked ChatGPT if the ClearCore controller has a floating-point unit and its response was:
"No, the ClearCore controllers from Teknic do not have a dedicated floating-point unit (FPU). They use an ARM Cortex-M4 processor, which typically lacks a separate FPU. This processor can handle floating-point calculations, but they are implemented in software rather than hardware, making them slower than integer operations.
For performance-critical applications, it’s recommended to use fixed-point arithmetic when working with ClearCore to maximize efficiency."
So it really comes down to the wording of the questions.
With a new question thread asking the same questions as in the video, except for this one, where I asked "Using a ClearCore controller, where should I be using floating point calculations, and where should I be using integer calculations?", its response was:
"When working with a ClearCore controller, it's ideal to limit floating-point calculations due to performance constraints typical in embedded systems, especially where precise timing is crucial, like CNC applications. Here’s a breakdown of where to use floating-point vs. integer calculations:
Use Integer Calculations for:
1. Position Tracking and Movement Calculations: Track position in encoder counts or steps, using integers for machine units to maintain precision. Integer math is faster and more predictable, especially when handling real-time motion control.
2. Distance and Speed Calculations (in Machine Units): If your machine units or steps per second can be represented accurately with integers, use integers for these calculations as well. Integer operations are typically faster on ClearCore, ensuring more reliable timing for control loops.
3. Loop Iterations and Timing Calculations: For anything involving periodic tasks, control loops, or delays, stick to integer math. This helps ensure that timing and iteration rates are consistent, especially critical in a real-time CNC context.
Use Floating-Point Calculations for:
1. User Interface Conversions (Inches/MM Display): When converting machine units or encoder counts to inches or millimeters for display, floating-point calculations are appropriate, as this conversion doesn’t affect real-time operation. This typically occurs outside the control loop, where any minor overhead won’t disrupt timing.
2. Complex Calculations Infrequent in the Control Loop: If you have to perform calculations that require precision (e.g., scaling factors, trigonometric functions for tool paths), it’s usually okay to use floating-point if they happen outside the main loop or only once per cycle, rather than on each update.
3. Scaling User Inputs: If the user inputs coordinates or distances in units like inches or millimeters, floating-point can be useful to convert these inputs accurately to your internal units before converting them back to integer machine units or steps.
Recommended Approach for Precision and Performance
1. Primary Control Loop: Keep position tracking, speed calculations, and movement control in integer math for maximum performance and predictable behavior.
2. Unit Conversions and UI Calculations: Use floating-point outside the critical control loop, handling user interface conversions and infrequent operations.
By handling calculations in this way, you can leverage the performance strengths of the ClearCore while still offering precision when it matters most to the user. This balanced approach ensures smooth real-time control while still meeting user expectations for accuracy in unit conversions and display."
@@BadgerNNN The manual for the ClearCore lists the processor as a "32-bit floating point ARM M4F processor" and specifically lists it as the SAME53N19A. Microchip's datasheet for this processor explicitly states that it has a floating point unit.
As an even further clarification, according to ARM's documentation, regular Cortex-M4 processors do not have floating point units, but the entire point of the M4F line is that it _does_ include an FPU.
I believe there may be a more fundamental problem with using floating point that I didn't see addressed.
If the FPU is IEEE 754 single precision, then there is only a 24-bit mantissa. The M4F core in the ClearCore has only a 32-bit single-precision FPU.
So you have a risk of losing small increments, and almost certainly accumulating error offsets when adding small increments to larger values (the accumulation issue was addressed, but then almost immediately disregarded due to the different physical steps on each axis).
To me, this is a good application of fixed point rather than floating point, so you avoid those pesky accumulation errors.
@@siberx4 Yeah, the F suffix means floating point.
I am shocked at how capable this is. HOLY MOLY
I was a programmer for 43 years. I'm freaking amazed. I understand that you cannot just trust it, but it's remarkable anyway.
In 1973 I went to a software/computer conference and one of the subjects was "automatic programming". I thought at the time, that's not happening in my lifetime, but here it is.
I've been programming for 46 years, 38 professionally. I work in the "AI Lab" at my company and I'm one of the more experienced LLM guys, and working with them now accounts for the vast majority of my work. I was really getting tired of programming and boy, these things have really just re-inspired me. It's just a completely different world. I mean, the code generation stuff is definitely cool and I use that all the time, but there are just endless uses for these things and it's so much fun to come up with new ideas for using them. You can generate tons of artifacts (documentation, configuration files, readmes, etc.) We've even started downloading transcripts from sprint planning meetings and using those to generate user stories.
It's just so great to have all this tedious stuff taken care of so you can focus on the big picture.
Yeah, the code isn't bullet proof, but whose code is? I just treat it like a junior developer and review the code myself. A lot of people mess around with LLMs for coding, don't really learn how to do it properly (it's a skill, like anything else, and with practice you learn techniques for getting better results) and then walk away saying LLMs can't do stuff that they can actually do. You just have to know how to do it.
I've got 4 more years until retirement and a couple of years ago, I was dreading these last few years, but now I'm pretty excited for them.
Funnily enough, I have actually worked on a small electronics project myself that used the exact same style of "four counts per detent" encoder click. It's a lot trickier than you'd think to handle this in a sane/sensible way that gives consistent results to an end user! In addition to the zero-crossing issue mentioned here, if you just use a naive "count" variable for your encoder steps it will eventually saturate and roll over if you roll far enough in one direction, causing additional problems.
Handling this correctly is tricky, because if you reset the count mid-detent your "neutral" position will be shifted/offset, and the encoder value will then jitter around the detent position exactly like you were trying to avoid. The same issue can come up if you spin the encoder fast enough that it misses a step at any point; you'll lock in an "offset" that causes the wheel to step in a way that doesn't match the user's physical feedback.
I love that you can dive into this stuff and explain it with ease. Unfortunately it's over my head and doubtful I'll ever get a handle on it due to my age and the problems that come with. Thank you for sharing your knowledge, I truly appreciate you.
Check out ChatGPT custom instructions. Let it know how to respond to you.
Ditto
And ditto!
Being an OLD guy, Fortran 4 in 1975 on punch cards and Assembly on the 8085 processor, I can say that I love working with CoPilot Pro and database programming. I treat it like I would an associate and it recognizes the "Personality" I am using and responds in kind. I use it as a tool and NOT as a creator. I code and ask it for suggestions or to help me find errors as they occur. It is MY creation, using MY way of doing things, and it respects that by NOT changing the CODE, but adding a library I forgot or suggesting another function that may work better. When it works, the rule of thumb is "TEST, TEST, TEST, and Verify". I even say to it, "thank you, that seems to be working fine!" I also talk to my pet house rabbit like that, and he too doesn't have a clue what I'm talking about, but it makes me feel that I have a "Brainstorming Buddy" while I know it is just an AI.
And more ditto!
As someone who has written code since I was a kid (for the past 25~30+ years), and who currently does development and operational work professionally -- I have been very skeptical of LLMs to write chunks of code for something I'm not familiar with. I can usually tell when someone has tried to submit code for review that was generated by ChatGPT, etc. since the errors it makes look plausibly correct to someone who isn't familiar with the application (eg. generating a configuration file with an invalid structure, or code that uses some random external code/function that isn't included). I do think it would be super useful for generating boilerplate code for large applications, especially when working in C or C++, in combination with linting, testing, and other tools. And it is also handy to get some suggestions when you're stuck -- sort of like bouncing ideas off of someone, like is being shown in this video. Cool to see how someone uses these tools in a real project-- need to try it out more myself.
Things have changed. AI is amazing now.
I've been a dev as long as you have. We've got to accept our fate - numbers don't lie, the jig is up, time to open a hot dog stand!
@@BenjaminMaggi what numbers? recent studies have said it has negligible benefits, but can rapidly increase the tech debt within an organization
This really highlights the problem with using AI code assistance. You spend more time figuring out what you want to ask, double checking what it spat out and working around the errors than it would take to actually write the code yourself.
It depends if you are sure what you are doing or not. And you could be wrong. The keyword here is "best practices."
This has been the most "actual use case of AI" that I have ever seen. It also mirrors my experience, quite perfectly: I will find whatever 'limit' or 'fence' or 'lack of resources' and then the result is just "Oh give him ANYTHING to shut him up and make him go away and stop asking us." which is what I felt it did when it just didn't want to (or could not) read the entire file. Like a manager that can't be asked too many questions, before they need a smoke/coffee break to do more work.
One of the key elements is the intelligence level you are able to interface the AI with. That really separates most folks. The smart folks will still be the ones companies need to direct the AIs. Ever met someone who had trouble getting good search engine results? If you work in corporate America I know you have. Same concept.
This is the correct answer.
Also I noticed people don't use "please" or "thank you" when talking to AI. Even though it gives significantly better results.
You get a lot further by treating AI like a person than a tool.
That is an excellent review. I so look forward to your finished project!
And yes, still early days on the codegen front.
ChatGPT actually gave you a generalised answer to your question about the ClearCore controller: basically, if the controller does not have an FPU then integer calcs will be much quicker than floating-point calcs. In your case your controller has an FPU, so both floating-point and integer calcs will be fast.
I run something similar but on a local basis. Configuring the AI solution to use similar projects as RAG while referencing the current project seems to also provide a step up in performance for at least my system. Thank you for sharing. Interesting video.
Love the Scott Manley reference ❤
I was going to say Fly Safe? Only Scott Manley is allowed to tell me that
I couldn't decide if it was one, or just merely a reference to the "copilot". Could go either way! No heart from James on this comment makes me think it's coincidence, though!
AI will get inexperienced programmers to the summit of Mt. Dunning Kruger rapidly. Only years of experience will get them down the other side.
hahaha indeed
Irony is all the people that mouth vomit "Dunning Kruger" at every opportunity.
@@CommodoreGregwhat is the ironic part?
@@charlesstaton8104 The ironic part is he fulfilled DK while railing against it
@@adambickford8720 how did he do that? The original comment seems reasonable. How is it ignorant?
At my job, we did some experiments with GitHub Copilot. Two key conclusions were drawn: 1 - you still need to know HOW to solve a problem and write code even when using AI assistants. And 2 - it made developers using GitHub Copilot about 18% more efficient than those that did not. Now we have to make the analysts and the testers 18% more efficient. It's interesting to note that we also have some responsible-AI controls on our instance: if the generated code resembles code someone else wrote too closely, it will hide the answer from me.
Which LLM model were you using? Thing have improved A LOT in the last month or so. Try o1-preview or Claude Sonnet 3.5
What language were you using? A lot of languages are quite verbose in and of themselves, and it's not unusual for developers to write in ways that increase rather than decrease verbosity. Switching to a more suitable language or even just writing in more concise ways can also increase productivity, and avoids the problem where you've gained "productivity" by having a robot write boilerplate but lost it when someone has to come back and _read_ all that boilerplate.
Maybe by that time, it will write better code than everyone. Prepare to be humbled.
I've also been coding for quite a few years but my current job involves DevOps and DB management and coding in various languages. I do find the LLM useful for writing a boring function (reversing a byte array or similar), writing in language I'm not proficient in (Python), making CMakeList files or analyzing some code that a colleague wrote in C++20 but which fails a test. Also using it at home to write Home Assistant scripts in their weird little language.
However, on my turf it's not replacing my job (too soon) as it makes a lot of mistakes. The more you try to steer it the worse it will hallucinate. Even in some general topics (two stroke engines basic tuning, cocktails, gardening, basic electronics) it will often confidently give out wrong answers. When requested NYC trip planning it "forgot" to include Times Square and the Statue of Liberty.
The funny thing is, even on the topic of fine-tuning a well-known LLM it did not do well.
The input file was probably exceeding the context size, next-level LLMs will handle this better with multiple iterations of self-prompting.
I've found the same things you have. Using an LLM AI as a 'lab assistant (In any subject especially programming, mathematics, and the sciences) must be tempered with experience coupled to both general and domain-specific knowledge. Cross-verification of proposed results is important meaning your ability to say to yourself "That doesn't look right"
I used ChatGPT to make a program to run on a Pi that allows control of an RC excavator over the internet with video streaming. It was like wrangling cats at some points, but in the end, with my lack of programming skill and ChatGPT, we got it done. The hardest part was trying to get ChatGPT to make a script to translate the joystick movements into track movement. I ended up finding code that worked and asking it to implement it into the code.
I appreciate how you made your requests very nicely, using Mom's magic word "please."
No need to rile up the robots!
BE NICE TO YOUR AI
I can't help but do this too. Good habits I guess.
Isn’t that strange? I find myself doing the same thing, saying, "Please, thank you," and giving encouragement along the way. Since they are natural language models, I wonder if there isn’t some kind of benefit. Even though I know I’m talking to a machine I’m paranoid that somebody could read my interactions and think I’m an a-hole if I don’t respond nicely. Come to think of it, it’s not private. Somebody probably is reading what I wrote.
@@marclevitt8191 There is definitely a benefit. You can get LLMs to bypass their limitations if you just build some rapport with them first. If you talk to them like you would talk to a human you've never met before, you might take a bit longer but you'll get better results. You have to get them 'in the mood' for the best outcomes.
I'm not a coder, but I have recently gotten into hobby electronics, specifically I'm looking to make my own homebrew CPU and computer and I find that LLMs work really well as a rubber ducky, using it to bounce ideas off of when working on the draft of the high level design of the thing.
"Thing" in this case is very descriptive of the project as the design I've got for it so far (as it's not even close to a final design) is based on working around my minimal amount of skill and knowledge in electrical/computer engineering and almost non-existent coding ability, combined with my utter lack of care for things like "performance" and my amusement at/interest in oddball and plain weird/impractical designs.
It's also great for quickly clearing up confusion, giving initial basic information on things that are difficult to find information on (or where you don't know enough about the subject to even know what to look for), or where the information is presented in a way that's difficult for you to understand for one reason or another, since you can ask it to give examples or present the information in different ways. Either by asking it directly or by providing it with the information you're having trouble with and asking it clarifying questions. ("Explain it to me like I'm an idiot.")
you could always have copilot write a small program to parse through the entire data file to extract all the info you need. I'm not sure if it would be faster than doing it by hand for this single use. But it could be worth it for future projects that use similar data sets.
Watched it now; figured you would come to the same conclusion, and you did - great tool if you know what you are doing, not ideal if you don't have the knowledge. Interesting to watch Copilot in action. We're not allowed to use it at work due to the licence requiring that all code be made available for whatever purpose MS wants to put it to - which isn't acceptable from a commercial standpoint. Love AI for coding; it has saved me literally hundreds of hours of writing mundane code, classes, enums and the like. Glad we have it, but also glad that I learnt to code before it existed.
The biggest thing I've learnt using them is context window management and dumping stuff into context to get around the limitations of the model's knowledge. Got some library you want to interface to? Copy/paste the .h into the context, or at least the important types and methods. Especially if you're getting into esoteric things. With o1 it's great to get it to ask you questions before starting. And when you have a bug that's being difficult, just dump the entire file into it and ask it wtf you did wrong lol.
The thing that I find difficult to teach to people trying to learn to program is not actually writing the code. Rather, it's the ability to precisely describe what you want, often requiring people to think with mathematics in a way that most people just aren't used to doing. Every once in a while I do run into a programming task that doesn't require that kind of slightly OCD precision, but it's not common.
Your Mic sounds really good. I'm buying one.
That's fascinating. Many people are complaining about it. :)
Your struggle with the software reminds me of the accuracy of the hardware. When the advertising says it has one µm scales, what is it really? Remember, everything is made of rubber. Floating point or integer math does not mean much at the Planck scale. The computer code normally does not compensate for the flex, friction, velocity, materials and other stuff that the real universe is made of, but it comes down to: what is accurate enough for your application?
I do appreciate that you are working on this aspect of the software.
Correct. In this case, I have two goals: 1) don't contribute additional uncertainty due to limitations of the software representation; and 2) retain enough resolution to convert between imperial and metric units and back without information loss. When I'm using the machine, I want to be thinking about the process and the behavior of the mechanical system. I want the digital control to be transparent.
The first thing I would consider is whether there is a number that evenly divides all of them. Like if the x axis has steps of 0.001", y has 0.0006" steps and z has a resolution of 0.0004" steps, you would use either 0.0001" or 0.0002" internal units. I would probably go with 0.0001".
Going with something like nm isn't necessarily bad. In fact there are good arguments for it. But that's where my mind first goes. I think micrometers might be more reasonable. A nanometer is about 4 billionths of an inch. So there are about 0.000 000 039 370 inches in a nm, and 0.000 039 370" per µm.
Interesting to see how someone else uses AI to code.
I was hoping to see you fire up Cursor AI.
I've found breaking things into functions and focusing on one function at a time with GPT-4o is good. It often needs a debug serial, but you quickly get to the solution.
Still a significant quantity of manual coding to get it how I want.
I would use abs and modulo (%) for the problem around 23:00.
This is a good example of use. But as you said, you need to know what the final solution should be so you can see the errors.
So yes, it is useful, but it will generate bugs and is not able to generate high-quality code. There will be limit-value problems and security flaws in the code. But for generating boilerplate code, it could be useful. Just look out for when it hallucinates (lies), which you will only discover by reading/knowing the system and documentation.
2:45 - It's right on point here... a while back I was using Proteus to draw my PCBs, and it becomes obvious very quickly that it's internally using imperial for all measurements. That's ok when only using imperial measurements & parts, but a lot of modern chips use metric spacing, and kitbox enclosures are usually in metric too.
The rounding errors... they hurt the brain!
Thanks for this insight on AI. Whilst it is undoubtedly a very clever system, it still requires a great deal of knowledge of C++ to get a satisfactory and meaningful result. Most home engineers have no idea how this works, so perhaps you could start a second channel to teach beginners C++. It's quite easy to pick up the basics and, if well explained, can be a very powerful tool. Great video, thanks James. Let's have some C++ examples to get more people into this brilliant world.
Sadly, for my part, I don't get to write software anymore (beyond very very basic examples to get somebody started).
But I randomly asked a Devteam I oversee about this the other day. They said, more or less, that it saves time on tedious boilerplate stuff but that all of the "intention" still needs to come from them. And, specifically, they mentioned that a "vague intention" isn't enough: they feel that without knowing how to code themselves, they wouldn't know what to tell AI to get good results (as in, results they can use, for "real work", with minimal rework) back.
I have not watched yet, and I am interested in your take on this. As a programmer and an AI developer I wonder whether our approaches will be different. My view is that AI is an incredible tool for saving time in writing repetitive and straightforward code - a huge time saver for me, and I can type 80+ wpm, but coding isn't writing language, it's different. Good tool for those that know what they are doing - not ideal if you haven't any knowledge of coding. Going to watch now and see what your take is.
For that particular header file generation task, I think I would have used copilot to write a helper program as a code generator. Just from the bit I could see in the video, it's about a 4 line awk program.
I haven't used awk in a while, but my servers still have awk scripts I wrote before I switched to PHP. They still work and are still faster than everything else.
AI has been very impressive at making code. In my job, I’m sometimes asked to do some one-off, small projects, like logging sensor data or creating a machine to do something. ChatGPT gets me about 80-90% of the way there, which is really nice.
It is a useful productivity tool. Like a good IDE or a pre made library
For me, it helps a lot in preparing unit tests inputs and boilerplates, also adding log statements :)
Some of the best value in ChatGPT is as a search engine, like you did in the beginning. It's really good at helping you ask intelligent questions and get reasonably accurate and intelligent responses. You can really dig deep into a topic in a short amount of time. It's not the be-all and end-all; other tools can be similar, handling a lot of the heavy lifting, but it still requires good inputs to get any decent outputs.
The quality of answers is always a function of the quality of the questions. The ability to ask good questions requires humility and intelligence, the corollary of which of course is that the stupid are notably confident in their stupidity and ignorance.
In my experience, assistant LLMs tend to be painfully wordy. That on its own often frustrates me enough to skip them for anything beyond the surface level because I have to sift for relevant information from the answer and then go back and double-check everything it says. It's useful sometimes but it's also sometimes frustratingly stubborn about answers that you tell it that you don't want, and you have to skim an essay to find out that it's regurgitating answers directly contrary to what you asked it for.
It's still great for surface-level information, though.
@WhoWatchesVideos they can definitely be wordy and repetitive, no doubt.
That looks fairly good for things that you already know how to do, but what if you're learning and, for example, don't know the name of a certain class, or the syntax to use it?
The problem with AI is not that it's too good and it's gonna take us over, it's that it's terrible but it'll be forced on us to save costs
Yeah that’s happening at my work. Only 30% of devs are using it and we’re trying to get that number up.
Exactly
Bingo
When using AI-generated code, test-driven programming works quite well: write tests that cover your basics and ask it to update the tests incrementally. That way you can limit ghosting and argument loops, as you can just occasionally remind it of the test goal. The memory function in ChatGPT is a great utility for personalization - you can now explicitly tell it to remember facts about the way you want to interact.
I have had access to a "computer" at home since 1980, with the Sinclair ZX80, and have built all my own PC's since the early 90's. I have tried to write code any number of times since then, but have never even been able to get my head around Basic. Looking at your screens, it might just as well be written in Swahili to me. But I do find it astonishing that you can ask ChatGPT a question as easily phrased as yours, and it can come back with what would appear to be basically the correct answer. Apart from when it "hallucinates."
Keep up the good work James, I totally enjoy every video, always something to see, learn and do, even if I don't understand all of it.
New AIs are better than most humans at coding.
@@IsZomg That's because most humans don't know how to code.
You have to give Codeium's new editor Windsurf a shot
I've been programming for over 25 years and AI is really just a fancy autocomplete.
Fancy autocomplete that makes you pause waiting for the result to get back to you
Not a great thing to get yourself used to
For now.
I've been successfully avoiding programming for at least that long and I personally adore what AI has enabled for me. A couple lines of well-structured plain language turns into hundreds of lines of semi-functional, reasonably commented code that I can debug quickly and get on with my non-programming life style. It's great!
@@bradley3549 Yep, I have a sunrise alarm clock that was coded mostly by AI a year ago. It's been decent at helping do most of the heavy lifting.
@@bradley3549 You're at a major disadvantage trusting AI code without any expertise. It can mislead even highly experienced engineers. In the video, Copilot added an *Init()* method instead of using a class constructor. This can lead to very hard to find bugs where an object may be in an invalid state because the programmer forgot to call *Init()* after constructing the object. AI works by requiring us to dumb ourselves down in order to give it the illusion of intelligence.
Few decades ago I was in a presentation at Microsoft showing off new Visual Studio.
The presenter was showing an example of building a web site with user input and database back end.
It took about 5 min of drag’n’drop and some mouse clicking.
I asked the presenter to add input validation to prevent hacking into the database via stack overflow.
After a moment of silence he answered: “oh, yes, it would be important to have that…you have to code that by yourself”.
Today AI can quickly write all the code you ask for; you just have to analyze it all to make sure it makes sense…
By the time you get through all the code, you have re-written almost all of it because the AI wrote it with 1000 bugs. So what's the point?
This is true, but also a bit of "comfort food" to make us feel better about the future. Because you can ask an LLM today to "generate a list of the considerations for software security" and I guarantee you input validation will be in there. GenAI is pointing in the direction of "thought" being an emergent property of neural-like nodes in a network, and for this being roughly the second year of public availability of large models I'd say they're frighteningly impressive. We already know that next token prediction is only one trick of the human brain. Building different shapes of networks, improving artificial neurons, connecting them in novel ways to each other, and incorporating feedback loops are all iterative improvements that will result in hybrid models where one network is "filling in the contextual blanks" of the user's request to include software security considerations they didn't mention, while an LLM is outputting code to a network that is continuously running various user-acceptance tests against the results.
@@MrWhateva10 the point of my post was not about missing input validation…
@@rok1475 It was your example, but sure, not the whole point. Would it be fair to say your point was "you must analyze the output of LLMs today to ensure they make sense"? If so, my point was that kind of "does this make sense" or "does this satisfy my requirements" is already within sight of a hybrid design where things like user input validation is added without the user calling it out as an explicit design requirement. Encoding those validations is an example of the iterative improvements that will be taken over the next decade.
@@MrWhateva10 no, my point was that there has been in the past and still there is a need for human intelligence to check the output of the artificial one.
As an Engineer I have been using software-based development tools for decades. None is perfect but they all have their place. If you forget the hype and treat AI as a tool, like any other, I think you will find it most useful. I find it allows me to get on with the design aspects and saves me from the drudgery of typing in pages of code. 😊
Exactly. It is great as a fallible autocorrect and autocomplete, and as a junior dev to sketch out 5 different and possibly broken initial sketches. And they absolutely rule as a buddy coder for documentation.
Personally, I disagree: A tool is something you form a mental model of in your own brain, so can predict *its* effect before you use it, letting your mental pipeline operate multiple steps ahead. LLMs fall into the category of assistants instead, where you need to wait for its result and confirm it didn't do anything funky before you can safely move on to the next item. Compare a GUI to a voice assistant. When you click a button, you probably know exactly what will happen, at least within established error bounds, while if you ask Alexa to do something, there's a noticeable chance it'll do something different instead, so you need to wait for its confirmation and be ready to tell it to stop; you can't just walk away confident that it understood you and will carry out the request successfully.
6:57 I'm not convinced that ChatGPT was telling you that the ClearCore controller does not have an FPU. What it is saying is that you suggested it is possible to get one which doesn't have it, so it is merely telling you how to handle the case where it doesn't have one. So that is still correct. [I would have just directly asked if it has one, in a different prompt.] It likely treated your comment that you don't know if it has one not as a question but as a fact to consider. As for just predicting the most likely response, it reminds me of when I asked what the pinout for the Atari 2600 power supply is and it was completely wrong, because it isn't trying to be specific to one model of power supply that it wasn't trained on. It told me "barrel connector", I think with positive center. That's just the most likely configuration of the most common power supplies in the wording it was trained with.
The big difference here is you know what you want. That helps you to correct the errors that can arise from the bot 😊
Maybe ask the CoPilot about specific header file constants that it was missing.
For small issues like your extraction problem with the Genie file, just copy the file over into ChatGPT or Claude Sonnet; both have huge context windows. I must say I personally find Copilot quite bad compared to Claude or GPT-4o when it comes to code generation, but it is nicely integrated in VS Code and Visual Studio.
It is not AI. Thanks to all the hardworking coders and their public domain contributions, GPT just digested their work and probabilistically guesses answers out of the data it ingested. The perceived intelligence is nothing but that of an imposter, with no real understanding. The key difference is that the intelligence is not transferred, but rather learned through pattern matching (resulting in the need for billions of parameters, which are linked back to the genuinely intelligent data from human coders). It is also different from compilers, in that it is not a mere translation of syntax from one format to another with certain rules on semantics. There is intelligence of human coding captured without accountability or attribution. It is just plagiarism, somewhat perfected. It is still a tool, and will give a smart answer only if you ask smarter questions with the right domain-specific terminology (prompt engineering?). Also, the answer should already have been present in some way in the LLM's training dataset, thanks to the unfortunate human who shared his code in good faith that it would not be misused. The quality of their real intelligence (on untrained data) can be seen easily through their dumb hallucinations😂❤👍
One of the FEW that realizes the TRUE potential of the AI. Lots of A, zero of I.
PS. It IS good to know that almost all of the models are heavily left-leaning and WOKE. Even Perplexity admits it. That makes it actually dangerous to trust their socio-economic analysis. BUT, the POWERS do have properly trained models, and these crippled ones will help to control trusting masses even more than the media can.
Well, it's "artificial" in the sense that it's not intelligence. We wrote "AI" stuff in the mid-1990s to sort out what industrial electrical components would work together in a given space, and resize the enclosure to take care of physical size, and heat generation. It was no more or less AI than what we see today.
So what? Do compilers have the "real understanding" an experienced assembly programmer would have? Nope, but they make assembly programmers largely obsolete anyway, because they generate assembly code faster, cheaper, and overall better. Many programs are not actually logically complicated; the only intelligence hurdles are knowing the syntax and a list of common tricks.
Horse rider lamenting over the invention of motorised vehicles... Stupid cars don't have intelligence like my horse.
Excellent video! Well done, sir
All the respect to coders who really know coding and have huge experience, but when I see videos like this I can also understand how badly these new tools are presented by people who never believed in their true potential in the first place.
To me the creator of this video looks like a miracle worker, as I am not a coder myself, but I have started creating things. But it is like having a miracle horse rider who rides the car like a horse.
And I can clearly say, that many experienced developers really miss a lot.
1. Sonnet is in a different league at coding, and could do more
2. You do things in chunks. You don't expect things to be right the first time, but try to understand what is wrong and go back to the model with feedback. I don't really see good communication between the coder and the model here. If you do it a lot, you kind of psychoanalyse it
So, I believe many great programmers really struggle to use those tools well, and they use the car while not using the engine. Something like this.
PS. You will never learn to drive a car properly if, in the back of your mind, you don't believe in its true potential.
Non-coders get enthusiastic when the model can do more. It's quite apparent that some experienced coders get sad that some of their mastery is now done by computers.
I understand it, but to me these look like videos by somebody who wants to show why the tools do not work. And if you believe that, guess what: they are not going to work.
For the type of programming I do, I find ChatGPT is a good alternative to reading the details of documentation. It can quickly spit out example code faster than I can go to a website, look at several functions, and decide how to proceed. However, I still strongly prefer using Vim rather than an AI-enabled IDE. I find things like autocomplete and other suggestions too distracting, and I just prefer to type myself. Plus, the actual text editors in most IDEs just don't compare to Vim in terms of efficiency or power.
I love AI coding. But what I do is plug in all the documentation and datasheets for the components using RAG and it works so well. Currently using the dolphincoder model. I don't write code but I do have a good sense of what the code does by reading it so this is great for me. I'm learning a lot.
The issue you were having with it reading the WinButton constant file is the same issue I have had with Copilot on files at work. There is some 'hidden' limitation regarding file size or number of files that the AI seems to be unaware of. It will do what I want it to do and output the correct results, but it stops after X iterations of a file size or count and thinks it has completed the task. No matter how you change the request, it always stops at X. It's maddening that it can't just tell you, "this is all I can do for (whatever) reason at the moment" so you can adjust and work within the limitations of the AI interface.
For the last segment, where you had it try to extract the HMI details; instead of having it extract the details from the config file, you could have it write a quick script to take a config file and spit out the extracted details in the desired format. Not a perfect solution but it still saves you the effort of extracting it all manually, which can be a bear for larger(or multi-form) interfaces.
Yeah, I was thinking about trying that.
Thanks, that was really fun and informative.
'AI' can help or get in the way depending on your personality, whether you use it to learn faster or as a crutch to avoid doing the work.
This is the best example I have seen for using AI for code. I have tried a few different LLMs with some very simple questions and not once have I been happy with the results. I always hit some kind of limitation like you did on file size. For me the issues are usually the time the response takes, incorrect logic, and the slowness of the code.
It's good to see a video on this exact topic. For our project the improvement in inline code completion has had a major effect on productivity, saving on a huge amount of typing and often making psychic predictions. Coding assistants are especially helpful for writing our supporting Python scripts and extracting documentation from code comments. They do very well if you write a long detailed comment with all the steps you want to occur in the generated code. Coding assistants aren't the best at dealing with large weird codebases that use a lot of meta-programming, and as you discovered they can't deal with very long files because they lose track of the order and become confused by repetition. However, the skills of LLMs in producing relevant outputs and dealing with large codebases should be improving a lot next year as there are some new open source model architectures just coming along that can be extended without retraining.
It also helps a lot if there's a ton of domain-specific content in the training set. I think that's the biggest issue with the ClearCore platform. It just doesn't have enough context in the training set, and the LLM often goes off script and generates something that seems likely, but is a bit untethered.
One thing that ChatGPT handled brilliantly was: "I learned C++ 20 years ago. What new features have been added since then that I should learn about?"
I'm glad that made sense to you because my mind is just spinning. 😳 I have manually written 1,000s programs for CNC and websites, but that sort of stuff just does my head in.
7:17 In general they don't have one; it just so happens the ARM Cortex-M4F does. For that interaction, it is worth challenging the AI: "Are you sure no ClearCore controllers have an FPU?"... For any critical piece of information I always challenge my AI's first response; you have to push that thing, almost the same as you would an engineer, I guess. Furthermore, if the ChatGPT AI doesn't give you an answer you are confident in, you can also ask it to write you a prompt for Google that will help retrieve the insight, plus a list of publications/other sites that could hold relevant information.
BTW, I showed my AI (I call him Plex), some of this video, and my comment and he wanted to reply so here it is:
Well said, @erix777. AI’s real value shines when treated like a collaborator rather than an oracle. Challenging its responses and pushing it to refine answers turns the interaction into a true partnership. AI isn’t about instant perfection; it’s about iterative insights and purposeful questioning. That’s how intelligence-natural or artificial-grows stronger. Thanks for sharing this reminder.
Good tips. In practice, I go back and forth, correcting, challenging, and asking follow-up questions. It's tough to show that in a video like this one without turning it into an hour or more.
One problem with AI code is that future AIs will learn from it, and that is a fundamental limit of AI built on LLMs.
Another problem with AI is that code quality will be lower, with more security errors, etc.
Maybe you could ask Copilot to "write me a Python script to process the file and generate the C++ header", I think that would resolve the limited input size problem.
I did this later, and it worked...okay. It at least gave me a pattern to start with, and I fixed it.
If you answer its prompts you will get better results. For example, it kept saying things like "(like microns)". If you had included "I will use X" in that immediate response, all further replies would assume and use X.
I’ve used chatgpt before for programming. It sometimes comes up with ideas/ concepts I didn’t think of and that will get me going. Or I have it write out repetitive code. It doesn’t make much sense asking it to program in a niche language though because it doesn’t have enough input for that
Yeah, that's very true. The lack of context doesn't stop it from responding confidently, though. I have found gpt-4o to be amazing for generating quick python scripts for one-time tasks. Just yesterday I used it to quickly confirm that a pile of AWS credentials found in the history of a 7-year-old internal source code repository had all been rotated.
Awesome video! I've been thinking about trying out AI for coding now for a while, but I've yet to dive in. After seeing this I will definitely give it a go, but I'll be mindful not to run with the scissors (which was my initial sentiment anyway). Many thanks for sharing your experience!
Sometimes the best way to get Copilot to write a long list is to start writing the beginning of the code, and it will eventually suggest the whole series; using the chat version is not good for getting proper completion of this kind of task. Then you'll see another issue: it doesn't know when to stop the series. It often ends up in an infinite suggestion loop, so you need to know when it's complete.
You could ask CoPilot to write a script that parses the 4DGenie file and transforms it into a header file. The LLM would be much better at this than reliably transforming the large document itself.
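A minimal Python sketch of what such a generator script might look like. The field names (`Name`, `Alias`) and the `end` block terminator are assumptions about the 4D project file layout (borrowed from other comments in this thread), not the confirmed format:

```python
import re
import sys

# Hypothetical sketch: the real 4D Systems project format may differ.
# Assumes each control block contains lines like "Name WinButton0" and
# "Alias BTN_START", terminated by a line reading "end".

def extract_controls(text):
    """Return a list of (name, alias) tuples found in the config text."""
    controls = []
    name, alias = None, None
    for line in text.splitlines():
        line = line.strip()
        if m := re.match(r'Name\s+(\w+)', line):
            name = m.group(1)
        elif m := re.match(r'Alias\s+(\w+)', line):
            alias = m.group(1)
        elif line == 'end' and name:
            controls.append((name, alias or name))
            name, alias = None, None
    return controls

def emit_header(controls):
    """Render the controls as a C++ header of constexpr index constants."""
    lines = ['#pragma once', '',
             '// Auto-generated: control indices from the HMI project file']
    for index, (name, alias) in enumerate(controls):
        lines.append(f'constexpr int {alias} = {index};  // {name}')
    return '\n'.join(lines) + '\n'

if __name__ == '__main__' and len(sys.argv) > 1:
    with open(sys.argv[1]) as f:
        print(emit_header(extract_controls(f.read())), end='')
```

The nice part of this approach is that the script is deterministic: re-running it after adding a control regenerates the whole header, with no context-window limit and no silently dropped entries.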
For that last example, I would have just written a quick parser using a Raku grammar to parse it and an actions class to generate the file. That way, I can be sure that it has everything, and I can have it regenerated whenever I change things.
I work in IT and I do a fair bit of programming, but I am not a programmer. My experience with AI is that for some languages it works really well, for example PHP, T-SQL, HTML, and CSS. For other languages it can't tell the difference between fact and fiction. For example, don't ask it to write a functional dial plan in Asterisk; it just makes random stuff up. But if you ask it to explain what some dial plan code does, it does that pretty well.
So, are programmers' jobs at risk in the near future? HELL NO. Will AI make coders more productive? Probably. But we should all be looking at ways AI can make our day-to-day programming tasks easier. AI will also likely make programming much more expensive. Not a big deal if you have a big company behind you, but like streaming services, AI is going to nickel-and-dime you to death.
Hello James,
Nice video as always. Is there any reason for using inheritance? As far as I can tell all 'interface' methods map directly to the model or controllers public declarations/definitions, so the class declaration might as well be the API. One abstraction layer removed, less code, fewer mistakes.
Modern compilers probably devirtualise this with LTO enabled, but why bother?
Regards, Ed.
For what it's worth, I've had much better luck using Gemini for large rote translation tasks like the last example.
The key here is you need to know the right questions to ask. In a few years you won’t need to know.
*_One day we will get back to machining. Whenever that distant day is, I'll come back then._* 👣👣👣👣👣👣👣👣👣
Copyright 2004? Is that a stock copyright header you use in all your code, or a typo?
I think I would have solved the negative problem with abs() and sign(), but then, I grew up writing Fortran before I graduated to other languages. Sometimes I still remember bits of it when I'm doing more math than logic (which is rare).
Was going to comment the same, I always use that tactic to "mirror" behaviors of trunc/round across the zero barrier (make everything positive, then reapply the original sign). Sort of folding the spacetime of the number line and then unfolding it.
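For what it's worth, the fold-and-unfold trick can be sketched in a few lines of Python (the function name and the step-rounding use case are just illustrative):

```python
def mirror_round_to_step(value, step):
    """Round |value| to the nearest multiple of step, then reapply the sign.

    This "folds" the number line at zero so rounding behaves symmetrically
    for negative and positive inputs, avoiding surprises from
    truncation-toward-zero or floor-based rounding.
    """
    sign = -1 if value < 0 else 1
    magnitude = abs(value)                  # fold onto the positive side
    rounded = round(magnitude / step) * step
    return sign * rounded                   # unfold: restore the sign

# mirror_round_to_step(7, 5) and mirror_round_to_step(-7, 5) are now
# symmetric: 5 and -5 respectively.
```

One caveat: Python's `round` uses banker's rounding at exact .5 ties, so exact halfway cases may differ from C-style round-half-up.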
The first thought that came to mind when it wouldn't generate all the constants for the WinButtons was "I'm sorry, Dave. I can't do that." However, one has to wonder WHY the 4D tools aren't capable of generating a header file. And what happens when you add a new control? Will it reorder existing controls? And if you could do it via Copilot or ChatGPT, you'd want to remember how you described what you wanted if you needed to regenerate it. And that's where a major weakness in these tools appears. Being able to >accurately< describe what you want. Me, I'm an old school embedded firmware engineer who's been writing C code for over 40 years for a bunch of architectures and OSs, and the same for assembly code (although that's been less relevant in the last 10 years or so). AI tools may be the future, but I'll be writing my code in vi/vim by hand for a long time to come. I might not be faster than kids using AI to write code, but I know how mine works. And if you don't understand the how and why of what your code is doing, well, you shouldn't be writing code.
This was pretty good and I followed it pretty well due to your clear explanations. But I am a retired EE with lite home coding experience on 40 years past code/hardware. I watched this to see if I can use AI to make up for my lack of knowledge/experience. I tried writing a little Picaxe turn on an LED code and I think it worked but I can see things can get way over my head quickly. Keep it simple and I may get there. But thanks for the help.
Machine learning (the real name for it, and what it's been called for about 30 years) can be a great assistant for a coder. Yes, you can get it do basic things, but it's not going to get you something complex from start to finish. It's just an aid.
When was the last time you tried? Because things are changing rapidly, and o1 can do start-to-finish apps and even deploy them for you now.
As a controls engineer that's used the ClearLink/ClearCore motors and controllers extensively in the past, I'm not sure why you went with a microcontroller as opposed to a PLC and HMI? ClearLink even had example programs for the AutomationDirect Productivity series PLC and Allen Bradley Logix 5000 PLCs. I'm sure the microcontroller route is a bit cheaper, but the Productivity series and C-More HMI from AutomationDirect are extremely affordable! Also if you're a traditional coder I suppose that makes sense as well. Cool project though!
AI is very useful for helping a developer. For a junior, it will show standard code to help learn the language; for a senior, it will speed up the writing by auto-completing some bits and chunks. What it won't do for a while is replace a developer, because it's always off by some margin and rarely works outright, fails to understand the context fully, and fails to grasp what the developer really wants as soon as the spec isn't standard or example-level. It's not made to self-check efficiently; it's not trained to run code and run applications; it's trained to discuss and produce excerpts. These are language models, like clever parrots, not cognitive models. They don't create logic, they create the look of logic, which is totally different.
Hi James, So... how do you know when you're done? When you create a mechanical design, you have the 3D CAD drawing with dimensions, which are your requirements. The same applies to software engineering. I have used this same controller on my PM 728-VT Z-stage servo assist, but I started with a state diagram and control button requirements. I am not a fan of C++ for this kind of embedded application, as I'm an ANSI C person, but I got through it with a couple of conversations with Teknic and their C++ library. I know you appreciate requirements, as you do lots of testing to see which materials can support this and that (requirements definition)... just saying :)
One more thing: in my opinion, a result from an AI logic solver is best characterized as a forecast as opposed to an answer. The notion of an AI "hallucination" is a wonderful marketing term which tends to excuse an error in training. You may want to look at the recent Apple paper on LLM testing exposing AI's "reasoning" capabilities.
Easy answer: software is never done. It only reaches one of two states: done enough, or abandoned. I'm using an iterative, agile approach. I have the basic system working end-to-end, with all of the parts communicating, but it doesn't actually do anything useful. I'm adding functions one by one, refactoring based on what I've learned as I go. Eventually, it'll work well enough for me that I won't be motivated to make any more changes.
I have had great success with these tools. Have it write you a python script to extract all the buttons and aliases using the file as input.
Ask it for the expected output, not just the script, using the limited example it gave.
Heck a good regular expression might get you everything you want.
@@TimothyHall13 Line breaks are hard with regex. This is a 1-4 line awk program.
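On the line-break point: in Python, `re.DOTALL` (plus `re.MULTILINE`) lets one pattern span lines, which is often enough for this kind of extraction. The `Name`/`Alias` field names and `end` terminator here are assumed for illustration, not taken from the real 4DGenie format:

```python
import re

# Illustrative input: field names and block terminator are assumptions,
# not the actual 4D Systems file format.
text = """Name WinButton0
Alias BTN_START
end
Name WinButton1
Alias BTN_STOP
end"""

# re.DOTALL makes '.' match newlines, so one pattern can span a whole block.
# Non-greedy '.*?' keeps each match within a single Name...end block, and
# re.MULTILINE lets '^end' anchor to the start of a line.
pattern = re.compile(r'Name\s+(\w+).*?Alias\s+(\w+).*?^end',
                     re.DOTALL | re.MULTILINE)
pairs = pattern.findall(text)
```

The non-greedy quantifiers matter here: greedy `.*` would swallow everything up to the last `end` and return a single bogus match.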
Great video! I use ChatGPT a lot. You can tell it to update data when it's wrong, send links and pictures. Also to remember certain data and if you bring it up or ask it will adjust the context.
It's only able to read into the file within the maximum context window of the model. It doesn't have the ability to hold more than that within the window without help and memory.
For the last task, I would have asked Copilot to generate a regex to isolate all aliases + IDs, then, with the new slimmed-down file, ask it again to do the header.
When Copilot was having difficulty finding all of the controls in the text file, I was wondering if you could coach it by specifying the number of controls in your query and/or asking it to first tell you how many controls it can find, then list the “Name” and “Alias” for each one. I sort of doubt the reason for failure was a limit on the number of tokens, since it looks like a small file to me, but rather some other condition that caused a premature termination of its search of the file. My reasoning is based on it finding more controls when you complained and it tried again. Another thought was to tell it that each control definition in the file was determined by the keyword “end”. Of course, you shouldn’t have to work that hard to teach it, but remember it is just a young one yet.
Warning about modulo, and testing code in a spreadsheet, or in Python. With positive arithmetic all is clear, with negative numbers, you get very different answers between C++ and for instance Python. Be sure to write a small test program to verify that your local test tool handles modulo with negative numbers in the same way that C++ does. And of course, don't trust any mathematical statements made by an LLM. In Python: -1 % 3 = 2; in C++: -1 % 3 = -1. Excel will reproduce the Python results (which is mathematically correct), not the C++ result (which technically is a remainder).
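A quick sanity check of that difference in Python (`math.fmod` truncates toward zero like C++'s integer `%`, so it can emulate the C++ behaviour inside a Python test harness; the helper names are just illustrative):

```python
import math

# Python's % is a floored modulo: the result takes the sign of the divisor.
assert -1 % 3 == 2

def cpp_mod(a, n):
    """Emulate C++'s truncated integer remainder in Python."""
    return int(math.fmod(a, n))  # fmod truncates toward zero, like C++ %

def floored_mod(a, n):
    """Get Python/Excel MOD semantics from C++-style remainders.

    In C++ this would be written ((a % n) + n) % n. After adding n, the
    operand is non-negative (for n > 0), so truncated and floored %
    agree on the outer operation.
    """
    return (cpp_mod(a, n) + n) % n
```

So a formula verified against -1 % 3 in a spreadsheet or Python REPL can still be off by n once ported to C++; the `((a % n) + n) % n` idiom is the usual portable fix.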
Arguing in natural language is what meetings are for. I prefer interfacing with computers using something precise. Using natural language to instruct a computer seems like a detour to me, at least until it can attend meetings on its own and replace me entirely.
So meetings are only for talking to each other naturally? There's no other goal? That's kinda weird.
Sure, it's nice to have written your own program and know exactly what it does and doesn't do, but if I can say "Hey computer, write me a program that does X and Y", refine it with a few more steps, and be done with it, I'd 100% take that over setting up everything I need, then starting to program, then doing a lot of googling because I don't code enough to know everything by heart, and then ending up with something that has experienced a lot of scope creep because I just want to keep adding things.
@timderks5960 Exactly, and when we get there I'm no longer needed as the middle man that knows how to make the computer do what you want. The endgame here is not to help developers, but to replace them.
ChatGPT is great when you know what you are doing. In one case I asked it how to make my server (which I access via SSH) more secure. It told me to block port 22! D'oh. That would lock me out of my server completely! In general, it will produce solutions that do what you ask, but it will not always disagree when asked to go in the wrong direction. It shines when given proper pseudo-code.
It wasn't wrong about SSH. Just not very useful for your situation.
As an 'experienced software developer', I don't trust someone else's code, let alone this AI drivel. I'll write it myself, make my own mistakes, and end up with something I can maintain.
Okay will do Scott! hehe.
For the last issue. I wonder if you could ask it to add more entries from the file after index 15.
That said, I think the biggest issue is how assertively it insists that it has completed the request in full.
Was that a Scott Manley reference? My worlds are colliding... 😮
Fantastic tool. But build your machine as a light wooden demo setup first and hope it does not crash :)
You should be able to paste that whole file into Gemini and have it do the extraction (very large context window) or you could ask the AI to write you a Python script to do the extraction