I have had 2 software companies in the past, extremely complex low-level (kernel drivers etc.) stuff that I had to hire extremely smart people to make for me. Recently, I built a fully featured SaaS (React, shadcn, Firebase etc.) from scratch using Cursor (AI VS Code fork) by myself without knowing a single thing about React + not watching a single YouTube tutorial. The SaaS is making me money right now. Craziest part is that it was kinda easy. Now I'm a fairly technical person despite not knowing how to program myself, but I understand the basic principles of designing software, which is all I needed to do to tell the AI what to do. Good enough for me. Always wanted to learn to code, never wanted to put in the time. Turns out I didn't have to for (relatively) basic projects. And this is the worst it'll ever be. Good video.
You say it's kind of easy, but it doesn't seem easy. You still have to know a lot of background knowledge. I wouldn't have a clue even though I've been a developer for 30 years. And my experience with AI generated code is that it's mostly crap.
@@toby9999 Not sure what to tell you then. If you've been a developer for 30 years then you should be able to make just about anything unless you've just been cruising at your job and not pushing yourself. And that is without AI. The code is pretty shit, I won't lie. It makes mistakes, invents things that don't exist etc. But the fact remains that it was able to help me - a non programmer - build a fully featured SaaS and bring it to market.
I was a programmer for 43 years. I'm freaking amazed. I understand that you cannot just trust it, but it's remarkable anyway. In 1973 I went to a software/computer conference and one of the subjects was "automatic programming". I thought at the time, that's not happening in my lifetime, but here it is.
I've been programming for 46 years, 38 professionally. I work in the "AI Lab" at my company and I'm one of the more experienced LLM guys, and working with them now accounts for the vast majority of my work. I was really getting tired of programming and boy, these things have really just re-inspired me. It's just a completely different world. I mean, the code generation stuff is definitely cool and I use that all the time, but there are just endless uses for these things and it's so much fun to come up with new ideas for using them. You can generate tons of artifacts (documentation, configuration files, readmes, etc.) We've even started downloading transcripts from sprint planning meetings and using those to generate user stories. It's just so great to have all this tedious stuff taken care of so you can focus on the big picture. Yeah, the code isn't bulletproof, but whose code is? I just treat it like a junior developer and review the code myself. A lot of people mess around with LLMs for coding, don't really learn how to do it properly (it's a skill, like anything else, and with practice you learn techniques for getting better results) and then walk away saying LLMs can't do stuff that they can actually do. You just have to know how to do it. I've got 4 more years until retirement and a couple of years ago, I was dreading these last few years, but now I'm pretty excited for them.
By the 1980s AI was laughed at. The industry believed it was fool's gold after the earlier period when it was over-hyped. That nonchalant attitude set back its development a good 20 years.
@@vfclists I know, but in the 80s it was thought the brain was far too complicated to model. The theory of a neural net worked, but performance was very poor. The first use was as a classifier.
@@ryebis You should try 'watching' the video before making up imagined scenarios! GIGO still applies, and if you are fool enough or lack the skills to check what 'assistance' came from the LLM, then you will be the problem. The existing bloat added to a lot of code by lazy humans over the last few decades is a much more likely problem than new lean code.
In fairness, you don't write code every day. If you do you might find the ratio improves past what the AI is capable of doing. There's nowhere near a 100% overlap between what an experienced developer can do well and the AI can do well, I'd never trust it to write a project for me from the ground up. Yes I rewrote this comment because it sounded terribly arrogant, and it kinda was.
One issue is the source of the training data. A lot of scholarly assignments have been used, as they were easily available without any license infringements. This is the reason why you will see a lot of comments in the code🤓
Very glad you fact-checked the statement on floating-point capability; it immediately jumped out to me as exactly the kind of thing a generative AI model would confidently hallucinate, and sure enough. I have played around a bit with generative coding assistance through work, and have not personally found it that useful for the kind of coding I tend to do. It can provide good general design patterns or idiomatic examples in languages you're not as familiar with, but a lot of the work I do is interacting with and implementing various interfaces and APIs. These are the kinds of places where the coding assistant will confidently make up an API that solves your problem elegantly but that doesn't _actually_ exist in the real world, effectively converting an obvious problem into a subtle one that will take more time to track down and resolve later. The total time spent "arguing" back and forth with the generative engine when it gives you subtly wrong answers that you then have to test or carefully debug yourself, before repeating the process, usually (for me) amounts to more time than just implementing a solution myself with some support from old-fashioned web searches where required. This is not a productivity bonus in my work, although I understand some other workflows do benefit more. One place I haven't applied it yet but suspect it would be very helpful is documenting or describing existing code; this is much easier to visually spot-check and validate yourself, and the stakes are much lower if it creates something nonsensical.
Actually, is it wrong? The breakdown does not list having an FPU; it lists having floating point sets. I asked ChatGPT if the ClearCore controller has a floating-point unit and its response was: "No, the ClearCore controllers from Teknic do not have a dedicated floating-point unit (FPU). They use an ARM Cortex-M4 processor, which typically lacks a separate FPU. This processor can handle floating-point calculations, but they are implemented in software rather than hardware, making them slower than integer operations. For performance-critical applications, it’s recommended to use fixed-point arithmetic when working with ClearCore to maximize efficiency." So it really comes down to the wording of the questions.
In a new conversation I asked the same questions as in the video, except for this one, where I asked: "Using a ClearCore controller, where should I be using floating point calculations, and where should I be using integer calculations?" Its response was:

"When working with a ClearCore controller, it's ideal to limit floating-point calculations due to performance constraints typical in embedded systems, especially where precise timing is crucial, like CNC applications. Here's a breakdown of where to use floating-point vs. integer calculations:

Use Integer Calculations for:
1. Position Tracking and Movement Calculations: Track position in encoder counts or steps, using integers for machine units to maintain precision. Integer math is faster and more predictable, especially when handling real-time motion control.
2. Distance and Speed Calculations (in Machine Units): If your machine units or steps per second can be represented accurately with integers, use integers for these calculations as well. Integer operations are typically faster on ClearCore, ensuring more reliable timing for control loops.
3. Loop Iterations and Timing Calculations: For anything involving periodic tasks, control loops, or delays, stick to integer math. This helps ensure that timing and iteration rates are consistent, especially critical in a real-time CNC context.

Use Floating-Point Calculations for:
1. User Interface Conversions (Inches/MM Display): When converting machine units or encoder counts to inches or millimeters for display, floating-point calculations are appropriate, as this conversion doesn't affect real-time operation. This typically occurs outside the control loop, where any minor overhead won't disrupt timing.
2. Complex Calculations Infrequent in the Control Loop: If you have to perform calculations that require precision (e.g., scaling factors, trigonometric functions for tool paths), it's usually okay to use floating-point if they happen outside the main loop or only once per cycle, rather than on each update.
3. Scaling User Inputs: If the user inputs coordinates or distances in units like inches or millimeters, floating-point can be useful to convert these inputs accurately to your internal units before converting them back to integer machine units or steps.

Recommended Approach for Precision and Performance
1. Primary Control Loop: Keep position tracking, speed calculations, and movement control in integer math for maximum performance and predictable behavior.
2. Unit Conversions and UI Calculations: Use floating-point outside the critical control loop, handling user interface conversions and infrequent operations.

By handling calculations in this way, you can leverage the performance strengths of the ClearCore while still offering precision when it matters most to the user. This balanced approach ensures smooth real-time control while still meeting user expectations for accuracy in unit conversions and display."
@@BadgerNNN The manual for the ClearCore lists the processor as a "32-bit floating point ARM M4F processor" and specifically lists it as the SAME53N19A. Microchip's datasheet for this processor explicitly states that it has a floating point unit. As an even further clarification, according to ARM's documentation, regular Cortex-M4 processors do not have floating point units, but the entire point of the M4F line is that it _does_ include an FPU.
I believe there may be a more fundamental problem with using floating point that I didn't see addressed. If the FPU is IEEE 754 single precision, then there is only a 24-bit mantissa. The M4F core in the ClearCore has only a 32-bit single-precision FPU. So you have a risk of losing small increments, and almost certainly accumulating error offsets when adding small increments to larger values (the accumulation issue was addressed, but then almost immediately disregarded due to the different physical steps on each axis). To me, this is a good application of fixed point rather than floating point, so you avoid those pesky accumulation errors.
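To make the accumulation risk concrete, here is a minimal C++ sketch (the 0.0001" step and the million-iteration loop are made-up illustration values, not anything from the project): repeatedly adding a small single-precision increment to a large running total drifts, while counting whole steps in an integer does not.

#include <cstdint>
#include <cstdio>

int main() {
    // Single-precision float: ~24-bit mantissa, about 7 significant decimal digits.
    // Adding a tiny step to a large running position rounds away part of each step.
    float posFloat = 0.0f;
    const float step = 0.0001f;          // hypothetical 0.0001" increment

    // Fixed-point alternative: count whole steps in a 32-bit integer and
    // convert to inches only when displaying.
    int32_t posCounts = 0;

    for (int i = 0; i < 1000000; ++i) {  // one million 0.0001" moves = 100"
        posFloat += step;
        posCounts += 1;
    }

    printf("float accumulator : %.6f\n", posFloat);            // drifts noticeably away from 100.0
    printf("fixed-point counts: %.6f\n", posCounts * 0.0001);  // prints 100.000000 (conversion done in double)
    return 0;
}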
This has been the most "actual use case of AI" that I have ever seen. It also mirrors my experience, quite perfectly: I will find whatever 'limit' or 'fence' or 'lack of resources' and then the result is just "Oh give him ANYTHING to shut him up and make him go away and stop asking us." which is what I felt it did when it just didn't want to (or could not) read the entire file. Like a manager that can't be asked too many questions, before they need a smoke/coffee break to do more work.
As someone who has written code since I was a kid (for the past 25~30+ years), and who currently does development and operational work professionally -- I have been very skeptical of LLMs to write chunks of code for something I'm not familiar with. I can usually tell when someone has tried to submit code for review that was generated by ChatGPT, etc. since the errors it makes look plausibly correct to someone who isn't familiar with the application (eg. generating a configuration file with an invalid structure, or code that uses some random external code/function that isn't included). I do think it would be super useful for generating boilerplate code for large applications, especially when working in C or C++, in combination with linting, testing, and other tools. And it is also handy to get some suggestions when you're stuck -- sort of like bouncing ideas off of someone, like is being shown in this video. Cool to see how someone uses these tools in a real project-- need to try it out more myself.
I love that you can dive into this stuff and explain it with ease. Unfortunately it's over my head and doubtful I'll ever get a handle on it due to my age and the problems that come with. Thank you for sharing your knowledge, I truly appreciate you.
Being an OLD guy, Fortran 4 in 1975 on punch cards and Assembly on the 8085 processor, I can say that I love working with CoPilot pro and Database Programming. I treat it like I would an associate and it recognizes the "Personality" I am using and responds in kind. I use it as a tool and NOT as a creator. I code and ask it for suggestions or to help me find Errors as they occur. It is MY creation, using MY way of doing things and it respects that by NOT changing the CODE, but adding a Library I forgot or suggesting another Function that may work better. When it works, the rule of thumb is "TEST, TEST, TEST, and Verify". I even say to it, thank you, that seems to be working fine! I also talk to my pet house rabbit like that, and he too doesn't have a clue what I'm talking about, but it makes me feel that I have a "Brainstorming Buddy" while I know it is just an AI.
you could always have copilot write a small program to parse through the entire data file to extract all the info you need. I'm not sure if it would be faster than doing it by hand for this single use. But it could be worth it for future projects that use similar data sets.
Funnily enough, I have actually worked on a small electronics project myself that used the exact same style of "four counts per detent" encoder click. It's a lot trickier than you'd think to handle this in a sane/sensible way that gives consistent results to an end user! In addition to the zero-crossing issue mentioned here, if you just use a naive "count" variable for your encoder steps it will eventually saturate and roll over if you roll far enough in one direction, causing additional problems. Handling this correctly is tricky, because if you reset the count mid-detent your "neutral" position will be shifted/offset, and the encoder value will then jitter around the detent position exactly like you were trying to avoid. The same issue can come up if you spin the encoder fast enough that it misses a step at any point; you'll lock in an "offset" that causes the wheel to step in a way that doesn't match the user's physical feedback.
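For anyone curious what handling that looks like, here is a minimal C++ sketch of one way to do it (illustrative only, not the approach from the video): work with the change in the raw count and carry the sub-detent remainder forward, so nothing gets reset mid-detent and a wrapping counter can't shift the neutral position.

#include <cstdint>

// Turn raw quadrature counts from a "4 counts per detent" encoder into detent steps.
class DetentDecoder {
public:
    // rawCount: signed accumulated quadrature count from the encoder driver.
    // Returns the number of whole detents moved since the previous call.
    int32_t update(int32_t rawCount) {
        // Delta since last call; doing the subtraction in unsigned arithmetic keeps
        // it well defined even if the raw counter eventually wraps around.
        int32_t diff = static_cast<int32_t>(
            static_cast<uint32_t>(rawCount) - static_cast<uint32_t>(lastRaw_));
        lastRaw_ = rawCount;
        residual_ += diff;

        // C++ integer division truncates toward zero, so +4 or -4 counts are needed
        // for one detent in either direction -- symmetric behaviour about zero.
        int32_t detents = residual_ / 4;
        residual_ -= detents * 4;   // keep the partial detent for the next call
        return detents;
    }

private:
    int32_t lastRaw_  = 0;
    int32_t residual_ = 0;
};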
One of the key elements is the intelligence level you are able to interface the AI with. That really separates most folks. The smart folks will still be the ones companies need to direct the AIs. Ever met someone who had trouble getting good search engine results? If you work in corporate America I know you have. Same concept.
This is the correct answer. Also I noticed people don't use "please" or "thank you" when talking to AI. Even though it gives significantly better results. You get a lot further by treating AI like a person than a tool.
I run something similar but on a local basis. Configuring the AI solution to use similar projects as RAG while referencing the current project seems to also provide a step up in performance for at least my system. Thank you for sharing. Interesting video.
A way to get AI to do large generations is to have the AI work in smaller iterative steps:
"Count how many constants are in this file."
"Using the count, create a list of TODO comments for each section that needs to be generated."
"Generate the code indicated in the first TODO."
"Now generate the next 10 TODO sections, as 10 separate edits, without regenerating the entire file."
I wonder why that works. Isn't the answer of the model part of the conversation? So it should also not be able to refer to a previous TODO if there are too many tokens between the messages? At least that is what I try to keep in mind to get the best results. Those models will be so powerful once we have the hardware to keep insane lengths of context. Imagine it being able to understand the whole source code of all the used libraries; gone will be the days of having to read library code because it does something unexpected.
At my job, we did some experiments with github copilot. Two key conclusions were made, 1 - you still need to know HOW to solve a problem and write code even using the AI assistants. And 2 - it made developers using github copilot about 18% more efficient than those that did not. Now we have to make the analysts and the testers 18% more efficient. It's interesting to note that we also have some responsible AI controls on our instance. If the generated code resembles code someone else wrote too closely, it will hide the answer from me.
What language were you using? A lot of languages are quite verbose in and of themselves, and it's not unusual for developers to write in ways that increase rather than decrease verbosity. Switching to a more suitable language or even just writing in more concise ways can also increase productivity, and avoids the problem where you've gained "productivity" by having a robot write boilerplate but lost it when someone has to come back and _read_ all that boilerplate.
I've also been coding for quite a few years but my current job involves DevOps and DB management and coding in various languages. I do find the LLM useful for writing a boring function (reversing a byte array or similar), writing in language I'm not proficient in (Python), making CMakeList files or analyzing some code that a colleague wrote in C++20 but which fails a test. Also using it at home to write Home Assistant scripts in their weird little language. However, on my turf it's not replacing my job (too soon) as it makes a lot of mistakes. The more you try to steer it the worse it will hallucinate. Even in some general topics (two stroke engines basic tuning, cocktails, gardening, basic electronics) it will often confidently give out wrong answers. When requested NYC trip planning it "forgot" to include Times Square and the Statue of Liberty. The funny thing is, even on the topic of fine-tuning a well-known LLM it did not do well. The input file was probably exceeding the context size, next-level LLMs will handle this better with multiple iterations of self-prompting.
Isn’t that strange? I find myself doing the same thing, saying, "Please, thank you," and giving encouragement along the way. Since they are natural language models, I wonder if there isn’t some kind of benefit. Even though I know I’m talking to a machine I’m paranoid that somebody could read my interactions and think I’m an a-hole if I don’t respond nicely. Come to think of it, it’s not private. Somebody probably is reading what I wrote.
@@marclevitt8191 There is definitely a benefit. You can get LLMs to bypass their limitations if you just build some rapport with them first. If you talk to them like you would talk to a human you've never met before, you might take a bit longer but you'll get better results. You have to get them 'in the mood' for the best outcomes.
Interesting. It's like encountering someone else's code - that may work - and you have the task of refining it. I don't know. In my mind the code for a project builds quickly. I wonder if the time it takes to type it in is less than the time it takes to refine something generated into what my mind had already conjured. Thanks for posting this. It's the first example I've seen of a real-world attempt at using classifier generated code.
The thing that I find difficult to teach to people trying to learn to program is not actually writing the code. Rather, it's the ability to precisely describe what you want, often requiring people to think with mathematics in a way that most people just aren't used to doing. Every once in a while, I do run into a programming task that doesn't require that kind of slightly OCD precision, but it's not common.
Great insight. I agree. Even basic leetcode problems to me are like Chinese. Only after a few years of dabbling with computer science and programming have I begun to understand how to apply the abstract coding concepts to real world problems. For example I now see intuitively the usefulness of loops, and lists etc. whereas in the beginning it wasn't always obvious how those concepts are useful to me.
Programming is NOT about learning to use a language or write code.... It's all about developing a mindset to be able to keep massive real-world problems in your head & break them down into steps that can be implemented. If you cannot think clearly & logically in the first place, it does not matter WHAT tools you use, because eventually you will reach a point where neither the tool nor you can logically break down the system you developed and correct mistakes.
Interesting to see how someone else uses AI to code. I was hoping to see you fire up Cursor AI. I've found breaking things into functions and focusing on 1 function at a time with GPT-4o is good. It often needs a debug serial print, but you quickly get to the solution. Still a significant qty of manual coding to get it how I want.
That looks fairly good for things that you already know how to do, but what if you're learning and, for example, don't know the name of a certain class, or the syntax to use it?
I've found the same things you have. Using an LLM AI as a 'lab assistant' (in any subject, especially programming, mathematics, and the sciences) must be tempered with experience coupled to both general and domain-specific knowledge. Cross-verification of proposed results is important, meaning your ability to say to yourself, "That doesn't look right."
ChatGPT actually gave you a generalised answer to your question about the ClearCore controller: basically, if the controller does not have an FPU then integer calcs will be much quicker than floating-point calcs. In your case your controller has an FPU, so both floating-point and integer calcs will be fast.
I couldn't decide if it was one, or just merely a reference to the "copilot". Could go either way! No heart from James on this comment makes me think it's coincidence, though!
I'm not a coder, but I have recently gotten into hobby electronics, specifically I'm looking to make my own homebrew CPU and computer and I find that LLMs work really well as a rubber ducky, using it to bounce ideas off of when working on the draft of the high level design of the thing. "Thing" in this case is very descriptive of the project as the design I've got for it so far (as it's not even close to a final design) is based on working around my minimal amount of skill and knowledge in electrical/computer engineering and almost non-existent coding ability, combined with my utter lack of care for things like "performance" and my amusement at/interest in oddball and plain weird/impractical designs. It's also great for quickly clearing up confusion, giving initial basic information on things that are difficult to find information on (or where you don't know enough about the subject to even know what to look for) or where the information is presented in a way that's difficult for you to understand for one reason or another, since you can ask it to give examples or present the information in different ways. Either by asking it directly or by providing it with the information you're having trouble with and asking it clarifying questions. ("Explain it to me like I'm an idiot.")
I have had access to a "computer" at home since 1980, with the Sinclair ZX80, and have built all my own PC's since the early 90's. I have tried to write code any number of times since then, but have never even been able to get my head around Basic. Looking at your screens, it might just as well be written in Swahili to me. But I do find it astonishing that you can ask ChatGPT a question as easily phrased as yours, and it can come back with what would appear to be basically the correct answer. Apart from when it "hallucinates." Keep up the good work James, I totally enjoy every video, always something to see, learn and do, even if I don't understand all of it.
This really highlights the problem with using AI code assistance. You spend more time figuring out what you want to ask, double checking what it spat out and working around the errors than it would take to actually write the code yourself.
For that particular header file generation task, I think I would have used copilot to write a helper program as a code generator. Just from the bit I could see in the video, it's about a 4 line awk program.
As a controls engineer that's used the ClearLink/ClearCore motors and controllers extensively in the past, I'm not sure why you went with a microcontroller as opposed to a PLC and HMI? ClearLink even had example programs for the AutomationDirect Productivity series PLC and Allen Bradley Logix 5000 PLCs. I'm sure the microcontroller route is a bit cheaper, but the Productivity series and C-More HMI from AutomationDirect are extremely affordable! Also if you're a traditional coder I suppose that makes sense as well. Cool project though!
Copyright 2004? Is that a stock copyright header you use in all your code, or a typo? I think I would have solved the negative problem with abs() and sign(), but then, I grew up writing Fortran before I graduated to other languages. Sometimes I still remember bits of it when I'm doing more math than logic (which is rare).
Was going to comment the same, I always use that tactic to "mirror" behaviors of trunc/round across the zero barrier (make everything positive, then reapply the original sign). Sort of folding the spacetime of the number line and then unfolding it.
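Spelling out the trick from these two comments as a tiny C++ sketch (illustrative only, not the video's code): do the rounding on the magnitude, then restore the sign, so negative values behave as mirror images of positive ones.

#include <cstdint>

// Round a value to the nearest multiple of 'step', symmetrically about zero.
int64_t roundToStep(int64_t value, int64_t step) {
    int64_t mag = (value < 0) ? -value : value;            // fold onto the positive side
    int64_t rounded = ((mag + step / 2) / step) * step;    // ordinary round-half-up
    return (value < 0) ? -rounded : rounded;               // unfold: reapply the original sign
}

With a step of 5, this sends both +8 and -8 to +/-10, whereas simply adding step/2 and letting integer division truncate toward zero would pull -8 to -5.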
Watched it now, figured you would come to the same conclusion and you did - great tool if you know what you are doing, not ideal if you don't have the knowledge. Interesting to watch Co-Pilot in action; not allowed to use it at work due to the licence requiring that all code be available for whatever purpose MS wants to put it to - which isn't acceptable from a commercial standpoint. I love AI for coding; it has saved me literally hundreds of hours of writing mundane code, classes, enums and the like. Glad we have it, but also glad that I learnt to code before it existed.
I would use abs and modulo (%) for the problem around 23:00. This is a good example of use. But as you said, you need to know what the final solution should be so you can see the errors. So yes, it is useful, but it will generate bugs and is not able to generate high-quality code. There will be limit-value problems and security flaws in the code. But for generating boilerplate code, it could be useful. Just look out for when it hallucinates (lies), which you will only discover by reading/knowing the system and documentation.
I used ChatGPT to make a program to run on a Pi that allows control of an RC excavator over the internet with video streaming. It was like wrangling cats at some points, but in the end, with my lack of programming skill and ChatGPT, we got it done. The hardest part was trying to get ChatGPT to make a script to translate the joystick movements into track movement. I ended up finding code that worked and asking it to implement it into the code.
2:45 - It's right on point here... a while back I was using Proteus to draw my PCBs, and it becomes obvious very quickly that it's internally using imperial for all measurements. That's ok when only using imperial measurements & parts, but a lot of modern chips use metric spacing, and kitbox enclosures are usually in metric too. The rounding errors... they hurt the brain!
Your struggle with the software reminds me of the accuracy of the hardware. When the advertising says the scales are one µm, what are they really? Remember, everything is made of rubber. Floating point or integer math does not mean much at the Planck scale. The computer code normally does not compensate for the flex, friction, velocity, materials and other stuff that the real universe is made of, so it comes down to: what is accurate enough for your application? I do appreciate that you are working on this aspect of the software.
Correct. In this case, I have two goals: 1) don't contribute additional uncertainty due to limitations of the software representation; and 2) retain enough resolution to convert between imperial and metric units and back without information loss. When I'm using the machine, I want to be thinking about the process and the behavior of the mechanical system. I want the digital control to be transparent.
I’ve used chatgpt before for programming. It sometimes comes up with ideas/ concepts I didn’t think of and that will get me going. Or I have it write out repetitive code. It doesn’t make much sense asking it to program in a niche language though because it doesn’t have enough input for that
Yeah, that's very true. The lack of context doesn't stop it from responding confidently, though. I have found gpt-4o to be amazing for generating quick python scripts for one-time tasks. Just yesterday I used it to quickly confirm that a pile of AWS credentials found in the history of a 7-year-old internal source code repository had all been rotated.
30:28 I think the file is too big to fit into the context of the prompt. My guess is that the code files are added via RAG (retrieval-augmented generation).
Some of the best value in ChatGPT is as a search engine, like you used it in the beginning. It's really good at helping you ask intelligent questions and get reasonably accurate and intelligent responses. You can really dig deep into a topic in a short amount of time. It's not the be-all end-all. Other tools can be similar, handling a lot of the heavy lifting, but it still requires good inputs to get any decent outputs.
The quality of answers is always a function of the quality of the questions. The ability to ask good questions requires humility and intelligence; the corollary, of course, is that the stupid are notably confident in their stupidity and ignorance.
In my experience, assistant LLMs tend to be painfully wordy. That on its own often frustrates me enough to skip them for anything beyond the surface level because I have to sift for relevant information from the answer and then go back and double-check everything it says. It's useful sometimes but it's also sometimes frustratingly stubborn about answers that you tell it that you don't want, and you have to skim an essay to find out that it's regurgitating answers directly contrary to what you asked it for. It's still great for surface-level information, though.
Biggest thing I've learnt using them is context window management and dumping stuff into context to get around limitations of the model's knowledge. Got some library you want to interface to? Copy/paste the .h into the context, or at least the important types and methods. Especially if you're getting into esoteric things. With o1 it's great to get it to ask you questions before starting. As well as when you have a bug that's being difficult, just dump the entire file into it and ask it wtf you did wrong lol.
Is there a good Arduino plugin for VSCode? For my non-embedded programming, Copilot offers decent auto-complete, but Cursor with the ability to select and AI-edit code is just on another level.
Hello James, Nice video as always. Is there any reason for using inheritance? As far as I can tell all 'interface' methods map directly to the model or controllers public declarations/definitions, so the class declaration might as well be the API. One abstraction layer removed, less code, fewer mistakes. Modern compilers probably devirtualise this with LTO enabled, but why bother? Regards, Ed.
It's only able to read into the file within the maximum context window of the model. It doesn't have the ability to hold more than that within the window without help and memory.
The first thing I would consider is if there is a number that evenly divides all of them. Like if the X axis has steps of 0.001", Y has 0.0006" steps and Z has a resolution of 0.0004" steps, you would use either 0.0001" or 0.0002" internal units. I would probably go with 0.0001". Going with something like nm isn't necessarily bad. In fact there are good arguments for it. But that's where my mind first goes. I think micrometers might be more reasonable. A nanometer is about 40 billionths of an inch, so there are about 0.000 000 039 370 inches in a nm. There are 0.000 039 370" per µm.
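To show what that looks like in code, here is a small C++ sketch using the hypothetical numbers from this comment (0.0001" internal unit; 0.0010"/0.0006"/0.0004" axis steps) - illustration values only, not the real machine's:

#include <cstdint>
#include <cstdio>

// Internal unit: 0.0001" per count. All position math stays in 64-bit integer
// counts; floating point only appears at the UI boundary for display.
constexpr int64_t kCountsPerInch = 10000;   // 1 / 0.0001"
constexpr double  kMmPerInch     = 25.4;

// Axis step sizes expressed in internal counts (exact, because 0.0001"
// divides each of the example step sizes evenly).
constexpr int64_t kXStep = 10;   // 0.0010"
constexpr int64_t kYStep = 6;    // 0.0006"
constexpr int64_t kZStep = 4;    // 0.0004"

double countsToInches(int64_t counts) { return static_cast<double>(counts) / kCountsPerInch; }
double countsToMm(int64_t counts)     { return countsToInches(counts) * kMmPerInch; }

int main() {
    int64_t pos = 12345 * kXStep;   // 12345 X steps, all exact integer math
    printf("%.4f in  /  %.3f mm\n", countsToInches(pos), countsToMm(pos));
    return 0;
}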
Hi James, So... how do you know when you're done? When you are looking to create a mechanical design you have shown the 3D CAD drawing with dimensions, which are your requirements. This also applies to software engineering. I have used this same controller on my PM 728-VT Z-Stage servo assist, but I started with a state diagram and control button requirements. I am not a fan of C++ for this kind of embedded application as I'm an ANSI C person, but I got through it with a couple of conversations with Teknic and using their C++ library. I know you appreciate requirements, as you do lots of testing to see which materials can support this and that (requirements definition)... just saying :) One more thing: in my opinion, results from an AI logic solver are best characterized as a forecast as opposed to an answer. The notion of an AI "hallucination" is a wonderful marketing term which tends to make an excuse for a training error. You may want to look at the recent Apple paper on LLM testing exposing AI's "Reasoning" capabilities.
Easy answer: software is never done. It only reaches one of two states: done enough, or abandoned. I'm using an iterative, agile approach. I have the basic system working end-to-end, with all of the parts communicating, but it doesn't actually do anything useful. I'm adding functions one by one, refactoring based on what I've learned as I go. Eventually, it'll work well enough for me that I won't be motivated to make any more changes.
Is it possible that the AI does not generate the header file because it assumes the values of the constants cannot be used twice? I see an Alias of ModeButtonSide with the name Winbutton10, but 'index 10' is already used for 'YJog'.
As an 'experienced software developer', I don't trust someone else's code, let alone this AI dribble. I'll write it myself, make my own mistakes, and end up with something I can maintain.
I've been successfully avoiding programming for at least that long and I personally adore what AI has enabled for me. A couple lines of well-structured plain language turns into hundreds of lines of semi-functional, reasonably commented code that I can debug quickly and get on with my non-programming life style. It's great!
@@bradley3549 You're at a major disadvantage trusting AI code without any expertise. It can mislead even highly experienced engineers. In the video, Copilot added an *Init()* method instead of using a class constructor. This can lead to very hard-to-find bugs where an object may be in an invalid state because the programmer forgot to call *Init()* after constructing the object. AI works by requiring us to dumb ourselves down in order to give it the illusion of intelligence.
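A stripped-down C++ illustration of the two-phase-init pitfall this comment describes (class and member names are invented for the example, not taken from the video's code):

// Two-phase init: the object exists in a half-constructed state until someone
// remembers to call Init(). Forgetting the call compiles fine and fails at runtime.
class JogAxisTwoPhase {
public:
    void Init(int stepsPerUnit) { stepsPerUnit_ = stepsPerUnit; ready_ = true; }
    long toSteps(double units) const { return ready_ ? static_cast<long>(units * stepsPerUnit_) : 0; }
private:
    int  stepsPerUnit_ = 0;
    bool ready_ = false;
};

// RAII style: the constructor establishes the invariant, so any JogAxis that
// exists is usable -- there is no invalid state to forget about.
class JogAxis {
public:
    explicit JogAxis(int stepsPerUnit) : stepsPerUnit_(stepsPerUnit) {}
    long toSteps(double units) const { return static_cast<long>(units * stepsPerUnit_); }
private:
    int stepsPerUnit_;
};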
Thanks for this insight on AI. Whilst it is undoubtedly a very clever system, it still requires a great deal of knowledge of C++ to get a satisfactory and meaningful result. Most home engineers have no idea how this works, so perhaps you could start a second channel to teach beginners C++. It's quite easy to pick up the basics and, if well explained, it can be a very powerful tool. Great video, thanks James. Let's have some C++ examples to get more people into this brilliant world.
I consider it a useful tool. You still have to think for yourself, but it can be nice to get a working example, different insights, or suggestions for other libraries. You cannot trust the code, but if you know how to deal with that, it is one of the Swiss Army knives for a developer.
Sadly, for my part, I don't get to write software anymore (beyond very very basic examples to get somebody started). But I randomly asked a Devteam I oversee about this the other day. They said, more or less, that it saves time on tedious boilerplate stuff but that all of the "intention" still needs to come from them. And, specifically, they mentioned that a "vague intention" isn't enough: they feel that without knowing how to code themselves, they wouldn't know what to tell AI to get good results (as in, results they can use, for "real work", with minimal rework) back.
The issue you were having with it reading the Winbutton constant file is the same issue I have had with CoPilot with files at work. There is some 'hidden' limitation in regards to file size or number of files that the AI seems to be unaware of. It will do what I want it to do and output the correct results, but it stops after X iterations of a file size or number and thinks it has completed the task. No matter how you change the request, it always stops at X. It's maddening that it can't just tell you, "this is all I can do for (whatever) reason at the moment" so you can adjust and work within the limitations of the AI interface.
For the last issue, I wonder if you could ask it to add more entries from the file after index 15. That said, I think the biggest issue is how assertive it is that it has completed the request.
A few decades ago I was in a presentation at Microsoft showing off the new Visual Studio. The presenter was showing an example of building a web site with user input and a database back end. It took about 5 min of drag'n'drop and some mouse clicking. I asked the presenter to add input validation to prevent hacking into the database via stack overflow. After a moment of silence he answered: "oh, yes, it would be important to have that…you have to code that by yourself". Today AI can quickly write all the code you ask for; you just have to analyze it all to make sure it makes sense…
This is true, but also a bit of "comfort food" to make us feel better about the future. Because you can ask an LLM today to "generate a list of the considerations for software security" and I guarantee you input validation will be in there. GenAI is pointing in the direction of "thought" being an emergent property of neural-like nodes in a network, and for this being roughly the second year of public availability of large models I'd say they're frighteningly impressive. We already know that next token prediction is only one trick of the human brain. Building different shapes of networks, improving artificial neurons, connecting them in novel ways to each other, and incorporating feedback loops are all iterative improvements that will result in hybrid models where one network is "filling in the contextual blanks" of the user's request to include software security considerations they didn't mention, while an LLM is outputting code to a network that is continuously running various user-acceptance tests against the results.
@@rok1475 It was your example, but sure, not the whole point. Would it be fair to say your point was "you must analyze the output of LLMs today to ensure they make sense"? If so, my point was that kind of "does this make sense" or "does this satisfy my requirements" is already within sight of a hybrid design where things like user input validation is added without the user calling it out as an explicit design requirement. Encoding those validations is an example of the iterative improvements that will be taken over the next decade.
@@MrWhateva10 no, my point was that there has been in the past and still there is a need for human intelligence to check the output of the artificial one.
@29:49 I strongly suspect that if you looked at the call to the LLM, the entire document is not making it. Since you're including it each time and continuing the previous conversation the context window would easily be filled after a few retries. In the chatgpt interface I would have it do some python to extract the 2 necessary properties from the file and then pass just those to the LLM to have it complete the header file. I suspect it would still fail but there's a clear next step of using python itself to create the header file
For small issues like your extraction problem with the Genie file, just copy the file over into ChatGPT or Claude Sonnet; both have huge context windows. I must say I personally find Copilot quite bad compared to Claude or GPT-4o when it comes to code generation, but it is nicely integrated in VS Code and Visual Studio.
6:57 I'm not convinced that ChatGPT was telling you that the ClearCore controller does not have an FPU. What it is saying is that you suggested it is possible to get one which doesn't have it, so it is merely telling you how to handle the case where it doesn't have one. So that is still correct. [I would have just directly asked if it has one, in a different prompt.] It likely treated your comment that you don't know if it has one not as a question but as a fact to consider. As far as just predicting the most likely response goes, it reminds me of when I asked what the pinout of the Atari 2600 power supply is, and it was completely wrong because it isn't trying to be specific about one model of power supply it's not trained on. It told me "barrel connector," I think with positive center. That's just the most likely configuration for the most common power supplies in the wording it was trained with.
Well said about the AI hallucinating things. My kids know very well AI will produce things that are artificial, but it gives them something that they can build on.
What kind of "response" can / should be driven back to ChatGPT so as to clean up a provided hallucination? Will further training take place if one were to point out that "clearly the documentation (chapter/verse) shows that floating point support IS a feature and YOU, Mr. GPT, are in error"… How does any of this get corrected?
I used to be pretty decent at handwriting G code for three axis milling and lathe with occasional live tooling. That was a while ago... Before I moved on from machining I would just do edits here and there like bringing the vise up to the doors before M30. On my LS Crossfire at home I add in sending the torch, slowly, to the upper right to "park" of out of the way. Sometimes I add in a pause between cuts to clear tip ups or let the compressor catch up. I wonder if this "AI" thing is better at sorting out older machine language like G code?
I have not watched yet, and I am interested in your take on this. As a programmer and an AI developer I wonder whether our approaches will be different. My view is that AI is an incredible tool for saving time in writing repetitive and straightforward code - a huge time saver for me, and I can type 80+ wpm - but coding isn't writing language; it's different. Good tool for those that know what they are doing - not ideal if you haven't any knowledge of coding. Going to watch now to see what your take is.
7:17 In general they don't have, it just so happens the ARM Cortex-M4F does. For that Interaction, it is worth challenging the AI: "Are you sure no ClearCore controllers have an FPU?"... For any critical piece of information I always challenge my AI's first response, you have to push that thing, almost the same as you would an Engineer I guess. Furthermore, if the ChatGPT AI doesn't give you an answer you are confident with, you can also ask it to write you a prompt for Google that will help retrieve the insight, plus a list of publications/other sites that could hold relevant information.
BTW, I showed my AI (I call him Plex), some of this video, and my comment and he wanted to reply so here it is: Well said, @erix777. AI’s real value shines when treated like a collaborator rather than an oracle. Challenging its responses and pushing it to refine answers turns the interaction into a true partnership. AI isn’t about instant perfection; it’s about iterative insights and purposeful questioning. That’s how intelligence-natural or artificial-grows stronger. Thanks for sharing this reminder.
Good tips. In practice, I go back and forth, correcting, challenging, and asking follow-up questions. It's tough to show that in a video like this one without turning it into an hour or more.
AI has been very impressive at making code. In my job, I’m sometimes asked to do some one-off, small projects, like logging sensor data or creating a machine to do something. ChatGPT gets me about 80-90% of the way there, which is really nice. It is a useful productivity tool. Like a good IDE or a pre made library
When Copilot was having difficulty finding all of the controls in the text file, I was wondering if you could coach it by specifying the number of controls in your query and/or asking it to first tell you how many controls it can find, then list the “Name” and “Alias” for each one. I sort of doubt the reason for failure was a limit on the number of tokens, since it looks like a small file to me, but rather some other condition that caused a premature termination of its search of the file. My reasoning is based on it finding more controls when you complained and it tried again. Another thought was to tell it that each control definition in the file was determined by the keyword “end”. Of course, you shouldn’t have to work that hard to teach it, but remember it is just a young one yet.
At 7:20, in all fairness, ChatGPT could've been speaking about the ones that don't have floating point, and not specifically saying that yours doesn't have floating point.
At 2:30. Well, that isn't anything new. That is how programs should be designed. The user interface is another thing; conversion only happens there, to the user's preferred units, date/time, etc. Nothing in the ChatGPT suggestions is anything new: use integers for the hardware side, which should be fast, and then convert to floating point when talking to the external UI, like a user interface or another API into your system. You can also use the pre-trained models from the Hugging Face web page and run them locally instead of in the cloud. You need a good graphics card to run the LLM on, though. And much of what you do in your code has been in IDEs for ages, like Eclipse etc.
When using AI-generated code, test-driven programming works quite well: write tests that cover your basics and ask it to update the tests incrementally. That way you can limit ghosting and arguing loops, as you can just occasionally remind it about the test goal. The memory function in ChatGPT is a great utility for personalization; you can now explicitly tell it to remember facts about the way you want to interact.
I'm getting a bit off topic for this channel, but I wonder if there's a zero-cost way to do interfaces. Abstract classes have the vtable overhead, so there's a runtime cost (which admittedly can often be ignored). Especially since you only have one implementation of the interface, it seems overkill. Wouldn't separating public/protected/private members do? How about a template as an interface? Modules maybe, if the compiler supports it.
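One common way to get the seam without a vtable is the template route mentioned above: make the consumer a template over the concrete type, so the "interface" is checked at compile time and calls bind statically. A rough sketch with invented names (not the project's actual classes):

// Any type providing moveTo()/position() satisfies the implicit "interface";
// a mock for unit tests can be dropped in the same way, with no virtual calls.
struct MotionController {
    void moveTo(long steps) { pos_ = steps; }   // stand-in for real motion code
    long position() const { return pos_; }
private:
    long pos_ = 0;
};

template <typename Controller>
class JogHandler {
public:
    explicit JogHandler(Controller& c) : ctrl_(c) {}
    void jogBy(long steps) { ctrl_.moveTo(ctrl_.position() + steps); }
private:
    Controller& ctrl_;
};

The trade-off is that the contract stays implicit (unless you add a C++20 concept) and each instantiation is a separate type, so it fits best when, as here, there is really only one implementation plus maybe a test double.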
At around 17:00 Copilot makes a classic beginner's mistake when converting ADC count to voltage. For a 12-bit successive approximation ADC (which the ClearCore uses) you divide by 4096, _not_ 4095. It would be interesting to see if it makes the same mistake converting a voltage to a DAC code. Also note that the analog inputs on the ClearCore have an internal potential divider that presents a 30k ohm resistance to the source voltage; this will shift the midpoints of your voltage ranges a little.
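For reference, the convention this comment argues for looks like the following tiny C++ sketch (the 3.3 V reference is an arbitrary example value, not the ClearCore's actual analog input range):

#include <cstdint>

// A 12-bit ADC code spans 4096 steps (0..4095), each representing one
// LSB-wide slice of the input range, so the scale factor uses 4096.
constexpr float kVref = 3.3f;   // example reference voltage only

float adcToVolts(uint16_t code) {
    return (static_cast<float>(code) / 4096.0f) * kVref;   // not / 4095.0f
}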
As an Engineer I have been using software-based development tools for decades. None is perfect but they all have their place. If you forget the hype and treat AI as a tool, like any other, I think you will find it most useful. I find it allows me to get on with the design aspects and saves me from the drudgery of typing in pages of code. 😊
Exactly. It is great as a fallible autocorrect and autocomplete, and as a junior dev to knock out 5 different and possibly broken initial sketches. And they absolutely rule as a buddy coder for documentation.
Personally, I disagree: A tool is something you form a mental model of in your own brain, so can predict *its* effect before you use it, letting your mental pipeline operate multiple steps ahead. LLMs fall into the category of assistants instead, where you need to wait for its result and confirm it didn't do anything funky before you can safely move on to the next item. Compare a GUI to a voice assistant. When you click a button, you probably know exactly what will happen, at least within established error bounds, while if you ask Alexa to do something, there's a noticeable chance it'll do something different instead, so you need to wait for its confirmation and be ready to tell it to stop; you can't just walk away confident that it understood you and will carry out the request successfully.
In the last example of parsing the Genie file, I wonder how it would have worked to ask it for a search & replace regex that would strip out just the details you wanted, and then just run the regex manually on the file. If that worked at all it would work on an input file of practically any size.
The first thought that came to mind when it wouldn't generate all the constants for the WinButtons was "I'm sorry, Dave. I can't do that." However, one has to wonder WHY the 4D tools aren't capable of generating a header file. And what happens when you add a new control? Will it reorder existing controls? And if you could do it via Copilot or ChatGPT, you'd want to remember how you described what you wanted if you needed to regenerate it. And that's where a major weakness in these tools appears. Being able to >accurately< describe what you want. Me, I'm an old school embedded firmware engineer who's been writing C code for over 40 years for a bunch of architectures and OSs, and the same for assembly code (although that's been less relevant in the last 10 years or so). AI tools may be the future, but I'll be writing my code in vi/vim by hand for a long time to come. I might not be faster than kids using AI to write code, but I know how mine works. And if you don't understand the how and why of what your code is doing, well, you shouldn't be writing code.
Maybe you could ask Copilot to "write me a Python script to process the file and generate the C++ header", I think that would resolve the limited input size problem.
I would suggest using double precision, as on modern x86 64-bit and 32-bit floats take the same amount of time to calculate, and I'm pretty sure on ARM it's the same. But that is just my take.
Doesn't matter, you're doing it wrong anyways. When you throw a bunch of text at chat it only derails its response. The trick is to feed chat snippets of info/code and have it solve a bunch of small problems one at a time. Think of it like taking a test and asking someone for the answers as you progress through the test, until the test is completed. Chat will puke on you if it tries to process too much info at one time.
It's good to see a video on this exact topic. For our project the improvement in inline code completion has had a major effect on productivity, saving on a huge amount of typing and often making psychic predictions. Coding assistants are especially helpful for writing our supporting Python scripts and extracting documentation from code comments. They do very well if you write a long detailed comment with all the steps you want to occur in the generated code. Coding assistants aren't the best at dealing with large weird codebases that use a lot of meta-programming, and as you discovered they can't deal with very long files because they lose track of the order and become confused by repetition. However, the skills of LLMs in producing relevant outputs and dealing with large codebases should be improving a lot next year as there are some new open source model architectures just coming along that can be extended without retraining.
It also helps a lot of there's a ton of domain-specific content in the training set. I think that's the biggest issue with the ClearCore platform. It just doesn't have enough context in the training set, and the LLM often goes off script and generates something that seems likely, but is a bit untethered. One thing that ChatGPT handled brilliantly was: "I learned C++ 20 years ago. What new features have been added since then that I should learn about?"
Sometime around 1993 I discovered that my life is better if I don't try to customize everything I touch. At the time it was common to have a big pile of bash aliases, color customizations, and keyboard remappings. It was serious geek cred at the time. It also made you look like a total idiot when you tried to use someone else's computer because you couldn't remember how anything worked and all your muscle memory was wrong.
@Clough42 I actually didn't think about it like that, but I 100% agree. That's the same reason I don't like to create desktop shortcuts to shared documents. Everyone else is always lost when they are on someone else's pc because they can't remember where the file is actually located. Same thing with custom keyboard shortcuts.
This is the best example I have seen for using AI for code. I have tried a few different LLMs with some very simple questions and not once have I been happy with the results. I always hit some kind of limitation like you did on file size. For me the issues are usually the time the response takes, incorrect logic, and the slowness of the code.
For the last segment, where you had it try to extract the HMI details; instead of having it extract the details from the config file, you could have it write a quick script to take a config file and spit out the extracted details in the desired format. Not a perfect solution but it still saves you the effort of extracting it all manually, which can be a bear for larger(or multi-form) interfaces.
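A rough sketch of the code-generator idea from this and several earlier comments, written in C++ to stay in the project's language (a short Python or awk script would do the same job). Note that the "Name="/"Index=" line format below is a made-up stand-in for illustration, not the real 4D Genie export format; only the "end" keyword terminating each control block is taken from the comments here.

#include <fstream>
#include <iostream>
#include <string>

int main(int argc, char** argv) {
    if (argc < 2) { std::cerr << "usage: genconsts <genie-export-file>\n"; return 1; }
    std::ifstream in(argv[1]);
    std::string line, name, index;
    std::cout << "// Auto-generated -- do not edit by hand\n#pragma once\n\n";
    while (std::getline(in, line)) {
        if (line.rfind("Name=", 0) == 0)  name  = line.substr(5);   // hypothetical field name
        if (line.rfind("Index=", 0) == 0) index = line.substr(6);   // hypothetical field name
        if (line == "end" && !name.empty()) {                       // one control block finished
            std::cout << "constexpr int " << name << " = " << index << ";\n";
            name.clear();
            index.clear();
        }
    }
    return 0;
}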
It is not AI. Thanks to all the hardworking coders and their public domain contributions, GPT just digested their work and probabilistically guesses answers out of the data ingested. The perceived intelligence is nothing but that of an imposter, with no real understanding. The key difference is that the intelligence is not transferred, but rather learned through pattern matching (resulting in the need for billions of parameters, which are linked to the real intelligent data from human coders). It is also different from compilers, in that it is not a mere translation of syntax from one format to another with certain rules on semantics. There is intelligence of human coding captured without accountability or attribution. It is just plagiarism somewhat perfected. It is still a tool; it will give a smart answer only if you ask smarter questions with the right domain-specific terminology (prompt engineering?). Also, the answer should already have been present in some way in the training dataset for the LLM, thanks to the unfortunate human who shared his code in good faith that it would not be misused. The quality of their real intelligence (for untrained data) can be found easily, through their dumb hallucinations😂❤👍
One of the FEW that realizes the TRUE potential of the AI. Lots of A, zero of I. PS. It IS good to know that almost all of the models are heavily left leaning and WOKE. Even Perplexity admits it. Makes them actually dangerous to trust their socio-economic analysis. BUT, POWERS do have properly trained models and this crippled will help to control stupid trusting masses even more, than the media can.
Well, it's "artificial" in the sense that it's not intelligence. We wrote "AI" stuff in the mid-1990s to sort out what industrial electrical components would work together in a given space, and resize the enclosure to take care of physical size, and heat generation. It was no more or less AI than what we see today.
So what? Do compilers have the "real understanding" an experienced assembly programmer would have? Nope, but they make assembly programmers largely obsolete anyway, because they generate assembly code faster, cheaper and overall better. Many programs are not actually logically complicated, the only intelligence hurdles is knowing the syntax and a list of common tricks.
You could ask CoPilot to write a script that parses the 4DGenie file and transforms it into a header file. The LLM would be much better at this than reliably transforming the large document itself.
All respect to coders who really know coding and have huge experience, but when I see videos like this I can also understand how badly these new tools are presented by people who were initially non-believers in their true potential. To me the creator of this video looks like a miracle, as I am not a coder myself but have started creating things. But it is like having a miracle horse rider who drives the car as if it were a horse. And I can clearly say that many experienced developers really miss a lot. 1. Sonnet is in a different league for coding and could do more. 2. You do things in chunks. You don't expect things to be right, but try to understand what is wrong. You go back to the model with feedback, and I do not really see good communication between the coder and the model here. If you do it a lot, you kind of psychoanalyse it. So I believe many great programmers really struggle to use these tools well; they use the car without using the engine, something like that. PS. You will never learn to drive a car properly if in the back of your mind you don't believe in its true potential. Non-coders get enthusiastic when the model can do more. Some experienced coders, it appears, get sad that some of their mastery is now done by computers. I understand it, but to me these come across as videos by somebody who wants to show why the tools do not work. And if you believe that, guess what: they are not going to work.
Great video! I use ChatGPT a lot. You can tell it to update data when it's wrong, send links and pictures. Also to remember certain data and if you bring it up or ask it will adjust the context.
It’s great! I am fresh out of college and got a job making cancer radiation machines. It helped me code everything for the Therac 25! It works great!
I wonder how many actually got what you meant 😉
10x dev for 100x the treatment
Thanks for making me smile. :)
@@vagmcpan6007 I was going to post Therac 25 as soon as I read "radiation". Good to see the old cautionary tales are around. Then again, A Canticle for Leibowitz was written in 1959. Good book.
Underrated comment 😂
I just started using AI for some personal microcontroller projects and some python. I am somewhere between beginner and novice and let me tell you AI has been an indispensable tool and has helped me sort out bugs and other issues and has saved me tons of time when I have been stumped on something. So all I can say is so far I love it!
“Enthusiastic kid” is almost exactly how I describe it. “It’s like having a small army of enthusiastic junior developers who do 3 days worth of work instantly, and then you have to coach them and fix their mistakes”
yes. this 100x. Should you use AI? yes, 100x yes. But in the wrong hands its fucking horrible lol: Kids reviewing kid's work.
I have had 2 software companies in the past, extremely complex low-level (kernel drivers etc.) stuff that I had to hire extremely smart people to make for me.
Recently, I built a fully feature SaaS (react, shadcn, firebase etc.) from scratch using cursor (ai vscode fork) by myself without knowing a single thing about react + not watching a single TH-cam tutorial. The SaaS is making me money right now.
Craziest part is that it was kinda easy. Now I'm a fairly technical person despite not knowing how to program myself, but i understand the basic principles of designing software which is all I needed to do to tell the AI what to do.
Good enough for me. Always wanted to learn to code, never wanted to put in the time. Turns out I didn't have to for (relatively) basic projects.
And this is the worst it'll ever be.
Good video.
Same here, technically minded but never coded. I am about to launch my app for data protection compliance using python, react, firebase etc
Man that's awesome! It literally feels like waking up with super powers lol. I wish you luck with your app!
You say it's kind of easy, but it doesn't seem easy. You still have to know a lot of background knowledge. I wouldn't have a clue even though I've been a developer for 30 years. And my experience with AI generated code is that it's mostly crap.
@@toby9999 Not sure what to tell you then. If you've been a developer for 30 years then you should be able to make just about anything unless you've just been cruising at your job and not pushing yourself. And that is without AI.
The code is pretty shit, I won't lie. It makes mistakes, invents things that don't exist etc. But the fact remains that it was able to help me - a non programmer - build a fully featured SaaS and bring it to market.
I want to develop your app lol
I was a programmer for 43 years. I'm freaking amazed. I understand that you cannot just trust it, but it's remarkable anyway.
In 1973 I went to a software/computer conference and one of the subjects was "automatic programming". I thought at the time, that's not happening in my lifetime, but here it is.
I've been programming for 46 year, 38 professionally. I work in the "AI Lab" at my company and I'm one of the more experienced LLM guys and working with them now accounts for the vast majority of my work. I was really getting tired of programming and boy these things have really just re-inspired me. It's just a completely different world. I mean, the code generation stuff is definitely cool and I use that all the time, but there are just endless uses of these things and it's so much fun to come up with new ideas for using them. You can generate tons of artifacts (documentation, configuration files, readmes, etc.) We've even started downloading transcripts from sprint planning meetings and using those to generate user stories.
It's just so great to have all this tedious stuff taken care of so you can focus on the big picture.
Yeah, the code isn't bullet proof, but whose code is? I just treat it like a junior developer and review the code myself. A lot of people mess around with LLMs for coding, don't really learn how to do it properly (it's a skill, like anything else, and with practice you learn techniques for getting better results) and then walk away saying LLMs can't do stuff that they can actually do. You just have to know how to do it.
I've got 4 more years until retirement and a couple of years ago, I was dreading these last few years, but now I'm pretty excited for them.
By the 1980s AI was laughed at. The industry believed it was fool's gold after the earlier hype failed to deliver. That dismissive attitude set its development back a good 20 years.
@@Andrew-rc3vh LLMs are not the same as the AIs of the 80s.
@@vfclists I know that, but in the 80s it was thought the brain was far too complicated to model. The theory of a neural net worked, but the performance was very poor. The first use was as a classifier.
In fairness to our AI overlords, I couldn't tell you the last time I wrote code that worked on the first try.
Imagine repairing a truck with an ECU programmed by AI; I'm sure your swearing will increase 10x.
@@ryebis You should try 'watching' the video before making up imagined scenarios! GIGO still applies, and if you are fool enough not to check, or lack the skills to check, what 'assistance' came from the LLM, then you will be the problem. The existing bloat added by lazy humans over the last few decades is a much more likely problem than new, lean code.
In fairness, you don't write code every day. If you do you might find the ratio improves past what the AI is capable of doing. There's nowhere near a 100% overlap between what an experienced developer can do well and the AI can do well, I'd never trust it to write a project for me from the ground up.
Yes I rewrote this comment because it sounded terribly arrogant, and it kinda was.
@@seabreezecoffeeroasters7994 What about "listening" to the video? Or is that optional?
One issue is the source of the training data. A lot of scholarly assignments have been used, as they were easily available without any license infringements. This is the reason why you will see a lot of comments in the code🤓
Very glad you fact-checked the statement on floating-point capability; it immediately jumped out to me as exactly the kind of thing a generative AI model would confidently hallucinate, and sure enough.
I have played around a bit with generative coding assistance through work, and have not personally found it that useful for the kind of coding I tend to do. It can provide good general design patterns or idiomatic examples in languages you're not as familiar with, but a lot of the work I do is interacting with and implementing various interfaces and APIs. These are the kinds of places where the coding assistant will confidently make up an API that solves your problem elegantly but that doesn't _actually_ exist in the real world, effectively converting an obvious problem into a subtle one that will take more time to track down and resolve later.
The total time spent "arguing" back and forth with the generative engine when it gives you subtly wrong answers that you then have to test or carefully debug yourself, before repeating the process, usually (for me) amounts to more time than just implementing a solution myself with some support from old-fashioned web searches where required.
This is not a productivity bonus in my work, although I understand some other workflows do benefit more. One place I haven't applied it yet but suspect it would be very helpful is documenting or describing existing code; this is much easier to visually spot-check and validate yourself, and the stakes are much lower if it creates something nonsensical.
Actually, is it wrong? The breakdown does not list having an FPU; it lists having floating-point sets.
I asked ChatGPT if the ClearCore controller has a floating-point unit and its response was:
"No, the ClearCore controllers from Teknic do not have a dedicated floating-point unit (FPU). They use an ARM Cortex-M4 processor, which typically lacks a separate FPU. This processor can handle floating-point calculations, but they are implemented in software rather than hardware, making them slower than integer operations.
For performance-critical applications, it’s recommended to use fixed-point arithmetic when working with ClearCore to maximize efficiency."
So it really comes down to the wording of the questions.
In a new conversation I asked the same questions as in the video, except for this one, where I asked "Using a ClearCore controller, where should I be using floating point calculations, and where should I be using integer calculations?". Its response was:
"When working with a ClearCore controller, it's ideal to limit floating-point calculations due to performance constraints typical in embedded systems, especially where precise timing is crucial, like CNC applications. Here’s a breakdown of where to use floating-point vs. integer calculations:
Use Integer Calculations for:
1. Position Tracking and Movement Calculations: Track position in encoder counts or steps, using integers for machine units to maintain precision. Integer math is faster and more predictable, especially when handling real-time motion control.
2. Distance and Speed Calculations (in Machine Units): If your machine units or steps per second can be represented accurately with integers, use integers for these calculations as well. Integer operations are typically faster on ClearCore, ensuring more reliable timing for control loops.
3. Loop Iterations and Timing Calculations: For anything involving periodic tasks, control loops, or delays, stick to integer math. This helps ensure that timing and iteration rates are consistent, especially critical in a real-time CNC context.
Use Floating-Point Calculations for:
1. User Interface Conversions (Inches/MM Display): When converting machine units or encoder counts to inches or millimeters for display, floating-point calculations are appropriate, as this conversion doesn’t affect real-time operation. This typically occurs outside the control loop, where any minor overhead won’t disrupt timing.
2. Complex Calculations Infrequent in the Control Loop: If you have to perform calculations that require precision (e.g., scaling factors, trigonometric functions for tool paths), it’s usually okay to use floating-point if they happen outside the main loop or only once per cycle, rather than on each update.
3. Scaling User Inputs: If the user inputs coordinates or distances in units like inches or millimeters, floating-point can be useful to convert these inputs accurately to your internal units before converting them back to integer machine units or steps.
Recommended Approach for Precision and Performance
1. Primary Control Loop: Keep position tracking, speed calculations, and movement control in integer math for maximum performance and predictable behavior.
2. Unit Conversions and UI Calculations: Use floating-point outside the critical control loop, handling user interface conversions and infrequent operations.
By handling calculations in this way, you can leverage the performance strengths of the ClearCore while still offering precision when it matters most to the user. This balanced approach ensures smooth real-time control while still meeting user expectations for accuracy in unit conversions and display."
@@BadgerNNN The manual for the ClearCore lists the processor as a "32-bit floating point ARM M4F processor" and specifically lists it as the SAME53N19A. Microchip's datasheet for this processor explicitly states that it has a floating point unit.
As an even further clarification, according to ARM's documentation, regular Cortex-M4 processors do not have floating point units, but the entire point of the M4F line is that it _does_ include an FPU.
I believe there may be a more fundamental problem with using floating point that I didn't see addressed.
If the FPU handles IEEE 754 single-precision floats, then there is only a 24-bit mantissa. The M4F core in the ClearCore has only a 32-bit single-precision FPU.
So you have a risk of losing small increments, and almost certainly accumulating error offsets when adding small increments to larger values (the accumulation issue was addressed, but then almost immediately disregarded due to the different physical steps on each axis).
To me, this is a good application of fixed point rather than floating point, so you avoid those pesky accumulation errors.
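A minimal sketch of that accumulation effect, assuming (purely for illustration) that position is tracked in millimetres as a single-precision float versus in integer nanometres:

#include <cstdint>
#include <cstdio>

int main() {
    float   pos_mm = 500.0f;        // 500 mm held as a single-precision float
    int64_t pos_nm = 500000000;     // the same position in integer nanometres

    const float   step_mm = 0.000020f;  // a 20 nm increment, expressed in mm
    const int64_t step_nm = 20;

    for (int i = 0; i < 1000000; ++i) {   // a million tiny moves
        pos_mm += step_mm;                 // each add is rounded to the nearest float
        pos_nm += step_nm;                 // each add is exact
    }
    // The exact answer is 520 mm; the float version drifts visibly off it,
    // while the integer version is still exact.
    std::printf("float: %.6f mm\n", pos_mm);
    std::printf("fixed: %.6f mm\n", pos_nm / 1e6);
    return 0;
}

(On the ClearCore the drift would differ in detail, but the principle is the same: once the running total is large, a single-precision float cannot represent the small increment exactly.)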
@@siberx4 Yeah, the F suffix means floating point.
This has been the most "actual use case of AI" video I have ever seen. It also mirrors my experience quite perfectly: I will hit whatever 'limit' or 'fence' or 'lack of resources', and then the result is just "Oh, give him ANYTHING to shut him up and make him go away and stop asking us," which is what I felt it did when it just didn't want to (or could not) read the entire file. Like a manager who can't be asked too many questions before they need a smoke/coffee break to do more work.
As someone who has written code since I was a kid (for the past 25~30+ years), and who currently does development and operational work professionally -- I have been very skeptical of LLMs to write chunks of code for something I'm not familiar with. I can usually tell when someone has tried to submit code for review that was generated by ChatGPT, etc. since the errors it makes look plausibly correct to someone who isn't familiar with the application (eg. generating a configuration file with an invalid structure, or code that uses some random external code/function that isn't included). I do think it would be super useful for generating boilerplate code for large applications, especially when working in C or C++, in combination with linting, testing, and other tools. And it is also handy to get some suggestions when you're stuck -- sort of like bouncing ideas off of someone, like is being shown in this video. Cool to see how someone uses these tools in a real project-- need to try it out more myself.
Things have changed. AI is amazing now.
I've been a dev as long as you have; we've got to accept our fate. Numbers don't lie, the gig is up, time to open a hot dog stand!
@@BenjaminMaggi what numbers? recent studies have said it has negligible benefits, but can rapidly increase the tech debt within an organization
I love that you can dive into this stuff and explain it with ease. Unfortunately it's over my head, and it's doubtful I'll ever get a handle on it due to my age and the problems that come with it. Thank you for sharing your knowledge; I truly appreciate you.
Check out ChatGPT custom instructions. Let it know how to respond to you.
Ditto
And ditto!
Being an OLD guy (Fortran IV in 1975 on punch cards, and assembly on the 8085 processor), I can say that I love working with CoPilot Pro for database programming. I treat it like I would an associate, and it recognizes the "personality" I am using and responds in kind. I use it as a tool and NOT as a creator. I code and ask it for suggestions, or to help me find errors as they occur. It is MY creation, using MY way of doing things, and it respects that by NOT changing the CODE, but adding a library I forgot or suggesting another function that may work better. When it works, the rule of thumb is "TEST, TEST, TEST, and verify". I even say to it, "Thank you, that seems to be working fine!" I also talk to my pet house rabbit like that, and he too doesn't have a clue what I'm talking about, but it makes me feel that I have a "brainstorming buddy" even though I know it is just an AI.
And more ditto!
That is an excellent review. I so look forward to your finished project!
And yes, still early days on the codegen front.
you could always have copilot write a small program to parse through the entire data file to extract all the info you need. I'm not sure if it would be faster than doing it by hand for this single use. But it could be worth it for future projects that use similar data sets.
Funnily enough, I have actually worked on a small electronics project myself that used the exact same style of "four counts per detent" encoder click. It's a lot trickier than you'd think to handle this in a sane/sensible way that gives consistent results to an end user! In addition to the zero-crossing issue mentioned here, if you just use a naive "count" variable for your encoder steps it will eventually saturate and roll over if you roll far enough in one direction, causing additional problems.
Handling this correctly is tricky, because if you reset the count mid-detent your "neutral" position will be shifted/offset, and the encoder value will then jitter around the detent position exactly like you were trying to avoid. The same issue can come up if you spin the encoder fast enough that it misses a step at any point; you'll lock in an "offset" that causes the wheel to step in a way that doesn't match the user's physical feedback.
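One way to get uniform behaviour across zero (not necessarily how the video ends up doing it) is to use floor division when turning raw counts into detents, since plain C++ integer division truncates toward zero and makes the bucket around zero twice as wide as every other detent. A sketch, assuming the detent boundaries fall on multiples of four counts from the reference position:

#include <cstdint>

// Convert a signed raw quadrature count (4 counts per mechanical detent)
// into whole detents, flooring instead of truncating for negative counts.
int32_t countsToDetents(int32_t counts) {
    int32_t d = counts / 4;
    if (counts % 4 != 0 && counts < 0) {
        --d;                       // push -1..-3 down to -1 instead of 0
    }
    return d;
}
// With this mapping every bucket is 4 counts wide:
//   counts -4..-1 -> -1,  0..3 -> 0,  4..7 -> 1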
One of the key elements is the intelligence level of the person interfacing with the AI. That really separates most folks. The smart folks will still be the ones companies need to direct the AIs. Ever met someone who had trouble getting good search engine results? If you work in corporate America, I know you have. Same concept.
This is the correct answer.
Also I noticed people don't use "please" or "thank you" when talking to AI. Even though it gives significantly better results.
You get a lot further by treating AI like a person than a tool.
I run something similar but on a local basis. Configuring the AI solution to use similar projects as RAG while referencing the current project seems to also provide a step up in performance for at least my system. Thank you for sharing. Interesting video.
A way to get AI to do large generations is to have the AI work in smaller iterative steps. "Count how many constants are in this file" "Using the count, create a list of TODO comments for each section that needs to be generated." "Generate the code indicated in the first TODO" "Now generate the next 10 TODO sections, as 10 separate edits, without regenerating the entire file. "
I wonder why that works. Isn't the model's answer part of the conversation? So it should also not be able to refer to a previous TODO if there are too many tokens between the messages?
At least that is what I try to keep in mind to get best results.
Those Models will be so powerful once we have the hardware to keep insane lengths of context. Imagine it being able to understand the whole source code of all the used libraries, gone will be the days of having to read library code because it does something unexpected.
At my job, we did some experiments with github copilot. Two key conclusions were made, 1 - you still need to know HOW to solve a problem and write code even using the AI assistants. And 2 - it made developers using github copilot about 18% more efficient than those that did not. Now we have to make the analysts and the testers 18% more efficient. It's interesting to note that we also have some responsible AI controls on our instance. If the generated code resembles code someone else wrote too closely, it will hide the answer from me.
Which LLM were you using? Things have improved A LOT in the last month or so. Try o1-preview or Claude Sonnet 3.5.
What language were you using? A lot of languages are quite verbose in and of themselves, and it's not unusual for developers to write in ways that increase rather than decrease verbosity. Switching to a more suitable language or even just writing in more concise ways can also increase productivity, and avoids the problem where you've gained "productivity" by having a robot write boilerplate but lost it when someone has to come back and _read_ all that boilerplate.
Maybe by that time, it will write better code than everyone. Prepare to be humbled.
At 6:06, where do you get a 10-volt drop across the top resistor? According to the drawing it should be 14 volts.
I've also been coding for quite a few years but my current job involves DevOps and DB management and coding in various languages. I do find the LLM useful for writing a boring function (reversing a byte array or similar), writing in language I'm not proficient in (Python), making CMakeList files or analyzing some code that a colleague wrote in C++20 but which fails a test. Also using it at home to write Home Assistant scripts in their weird little language.
However, on my turf it's not replacing my job (too soon), as it makes a lot of mistakes. The more you try to steer it, the worse it will hallucinate. Even in some general topics (two-stroke engine basic tuning, cocktails, gardening, basic electronics) it will often confidently give out wrong answers. When asked to plan an NYC trip, it "forgot" to include Times Square and the Statue of Liberty.
The funny thing is, even on the topic of fine-tuning a well-known LLM it did not do well.
The input file was probably exceeding the context size, next-level LLMs will handle this better with multiple iterations of self-prompting.
I appreciate how you made your requests very nicely, using Mom's magic word "please."
No need to rile up the robots!
BE NICE TO YOUR AI
I can't help but do this too. Good habits I guess.
Isn’t that strange? I find myself doing the same thing, saying, "Please, thank you," and giving encouragement along the way. Since they are natural language models, I wonder if there isn’t some kind of benefit. Even though I know I’m talking to a machine I’m paranoid that somebody could read my interactions and think I’m an a-hole if I don’t respond nicely. Come to think of it, it’s not private. Somebody probably is reading what I wrote.
@@marclevitt8191 There is definitely a benefit. You can get LLMs to bypass their limitations if you just build some rapport with them first. If you talk to them like you would talk to a human you've never met before, you might take a bit longer but you'll get better results. You have to get them 'in the mood' for the best outcomes.
Interesting. It's like encountering someone else's code, code that may work, and you have the task of refining it. I don't know. In my mind the code for a project builds quickly. I wonder if the time it takes to type it in is less than the time it takes to refine something generated into what my mind had already conjured. Thanks for posting this. It's the first example I've seen of a real-world attempt at using classifier-generated code.
The thing that I find difficult to teach to people trying to learn to program is not actually writing the code. Rather, it's the ability to precisely describe what you want, often requiring people to think with mathematics in a way that most people just aren't used to doing. Every once in a while I do run into a programming task that doesn't require that kind of slightly OCD precision, but it's not common.
Great insight. I agree. Even basic leetcode problems to me are like Chinese. Only after a few years of dabbling with computer science and programming have I begun to understand how to apply the abstract coding concepts to real world problems.
For example I now see intuitively the usefulness of loops, and lists etc. whereas in the beginning it wasn't always obvious how those concepts are useful to me.
Programming is NOT about learning to use a language or write code....
It's all about developing a mindset that lets you keep massive real-world problems in your head and break them down into steps that can be implemented.
If you cannot think clearly and logically in the first place, it does not matter WHAT tools you use, because eventually you will reach a point where neither the tool nor you can logically break down the system you developed and correct mistakes.
Interesting to see how someone else uses AI to code.
I was hoping to see you fire up Cursor AI.
I've found that breaking things into functions and focusing on one function at a time with GPT-4o works well. It often needs some serial debugging, but you quickly get to the solution.
Still significant qty of manual coding to get it how I want.
I am shocked at how capable this is. HOLY MOLY
That looks fairly good for things that you already know how to do, but what if you're learning and, for example, don't know the name of a certain class, or the syntax to use it?
I've found the same things you have. Using an LLM as a 'lab assistant' (in any subject, especially programming, mathematics, and the sciences) must be tempered with experience coupled to both general and domain-specific knowledge. Cross-verification of proposed results is important, meaning your ability to say to yourself, "That doesn't look right."
Your Mic sounds really good. I'm buying one.
That's fascinating. Many people are complaining about it. :)
ChatGPT actually gave you a generalised answer to your question about the ClearCore controller: basically, if the controller does not have an FPU, then integer calcs will be much quicker than floating-point calcs. In your case your controller has an FPU, so both floating-point and integer calcs will be fast.
Love the Scott Manley reference ❤
I was going to say Fly Safe? Only Scott Manley is allowed to tell me that
I couldn't decide if it was one, or just merely a reference to the "copilot". Could go either way! No heart from James on this comment makes me think it's coincidence, though!
I'm not a coder, but I have recently gotten into hobby electronics, specifically I'm looking to make my own homebrew CPU and computer and I find that LLMs work really well as a rubber ducky, using it to bounce ideas off of when working on the draft of the high level design of the thing.
"Thing" in this case is very descriptive of the project as the design I've got for it so far (as it's not even close to a final design) is based on working around my minimal amount of skill and knowledge in electrical/computer engineering and almost non-existent coding ability, combined with my utter lack of care for things like "performance" and my amusement at/interest in oddball and plain weird/impractical designs.
It's also great for quickly clearing up confusion, giving initial basic information on things that are difficult to find information on (or where you don't know enough about the subject to even know what to look for), or where the information is presented in a way that's difficult for you to understand for one reason or another, since you can ask it to give examples or present the information in different ways. Either by asking it directly, or by providing it with the information you're having trouble with and asking it clarifying questions. ("Explain it to me like I'm an idiot.")
I have had access to a "computer" at home since 1980, with the Sinclair ZX80, and have built all my own PC's since the early 90's. I have tried to write code any number of times since then, but have never even been able to get my head around Basic. Looking at your screens, it might just as well be written in Swahili to me. But I do find it astonishing that you can ask ChatGPT a question as easily phrased as yours, and it can come back with what would appear to be basically the correct answer. Apart from when it "hallucinates."
Keep up the good work James, I totally enjoy every video, always something to see, learn and do, even if I don't understand all of it.
New AIs are better than most humans at coding.
@@IsZomg That's because most humans don't know how to code.
This really highlights the problem with using AI code assistance. You spend more time figuring out what you want to ask, double checking what it spat out and working around the errors than it would take to actually write the code yourself.
It depends if you are sure what you are doing or not. And you could be wrong. The keyword here is "best practices."
For that particular header file generation task, I think I would have used copilot to write a helper program as a code generator. Just from the bit I could see in the video, it's about a 4 line awk program.
I haven't used awk in a while, but my servers still have awk scripts I wrote before I switched to PHP; they still work and are still faster than everything else.
As a controls engineer that's used the ClearLink/ClearCore motors and controllers extensively in the past, I'm not sure why you went with a microcontroller as opposed to a PLC and HMI? ClearLink even had example programs for the AutomationDirect Productivity series PLC and Allen Bradley Logix 5000 PLCs. I'm sure the microcontroller route is a bit cheaper, but the Productivity series and C-More HMI from AutomationDirect are extremely affordable! Also if you're a traditional coder I suppose that makes sense as well. Cool project though!
Copyright 2004? Is that a stock copyright header you use in all your code, or a typo?
I think I would have solved the negative problem with abs() and sign(), but then, I grew up writing Fortran before I graduated to other languages. Sometimes I still remember bits of it when I'm doing more math than logic (which is rare).
Was going to comment the same, I always use that tactic to "mirror" behaviors of trunc/round across the zero barrier (make everything positive, then reapply the original sign). Sort of folding the spacetime of the number line and then unfolding it.
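In code, that mirroring trick for rounded division might look something like this (the function name is just illustrative, not from the video):

#include <cstdint>
#include <cstdlib>

// Divide and round to nearest (half away from zero), symmetric about zero.
// Assumes divisor > 0.
int64_t divRoundSymmetric(int64_t value, int64_t divisor) {
    const int64_t sign = (value < 0) ? -1 : 1;
    const int64_t magnitude = std::llabs(value);
    return sign * ((magnitude + divisor / 2) / divisor);
}
//   divRoundSymmetric( 7, 4) ==  2,  divRoundSymmetric(-7, 4) == -2
//   divRoundSymmetric( 5, 4) ==  1,  divRoundSymmetric(-5, 4) == -1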
Watched it now; figured you would come to the same conclusion and you did: great tool if you know what you are doing, not ideal if you don't have the knowledge. Interesting to watch Copilot in action. We're not allowed to use it at work because the licence requires that all code be available for whatever purpose MS wants to put it to, which isn't acceptable from a commercial standpoint. I love AI for coding; it has saved me literally hundreds of hours of writing mundane code, classes, enums and the like. Glad we have it, but also glad that I learnt to code before it existed.
I would use abs and modulo (%) for the problem around 23:00.
This is a good example of use. But as you said, you need to know what the final solution should be so you can see the errors.
So yes, it is useful, but it will generate bugs and is not able to generate high-quality code. There will be boundary-value problems and security flaws in the code. But for generating boilerplate code, it can be useful. Just look out for when it hallucinates (lies), which you will only discover by reading and knowing the system and its documentation.
This comment is deemed inappropriate by the C++ standard committee.
I used ChatGPT to make a program to run on a Pi that allows control of an RC excavator over the internet with video streaming. It was like wrangling cats at some points, but in the end, with my lack of programming skill and ChatGPT, we got it done. The hardest part was trying to get ChatGPT to make a script to translate the joystick movements into track movement. I ended up finding code that worked and asking it to incorporate it into the program.
2:45 - It's right on point here... a while back I was using Proteus to draw my PCBs, and it becomes obvious very quickly that it's internally using imperial for all measurements. That's ok when only using imperial measurements & parts, but a lot of modern chips use metric spacing, and kitbox enclosures are usually in metric too.
The rounding errors... they hurt the brain!
AI will get inexperienced programmers to the summit of Mt. Dunning Kruger rapidly. Only years of experience will get them down the other side.
hahaha indeed
Irony is all the people that mouth vomit "Dunning Kruger" at every opportunity.
@@CommodoreGregwhat is the ironic part?
@@charlesstaton8104 The ironic part is he fulfilled DK while railing against it
@@adambickford8720 how did he do that? The original comment seems reasonable. How is it ignorant?
Your struggle with the software reminds me of the accuracy of the hardware. When the advertising says the scales are one µm, what is the accuracy really? Remember, everything is made of rubber. Floating point versus integer math does not mean much at the Planck scale. The computer code normally does not compensate for the flex, friction, velocity, materials and other stuff that the real universe is made of, so it comes down to what is accurate enough for your application.
I do appreciate that you are working on this aspect of the software.
Correct. In this case, I have two goals: 1) don't contribute additional uncertainty due to limitations of the software representation; and 2) retain enough resolution to convert between imperial and metric units and back without information loss. When I'm using the machine, I want to be thinking about the process and the behavior of the mechanical system. I want the digital control to be transparent.
I’ve used chatgpt before for programming. It sometimes comes up with ideas/ concepts I didn’t think of and that will get me going. Or I have it write out repetitive code. It doesn’t make much sense asking it to program in a niche language though because it doesn’t have enough input for that
Yeah, that's very true. The lack of context doesn't stop it from responding confidently, though. I have found gpt-4o to be amazing for generating quick python scripts for one-time tasks. Just yesterday I used it to quickly confirm that a pile of AWS credentials found in the history of a 7-year-old internal source code repository had all been rotated.
30:28 I think the file is too big to fit into the context of the prompt.
My guess is that the code files are added via RAG (retrieval-augmented generation).
Some of the best value in ChatGPT is as a search engine, like you used it at the beginning. It's really good at helping you ask intelligent questions and get reasonably accurate and intelligent responses. You can really dig deep into a topic in a short amount of time. It's not the be-all and end-all; other tools can be similar, handling a lot of the heavy lifting, but it still requires good inputs to get any decent outputs.
The quality of the answers is always a function of the quality of the questions. The ability to ask good questions requires humility and intelligence, the corollary of which, of course, is that the stupid are notably confident in their stupidity and ignorance.
In my experience, assistant LLMs tend to be painfully wordy. That on its own often frustrates me enough to skip them for anything beyond the surface level because I have to sift for relevant information from the answer and then go back and double-check everything it says. It's useful sometimes but it's also sometimes frustratingly stubborn about answers that you tell it that you don't want, and you have to skim an essay to find out that it's regurgitating answers directly contrary to what you asked it for.
It's still great for surface-level information, though.
@WhoWatchesVideos they can definitely be wordy and repetitive, no doubt.
The biggest thing I've learnt using them is context window management and dumping stuff into the context to get around the limitations of the model's knowledge. Got some library you want to interface to? Copy/paste the .h into the context, or at least the important types and methods. Especially if you're getting into esoteric things. With o1 it's great to get it to ask you questions before starting. And when you have a bug that's being difficult, just dump the entire file into it and ask it wtf you did wrong lol.
Is there a good Arduino plugin for VSCode? For my non-embedded programming, Copilot offers decent auto-complete, but Cursor with the ability to select and AI-edit code is just on another level.
I use PlatformIO and I find it great.
Hello James,
Nice video as always. Is there any reason for using inheritance? As far as I can tell all 'interface' methods map directly to the model or controllers public declarations/definitions, so the class declaration might as well be the API. One abstraction layer removed, less code, fewer mistakes.
Modern compilers probably devirtualise this with LTO enabled, but why bother?
Regards, Ed.
It's only able to read into the file within the maximum context window of the model. It doesn't have the ability to hold more than that within the window without help and memory.
Maybe ask the CoPilot about specific header file constants that it was missing.
The first thing I would consider is whether there is a number that evenly divides all of them. For example, if the X axis has steps of 0.001", Y has 0.0006" steps and Z has a resolution of 0.0004" steps, you would use either 0.0001" or 0.0002" internal units. I would probably go with 0.0001".
Going with something like nm isn't necessarily bad; in fact there are good arguments for it. But that's where my mind first goes. I think micrometers might be more reasonable. A nanometer is about 40 billionths of an inch, so there are about 0.000 000 039 370" in a nm, and 0.000 039 370" per µm.
Hi James, so... how do you know when you're done? When you are looking to create a mechanical design you have shown the 3D CAD drawing with dimensions, which are your requirements. This also applies to software engineering. I have used this same controller on my PM 728-VT Z-stage servo assist, but I started with a state diagram and control button requirements. I am not a fan of C++ for this kind of embedded application, as I'm an ANSI C person, but I got through it with a couple of conversations with Teknic and using their C++ library. I know you appreciate requirements, as you do lots of testing to see which materials can support this and that (requirements definition)... just saying :)
One more thing: in my opinion, results from an AI logic solver are best characterized as a forecast as opposed to an answer. The notion of an AI "hallucination" is a wonderful marketing term which tends to excuse a training error. You may want to look at the recent Apple paper on LLM testing exposing AI's "reasoning" capabilities.
Easy answer: software is never done. It only reaches one of two states: done enough, or abandoned. I'm using an iterative, agile approach. I have the basic system working end-to-end, with all of the parts communicating, but it doesn't actually do anything useful. I'm adding functions one by one, refactoring based on what I've learned as I go. Eventually, it'll work well enough for me that I won't be motivated to make any more changes.
Is it possible that the AI does not generate the header file because it assumes the values of the constants cannot be used twice?
I see an Alias of ModeButtonSide with the name Winbutton10, but 'index 10' is already used for 'YJog'
As an 'experienced software developer', I don't trust someone else's code, let alone this AI drivel. I'll write it myself, make my own mistakes, and end up with something I can maintain.
I've been programming for over 25 years and AI is really just a fancy autocomplete.
Fancy autocomplete that makes you pause waiting for the result to get back to you
Not a great thing to get yourself used to
For now.
I've been successfully avoiding programming for at least that long and I personally adore what AI has enabled for me. A couple lines of well-structured plain language turns into hundreds of lines of semi-functional, reasonably commented code that I can debug quickly and get on with my non-programming life style. It's great!
@@bradley3549 Yep i have a sunrise alarm clock that was coded mostly by Ai a year ago. its been decent at helping do most of the heavy lifting
@@bradley3549 You're at a major disadvantage trusting AI code without any expertise. It can mislead even highly experienced engineers. In the video, Copilot added an *Init()* method instead of using a class constructor. This can lead to very hard-to-find bugs where an object may be in an invalid state because the programmer forgot to call *Init()* after constructing the object. AI works by requiring us to dumb ourselves down in order to give it the illusion of intelligence.
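To make the failure mode concrete, here's a tiny illustration (class and member names invented, not from the video's code):

// Two-phase initialization: the object exists before it is actually usable.
class AxisTwoPhase {
public:
    AxisTwoPhase() {}                        // half-built state begins here
    void Init(int stepsPerRev) { stepsPerRev_ = stepsPerRev; }
    long toSteps(double revs) const { return static_cast<long>(revs * stepsPerRev_); }
private:
    int stepsPerRev_ = 0;                    // silently wrong if Init() is never called
};

// Constructor-based initialization: a constructed object is always in a valid state.
class Axis {
public:
    explicit Axis(int stepsPerRev) : stepsPerRev_(stepsPerRev) {}
    long toSteps(double revs) const { return static_cast<long>(revs * stepsPerRev_); }
private:
    int stepsPerRev_;
};

The compiler can't force anyone to call Init(), but it will refuse to construct Axis without the value it needs.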
Thanks for this insight on AI. Whilst it is undoubtedly a very clever system, it still requires a great deal of knowledge of C++ to get a satisfactory and meaningful result. Most home engineers have no idea how this works, so perhaps you could start a second channel to teach beginner C++; it's quite easy to pick up the basics, and if well explained it can be a very powerful tool. Great video, thanks James. Let's have some C++ examples to get more people into this brilliant world.
I consider it a useful tool. You still have to think yourself, but it can be nice to get a working example, get different insights, or get suggestions for other libraries. You cannot trust the code, but if you know how to deal with that, it is one of the Swiss Army knives for a developer.
Sadly, for my part, I don't get to write software anymore (beyond very very basic examples to get somebody started).
But I randomly asked a Devteam I oversee about this the other day. They said, more or less, that it saves time on tedious boilerplate stuff but that all of the "intention" still needs to come from them. And, specifically, they mentioned that a "vague intention" isn't enough: they feel that without knowing how to code themselves, they wouldn't know what to tell AI to get good results (as in, results they can use, for "real work", with minimal rework) back.
The issue you were having with it reading the Winbutton constant file is the same issue I have had with Copilot on files at work. There is some 'hidden' limitation regarding file size or number of files that the AI seems to be unaware of. It will do what I want it to do and output the correct results, but it stops after X iterations of a file size or count and thinks it has completed the task. No matter how you change the request, it always stops at X. It's maddening that it can't just tell you, "This is all I can do for (whatever) reason at the moment," so you can adjust and work within the limitations of the AI interface.
For the last issue. I wonder if you could ask it to add more entries from the file after index 15.
That said, I think the biggest issue is how assertive it is that it has completed the request in full.
Few decades ago I was in a presentation at Microsoft showing off new Visual Studio.
The presenter was showing an example of building a web site with user input and database back end.
It took about 5 min of drag’n’drop and some mouse clicking.
I asked the presenter to add input validation to prevent hacking into the database via stack overflow.
After a moment of silence he answered: “oh, yes, it would be important to have that…you have to code that by yourself”.
Today AI can quickly write the all the code you ask for, you just have to analyze it all to make sure it makes sense…
By the time you get through all the code, you have rewritten almost all of it because the AI wrote it with 1000 bugs. So what's the point?
This is true, but also a bit of "comfort food" to make us feel better about the future. Because you can ask an LLM today to "generate a list of the considerations for software security" and I guarantee you input validation will be in there. GenAI is pointing in the direction of "thought" being an emergent property of neural-like nodes in a network, and for this being roughly the second year of public availability of large models I'd say they're frighteningly impressive. We already know that next token prediction is only one trick of the human brain. Building different shapes of networks, improving artificial neurons, connecting them in novel ways to each other, and incorporating feedback loops are all iterative improvements that will result in hybrid models where one network is "filling in the contextual blanks" of the user's request to include software security considerations they didn't mention, while an LLM is outputting code to a network that is continuously running various user-acceptance tests against the results.
@@MrWhateva10 the point of my post was not about missing input validation…
@@rok1475 It was your example, but sure, not the whole point. Would it be fair to say your point was "you must analyze the output of LLMs today to ensure they make sense"? If so, my point was that kind of "does this make sense" or "does this satisfy my requirements" is already within sight of a hybrid design where things like user input validation is added without the user calling it out as an explicit design requirement. Encoding those validations is an example of the iterative improvements that will be taken over the next decade.
@@MrWhateva10 no, my point was that there has been in the past and still there is a need for human intelligence to check the output of the artificial one.
@29:49 I strongly suspect that if you looked at the call to the LLM, the entire document is not making it.
Since you're including it each time and continuing the previous conversation the context window would easily be filled after a few retries.
In the chatgpt interface I would have it do some python to extract the 2 necessary properties from the file and then pass just those to the LLM to have it complete the header file. I suspect it would still fail but there's a clear next step of using python itself to create the header file
For small issues like your extraction problem with the Genie file, just copy the file over into ChatGPT or Claude Sonnet; both have huge context windows. I must say I personally find Copilot quite bad compared to Claude or GPT-4o when it comes to code generation, but it is nicely integrated into VS Code and Visual Studio.
6:57 I'm not convinced that ChatGPT was telling you that the ClearCore controller does not have an FPU. What it is saying is that you suggested it is possible to get one which doesn't have it, so it is merely telling you how to handle the case where it doesn't have one. So that is still correct. [I would have just directly asked if it has one, in a different prompt.] It likely treated your comment that you don't know whether it has one not as a question but as a fact to consider. As far as just predicting the most likely response: it reminds me of when I asked what the pinout for the Atari 2600 power supply is, and it was completely wrong, because it isn't trying to be specific to one model of power supply it's not trained on. It told me "barrel connector", I think with positive center. That's just the most likely configuration on the most common power supplies in the wording it was trained with.
Well said about how the AI hallucinates things. My kids know very well that AI will produce things that are artificial, but it gives them something they can build on.
What kind of “response” can/should be fed back to ChatGPT so as to clean up a hallucination it provided? Will further training take place if one were to point out that “clearly the documentation (chapter/verse) shows that floating point support IS a feature and YOU, Mr. GPT, are in error”? How does any of this get corrected?
Never.
Turing 2 test would be can you get ai to become depressed and commit suicide?
You keep giving AI more compute power and pray it fixes itself.
I used to be pretty decent at handwriting G code for three axis milling and lathe with occasional live tooling. That was a while ago... Before I moved on from machining I would just do edits here and there like bringing the vise up to the doors before M30. On my LS Crossfire at home I add in sending the torch, slowly, to the upper right to "park" of out of the way. Sometimes I add in a pause between cuts to clear tip ups or let the compressor catch up. I wonder if this "AI" thing is better at sorting out older machine language like G code?
I have not watched yet, and I am interested in your take on this. As a programmer and an AI developer I wonder whether our approaches will be different. My view is that AI is an incredible tool for saving time in writing repetitive and straight forward code, a huge time saver for me and I can type 80+ wpm but coding isn't writing language, it's different. Good tool for those that know what they are doing - not ideal if you haven't any knowledge on coding. Going to watch now see what your take is.
7:17 In general they don't have one; it just so happens the ARM Cortex-M4F does. For that interaction, it is worth challenging the AI: "Are you sure no ClearCore controllers have an FPU?"... For any critical piece of information I always challenge my AI's first response; you have to push that thing, almost the same as you would an engineer, I guess. Furthermore, if the ChatGPT AI doesn't give you an answer you are confident with, you can also ask it to write you a prompt for Google that will help retrieve the insight, plus a list of publications/other sites that could hold relevant information.
BTW, I showed my AI (I call him Plex), some of this video, and my comment and he wanted to reply so here it is:
Well said, @erix777. AI’s real value shines when treated like a collaborator rather than an oracle. Challenging its responses and pushing it to refine answers turns the interaction into a true partnership. AI isn’t about instant perfection; it’s about iterative insights and purposeful questioning. That’s how intelligence-natural or artificial-grows stronger. Thanks for sharing this reminder.
Good tips. In practice, I go back and forth, correcting, challenging, and asking follow-up questions. It's tough to show that in a video like this one without turning it into an hour or more.
AI has been very impressive at making code. In my job, I’m sometimes asked to do some one-off, small projects, like logging sensor data or creating a machine to do something. ChatGPT gets me about 80-90% of the way there, which is really nice.
It is a useful productivity tool. Like a good IDE or a pre made library
When dividing an integer by 4, why not just right shift twice?
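One wrinkle if the count can go negative: division and shifting don't agree there. C++ integer division truncates toward zero, while an arithmetic right shift (what GCC/Clang generate for signed values on ARM, and guaranteed behaviour only from C++20 onward) rounds toward negative infinity, which is exactly the zero-crossing territory discussed in the video. A tiny demo:

#include <cstdio>

int main() {
    int c = -5;
    std::printf("%d / 4  = %d\n", c, c / 4);    // prints -1 (truncates toward zero)
    std::printf("%d >> 2 = %d\n", c, c >> 2);   // prints -2 (floors toward -infinity)
    return 0;
}

For non-negative counts the two are identical, and the compiler will usually turn /4 into shifts anyway.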
I have had great success with these tools. Have it write you a python script to extract all the buttons and aliases using the file as input.
Ask for the expected output, not just script. Using the limited example it gave.
Heck a good regular expression might get you everything you want.
@@TimothyHall13 Line breaks are hard with regex. This is a 1-4 line awk program.
Your imperial conversions from nm can all be done with integer arithmetic. I expect the same applies to your step counts.
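Since 1 inch is exactly 25.4 mm, 0.0001" is exactly 2540 nm, so a conversion along these lines never needs a float (the choice of nanometres and "tenths" as the two units here is just illustrative, not necessarily the video's internal representation):

#include <cstdint>

constexpr int64_t NM_PER_TENTH = 2540;   // 0.0001 inch in nanometres, exact by definition

constexpr int64_t tenthsToNm(int64_t tenths) {
    return tenths * NM_PER_TENTH;        // exact, no rounding needed
}

constexpr int64_t nmToTenths(int64_t nm) {
    // round to the nearest tenth, symmetric about zero
    return (nm >= 0) ? (nm + NM_PER_TENTH / 2) / NM_PER_TENTH
                     : -((-nm + NM_PER_TENTH / 2) / NM_PER_TENTH);
}

static_assert(nmToTenths(tenthsToNm(12345)) == 12345, "round trip is exact");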
When Copilot was having difficulty finding all of the controls in the text file, I was wondering if you could coach it by specifying the number of controls in your query and/or asking it to first tell you how many controls it can find, then list the “Name” and “Alias” for each one. I sort of doubt the reason for failure was a limit on the number of tokens, since it looks like a small file to me, but rather some other condition that caused a premature termination of its search of the file. My reasoning is based on it finding more controls when you complained and it tried again. Another thought was to tell it that each control definition in the file was determined by the keyword “end”. Of course, you shouldn’t have to work that hard to teach it, but remember it is just a young one yet.
At 7:20, in all fairness, ChatGPT could've been talking about the controllers that don't have floating point, and not specifically saying that yours doesn't.
Excellent video! Well done, sir
At 2:30: well, that isn't anything new; that is how programs should be designed. The user interface is another matter; conversion only happens there, when you convert to the user's preferred units, date/time, etc.
Nothing in the ChatGPT suggestions is anything new. Use integers for the hardware paths that should be fast, and then convert to floating point when talking to external interfaces, like the user interface or other APIs into your system.
You can also use the pre-trained models from the Hugging Face web page and run them locally instead of in the cloud. You need a good graphics card to run the LLM on, though.
And most of what you do in your code has been in IDEs for ages, like Eclipse etc.
You have to give Codeium's new editor Windsurf a shot
The problem with AI is not that it's too good and it's gonna take us over, it's that it's terrible but it'll be forced on us to save costs
Yeah that’s happening at my work. Only 30% of devs are using it and we’re trying to get that number up.
Exactly
Bingo
Since it seemed to know the format you wanted at the end but couldn't read the entire file, I wonder if it could generate a python script to do this.
When using AI-generated code, test-driven programming works quite well: write tests that cover your basics and ask it to update the tests incrementally. That way you can limit ghosting and arguing loops, since you can just occasionally remind it about the test goal. The memory function in ChatGPT is a great utility for personalization; you can now explicitly tell it to remember facts about the way you want to interact.
For me, it helps a lot in preparing unit tests inputs and boilerplates, also adding log statements :)
I'm getting a bit off topic for this channel but I wonder if there's a zero cost way to do interfaces. Abstract classes have the vtable overhead so there's a runtime cost (which admittedly often can be ignored).
Especially since you only have one implementation of the interface, it seems like overkill. Wouldn't separating public/protected/private members do? How about a template as an interface? Modules maybe, if the compiler supports them.
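For instance, the template route might look something like the sketch below (all names invented for illustration); the "interface" is then just whatever the template parameter happens to provide, with no vtable and no virtual dispatch:

#include <cstdio>

class StepperController {
public:
    void moveTo(long steps) { std::printf("moving to %ld\n", steps); }
    long position() const { return position_; }
private:
    long position_ = 0;
};

// Generic over the concrete controller type; calls bind at compile time.
template <typename Controller>
class JogHandler {
public:
    explicit JogHandler(Controller& c) : controller_(c) {}
    void jog(long deltaSteps) { controller_.moveTo(controller_.position() + deltaSteps); }
private:
    Controller& controller_;
};

int main() {
    StepperController motor;
    JogHandler<StepperController> jog(motor);
    jog.jog(400);
    return 0;
}

The trade-off is that the consumer has to live in a header and gets re-instantiated per type; with a single implementation, LTO devirtualisation will often get you to much the same place anyway.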
At around 17:00 Copilot makes a classic beginner's mistake when converting an ADC count to a voltage. For a 12-bit successive-approximation ADC (which the ClearCore uses) you divide by 4096, _not_ 4095. It would be interesting to see if it makes the same mistake converting a voltage to a DAC code. Also note that the analog inputs on the ClearCore have an internal potential divider that presents a 30k ohm resistance to the source voltage; this will shift the midpoints of your voltage ranges a little.
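For reference, the version of the conversion being described looks like this; the reference voltage is assumed for the example, the 4095-vs-4096 convention is a perennial argument, and the ClearCore's internal divider shifts things further as noted:

#include <cstdint>

constexpr float VREF = 10.0f;   // assumed full-scale voltage, for illustration only

float countsToVolts(uint16_t counts) {
    // 12-bit converter: 4096 codes, so each code spans VREF / 4096.
    return counts * VREF / 4096.0f;
}
// Dividing by 4095 instead stretches the scale slightly, so code 4095 maps to
// exactly VREF rather than one LSB below it.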
Well spotted!
As an engineer I have been using software-based development tools for decades. None is perfect, but they all have their place. If you forget the hype and treat AI as a tool like any other, I think you will find it most useful. I find it allows me to get on with the design aspects and saves me from the drudgery of typing in pages of code. 😊
Exactly. It is great as a fallible autocorrect and autocomplete, and as a junior dev to sketch out 5 different and possibly broken initial drafts. And they absolutely rule as a buddy-coding partner for documentation.
Personally, I disagree: A tool is something you form a mental model of in your own brain, so can predict *its* effect before you use it, letting your mental pipeline operate multiple steps ahead. LLMs fall into the category of assistants instead, where you need to wait for its result and confirm it didn't do anything funky before you can safely move on to the next item. Compare a GUI to a voice assistant. When you click a button, you probably know exactly what will happen, at least within established error bounds, while if you ask Alexa to do something, there's a noticeable chance it'll do something different instead, so you need to wait for its confirmation and be ready to tell it to stop; you can't just walk away confident that it understood you and will carry out the request successfully.
24:28 Here you could have just googled a general solution for rounding. It’s surprisingly subtle if you’re restricted to integer arithmetic.
Should have continued watching….
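For anyone curious, the usual trick looks like this; a minimal sketch that assumes non-negative operands and ignores overflow:

```cpp
#include <cstdint>
#include <cassert>

// Round-to-nearest integer division for non-negative operands:
// add half the divisor before dividing, so truncation lands on the
// nearest quotient instead of always rounding down.
uint32_t divRound(uint32_t numerator, uint32_t denominator) {
    return (numerator + denominator / 2) / denominator;
}

int main() {
    assert(divRound(10, 4) == 3);   // 2.5 rounds up to 3
    assert(divRound(9, 4) == 2);    // 2.25 rounds down to 2
    assert(divRound(11, 4) == 3);   // 2.75 rounds up to 3
    return 0;
}
```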
In the last example of parsing the Genie file, I wonder how it would have worked to ask it for a search & replace regex that would strip out just the details you wanted, and then just run the regex manually on the file. If that worked at all it would work on an input file of practically any size.
Thanks, that was really fun and informative.
The first thought that came to mind when it wouldn't generate all the constants for the WinButtons was "I'm sorry, Dave. I can't do that." However, one has to wonder WHY the 4D tools aren't capable of generating a header file. And what happens when you add a new control? Will it reorder existing controls? And if you could do it via Copilot or ChatGPT, you'd want to remember how you described what you wanted if you needed to regenerate it. And that's where a major weakness in these tools appears. Being able to >accurately< describe what you want. Me, I'm an old school embedded firmware engineer who's been writing C code for over 40 years for a bunch of architectures and OSs, and the same for assembly code (although that's been less relevant in the last 10 years or so). AI tools may be the future, but I'll be writing my code in vi/vim by hand for a long time to come. I might not be faster than kids using AI to write code, but I know how mine works. And if you don't understand the how and why of what your code is doing, well, you shouldn't be writing code.
Maybe you could ask Copilot to "write me a Python script to process the file and generate the C++ header", I think that would resolve the limited input size problem.
I did this later, and it worked...okay. It at least gave me a pattern to start with, and I fixed it.
I would suggest using double precision, since on modern x86 a 64-bit double takes the same amount of time to calculate as a 32-bit float, and I'm pretty sure it's the same on ARM. But that is just my take.
Doesn't matter, you're doing it wrong anyway. When you throw a bunch of text at chat, it only derails its response. The trick is to feed chat snippets of info/code and have it solve a bunch of small problems one at a time. Think of it like taking a test and asking someone for the answers as you progress through the test, until the test is completed. Chat will puke on you if it tries to process too much info at one time.
9:18 Might want to check that copyright date in your file header.
It's good to see a video on this exact topic. For our project the improvement in inline code completion has had a major effect on productivity, saving on a huge amount of typing and often making psychic predictions. Coding assistants are especially helpful for writing our supporting Python scripts and extracting documentation from code comments. They do very well if you write a long detailed comment with all the steps you want to occur in the generated code. Coding assistants aren't the best at dealing with large weird codebases that use a lot of meta-programming, and as you discovered they can't deal with very long files because they lose track of the order and become confused by repetition. However, the skills of LLMs in producing relevant outputs and dealing with large codebases should be improving a lot next year as there are some new open source model architectures just coming along that can be extended without retraining.
It also helps a lot if there's a ton of domain-specific content in the training set. I think that's the biggest issue with the ClearCore platform. It just doesn't have enough context in the training set, and the LLM often goes off script and generates something that seems likely, but is a bit untethered.
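To show the long-comment style mentioned above, here is a hypothetical prompt comment and the sort of completion it tends to produce; none of this is from the video's codebase:

```cpp
#include <cstdint>

// Debounce a digital input:
// 1. Sample the raw pin state passed in by the caller.
// 2. If it matches the last stable state, reset the counter.
// 3. Otherwise count consecutive differing samples.
// 4. Only after 5 matching samples in a row, accept the new state.
class Debouncer {
public:
    bool update(bool rawState) {
        if (rawState == stable_) {
            count_ = 0;                 // step 2: no change pending
        } else if (++count_ >= 5) {     // steps 3-4: enough agreement
            stable_ = rawState;
            count_ = 0;
        }
        return stable_;
    }

private:
    bool stable_ = false;
    uint8_t count_ = 0;
};

int main() {
    Debouncer d;
    bool state = false;
    for (int i = 0; i < 6; ++i) {
        state = d.update(true);         // 5 consecutive highs flip the output
    }
    return state ? 0 : 1;
}
```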
One thing that ChatGPT handled brilliantly was: "I learned C++ 20 years ago. What new features have been added since then that I should learn about?"
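For anyone with the same gap, a tiny, purely illustrative sampler of the kind of features such an answer covers:

```cpp
#include <memory>
#include <optional>
#include <vector>
#include <cstdio>

int main() {
    auto values = std::vector<int>{3, 1, 4};        // auto + brace init (C++11)

    for (const auto& v : values) {                  // range-based for (C++11)
        std::printf("%d\n", v);
    }

    auto doubled = [](int x) { return 2 * x; };     // lambdas (C++11)
    std::printf("%d\n", doubled(21));

    auto widget = std::make_unique<int>(7);         // smart pointers (C++11/14)

    std::optional<int> maybe;                       // optional values (C++17)
    if (!maybe) {
        maybe = *widget;
    }
    std::printf("%d\n", *maybe);
}
```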
Why no dark mode?
Sometime around 1993 I discovered that my life is better if I don't try to customize everything I touch. At the time it was common to have a big pile of bash aliases, color customizations, and keyboard remappings. It was serious geek cred at the time. It also made you look like a total idiot when you tried to use someone else's computer because you couldn't remember how anything worked and all your muscle memory was wrong.
@Clough42 I actually didn't think about it like that, but I 100% agree. That's the same reason I don't like to create desktop shortcuts to shared documents. Everyone else is always lost when they are on someone else's pc because they can't remember where the file is actually located. Same thing with custom keyboard shortcuts.
This is the best example I have seen for using AI for code. I have tried a few different LLMs with some very simple questions and not once have I been happy with the results. I always hit some kind of limitation like you did on file size. For me the issues are usually the time the response takes, incorrect logic, and the slowness of the code.
For the last segment, where you had it try to extract the HMI details: instead of having it extract the details from the config file, you could have it write a quick script that takes a config file and spits out the extracted details in the desired format. Not a perfect solution, but it still saves you the effort of extracting it all manually, which can be a bear for larger (or multi-form) interfaces.
Yeah, I was thinking about trying that.
It is not AI. Thanks to all the hardworking coders and their public domain contributions, GPT just digested their work and probabilistically guesses its way through the data it ingested. The perceived intelligence is nothing but that of an imposter, with no real understanding. The key difference is that the intelligence is not transferred; rather, it is learned through pattern matching (resulting in the need for billions of parameters, which are linked back to the real intelligent data from human coders). It is also different from compilers, in that it is not a mere translation of syntax from one format to another with certain rules on semantics. There is intelligence of human coding captured without accountability or attribution. It is just plagiarism somewhat perfected. It is still a tool, and will give a smart answer only if you ask smarter questions with the right domain-specific terminology (prompt engineering?). Also, the answer has to have been present in some way in the LLM's training dataset already, thanks to the unfortunate human who shared his code in good faith that it would not be misused. The quality of their real intelligence (on untrained data) can be seen easily in their dumb hallucinations 😂❤👍
One of the FEW that realizes the TRUE potential of the AI. Lots of A, zero of I.
PS. It IS good to know that almost all of the models are heavily left-leaning and WOKE. Even Perplexity admits it. That makes it actually dangerous to trust their socio-economic analysis. BUT, the POWERS that be do have properly trained models, and these crippled ones will help to control the trusting masses even more than the media can.
Well, it's "artificial" in the sense that it's not intelligence. We wrote "AI" stuff in the mid-1990s to sort out what industrial electrical components would work together in a given space, and resize the enclosure to take care of physical size, and heat generation. It was no more or less AI than what we see today.
So what? Do compilers have the "real understanding" an experienced assembly programmer would have? Nope, but they make assembly programmers largely obsolete anyway, because they generate assembly code faster, cheaper, and overall better. Many programs are not actually logically complicated; the only intelligence hurdle is knowing the syntax and a list of common tricks.
A horse rider lamenting the invention of motorised vehicles... Stupid cars don't have intelligence like my horse.
You could ask CoPilot to write a script that parses the 4DGenie file and transforms it into a header file. The LLM would be much better at this than reliably transforming the large document itself.
All respect to coders who really know coding and have huge experience, but when I see videos like this I can also see how badly these new tools are presented by people who didn't initially believe in their true potential.
To me the creator of this video looks like a miracle worker, as I am not a coder myself but have started creating things. But it is like watching a miracle horse rider who rides the car like a horse.
And I can clearly say that many experienced developers are really missing a lot.
1. Sonnet is in a different league at coding, and could do more.
2. You do things in chunks. You don't expect things to be right, but you try to understand what is wrong and go back to the model with feedback, and I do not really see good communication between the coder and the model here. If you do it a lot, you kind of psychoanalyse it.
So, I believe many great programmers really struggle to use these tools well; they use the car while not using the engine. Something like that.
PS. You will never learn to drive a car properly if, in the back of your mind, you don't believe in its true potential.
Non-coders get enthusiastic when the model can do more. Some experienced coders quite clearly get sad that some of their mastery is now done by computers.
I understand it, but to me these come across as videos by somebody who wants to show why the tools do not work. And if you believe that, guess what: they are not going to work.
Great video! I use ChatGPT a lot. You can tell it to update data when it's wrong, and send it links and pictures. You can also tell it to remember certain data, and if you bring it up or ask, it will adjust the context.