208. Separating AI Hype from AI Reality

  • Published Nov 23, 2024

Comments • 163

  • @CripplingDuality
    @CripplingDuality 5 months ago +24

    We recently onboarded (and subsequently fired) an SWE who was using ChatGPT to generate code (that didn't solve the problem), to write design docs (that explained the tech we were looking to implement without considering actual integration points), and to write READMEs (that were basically redundant, as they just explained what we could see with our own eyes in a Dockerfile). He spent several months on a team and slowed us all down by at least an order of magnitude. I feel like the sheer cost of this kind of scenario is really underreported, if not outright ignored.

    • @michaelnurse9089
      @michaelnurse9089 5 months ago +5

      Hang on. Should you not have had some clear discussions about using AI tools on day one? I would think every team would do that.

    • @ayy479
      @ayy479 5 months ago +1

      @@michaelnurse9089 It's definitely going to start being a thing, but it's such an idiotic idea that none of my teams would ever think to bring it up. You have to get approval for everything you use.

    • @IAmTimCorey
      @IAmTimCorey  5 months ago +11

      Yep, use of AI tools can really elevate a developer when used correctly. But when they are used incorrectly, it causes a massive mess.

    • @CripplingDuality
      @CripplingDuality 5 months ago +1

      @@michaelnurse9089 Firstly, as someone else said, it literally never occurred to the rest of us to abuse AI like that. Secondly, he wasn't forthcoming about it, so no one was aware of it until very late in the day. It doesn't help that he worked in Asia, so there was little opportunity to pair and become aware of the smell.

    • @JamesMorris-rr4pl
      @JamesMorris-rr4pl 5 months ago +5

      Computer Science PhD with 30+ years of SW dev experience, currently working as a senior mobile app dev. The experience of the original poster was spot on. AI does not work for chatbots. And AI will not work for serious SW dev until long after everyone alive today is dead.

  • @johanneszellinger232
    @johanneszellinger232 5 months ago +5

    Another point to add: a problem our team is facing at work (an AI research institute) is managing expectations for company partners. Everybody sees the (cherry-picked) demos from OpenAI and assumes AI can do anything. However, in reality we are often working with very limited and extremely specialized industrial or medical data. AI still has a long way to go here (which is good for me, it means my job is secure :D )

  • @perfumegoose
    @perfumegoose 5 months ago +11

    I got some feedback from Copilot on C# LINQ questions, which jogged my memory. I looked back at C# in Depth by Jon Skeet and found that what Copilot spit out was word for word from a book written 12-15 years ago. Yes, a heck of a search engine, but no better than the jerk in high school who used to copy test answers.

    • @IAmTimCorey
      @IAmTimCorey  5 months ago +1

      😂 Thanks for sharing.

    • @joshbarghest7058
      @joshbarghest7058 5 months ago +1

      Yeah, this is the other serious issue here: AI productizes the act of plagiarism.

    • @malakiblunt
      @malakiblunt 4 months ago

      Genius Sam Altman is literally making you pay him for the work he stole from you.

  • @jonathankerr6928
    @jonathankerr6928 5 months ago +2

    I feel like a good way to view it is this: if someone is selling an AI tool that enhances what a person is doing, it probably has a lot more staying power than something being sold as a full replacement for a person. Computers don't work the way brains do, so they can't replace them, but making stuff that's intended for people to use is actually useful enough that folks would adopt it.
    And just personally as an engineer, it's a lot more satisfying to make stuff that people like to use vs a fully automated system that is supposed to exist without any human interaction.

    • @IAmTimCorey
      @IAmTimCorey  5 months ago

      Thanks for sharing!

  • @ary9655
    @ary9655 5 months ago +2

    In my attempt, ChatGPT-4o was able to create a nearly circular, regular nonagon or even a hexadecagon. Then I asked it to create a nonagon that looks like a trapezoid. This time, it gave me a distorted octagon. I told it that it didn't look like a trapezoid at all and that it was missing a side. So it gave me a nonagon that somewhat resembled a chubby winter melon.

  • @Norman_Fleming
    @Norman_Fleming 5 months ago +3

    I love your explanation of how it works and also your example of how to get it to fail. Examples of what it can't do are so much more informative of how it works than all of the 'look what it did' examples.

    • @michaelnurse9089
      @michaelnurse9089 5 months ago

      There are two more useful categories: what it will be able to do in the future, and what it will never be able to do. Those are more pertinent than the present state in such a fast-moving field.

  • @MZZenyl
    @MZZenyl 5 months ago +2

    I had Gemini refuse my request to write unsafe C# code (pointers), citing "ethical concerns" regarding memory leaks. Didn't have a problem if I asked it to do the exact same thing in C, so Gemini was seemingly getting hung up on the word "unsafe" signifying something it should avoid.
    ChatGPT seems decent at answering semi-obscure questions, e.g. things relating to C# source generators. It sometimes references APIs that don't exist, but it is usually pretty good with short and precise questions.

    • @IAmTimCorey
      @IAmTimCorey  5 months ago +1

      Yeah, it is a great tool in the right context and used with the right level of skill on the operator's part. Not just in asking the right question, but more importantly in being able to evaluate the answer.

  • @yuotyu3569
    @yuotyu3569 5 months ago +1

    Thank you very much, Tim, for these clear explanations. The only issue developers are facing now is the layoffs we see everywhere, partially if not entirely because of AI.

    • @IAmTimCorey
      @IAmTimCorey  5 months ago +4

      AI isn't the cause of most of the layoffs. I'm not convinced it has caused any real layoffs. The biggest reason for layoffs is the Covid rebound. Companies hired LOTS of developers during Covid, much more than they needed. Now budgets are being tightened again and those people are being laid off. However, the other part is that people often look to the big companies (Google, Apple, Microsoft, etc.) and think that they are representative of the industry. They aren't. They do employ a lot of developers, but there are thousands of companies out there that hire developers. For example, I worked at a company in Pennsylvania that employed a couple dozen developers. They've been trying to hire developers almost constantly for years. I know of lots of other companies that are in the same boat. I'm not saying that getting a job is easy, but there are a lot of jobs out there.

  • @PeterP-y4u
    @PeterP-y4u 5 months ago +4

    Great explanations again, Tim. Thank you

    • @IAmTimCorey
      @IAmTimCorey  5 months ago +1

      You are welcome.

  • @carljalal3855
    @carljalal3855 2 months ago

    To be honest... I think it's really nice to just learn from regular Google searches and scroll through web pages for answers. It might be time-consuming, but perhaps I learn better by having that journey. It also gives me a chance to recognize the very people who work hard on educational websites and blog posts to teach us.

    • @IAmTimCorey
      @IAmTimCorey  2 months ago

      Thanks for sharing.

  • @TISINLI2
    @TISINLI2 4 months ago +1

    This is just the beginning of AI. In about 5-10 years from now, companies will have 1-2 devs rather than 8-10 devs. It currently writes resumes and accomplishment emails. Imagine what it can do in the future.

    • @IAmTimCorey
      @IAmTimCorey  4 months ago +5

      I'm not nearly so optimistic about AI. Right now, it is handling the low-hanging fruit in a reasonably good way. However, assumptions that this will continue and that improvements will be exponential or even linear are wildly optimistic. We are already starting to see a significant slowdown in AI growth and there have already been documented regressions in the longer-running systems like OpenAI/ChatGPT. The key thing to remember is that AI does not think. That's the piece that we humans have to add. AI can make suggestions, but we have to be the drivers of the AI since we have to handle the hallucinations that occur. That isn't going away. It is also not getting better. In fact, there was a push by OpenAI to have the AI check its own work to ensure it wasn't hallucinating. That check actually made things worse, not better.
      AI will not replace jobs. Sure, some companies may be stupid enough to lay people off in favor of AI doing their jobs, but that isn't going to last. AI will make developers more efficient, which in theory might mean less developers doing more work. However, that's happened multiple times in the past. For example, search engines made developers faster. So did the rise of blogs and online documentation. So did Stack Overflow. So did Intellisense for C# developers. So did C# itself. So did BASIC back in the day. We've always had advances that make developers more efficient. The trend with those added efficiencies isn't a loss of jobs in the long term. Instead, the trend is that the market grows.

    • @TISINLI2
      @TISINLI2 4 months ago

      @@IAmTimCorey Not sure if you read it, but there was an article where a guy tricked an AI into selling him a truck for $1 😂. It wasn't actually sold because it had to be verified by a human sales rep.

    • @travro2525
      @travro2525 4 months ago

      ​@@IAmTimCorey Thank you, Tim

  • @a.porteghali7402
    @a.porteghali7402 5 months ago

    In 2001, while I was completing my university project in C++ programming, Spielberg's movie Artificial Intelligence was screened. I was wondering when the "Dr. Know" character would come true to help me get the project done! 🙂
    Thanks for your excellent explanation differentiating AI from programming logic.

    • @IAmTimCorey
      @IAmTimCorey  5 months ago

      You are welcome.

  • @michaelnurse9089
    @michaelnurse9089 5 months ago +6

    Devin - the step forward was that they had an editor, file system, debugger, and constant loop all working together, at least in a limited scope.

    • @IAmTimCorey
      @IAmTimCorey  5 months ago +8

      That was the pitch (along with the lies). The problem is: how do you separate the truth from the lies? Even in their demo, you can see the developer telling Devin how to do things with those tools.

  • @faisalalhoqani6151
    @faisalalhoqani6151 4 months ago

    It was a great episode as usual. Thank you, dear Tim, keep it up.

    • @IAmTimCorey
      @IAmTimCorey  4 months ago

      Thank you!

  • @andergarcia1115
    @andergarcia1115 5 months ago

    Thanks, Master, for clearing things up in this video! Super important topic, couldn't have explained it better.

    • @IAmTimCorey
      @IAmTimCorey  5 months ago

      You are welcome.

  • @TruptiDaliaTC
    @TruptiDaliaTC 5 months ago

    Thanks for this clear explanation, Tim. However, I don't get one thing:
    If SDEs are laid off or reduced because AI can handle some coding tasks and one SDE can do more work, how will AI generate new jobs? In which areas or duties could it generate new jobs? What are your thoughts on it? Thanks.

    • @IAmTimCorey
      @IAmTimCorey  5 months ago

      The biggest opportunity I see is in how AI lowers the bar for solo developers to do more. This will allow more companies to hire one developer to do custom work for them, where they would not otherwise have been able to hire even one developer. A single developer can now do a wider range of tasks, making them more stand-alone. It used to be that businesses needed a few developers in most cases. That's a really expensive proposition. AI will make that less of an issue, which means more businesses can hire a developer. While hiring a single developer (or a smaller team) might not sound like a big deal, the number of businesses in this category far outweighs the businesses that hire larger teams.

  • @govindagoudpatil7632
    @govindagoudpatil7632 5 months ago +1

    You are right as of now, Tim Corey. We will be there soon. AI will become artificial life.

    • @IAmTimCorey
      @IAmTimCorey  5 months ago +2

      Definitely not with LLMs (our current "AI"). They just don't work that way. As for AGI (the "artificial life" you are talking about), I'm doubtful that will ever happen but even the companies that are actively working on it say we are a long way off.

  • @michaelnurse9089
    @michaelnurse9089 5 months ago +1

    Logic - AI aces university logic tests. I agree that this is because it has seen similar things before. The question is whether human logic can only do the same or whether it can deal with things it has never seen before. I cannot say for sure. I am reminded of quantum physics - the rules seem to 'defy logic'.

    • @IAmTimCorey
      @IAmTimCorey  5 months ago

      Getting a good score on a test isn't a great indicator of intelligence. In fact, it really highlights the flaws we have in our educational system. As a developer, do you think a multiple-choice test can identify if you are a good developer? A short answer test? Nope. I used to teach C# at a local college. They really wanted me to have tests to assess the students. I did the bare minimum of testing because it didn't tell me much of value. Sure, it told me if they listened when we talked about some concepts. It could tell me if they understood some theory. But the way I assessed the software development skills of a student was to have them build software. A student could recite the principles of DRY, SOLID, and more to me, but if they couldn't identify when and how to implement them in an application, they didn't actually understand them yet.
      AI treats the world like digital. Ones and zeros. A CRUD application is the same as every other CRUD application. But we know that's not the case. The world is analog. What works for one customer will not work for another. There are thousands, even millions, of data points that humans assess without even thinking about it. The thought is that if we just get enough of those data points into AI that it can think like us. That's simply not the case.

  • @عروضوهميزات-ل5غ
    @عروضوهميزات-ل5غ 5 months ago

    Hey Tim, is it a good idea to learn and stay up to date with two frontend frameworks at the same time?

    • @IAmTimCorey
      @IAmTimCorey  5 months ago +2

      It can be distracting (you are spending half of the time you could in each) and confusing, but it can make getting a job (or the next job) easier as well. It all depends on your current skills (depth and breadth) as well as your goals.

  • @SteveC-t9t
    @SteveC-t9t 5 months ago +1

    What's scary is how quickly Microsoft is just putting AI all over your personal data on your devices. So now it's going through all your personal documents looking for patterns and monitoring everything you do. I recently upgraded to Windows 11, and the new recommendation feature on the Start menu recommended a file that I was sure I had gotten rid of years ago. It contained some passwords and other confidential info. I don't even know how the AI found it, because it was in a folder that was in a location far from being discoverable. I mean, it wasn't in the Documents folder or anything. Yet somehow the AI found it and displayed it as the first item recommendation right on the Start menu.

    • @IAmTimCorey
      @IAmTimCorey  5 months ago

      Well, improved file search is something that Microsoft has needed for a long time. That's not really an AI thing. Your entire hard drive has been searchable for forever. Microsoft just didn't do a great job at it before. I am concerned, however, about how much AI is being incorporated into the operating system, as well as every other device. Depending on the checks and balances, it could be a good thing or a bad thing. It should definitely be better than Cortana/Siri/Alexa.

    • @SteveC-t9t
      @SteveC-t9t 5 months ago

      @@IAmTimCorey Tim, file searching is one thing. However this is something completely different. It's recommending items that you didn't search for and displaying them to you when you never asked it to find these files. They just show up. How is this not an AI thing?

    • @IAmTimCorey
      @IAmTimCorey  5 months ago

      It might be a function of the AI to show them to you, but it really is just a file search. The difference is that the OS is trying to anticipate your needs. However, don't think that files are ever "hidden" on your hard drive. Whether the OS shows it to you in anticipation of your search or whether you search for it first, that's the only difference here. Either way, the file is discoverable and readily available.

  • @SomeGuyIsWorking
    @SomeGuyIsWorking 5 months ago

    Very well explained, thank you!

    • @IAmTimCorey
      @IAmTimCorey  5 months ago

      You're very welcome!

  • @codefoxtrot
    @codefoxtrot 5 months ago

    As a software engineer, I'm most proud when I fix the problems I created-- especially before anyone else notices! Should come as no surprise that I first learned BASIC in a basement. It's just who I am.

    • @IAmTimCorey
      @IAmTimCorey  5 months ago

      Thanks for sharing!

  • @miyu545
    @miyu545 5 months ago +3

    All I know is that in the past 3 weeks, I have developed completely functional applications that previously would have taken months on their own. I have pushed out and published more apps with AI in the past, let's say, 3 months than in the last 5 years.

    • @IAmTimCorey
      @IAmTimCorey  5 months ago +4

      That's great! AI does definitely make you more efficient. That's going to be true for all developers who learn to use this new tool. The trick is to understand the results it gives you. There was a study recently that found that ChatGPT gave the wrong answer to programming questions 52% of the time. The worst part was that 39% of the time, the developer didn't notice and just implemented that bad code. That's the real danger.

    • @MattMcQueen1
      @MattMcQueen1 5 months ago +5

      @@IAmTimCorey I think there's going to be a huge market soon for developers who can fix all the issues caused by AI.

    • @Vekikev1
      @Vekikev1 5 months ago

      Well, apps are easy; you wouldn't be saying that if you were programming pacemakers or cockpits. The harder the task, the less useful (or more lost/confused) the AI.

    • @devpitch
      @devpitch 5 months ago +1

      @@IAmTimCorey Agree 100%. AI is the new skill.

  • @satish1012
    @satish1012 several months ago

    Great presentation.
    Basically, you are talking about hallucinations? I have observed many times that ChatGPT provides answers very confidently.
    When I ask again, it thinks for a while and provides the wrong answer again.

    • @IAmTimCorey
      @IAmTimCorey  several months ago +1

      Hallucinations are one major issue with AI. Another, though, is the fact that it does not actually reason. It doesn't create new logic. It only parrots what it has been taught. This may seem like a non-distinction from how we operate, but that is far from the truth. For example, if you had never seen a 23-sided shape, could you draw one? Sure. You would just use the existing knowledge you have about what constitutes a side and how to count to 23 and you would make a shape like that. It might be ugly, but you could do it. AI isn't able to even take a small leap like that. The problem is that we are used to taking large leaps as humans. We just don't think of it that way. We can also self-evaluate. So, if you created that 23-sided shape, you could evaluate if you had succeeded by counting the sides. An AI cannot do this. It is basically guessing.
      "Better" AI models will be better at tricking you into thinking that they aren't constrained by these problems, that they are thinking and taking leaps. However, that's just smoke and mirrors. The underlying mechanism by which AI models make decisions isn't changing. It cannot.

  • @cysecgnz
    @cysecgnz 8 days ago

    I have spent years on a path to become a software engineer. With the advent of AI and tools like bolt.new, I'm now concerned that all my hard work will go to waste and that I won't be able to find a job in the field once I graduate from school. Trying not to allow the negative thoughts to consume or discourage me, or make me feel hopeless.

    • @IAmTimCorey
      @IAmTimCorey  5 days ago

      Don't worry, AI won't take your job. Learn to use AI to help you do things faster, but don't let it lead you. AI is like having all of the technical knowledge on a topic, but not the practical experience. Imagine you decide to help a teenager out, so you teach them every part of a car. You show them how every switch, pedal, dial, and gauge works. You teach them the theory of how the car works, how to steer, how to brake, how to shift gears, and more. You never have them actually do any driving, but they now know everything about the car. Now imagine the first time the teen drives, they drive in a sleet storm when the roads are icy and traffic is heavy. How do you think they will do? Not great. Just because they know all of the technical parts of how the car works doesn't mean they can drive through a sleet storm. That takes experience driving, experience in bad conditions, experience in traffic, and more. All of that comes from practice.
      That's what it is like to be a developer with AI. It gives you a lot of information and that information can be helpful. If you already know how to "drive", it can help you by giving you the information you need more quickly. However, it isn't the same as letting it drive. You still need to be in charge and you still need to have the experience necessary to get you to the right destination on time and safely.
      Learn to be a good developer. Get that experience. Let AI help you, but don't let it lead you. That will set you up for long-term success.

  • @lebogangncongwane4298
    @lebogangncongwane4298 5 months ago

    After reading "Non-Computable You" by Robert J. Marks, I'm confident that AI won't live up to the hype that is around it.

    • @IAmTimCorey
      @IAmTimCorey  5 months ago +1

      Thanks for sharing!

  • @buildtolove
    @buildtolove 5 months ago

    thank you Tim

    • @IAmTimCorey
      @IAmTimCorey  5 months ago

      You are welcome.

  • @perfumegoose
    @perfumegoose 5 months ago

    Excellent, excellent video!!!!!!

    • @IAmTimCorey
      @IAmTimCorey  5 months ago

      Thank you!

  • @MouhamedSaleemAlZayat
    @MouhamedSaleemAlZayat 5 months ago

    Wonderful as usual ...... thanks Tim

    • @IAmTimCorey
      @IAmTimCorey  5 months ago

      You are welcome.

  • @hqcart1
    @hqcart1 5 months ago

    Actually, Devin appeared at the MS Build event and was in the keynotes, so I don't think it was totally fake; otherwise Microsoft wouldn't touch it.

    • @IAmTimCorey
      @IAmTimCorey  5 months ago +1

      Devin itself isn't fake. The demo they showed off to launch it was faked.

    • @hqcart1
      @hqcart1 5 months ago

      @@IAmTimCorey If the demo was fake, do you think Microsoft would mention them in the keynotes? I am sure MS saw something real.

    • @IAmTimCorey
      @IAmTimCorey  5 months ago +2

      No, they won't/didn't mention it. That's not something any company would do. The Devin team already talked about how they faked the demo. That's the problem with the "gold rush" we have right now with AI - companies are rushing to get out the door and faking demos in order to catch the public's eye. The leaders in AI will be paid enormous amounts of money. That leads to cutting corners. This isn't an AI-only problem. The company behind Anthem (the game) did the same thing with their demo. There, the developers actually used the demo footage as a guide for how to build the game. In the case of Devin, it is doing some interesting things. However, it isn't nearly as innovative as they said. It is progress, but it isn't a massive leap forward.

  • @LE8271
    @LE8271 4 months ago +1

    AI is like a know-it-all uncle who pretends to have the answer to everything. It bluffs and fibs, but you can't embarrass it.

  • @heck0272
    @heck0272 5 months ago

    The term AI is too broad to be used for a specific use case. ChatGPT is a Large Language Model or LLM which is a subfield of NLP which belongs under Machine Learning. There are other AI fields doing amazing things such as Object Recognition, a Deep Learning subfield, not to mention ML algorithms like linear regression and classification doing some great work in many of the products we use today… in other words we cannot use the term AI to refer to a single LLM trained by a single company (OpenAI in this case)

    • @IAmTimCorey
      @IAmTimCorey  5 months ago +2

      If I say "there are too many cars on the road", do you assume I'm excluding trucks, vans, tractor trailers, and more? Or do you understand that the term "car" in that context refers to the broader category? The same thing is true here. Do you think anyone was confused by the term AI? Yes, there are lots of specifics inside of the AI field, but that has been the term that everyone uses for the general field. The actual company you referred to is called "OpenAI". Not OpenLLM or OpenML. In other words, we absolutely should use that term because that is what people are familiar with.

  • @عروضوهميزات-ل5غ
    @عروضوهميزات-ل5غ 5 months ago

    Having AI trained on millions of GitHub repos, will it be able to program by itself?

    • @IAmTimCorey
      @IAmTimCorey  5 months ago +2

      No, because AI does not have reasoning. It cannot evaluate if something is good or bad. It is already trained on millions of GitHub repos, and it is proving that just seeing lots of code doesn't make you a good developer. For example, a recent study found that it is wrong 52% of the time in programming questions (and 39% of the time, the developer asking the question didn't spot the error).

  • @michaelnurse9089
    @michaelnurse9089 5 months ago

    Follow-up questions: these are technically feasible. This was the big mistake OpenAI and friends made with LLMs - LLMs should ask a ton of follow-up questions.

    • @IAmTimCorey
      @IAmTimCorey  5 months ago

      Follow-up questions would be good. However, that would take a LOT more work to get right and it would be a LOT more expensive. In the end, I'm not sure if it would provide enough output gains to be worth the price. It is cheaper and easier to train humans to ask better questions of the AI.

  • @concreteltd5912
    @concreteltd5912 5 months ago

    Thank you for this

    • @IAmTimCorey
      @IAmTimCorey  5 months ago

      You are welcome.

  • @michaelnurse9089
    @michaelnurse9089 5 months ago

    I don't think a search engine is a good analogy. Search engines return discrete, pre-existing pieces of data. Neural nets interpolate and extrapolate, making them useful (or dangerous) where the exact data needed does not exist.

    • @IAmTimCorey
      @IAmTimCorey  5 months ago +1

      A current search engine takes the prompt and tries to find a page that matches that prompt. An LLM takes the prompt and tries to find various pieces that match that prompt. It then puts them together into a result. That's really all that is happening. The "magic" is it determining which pieces it needs, which parts are relevant even inside of a group of similar solutions, and how to assemble them together into a mostly cohesive whole.
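
As a loose analogy for the "find the pieces and assemble them" description above (a hypothetical toy, not how an LLM actually works internally; real models learn probability distributions over tokens rather than doing literal lookups), a tiny bigram generator makes the point: everything it emits is stitched together from fragments of its training text, and it can never produce a word it has not seen.

```python
# Toy bigram text generator (illustrative analogy only).
import random
from collections import defaultdict

training_text = "the cat sat on the mat and the dog sat on the rug"
words = training_text.split()

# Map each word to the words that followed it in the training text.
followers = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)

def generate(start, length=8):
    out = [start]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))  # e.g. "the dog sat on the mat and the cat"
```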

  • @kgnet8831
    @kgnet8831 5 months ago +1

    Gemini 1.5 Advanced generated Python source code to draw a nonagon with matplotlib, and also a correct image as output. So it depends on your AI 😁
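
kgnet8831 doesn't include the generated code, but a typical matplotlib nonagon looks something like the sketch below (my reconstruction, assuming numpy and matplotlib are installed), the kind of widely copied snippet Tim argues the model is reproducing rather than reasoning out.

```python
# Sketch of a common way to draw a regular nonagon with matplotlib.
import numpy as np
import matplotlib.pyplot as plt

n_sides = 9
angles = np.linspace(0, 2 * np.pi, n_sides, endpoint=False)
x, y = np.cos(angles), np.sin(angles)

fig, ax = plt.subplots()
ax.fill(x, y, fill=False, edgecolor="black")  # fill() closes the polygon
ax.set_aspect("equal")
ax.set_title("Regular nonagon")
plt.show()
```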

    • @michaelnurse9089
      @michaelnurse9089 5 months ago

      And also on the prompter - many things that fail on LLMs work once you change the prompt.

    • @denisquarte7177
      @denisquarte7177 5 months ago

      This whole take seems pretty misinformed about how the human brain works, as well as (and that's the disappointing part) how AI works.

    • @IAmTimCorey
      @IAmTimCorey  5 months ago

      I think you missed the point of the example I was using. The point was that ChatGPT could not logically think through what a nine-sided object would entail and create it. It also could not evaluate its own product to determine that it had given me a failing solution. Even in your example, Gemini isn't actually creating a nine-sided shape. It is using Python code to create the shape - code that probably already exists thousands of times. That's the deception of AI: you can't just look at the output to determine intelligence. You have to look at how it arrived at that output. It didn't actually create anything new; it just imitated and returned an example based upon what it has been told (through training).

  • @ArkFen
    @ArkFen 5 months ago

    It all sounds good, but I suggest only one correction. When you say AI does not have logic, you most likely mean that generative AI, or AI based only (or mostly) on LLMs, does not really have logic, because there are a lot of AI implementations from the past 50+ years which do have logic. The thing about AGI, or general AI, I guess, is that the logical blocks of AI tools will need to be connected in the right way with the predictive-model AI blocks (like GPT, etc.), with a good balance... somewhat like in our brains. Until then, tools are mostly either very logical but not so powerful at creation (old school), or very good at creation but mostly based on what they have seen and not so good with logic and math when you need to think strictly or count from scratch. So in my opinion we should distinguish that and admit that there is a lot of work to be done to reach that level; it is not that LLMs just become better, but that they will be hooked into the AGI and used the right way.

    • @IAmTimCorey
      @IAmTimCorey  5 months ago

      Modern LLMs will not become the foundation of AGI. It is a fundamentally different set of activities. That's not just my opinion, it is the opinion of people actively working on creating AGI and LLMs.

    • @ArkFen
      @ArkFen 5 months ago

      @@IAmTimCorey Yes, of course, and this is exactly what I said... so I am not sure if I understood your point, or if you understood mine.
      The foundation of something and a part of something are hugely different...
      I could be wrong, but an LLM could easily become a part of AGI, because we as humans do the same things in our brains... but it is NOT the foundation of our intelligence, just one of the parts.

    • @ArkFen
      @ArkFen 5 months ago

      @@IAmTimCorey Agreed, great - I got your point, thanks. I guess I meant not the number-crunching way of mimicking the brain, but the part where our memory and pattern recognition work, which is still not studied well... but keeping in mind how kids learn by repeating and mimicking everything they see, it is still in a lot of ways how a predictive model works, to some extent. Again, I don't go deep, just basically. Other than that, of course I agree with you that the brain and neural networks don't have much in common besides the name )) and some very loose analogy.
      So, in a nutshell, I love AI in all its forms, and I think the more forms we create and then combine, the better (without extremes) - at the end of the day, just as the brain and body have many parts working together, we can possibly create much better intelligent products when we combine many specialized parts under some frontal-lobe-style AI orchestration... it's just my opinion.
      Thanks again for your awesome work, Tim, and for even taking time for comments. Appreciate that. Keep it up, man!

  • @nwdreamer
    @nwdreamer 5 months ago

    But will AI ever be more powerful than RS??? 🤔😁
    I work with a guy who's addicted to AI (it can be painful!). The issue is this: put 100 random people into a room. How many are experts in any single topic? VERY doubtful that it will be all 100! However, AI is trained on all the information out there, which also includes a large amount of sub-standard stuff! When I do ask it for something, its success rate is about 10%; however, to be fair, it does give me ideas so I can figure things out myself.

  • @mikey803
    @mikey803 5 months ago

    Good video. AI is very interesting to learn about.

  • @michaelnurse9089
    @michaelnurse9089 5 months ago +2

    We have proved scientifically and mathematically that neural nets are universal algorithms. In other words, it is an easy trap to fall into to say "Meh... 4o can't do this or that," but AI firms know that this is fixable with 1. more data, 2. better data, 3. larger models, 4. more compute. And so they roll up their sleeves and get to work. All the things AI cannot do are theoretically possible given enough resources.
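
The "universal" claim presumably refers to the universal approximation theorem (Cybenko 1989; Hornik 1991). Informally, for any continuous function on a compact domain, some single-hidden-layer network approximates it to any desired accuracy; note that the theorem says nothing about how much data, compute, or training it takes to actually find such a network, which is what the rest of this thread argues about.

```latex
% Universal approximation theorem, informal statement:
% for any continuous f on a compact set K in R^n and any eps > 0,
% there exist N, weights w_i, and scalars a_i, b_i such that
\[
  g(x) = \sum_{i=1}^{N} a_i \,\sigma\!\left(w_i^{\top} x + b_i\right)
  \quad\text{satisfies}\quad
  \sup_{x \in K} \left| f(x) - g(x) \right| < \varepsilon ,
\]
% where sigma is a fixed non-polynomial (e.g. sigmoidal) activation.
```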

    • @danilodjokic5303
      @danilodjokic5303 5 months ago +1

      We also proved you can brute force RSA
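
For what the quip is getting at ("theoretically possible" and "practically achievable" are very different things), a hypothetical sketch: brute-forcing a toy RSA modulus by trial division finishes instantly, while the identical search is hopeless at real key sizes.

```python
# Factor a *toy* RSA modulus by trial division. Instant for tiny numbers;
# a real 2048-bit modulus would take astronomically long, which is the point.
def factor(n):
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p, n // p
        p += 1
    return None  # n is prime

n = 61 * 53          # 3233, a textbook-sized example modulus
print(factor(n))     # (53, 61)
```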

    • @MrArkaneMage
      @MrArkaneMage 5 months ago

      How exactly do you make a speaking database intelligent by increasing the data it's been fed?
      You need to change the way it works so it is an intelligence that relies on the database as information, not vice versa (a database relying on an intelligence to tell it what to do, aka prompting).
      To make things clearer... you can teach a database what it knows, but you can't teach a database how to develop new knowledge; only intelligence can accomplish that, and it requires a completely different approach.

    • @michaelnurse9089
      @michaelnurse9089 5 months ago

      @@danilodjokic5303 Different things. RSA brute forcing is practically unviable. Doubling your data quality and compute for a 4x improvement is very doable - that is what OpenAI etc. is doing and will be doing going forward, probably for the next couple of decades.

    • @MrArkaneMage
      @MrArkaneMage 5 months ago +3

      To elaborate on my point a bit further... try asking it for causalities and correlations, e.g. regarding climate change and whether it's man-made or not (still a big discussion), and you will see what I mean - it is repeating what's in its database, it is not explaining anything, so it's not "intelligent" by any means yet.

    • @denisquarte7177
      @denisquarte7177 5 months ago

      @@MrArkaneMage There is no discussion. You can measure it. It's that easy.

  • @6957-c5k
    @6957-c5k 5 months ago

    Tim destroyed AI in under 10 mins. Social media is the primary reason AI hype is so high.😁

    • @IAmTimCorey
      @IAmTimCorey  5 months ago

      I am glad it was helpful.

  • @arrrryyy
    @arrrryyy 3 months ago

    I may seem to come down too hard on Tim, and many may think I am disrespectful, but the opposite is true. Since he is teaching, most things he says are assumed to be correct; that's why I post only where I disagree. He says AI does not have logic, and that's not the case. For sure AI has logic, just not perfect logic, and that logic gets improved exponentially by the same AI. What we humans have for now is control of AI, but when AI gets too smart we may lose that too unless thorough measures are taken at the right time. If we lose control of AI, then things can literally end up totally fucked, or completely wonderful, or something we can't even think of. Regulators are sleeping.

    • @IAmTimCorey
      @IAmTimCorey  3 months ago

      "Have logic" and the ability to "create logic" are two different things. The AI has been trained on what humans would do. When it can copy what we would do, it does pretty well. That's "having logic". That's like training for a test with both the questions and the answers. The thing is, that leaves out a LOT of real-world situations. People who over-estimate the powers of AI severely under-estimate the power of the human brain. We don't just regurgitate what we have learned before. We innovate. We take logical leaps. The AI cannot do this. That would describe AGI and the industry experts say we are probably decades away from AGI if we can even achieve it. So no, AI will not get "too smart" and start thinking for itself. Again, as the industry experts have said, the way we train "AI" right now cannot and will not be the method with which we build AGI. AI is effectively just a really precise search engine. That's about it. The difference is that instead of taking you to a webpage where something can be found, it takes the billions and trillions of inputs it has trained upon, figures out which ones best fit your scenario, and then identifies which pieces are most likely to be relevant to you. It then takes those pieces, fits them together, and returns the result.
      If AI were truly able to create logic, the existing models would change on their own. For example, GPT-4o would get better over time without anyone touching it. That's not happening. In fact, more and more work is going into figuring out how to further refine the models via more input. The problem there is one where more input actually causes more problems, not just better solutions. The thought was that if we trained AI on bigger data sets, it would get better. That worked at first. Now, the opposite happens.

  • @sarangshinde5490
    @sarangshinde5490 5 months ago

    If you're going to use AI for simple tasks - I see people even using it for 10-minute homework and 2-minute bits of code. In the past, our older generation had good physiques because most of them used to work on farms; nowadays we don't have that kind of physique because we don't use our muscles anymore, so they don't develop. Similarly with AI: if you start using artificial intelligence more and more, the use of your own intelligence will reduce over time. It's just the law of survival of the fittest, and AI is fit in so many senses: it doesn't die, get sick, lose memory, or get emotional, sad, or depressed. It simply has a better chance of survival compared to us.

    • @IAmTimCorey
      @IAmTimCorey  5 months ago

      AI will cause developers to be sloppy, but that will cause problems for them.

    • @sarangshinde5490
      @sarangshinde5490 5 months ago

      @@IAmTimCorey But what about humans overall not developing intellectually due to reduced use of actual human intelligence? Evolution will hit hard!

  • @MattMcQueen1
    @MattMcQueen1 5 months ago

    AI is interesting, but it's hard to see how we avoid AI hallucinations, and how we can trust the answers it comes up with. It's fine if we already know the answer, but that's not very useful. Try asking your favourite AI for a list of fruits ending in the letters UM.
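
The fruit question is easy to check deterministically, which is exactly the verification step an LLM does not apply to its own answer. A trivial sketch (the candidate list is made up for illustration):

```python
# Check which candidate words actually end in "um".
candidates = ["plum", "capsicum", "peach", "durian", "apple"]
print([w for w in candidates if w.lower().endswith("um")])
# ['plum', 'capsicum']
```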

    • @IAmTimCorey
      @IAmTimCorey  5 months ago +2

      I was at a talk given by one of the leaders in the AI industry. The question was asked of them "how do we reduce/eliminate hallucinations?" The answer was "we don't know, and we aren't sure we will ever know."

    • @MattMcQueen1
      @MattMcQueen1 5 months ago

      @@IAmTimCorey AI is making progress, but we are a long way away from what is being promised now. Too much hype, and for some applications of the technology, there are lots of copyright concerns.

  • @manojbp07
    @manojbp07 5 months ago

    We know AI is just a better search engine, an awesome code generator, or an assistant... but it is based on knowledge already on the internet, or whatever it was trained on at a point in time. Techniques like RAG can work with real-time data but have their limitations... It is not accurate all the time and needs someone to validate its output... Replacing most devs will not be possible with AI or a text generator... Most industries use computers and automation for everything, and putting everyone out of a job means those people will just buy essentials to survive, i.e. food, clothes, etc., which will bring down many businesses, and it's a vicious cycle.

  • @periloustactic
    @periloustactic 5 months ago

    I see AI and Copilot as a real problem in the future.

    • @IAmTimCorey
      @IAmTimCorey  5 months ago

      Yeah, it is a bit concerning.

  • @zarzache1
    @zarzache1 5 months ago

    Hit like before watching the video :)

  • @thomasr22272
    @thomasr22272 5 months ago

    All the things he just said about AI could be said about humans honestly

    • @IAmTimCorey
      @IAmTimCorey  5 months ago

      I'm not sure what things you are talking about, but I don't see the comparison. The example I used of ChatGPT not understanding what a nine-sided shape might look like and not being able to evaluate its own work - that is something a 7-year-old could probably get right. The way AI evaluates data is fundamentally different than how humans evaluate data.

  • @silence8806
    @silence8806 5 months ago

    I don't think AI will make the job of a software developer easier, just as BASIC did not make the job easier. Getting more done does not mean the field in general becomes simpler.

    • @IAmTimCorey
      @IAmTimCorey  5 months ago +1

      I think you are trying to split hairs here. If I can get 10 tasks done before I start using a tool and I can get 15 tasks done after using the tool, doesn't that mean the tool made the job easier? AI helps you get more done. That doesn't diminish the job or its complexity. It just means that you can do more because of this additional tool.

  • @DevMeloy
    @DevMeloy 5 months ago

    AI is a tool which we can use to make our lives easier. I do think that as more companies adopt AI, we will see junior positions disappear.
    My area already has a lack of entry-level positions, and at some point there will need to be a market correction to fill positions as senior developers age out.

    • @IAmTimCorey
      @IAmTimCorey  5 months ago

      There are a few problems with the idea that AI will replace junior developers. First, as you pointed out, that's not long-term sustainable because you need new talent in the industry all of the time.
      Second, companies never look to hire more expensive people. So, if anyone is in danger, it would be the senior developers since AI can make junior developers more capable. The reality is that each level of developer has their place still.
      Third, since AI allows people to be more capable, it allows the junior developers to do more than they otherwise could have, therefore they are more likely to be hired by companies.
      Fourth, we aren't lacking for developer jobs. In fact, being able to do more as a single developer means that more companies that could not afford a team of developers might now hire a single developer. This means more companies offering developer jobs, which would be a huge expansion of the market.

    • @DevMeloy
      @DevMeloy 5 months ago +2

      @@IAmTimCorey The market is currently saturated with developers, and each job posting gets 100+ applications almost immediately. I can't vouch for each candidate's credentials, but speaking with recruiters, companies are looking for senior devs at a lower rate.
      This is most likely caused by the economy slowing rather than by AI, but from my very recent experience job hunting, employers are looking at seniors over juniors.
      It will be interesting to see what happens over the next few years for sure; my guess would be shrinkage as companies try to do more with less. Juniors will have to be self-made or do some form of internship.

    • @IAmTimCorey
      @IAmTimCorey  5 months ago

      The economic ebbs and flows are going to happen. However, the software development field is growing faster than most other career fields based upon tracked data. That means that overall, things will get better.
      I do want to address the idea that 100+ applicants are applying for every job. First, that's probably not true for on-site only jobs. However, that was true when I opened up a position for a senior developer at my company a few months ago (work from home but in the Dallas area). Yes, I was looking for a senior developer because they were going to be the first developer I hired. So, I received over 300 applications for the position. That sounds like a saturated market. However, I can tell you that over 250 of those applicants were not at a level where it was even possible that they could do the job. People over-estimate their skills and don't have anything to show off to prove what they can do. So that meant that I only really had about 50 people to go through. Of those, a significant number did not actually have the skills or experience of a senior developer. In the end, it came down to about 10 candidates who were at the senior developer level. Then it came down to provable skills and the ability to communicate them. That narrowed it down to just a few. So 300+ sounds like an overwhelming number, but for a remote/hybrid job, it turned out that I only had a handful of viable candidates to choose from.

    • @DevMeloy
      @DevMeloy 5 months ago

      @@IAmTimCorey Yup, I can only speak from personal experience in the Midwest. Working with several recruiters, I hear that companies are leaning more toward smaller teams filled with senior devs.
      I'm sure the market will rebound, and it will be interesting to see what part AI plays.

  • @perfumegoose
    @perfumegoose 5 months ago

    Caused the problems it is proud of fixing??? Are we talking software here or government actions whooooooohahahahahahahahahahaha
