AI Scuffed Programming

  • Published 29 Sep 2024

Comments • 494

  • @brunospasta
    @brunospasta 3 months ago +1498

    For small code snippets and small tasks it's great. I use it all the time for that. The longer and more complex the problem and code get, the worse it gets (obviously). But it drops off very quickly.

    • @terminallyonline5296
      @terminallyonline5296 3 months ago +33

      Right, it's good at a for loop, but more than functions it drops off really quick. As soon as you're spending more time debugging a function than making one yourself it stops being worth it.

    • @AmxCsifier
      @AmxCsifier 3 months ago +26

      they should've called co-snippet not co-pilot

    • @TouringRCs
      @TouringRCs 3 months ago +18

      Yeah, I usually use Chat-GPT to find out how to do a simple task and then use that knowledge to make the more complex thing I want to do.

    • @theIbraDev
      @theIbraDev 3 months ago +12

      But snippet engines are instant, free and way better most of the time

    • @FeLiNe418
      @FeLiNe418 3 months ago

      Don't tell me what to use

  • @schonsense
    @schonsense 20 days ago +3

    you just forgot to prompt it for best quality, high quality, masterpiece, top-tier, elite-level, error-free, premium code.

  • @Shonicheck
    @Shonicheck 3 months ago +7

    AI is great at applying well-known algorithms to a problem. AI code tends to be cleaner than my own when generated as a small-ish snippet (since it often puts great emphasis on styling and readability, like examples in books), and it makes the work go much faster than it otherwise would just by doing that, even with all the mistakes and needed refactoring.
    My point is that AI is great at making arbitrary decisions and working as a sort of code monkey - you can't trust it to write long or structurally complex code, but it will manage simpler, widely applicable things tuned to the given restrictions. It's also good at guessing things and looking at things from weird angles, which creates a sort of rubber-ducky effect on steroids, since you have a rubber ducky that can provide unique input.

    • @wesleywyndam-pryce5305
      @wesleywyndam-pryce5305 2 months ago +1

      it being cleaner means you need to learn to do better. not that it should be used.

    • @Shonicheck
      @Shonicheck 2 months ago

      @@wesleywyndam-pryce5305 Or it may as well mean you're using it wrong) Not that I think I write good code)
      If you actually use it to generate clear-cut, small snippets, it does quite a good job of naming and applying algorithms, give or take a small cleanup. If you try to generate big code blocks or complex ideas that require far more than just applying something you might find in a book or in the documentation for the thing you are using, then sure, it will generate pure garbage most of the time (but sometimes it even works, so it can be occasionally useful, though in my practice it takes more effort to refactor it than to just decompose the task yourself and have the AI do the "busy work" of writing verbose and somewhat "clean" code). It's also good at translating unsearchable queries like "that bus that is used to control and communicate with displays in mobile phones" (not the best example, but you get the gist of it: trying to find a term based on the vague qualities that any human familiar with it could identify), or something vague like that, which would otherwise take some time to get an answer for.

    • @TeHzoAr
      @TeHzoAr months ago

      there needs to be a term like 'dunning-kruger effect' but for people that think LLMs are good at writing even small code snippets because they don't have the tools/capacity to judge good code

    • @Shonicheck
      @Shonicheck 18 days ago

      @@TeHzoAr Ehh, well, maybe. My definition of good/clean code is that I can near-instantly understand what it does and how, preferably without even needing to look at any other places in the codebase. Additional points for extensibility, thread safety (if applicable), and neat mm. While AI rarely scores the additional points, I don't really feed it any tasks where it can mess up in any major way (since, as I said, the snippets are small and mostly repetitive). I also mostly write C code in embedded environments, so most of my code is either self-contained or has a very small, hassle-free environment.

  • @arstneio
    @arstneio 3 months ago +41

    I think that's why the way Supermaven gives suggestions is better. Supermaven only provides small one-liners that are usually tedious to write, instead of an entire ginormous piece of code that you probably don't want. It also results in better performance and less shitty code caused by pure laziness.

    • @bryanmoore3927
      @bryanmoore3927 3 months ago

      I think copilot in vs code or GitHub copilot does a fairly good job of the same thing. Sure sometimes when defining a function it tries to give me the whole function definition right at the start but once I'm in it, copilot usually sticks to one line suggestions.

    • @Daniel_WR_Hart
      @Daniel_WR_Hart 3 months ago

      Thanks, I recently retired TabNine and was looking for better AI copilots for 1-liners

  • @GottZ
    @GottZ 3 months ago +12

    fourth time YouTube recommended this exact short to me. wtf is going on with YouTube recently..

    • @ayoaloko1453
      @ayoaloko1453 3 months ago +1

      lool youtube is shaming you for using the competition

    • @smthngsmthngsmthngdarkside
      @smthngsmthngsmthngdarkside 3 months ago

      Seems like you need to hear it

    • @JamesJosephFinn
      @JamesJosephFinn 3 months ago

      Copilot wrote the new YT algo

  • @CodingThingsIRL
    @CodingThingsIRL 3 months ago +32

    "Let me hit you with some facts".
    Proceeds to postulate

    • @bayern1806
      @bayern1806 3 months ago +5

      You clearly never used AI 😂. You would know he is right.

    • @CodingThingsIRL
      @CodingThingsIRL 3 months ago +2

      @@bayern1806 Postulate doesn't mean he's wrong.

  • @NinjaLobsterStudios
    @NinjaLobsterStudios 2 months ago

    I love these AI tools for extremely simple stuff, and starting from scratch. It makes getting the first draft of a prototype very easy. But once I get into the refinement steps, the AIs start to drop off in usefulness. Not just them being error-prone, but even knowing what i want. If i break my work down into a lot of small functions the AI is usually pretty good at figuring out what it should be doing, but at that point there is no productivity gain

  • @ASmith2024
    @ASmith2024 3 months ago +14

    bro just shat on the entire AI hype cycle in one yt short. Well done.

    • @chris285as
      @chris285as 3 months ago

      It's true: if you're a bad coder it's going to make you an average coder; after that you're on your own. That's all it tells me.

  • @SHADOWGAMER-ni8iv
    @SHADOWGAMER-ni8iv 3 months ago

    Spent hours digging into the CYBEROPOLIS whitepaper, and wow, I'm blown away. The idea, roadmap - meticulously detailed. I'm usually wary of presales, but after this, I threw a big bag into it. This project's got serious potential!

  • @frittex
    @frittex 2 months ago

    claude does a great job at programming with their new model.
    even I, a full-time AI hater, am impressed

  • @myne00
    @myne00 2 months ago

    By that logic it'll probably work pretty good on assembly or Cobol.
    More of the code is going to be of a greater quality because most of the people who wrote the existing codebases to learn from, are going to be of a higher calibre.

  • @stefanperko
    @stefanperko 2 months ago

    You draw a bunch of unimodal distributions (the only ones you learn of in college) though no actual conditional distribution of any relevant DL model is ever gonna look remotely like that

  • @sagetarus1
    @sagetarus1 3 months ago

    I typically ignore my copilot until my workspace is large enough. After that point, it tends to learn off of my own trends and not random bs.

  • @alexurbina7072
    @alexurbina7072 months ago

    I split complex logic into small, digestible pieces so Copilot can handle it. It speeds up the process.

  • @angrydragonslayer
    @angrydragonslayer 3 months ago

    I'm generally not programming anything anymore, but I tried it out and I've been lucky.
    All I get is gibberish, like the program being inside 20-something seemingly redundant if-statements..... but it works. That specific program was actually nearly 200% faster than a program I asked an active coder friend to write for comparison.

  • @SpadeQc123
    @SpadeQc123 3 months ago

    Only use I found for AI tools right now is asking them to show me in 15 minutes how to use a library I didn’t use before instead of spending 2 days on their docs lol

  • @afflatusstep6247
    @afflatusstep6247 3 months ago

    That's fixed if the code distribution on which the AI is trained is high quality code, a distribution which doesn't represent the real population (which contains lots of shit code).

  • @smanzoli
    @smanzoli 3 months ago

    The fact is: it will get 5x better every year... VERY SOON it will have close to zero errors... in this decade yet. Imagine Chat GPT 7 in a few years, being 200x better than the awesome 4omni

  • @motbus3
    @motbus3 3 months ago

    The reason is simple: it learns from questions too. It doesn't know how to differentiate good code from bad code.

    • @stefanth8596
      @stefanth8596 3 months ago

      Not true. If it takes Stack Overflow as input, the answers are rated, and at least ChatGPT "understands" the questions.

    • @motbus3
      @motbus3 3 months ago

      @@stefanth8596 How do you know? I'll tell you how I know: Copilot gives me question snippets consistently.

  • @kipchickensout
    @kipchickensout 3 months ago

    changing from copilot to plain chatgpt 4 was a game changer for me lol

  • @alexandreparent3942
    @alexandreparent3942 3 months ago

    I find using Copilot as an autocomplete++ is the only decent use case. Start a line, and 60% of the time Copilot completes the line correctly. For sure, you cannot ask it to write a full module. But it can handle a single line decently.

  • @mop-kun2381
    @mop-kun2381 2 months ago +1

    Dude you are slowly turning into Thor

  •  3 months ago

    Yes. Finally someone smart enough to say that!

  • @ahoi370
    @ahoi370 3 months ago

    You're forgetting many AI tools are a committee of experts; they don't simply look at the average.

  • @Nadeli0
    @Nadeli0 3 months ago

    I disable co pilot for everything but my own language because I'm the only person to use my language, so I know when something is bad or not. Co pilot just makes things faster for me.

  • @pyromancy8439
    @pyromancy8439 3 months ago

    I only use AI for:
    1. Single-line autocomplete
    2. Simple repetitive tasks, like translating/restructuring a large JSON (see the sketch after this comment) or rewriting a simple template into a different templating language
    3. Generating boilerplate
    4. Sometimes, doc comments
    That's all I've ever found it useful for. The common examples of what coding AI can do like simple algorithms and such are not something that I write for real-world codebases, and for anything specific it writes something between a suboptimal solution and complete nonsense about 99% of the time.
    Also, I thought it was obvious that if you don't check the code and just assume your AI wrote it correctly, you will definitely spend more time debugging your code than it would've taken you to write it yourself.
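    For illustration, a minimal Python sketch of the kind of mechanical JSON restructuring meant in item 2; the record shape and field names are made-up assumptions, not anything from the video.

    ```python
    import json

    # Hypothetical input: a list of user records as a JSON string.
    raw = '[{"id": 1, "name": "Ada"}, {"id": 2, "name": "Linus"}]'

    # Restructure into a dict keyed by id -- the sort of repetitive,
    # purely mechanical transformation the comment delegates to AI.
    users = {record["id"]: record for record in json.loads(raw)}

    print(json.dumps(users, indent=2))
    ```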

    • @erickmoya1401
      @erickmoya1401 3 months ago

      Not true. You can add some tests. You can write a clear and concise signature for the function, and not only would the AI do it "just right", so would any software engineer with 3 months of coding.
      I have only seen complaints from software over-engineers.

    • @andercorujao
      @andercorujao 3 months ago

      Regular expressions, CSV, JSON, those kinds of things:
      just paste a snippet of the text and ask the AI for what you want, and the AI will do it with a very small chance of errors - well, fewer errors than when I do it myself.
      Also, when you're lazy and haven't read all the documentation - all the methods, functions etc. of your programming language and of the packages you're using - and you ask the AI for something that's already implemented and documented, the AI will do the search and show you, better than Google I think.

  • @nowonmetube
    @nowonmetube 3 months ago +1

    Don't try to generate code - rather try to generate the explanation to a solution. If you can't code with that, then there's the problem.

  • @komakaze1
    @komakaze1 3 months ago

    I see "this is only the beginning" a lot, but I think ai progress is just going to get increasingly harder from here. Compute is requiring more and more energy. They've already scraped the internet. Now they need to refine and filter data, train and combine smaller specialised models. Train detail and nuance. We have seen the easy wins and low hanging fruit. Ai development will gradually slow and get more complex.

  • @osudrecenzie2583
    @osudrecenzie2583 3 months ago

    It's buggy, but if you know what code you want, you can tell the AI to do it in any existing language and you don't need to spend 2 months learning new syntax. It lets an existing programmer wield every programming language on earth. Opens up doors.

  • @Bruvkek
    @Bruvkek 3 months ago

    Could be just me, but I tried it a few times. It either takes massive inputs clarifying the problem clearly, or I just get the most random responses that have no value. Even when clarifying, GPT often forgets details. I got bored of that and stopped doing it altogether, for now.

  • @Mulakulu
    @Mulakulu 3 months ago

    I only use AI to know if there's some function or library available and roughly how to use it. I always program the stuff myself

  • @Yuumi_Bot
    @Yuumi_Bot 3 months ago

    He's got cause and effect mixed up. What the neural network needs is not the quantity of code on the net but the popularity of a given piece of it with users, which is why rare but good variants will be served first if they are popular with users - and if they are not popular with users, are they really that good?

  • @snipedtwodeath
    @snipedtwodeath months ago

    It is only good for helping write the json from bullets.

  • @zxc232
    @zxc232 3 months ago

    Copilot saves lots of time if you know how to use it. It's far from perfect, but if you use it for no more than a few lines at a time and you are a decent programmer, you can spot all the errors and fix them in a second.

  • @sguploads9601
    @sguploads9601 3 months ago

    That guy has no clue how LLMs work at all. The normal distribution is not connected to it in any way.

  • @dgpreston5593
    @dgpreston5593 3 months ago

    Sturgeon's Law strikes again ..
    See? This is why we can't have nice AGI...
    😮😢😂😅😊

  • @quinlanal-aziz6155
    @quinlanal-aziz6155 2 months ago

    I feel like he doesn’t know what he’s talking about

  • @cparks1000000
    @cparks1000000 months ago

    If you train AI right, it should be able to tell good code apart from bad code. Unfortunately, there's no training data for that.

  • @yahi06
    @yahi06 3 months ago

    use it as documentation search

  • @GEARINSADE
    @GEARINSADE 3 months ago

    Depends: ChatGPT works well on Flutter (like, really well), but not as well on Python.

  • @microcolonel
    @microcolonel 3 months ago

    Not the way to appreciate linear algebra rightly.

  • @mooted5513
    @mooted5513 22 days ago

    Sure. This guy didn’t steal the whole shtick from somewhere else.

  • @putyavka
    @putyavka 2 months ago

    I don't like all this assistant-like stuff 'cause I hate not knowing exactly what my code does. So in any case I have to analyze the assistant's code and waste my time.

  • @colorpalet
    @colorpalet 3 months ago

    Well, it's kinda true, but even if the data were the best there is, it would still fail a lot, since it won't be able to deal with problems that have no or only a few existing solutions. Also, using structures meant for language models for programming kinda only works at making code that looks like code - that's all it cares about.

  • @TOZA
    @TOZA 3 months ago

    I had some Rust code for prime numbers; it took 1 minute to create a file with the primes from 0 to 1,000,000. I used Gemini in Google AI Studio with the Gemini 1.5 Flash model, and it got my code down to 375 ms, so...
    [I am shit at Rust (it was my first program) and I wasn't using any algorithm, only a buffer before writing the file; the method the AI used was the Sieve of Eratosthenes], so for me it's good.
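    For reference, a minimal Python sketch of the Sieve of Eratosthenes approach the AI applied here (the commenter's actual program was Rust; this just illustrates the algorithm, with the same 1,000,000 limit):

    ```python
    def primes_up_to(limit: int) -> list[int]:
        """Sieve of Eratosthenes: return all primes <= limit."""
        if limit < 2:
            return []
        is_prime = [True] * (limit + 1)
        is_prime[0] = is_prime[1] = False
        for n in range(2, int(limit ** 0.5) + 1):
            if is_prime[n]:
                # Mark every multiple of n, starting at n*n, as composite.
                for multiple in range(n * n, limit + 1, n):
                    is_prime[multiple] = False
        return [n for n, prime in enumerate(is_prime) if prime]

    if __name__ == "__main__":
        # Write the primes up to 1,000,000 to a file, as in the comment.
        with open("primes.txt", "w") as f:
            f.write("\n".join(map(str, primes_up_to(1_000_000))))
    ```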

  • @criticalchai
    @criticalchai 3 months ago

    I use it like a tutorial or manual: let it generate some code, question it, then test it on its own. Depending on where it gets the code, it will either work or be versions out of date and need lots of help to force it to use the most recent versions.

  • @SynergyOfTwo
    @SynergyOfTwo 3 months ago

    It can't be used to generate large pieces of code or to change large pieces of code.

  • @ahmedaldhabi1930
    @ahmedaldhabi1930 3 months ago

    I have tested Copilot and Google's AI, Gemini or something like that, and Copilot won on every single aspect...
    And yes, when given long and complex tasks it makes many mistakes... One way to get around that is by breaking complex tasks into many smaller ones.
    You really need to talk to it as if it were a child. For example, if I wanted it to count 1 to 10 but skip 4 and 8, I'd have to declare my main idea, which is count 1 to 10, and then specify the extra tasks, ignore 4 and 8, so I would say in one line "count one to ten, but skip 4 and 8".
    Or I would start with the main task, which is count 1 to 10, send it, and when it's done replying I would add "now repeat and skip 4 and 8"... and so on.
    But if I just threw in an old piece of code I had, which needed a lot of extra work, and asked it to fix it all up and add this and add that while at it, that won't work, and that ba5tard won't tell you if there was a mistake or an error while executing your requests. It would just say: "Certainly, here is the BS you gave me with some extra BS I added..." 🤣🤣🤣
    It can get to be a pain in the a55 to carefully and precisely tell it what to do and in what exact order... but once you get a grip on how to do it, you will know how to get more out of it by working some stuff yourself and giving the rest to Copilot.

    • @ahmedaldhabi1930
      @ahmedaldhabi1930 3 months ago

      Punctuation also plays a huuuuuge role 😅

  • @luislanga
    @luislanga 3 months ago

    Lol I basically don't have to code anymore at work, chat gpt 3.5 is insane. I have to be very specific and get isolated sections though, but the code is generally ok. If you're getting bad code all the time maybe you can't explain yourself in human language that good or you don't know what you're doing.

  • @monad_tcp
    @monad_tcp 3 months ago

    after using it for a month I came to the same conclusion

  • @N5O1
    @N5O1 3 months ago

    Using Copilot without reading the generated code is the same as copy-pasting from Stack Overflow.

  • @RoyalYoutube_PRO
    @RoyalYoutube_PRO 3 months ago

    ... Alpha distribution

  • @dandogamer
    @dandogamer 3 months ago +515

    Whenever I use AI-generated code and it looks like it should work, then it doesn't and I have to debug it, I always wish I had just written it myself from scratch 😂

    • @CodeWithMathias
      @CodeWithMathias 3 months ago +14

      And you can't think of a proper solution now that you have the solution proposed by the AI... it's as if you'd need to have seen nothing at all to write good, clean code.

    • @Elefantoche
      @Elefantoche 3 months ago +5

      I tried using AI recently to generate some paginated dates as an array of arrays of dates, by receiving a start date, end date and time interval (weeks and months to begin with), and surprisingly, the AI generated what I asked for.
      However, the generated code was far from optimal, as it included some steps that aren't actually needed when generating pages, and it also had a bug where the last generated page wasn't padded to make all page sizes the same length, even when I specified that multiple times by modifying the prompt and giving feedback on each output.
      In the end, I had to spend extra time just debugging and optimizing the AI-generated code manually, taking almost as much (or probably more) time than if I had decided to do it myself from scratch.
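      For context, a minimal Python sketch of the task described above (start date, end date, fixed interval, pages padded to equal length); the function name, the weekly step, and padding with None are illustrative assumptions, not the commenter's actual code:

      ```python
      from datetime import date, timedelta

      def paginate_dates(start: date, end: date, page_size: int, step_days: int = 7):
          """Collect the dates from start to end (inclusive, every step_days),
          then split them into pages of page_size, padding the last page."""
          dates = []
          current = start
          while current <= end:
              dates.append(current)
              current += timedelta(days=step_days)
          pages = [dates[i:i + page_size] for i in range(0, len(dates), page_size)]
          if pages and len(pages[-1]) < page_size:
              # Pad the final page so every page has the same length --
              # the step the comment says the AI kept getting wrong.
              pages[-1] += [None] * (page_size - len(pages[-1]))
          return pages

      # Example: weekly dates for Q1 2024, four per page.
      print(paginate_dates(date(2024, 1, 1), date(2024, 3, 31), page_size=4))
      ```

      A monthly interval would need a different stepping rule (calendar months are not a fixed number of days), which is exactly the kind of edge case worth checking in generated code.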

    • @breakingbadest9772
      @breakingbadest9772 3 months ago +6

      I use it as part of an active working process, not as direct copy-paste. My issue is getting started; AI really helps me get the process started and with structuring.

    • @stabgan
      @stabgan 3 months ago

      Skill issue

    • @majzerofive
      @majzerofive 2 months ago +4

      AI gives 100% correct solutions for the problems that have already been solved on the internet. It doesn't think, ever. It copy-pastes. You ask the AI to provide the code, and if there is a solution to it, it will be given. If there isn't, the code will look wrong and you ignore it completely.
      I've used AI for all kinds of problems and it simplifies life A LOT if you have the ability to understand the answer the AI is giving you. If you cannot, do not use it, period.
      If you need absolute proof of AI's inability to generate any "thinking process", find any riddle (word, math, whatever) and change the terms to the point where there should be no possible answer on the internet. The AI will give the scripted answer to the riddle you copied and changed. So do not expect the program to think; it is your job.

  • @hjewkes
    @hjewkes 3 months ago +367

    That's why it's so good at writing comments and unit tests. Because the shitty programmers never write them

    • @NubeBuster
      @NubeBuster 3 months ago +16

      Lol took me two reads. true

    • @formigarafa
      @formigarafa 3 months ago +3

      There should be a specialty in updating comments. Then it would be an advantage all the way through.

    • @jenstrudenau9134
      @jenstrudenau9134 3 months ago

      If you need comments you are already a shitty programmer

    • @applepie9806
      @applepie9806 3 months ago +1

      Faxxsss 😂😂😂😂

    • @paininmydroid4526
      @paininmydroid4526 2 months ago +1

      Makes sense

  • @TADevelopment
    @TADevelopment 3 months ago +66

    I primarily use GPT as a ‘super Google’
    I.e., for learning new languages/libraries.
    And for taking my thought process and quickly querying library documentation in search of an answer.
    The basic code examples are just a bonus to me

    • @MyAmazingUsername
      @MyAmazingUsername 3 months ago +7

      Nahhh. Almost no matter what I ask it, it hallucinates nonexistent APIs and adds bugs everywhere.

    • @TekExplorer
      @TekExplorer 3 months ago +1

      Don't do that for libraries. It cannot and will not include anything new.
      Might be fine for _old_ stuff though

    • @conundrum2u
      @conundrum2u 3 months ago +3

      Basically this. It gets you there faster by allowing you to ask more pointed questions and sift through less bullshit. There's still bullshit, but if you have actual experience you'll see the bad answers right away; it's great for launching points.

    • @conundrum2u
      @conundrum2u 3 months ago +1

      @@TekExplorer this depends on the LLM dataset. some are 1 year old, some are 6 months or younger. too young and it's pretty safe to say that you shouldn't be integrating libraries or concepts that new into production code. if it's an established library or concept, you'll get an old link but you always evaluate. the real danger is people using what the LLM spits out without analysis

    • @wesleywyndam-pryce5305
      @wesleywyndam-pryce5305 2 months ago +2

      so you're learning incorrect information.

  • @boredSoloDev
    @boredSoloDev 3 months ago +138

    "this is shitty code which is also most of the code"
    I'm doing my part

    • @freecalradia
      @freecalradia 3 months ago +4

      According to Prime, we have the biggest burden to bear: the pile of shitty code has to come from somewhere.

  • @Flashcard_Games
    @Flashcard_Games 3 months ago +211

    I've been using ChatGPT to help me diagnose issues I am struggling to fix in my program. It's helpful when you don't know where to look next to solve your problem. If you properly explain your problem to it you'll get sent in the right direction. Very helpful. It can also explain single lines of code in detail, making learning to code much less painful.

    • @shortsupply4760
      @shortsupply4760 3 months ago +40

      Yes. GPT is like the rubber ducky that also writes boilerplate and can condense docs

    • @monad_tcp
      @monad_tcp 3 months ago +30

      It's helpful because it's basically rubber ducking. What helps you is the fact that you wrote your problem out in plain text

    • @RigelOrionBeta
      @RigelOrionBeta 3 months ago +3

      How do you know it's helpful? Unless you know the actual answer, you don't really know if it's truly helpful. And you certainly don't get answers from ChatGPT - you just get confident guesses.

    • @Flashcard_Games
      @Flashcard_Games 3 months ago +24

      @@RigelOrionBeta Sending me in the right direction when I'm stuck is extremely helpful and when I don't understand something it can explain. I'm both learning faster and improving my software faster. However everyone should do what works best for them.

    • @Hallgrenoid
      @Hallgrenoid 3 months ago +7

      @@RigelOrionBeta I know it's helpful when I try what's suggested and it works.

  • @jonathanwells-7663
    @jonathanwells-7663 3 months ago +12

    That's right. Copilot is like an aggressive 8 year old for coding. For small tasks, brilliant. Incredible. For larger more complex tasks? Let it take the wheel but just know you'll be adjusting about 75% of the initial suggestion.

  • @benjoe1993
    @benjoe1993 3 months ago +31

    THANK YOU! Everyone around me keeps saying that they use gpt for coding and I don't understand how because I spent more time correcting its shitty code than it would've taken me to write that code.

    • @palmberry5576
      @palmberry5576 3 months ago +2

      Yeah, it's really nice for doing things that would take a while manually, require way too much knowledge of your editor and multi-cursors, or would otherwise need a small script. E.g. converting a Java enum to its Lua equivalent.

    • @JustinBowsher
      @JustinBowsher 3 months ago

      And the boilerplate stuff that it saves time doing, we already have plenty of tools that do that. Minor time-save

    • @CTisHipHop
      @CTisHipHop 3 months ago +3

      well in my case the explanation is simple: i am quite heavily contributing to the HS portion of the global code distribution. Thus i benefit from copilot or gpt

    • @kmuralikrishna1998
      @kmuralikrishna1998 3 months ago +4

      You are doing it wrong. Ask it to correct your code. That's when it starts to shine.

    • @VforVictorYT
      @VforVictorYT 3 months ago +2

      Copilot learns from the patterns in your current code, so if it gives you shit suggestions you probably have shit code already 😂

  • @wristcontr0l
    @wristcontr0l 3 months ago +125

    The difference between those who understand the mathematical underpinnings of models like those used by Copilot, and software engineers, is that the former didn't need to use it for a second to arrive at the same conclusion.

    • @erickmoya1401
      @erickmoya1401 3 months ago +15

      For 80% of the code you need, it's OK to be in that space. For the 20%, we keep our jobs.

    • @Firehazard159
      @Firehazard159 3 months ago +1

      What are the mathematical underpinnings of human comprehension?
      That's the real question here.

    • @danielschwegler5220
      @danielschwegler5220 3 months ago +1

      What ya trying to say???

    •  3 months ago +4

      So.. you think software engineers don’t understand math and statistics?
      I’m sure a lot of javascript front-enders don’t.

    • @jason_v12345
      @jason_v12345 3 months ago +3

      As someone who has used it to write tens of thousands of lines of code, I can tell you that the difference between those two is that one of them knows how to use it effectively and the other doesn't but refuses to admit that the problem is at all them.

  • @ynngih725
    @ynngih725 2 months ago +4

    From my point of view, as an amateur learning to program, it's way better to have an AI assistant that has the knowledge of maybe a 40th percentile programmer than having no assistance at all.

  • @lau6438
    @lau6438 12 days ago +1

    The problem with AI code isn't actually the code. It's the reliance, and the inability to actually code, that it creates.

  • @sdadffsd456
    @sdadffsd456 3 months ago +5

    Copilot can help, but you definitely have to go back and double-check, test, etc. I try to use it for small stuff, because it does produce so many bugs.

    • @erickmoya1401
      @erickmoya1401 3 months ago +1

      As you would anyway - and you'd go back to your own code too once you think about edge cases.
      Copilot is amazing as the first iteration of any solution, instead of spending that much time yourself.
      Add a good signature and some tests and voila!

  • @nicolaskeroack7860
    @nicolaskeroack7860 3 months ago +3

    I use it to understand stuff and generate 'easy' stuff, not to literally architect my programs from A to Z.

  • @IdAefixBE
    @IdAefixBE 3 months ago +3

    Now let's embrace the fact that this actually is also the reality for basically any subject, like art or even speech xD

  • @bogdanbarbu363
    @bogdanbarbu363 2 months ago +2

    You don't need to train your model against a random sample of code, which is the only situation in which your plot is accurate. You can put only high-quality code bases in its training set.

    • @KucheKlizma
      @KucheKlizma 8 days ago

      What I'm reading is: remove gradient descent and have the LLM just descend to "high quality"? So pay a bunch of people to specifically tell your GPU farm what's high quality and what isn't? I can already smell the vast profits and value.

    • @bogdanbarbu363
      @bogdanbarbu363 8 days ago

      @@KucheKlizma What you wrote makes no sense. First of all, LLMs do use gradient descent, and so does mostly everything else; it's pretty ubiquitous. Secondly, we already curate training sets - we put SO much effort into that. If you just put random data in there, the result is going to be crap. This isn't snake oil; it's what needs to happen, one way or another.

  • @IAmNotASandwich453
    @IAmNotASandwich453 3 months ago +2

    Finally someone says it. The longer I work, the more I realize that most programmers are pretty bad at what they're doing. However, it is still mostly "good enough" to get the job done, which is why most of them never needed to actively put in any effort to actually get good.

  • @dooterino
    @dooterino 3 months ago +1

    LLMs have been obscenely oversold and overhyped -- if you know what you're doing the best case scenario is break-even, but you'll likely end up losing efficiency by having to fix the LLM mistakes on top of doing all the actual difficult architectural work on the program.

  • @Fantalic
    @Fantalic 5 hours ago

    This is not how LLMs work. It's more like they understand the grammar of a language. The output is always grammatically correct, and it puts together code where the grammar makes perfect sense. But it does not understand any of the meaning, so it makes errors because of that and not because it was trained on shitty code. I think you get the wrong idea of how LLMs work if you simplify it like this and, with all due respect, it is wrong.

  • @Jackson_Zheng
    @Jackson_Zheng 3 months ago

    Skill issue. You don't train LLMs on arbitrary data; you scrape and filter out the best tokens, like from books, existing production codebases, open source software, etc., and you train them on that.

  • @michelabifadel4251
    @michelabifadel4251 8 days ago

    Agreed. Just keep in mind: let's not be binary. AI by itself is shit. You by yourself are okay. AI + you = amazing.

  • @charlesabboud1613
    @charlesabboud1613 2 days ago

    I think this is true of AI for non-code but text-based uses as well. A lot of the writing in freebie internet articles is trash, so when I recently asked it to explain something, it was wrong and very mediocre.

  • @ishtiaquekhan1148
    @ishtiaquekhan1148 6 days ago

    AI still isn't intended to be used as-is for coding; it is a support, not the coder itself. I use ChatGPT and I find it very useful. I don't use the code from ChatGPT as-is, the way some beginner programmers used to use Stack Overflow. Rather, I use ChatGPT as a support.

  • @jamesusespivot
    @jamesusespivot 3 months ago

    It's true if it's just pretrained. But they can be fine-tuned using RLHF to learn to produce better and more consistent results. This is how they train ChatGPT to be a "useful" assistant. The issue is that now that AI hype and quarterly thinking have taken over, no one is bothering to properly align this stuff or go the extra mile to actually make a good product. You can see this with the recent failures of Google and Microsoft. Google has DeepMind; they should know better. I would be especially ashamed, if I were DeepMind, to be associated with Google right now. Love them or hate them, as unscrupulous and ethically dubious as OpenAI is, they've proven to be the best (afaik) at alignment, attention to detail and actually taking their time to work on a product rather than just shitting out a minimum viable product. What we're seeing instead is a series of PR disasters that will no doubt set back the field for decades, in the same way people fear nuclear energy because of Chernobyl and Fukushima, despite the reality that nuclear power is statistically much safer for both human workers and the environment.

  • @TheRealMask3r
    @TheRealMask3r 3 months ago

    Stop dumbing yourselves down, fellow anon programmers. Stop using AI to program; you'll eventually become dumb.

  • @davidbellamy3522
    @davidbellamy3522 months ago

    It’s a nice try at an argument but it’s not quite right. Pre-training data, yes, mostly contains low or mid quality code. But LLMs undergo further refinement after that (SFT, RLHF/alignment) and that’s what lifts them to above average programmers. We’re not sure yet how far this post-training paradigm can take us. Could be to superhuman coding. Best to stay open-minded so as to not get left behind!

  • @LukePighetti
    @LukePighetti 2 months ago

    using copilot is like allowing a low-mid tier developer who read a lot of books in college to hijack your brain. honestly starting to see it as a self own when people praise it so highly

  • @werthersoriginal
    @werthersoriginal months ago

    Nah, sorry but if you take your code and break it down into chunks, feed it into chatGPT you'll get a solution. If your code has errors refeed it into chatGPT, rinse & repeat. I do NOT have the luxury to NOT ship code and going through the smug, arrogant, Asperger-syndrom-like nightmare that is StackOverflow is no longer going to cut it. Consumers do not buy code they buy solutions.

  • @nathanshaw4189
    @nathanshaw4189 23 days ago

    My favorite use is to write lua scripts for Hammerspoon, shell scripts for SwiftBar, or some other type of utility that I kinda want, but wouldn't have ever considered creating myself.

  • @Matrh88
    @Matrh88 3 months ago

    Copilot is good for boilerplate and some troubleshooting. I would NEVER use it for anything complex, and I would never use code generated from it that I myself would not have written.

  • @_skyyskater
    @_skyyskater 2 months ago

    I know you're not a tiddy guy, but this is where TDD steps in. If you write a small, testable program using Copilot, and then test it with unit tests (aiming for 100% coverage), then you will very likely catch the bugs. Sure, it may not help you with larger integrations of stuff, or front-end, for example, but simple functional unit tests on "pure" functions can go a long way.
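    As a minimal sketch of that workflow: a small pure function of the kind one might let Copilot draft (the chunk helper below is hypothetical, not from the video), plus the unit tests that would catch a bug in it.

    ```python
    import unittest

    def chunk(items: list, size: int) -> list:
        """Split items into consecutive chunks of at most `size` elements."""
        if size <= 0:
            raise ValueError("size must be positive")
        return [items[i:i + size] for i in range(0, len(items), size)]

    class ChunkTests(unittest.TestCase):
        # Small, fast tests over a pure function: the kind of coverage the
        # comment suggests for catching mistakes in generated code.
        def test_even_split(self):
            self.assertEqual(chunk([1, 2, 3, 4], 2), [[1, 2], [3, 4]])

        def test_ragged_tail(self):
            self.assertEqual(chunk([1, 2, 3], 2), [[1, 2], [3]])

        def test_rejects_bad_size(self):
            with self.assertRaises(ValueError):
                chunk([1], 0)

    if __name__ == "__main__":
        unittest.main()
    ```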

  • @alst4817
    @alst4817 months ago

    Uh, dude…you realise that quantum mechanics is literally just probabilistic. You wouldn’t have computers at all if probabilistic systems were useless…

  • @OctagonalSquare
    @OctagonalSquare 3 months ago

    Only thing I’ve used AI for in coding is finding the meaning of an error message that I literally couldn’t find anywhere else online. And it worked and I was able to fix the bug. Then there’s another one that even AI can’t find the meaning of that I’m pretty sure is due to improper audio routing in VMs

  • @GiannisM-t5y
    @GiannisM-t5y 3 months ago

    AI for small shit is pretty OK, but like others said, when you're trying to do something complex it's dogshit, consistently error-prone. ChatGPT, though, is amazing for giving you specific info about a language when you're learning it.

  • @Vanit1
    @Vanit1 3 months ago

    Yeah, this is why you can actually get better results by promising to tip in the AI prompt - it ends up looking in the latent space near where tipping is offered, which is higher quality code. Amusing, but also pretty silly.

  • @Sonsequence
    @Sonsequence months ago

    Right, but it's not that kind of statistical model. That's a common misunderstanding. It does not pick the next word based on the word string. It groups words into basic semantic groupings, then forms concepts, metaconcepts and so on. It has learned a deep model of what code does. There's lots of research showing that a model can outperform the average quality of its source data. There's a core wisdom of the crowd in there because there are a million wrong ways to do something and only a few right and these get repeated more, strengthening a correct underlying conceptual map.
    Of course, if a specific wrong thing becomes weirdly common (e.g. using mongodb for anything) then...welp

  • @LanceBryantGrigg
    @LanceBryantGrigg 3 months ago

    I mean, I'll be honest with you: if you know how to READ code, Copilot is a godsend. If you don't know how to READ code, and you read the words but not the meaning, you are going to be doomed with Copilot. So here's my advice to you: forget what he said here, and learn to READ what the code is actually doing, and forget what the words say it's doing. Here's how you practice: look at really bad code. Code that has misnamed variables, poorly organized substructure and otherwise totally, painfully tangled variables, and breaks every rule under the sun. Then figure out what it does. For days. Eventually, you will learn to separate your mind from what it's saying and instead focus on what it's doing.

  • @lukasaudir8
    @lukasaudir8 months ago

    It is good for finding some interesting functions in environments you're not familiar with - like quick, dynamic documentation, with examples...
    It's safe because it's easy and quick to double-check and test.

  • @dacid44
    @dacid44 2 months ago

    I almost never use AI generated code. Mostly because I'm doing personal projects to learn which won't happen if I have AI do it for me, but I've also had group projects in college where I had to tell group members to stop using ChatGPT because fixing all of the bugs that they had from it was taking longer than it would have to just write it themselves in the first place.

  • @MrAdamo
    @MrAdamo 2 days ago

    Ok here’s some hard facts: you can statistically bias statistical models towards the end of the curve

  • @shaunbava1801
    @shaunbava1801 2 months ago

    We got copilot at work like 6 months ago, it makes tons of mistakes and the issue is finding and fixing the bugs actually takes more time than just writing code yourself. The sales pitch vs the reality are far apart. I cannot get over how much code it produces which doesn't even compile! Ask it to do something obscure and the code produced is laughable.
    Management always targets the wrong things, we could easily replace the "robots" in HR with chatGPT and a good portion of the executive committee but definitely not the engineers :-)

  • @simonlundberg8194
    @simonlundberg8194 3 months ago

    I use Copilot a lot to fill out boilerplate and such. I had a fun experience a while back when I started writing a function that converts an image to SVG by tracing it. Copilot suggested that instead of tracing, I just generate a single 1x1 pixel rectangle for each pixel in the input image. It was obviously stupid, but I decided to try it anyway. And sure, it did actually work. The code it generated compiled and ran. And it generated a 1-gigabyte SVG file that promptly crashed every single piece of software I tried to open it in. Sooo.... yeah. AI is doing great.
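    For a sense of why that file got so big, here is a rough Python sketch of the "one rect per pixel" idea the comment describes (the original was Copilot-generated in another codebase; the image size below is an assumption for the arithmetic):

    ```python
    def pixels_to_svg(pixels: dict, width: int, height: int) -> str:
        """pixels maps (x, y) -> '#rrggbb' colour strings; emit one 1x1 rect per pixel."""
        rects = [
            f'<rect x="{x}" y="{y}" width="1" height="1" fill="{colour}"/>'
            for (x, y), colour in pixels.items()
        ]
        return (f'<svg xmlns="http://www.w3.org/2000/svg" '
                f'width="{width}" height="{height}">' + "".join(rects) + "</svg>")

    # Each rect is roughly 55-60 bytes of text, so a 4000x4000 image means
    # about 16 million rects, i.e. on the order of a gigabyte of SVG --
    # which is why the file crashed every viewer that tried to open it.
    ```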

  • @tmthyha
    @tmthyha 3 months ago

    my job has me writing a lot of scripts, which has some boilerplate, in Ruby, a language with a ton of options, makes copilot pretty fun to use. I'd drop it but it eliminates boiler plate, typing, and thinking about off the wall Ruby methods. I like it but could live without. I'd prefer a better, faster LSP

  • @damiondreggs3500
    @damiondreggs3500 3 months ago

    Okay? It's like brand new technology, and there is a lot of room for improvement.
    If you're saying 'AI makes shit code don't use AI', you're going to miss the boat when it gets out of the uncanny valley and into masterpiece territory and everyone who scoffs with you will be lost and behind the times, like a lumberjack who only knows how to swing an axe in a world of chainsaws.
    Good luck!

  • @DannyDaemonic
    @DannyDaemonic 3 months ago

    Bad facts. Text written on the internet is the same way. Most of it is terribly written/formatted, with horrible grammar and punctuation. Yet AI speaks with perfect grammar, even knowing to capitalize proper nouns and which words are proper nouns. AI can do this with code too; it's just a matter of time. I'm not saying it's going to be good at reasoning, but translating what you want into code - as long as you describe what you want technically/specifically - is very much in reach.

  • @chasebrower7816
    @chasebrower7816 3 months ago

    this would be an argument if LLMs consisted of only their pre-training segments. RLHF exists to solve exactly this problem, to train the model to produce not just statistically representative, but useful, data. steadily this guy's videos piss me off more and more, and at this point I firmly believe most of what he says is made up bullshit to justify his naive and self-satisfying worldview. this is literally the 'i spent like 3 minutes on wikipedia' description of LLMs.

  • @decoyslois
    @decoyslois 23 days ago

    Yeah but LLMs that are trained well also take into account things like the issues around code, which should help it (with enough data) to figure out which code is good or bad. I.e. the normal is multi-dimensional

  • @emjizone
    @emjizone 3 months ago

    No, it's *worse than that.*
    It's not only because Copilot was trained on poor code. It's also because *Copilot has no fucking idea what you're trying to optimise*, and *the average is not what you want.*
    The average of a good solution to problem A and a good solution to problem B is generally *not* a good solution to problem (A+B)/2.
    Averages are *compromises*, and compromises are generally *bad* in algorithms.
    Not making the difference between the *result* on one hand and the *method to get the result* on the other hand is a typical *plain noob error.*
    You fundamentally *cannot write good code by language imitation* in general, because the test functions you need to solve your problems with a given language are *never ever used in the AI's training!* So such an AI can always learn the language you want to use, but it *cannot learn your problem*, by design.
    Specialized AI could be used to solve all your programming problems, but *not* AI designed for statistical language imitation. What you need is a totally different kind of AI. More specifically, you need an *algebra solver* and a *machine simulator.* The only part where an LLM might be useful is translating your request, written in silly human language, into a logical problem to solve.

  • @Leouon
    @Leouon 3 months ago

    I'm experimenting with GPT recently.
    I'm primarily a C++ developer, so I get quite impatient when doing web frontend things, so for basic starts GPT helps me a bit. Of course everything is revised, but the last time I tested it, GPT was pretty good at replacing Bootstrap with Materialize CSS for a performance test.

  • @storymode9085
    @storymode9085 29 days ago

    I am only starting out in programming, and I not only have to be very specific about what code I am asking for, I also need to be aware that what it gives me is more just "guidelines". I get the approach, not the answer.