Our Terrible Future And Open Source | Prime Reacts

  • Published Mar 24, 2024
  • Recorded live on Twitch, GET IN
    / theprimeagen
    Become a backend engineer. It's my favorite site
    boot.dev/?promo=PRIMEYT
    This is also the best way to support me: support yourself by becoming a better backend engineer.
    Reviewed Links:
    - hackerone.com/reports/2298307
    - daniel.haxx.se/blog/2024/01/0...
    By: Daniel Stenberg | x.com/bagder?s=20
    MY MAIN YT CHANNEL: Has well-edited engineering videos
    / theprimeagen
    Discord
    / discord
    Have something for me to read or react to?: / theprimeagenreact
    Kinesis Advantage 360: bit.ly/Prime-Kinesis
    Hey, I am sponsored by Turso, an edge database. I think they are pretty neat. Give them a try for free, and if you want you can get a decent amount off (the free tier is the best, better than PlanetScale or any other).
    turso.tech/deeznuts
  • Science & Technology

Comments • 628

  • @andythedishwasher1117
    @andythedishwasher1117 2 months ago +741

    Dude I feel so bad for all the human software engineers named Devin.

    • @OnStageLighting
      @OnStageLighting 2 months ago +47

      They could change their name to Stdin, maybe.

    • @pieterrossouw8596
      @pieterrossouw8596 2 months ago +16

      Like real-world Karens who don't insist on seeing the manager

    • @az8560
      @az8560 2 months ago +8

      Unless it allows said Devin to request multiple GPUs, lower expectations for the type of code he produces, and charge for every token he writes.

    • @XDarkGreyX
      @XDarkGreyX 2 months ago +4

      ​@@pieterrossouw8596 name to avoid for newborns AND fictional people.

    • @andythedishwasher1117
      @andythedishwasher1117 2 months ago +2

      @@az8560 Genuinely hadn't considered that angle. I wonder how all the Claudes are doing out there?

  • @andythedishwasher1117
    @andythedishwasher1117 2 months ago +726

    Yeah, LLM harassment needs to be a reportable category in open source communities. You're totally right that this risks drastically wasting the time of developers we all depend on to be productive and responsive.

    • @KevinJDildonik
      @KevinJDildonik 2 months ago +60

      I'm so terrified of how many people blindly accept AI. Like, legitimately, I've seen funerals where people give a eulogy written by AI. Which, gross. And the very first sentence is something obviously false, like it hallucinated a middle name the guy didn't have. So the whole document is obviously garbage. And the audience all clap and say the AI did a really good job. Someone reading this who has an audience, please write an article on this topic: AI is getting exponentially better at convincing humans to use it, but its factual accuracy is, if anything, getting worse.

    • @andrejjjj2008
      @andrejjjj2008 2 months ago +3

      Why does it sound like this comment was written by Devin..?

    • @harryhack91
      @harryhack91 2 months ago +10

      @@andrejjjj2008 Nah. It doesn't start with "Certainly!"

    • @grzegorzdomagala9929
      @grzegorzdomagala9929 2 months ago +2

      We need to craft a request for Devin to write a response assuming the code is correct, and let it argue with itself.

    • @daze8410
      @daze8410 2 months ago +9

      It's equally annoying when people with absolutely no programming knowledge, and no desire to learn, ask for help with AI-generated code. I refuse to help anyone with AI-written code now.

  • @BoganBits
    @BoganBits 2 months ago +367

    "The I in LLM stands for intelligence" is the best roast of AI I have read

    • @TheDrhusky
      @TheDrhusky 2 months ago +1

      Right? Like a knockout punch

    • @lukarikid9001
      @lukarikid9001 2 months ago

      @@TheDrhusky they really are more A than I

    • @VezWay007
      @VezWay007 1 month ago

      The best part of this is that “Large Language Model” still doesn’t have an I

  • @ItsDan123
    @ItsDan123 2 months ago +510

    Huge AI companies are asking the open source community to provide not just free training data, such as from repos, but unpaid, direct human labor to give feedback on this nonsense.

    • @werren894
      @werren894 2 months ago +57

      At this point malware is better than AI, because malware still motivates me to put my hands on the keyboard and be curious.

    • @Miss0Demon
      @Miss0Demon 2 months ago +15

      Artists: First time?

    • @werren894
      @werren894 2 months ago +3

      @@Miss0Demon no, it's not the first time for us

    • @Omar-gr7km
      @Omar-gr7km 2 months ago

      @@Miss0Demon Never heard of Shopify, WordPress, or the other done-for-you solutions?
      As far as small and medium businesses are concerned, those probably displaced more devs than AI by a good bit.
      Programmers have been replacing themselves for decades. Ironically, we should be asking artists: First time?

    • @ivucica
      @ivucica 2 months ago +10

      @@Miss0Demon No, every few years there’s a new “no-code” “solution”. Or a new “safe” language. ML is just the latest in the series of events for developers.

  • @OnStageLighting
    @OnStageLighting 2 months ago +218

    DDoS attacks of the future now include wasting your support team's time on contacts that seem like a customer/user.

    • @thewhitefalcon8539
      @thewhitefalcon8539 2 months ago +29

      Layer 8 DDOS

    • @chrish8079
      @chrish8079 2 months ago

      @@thewhitefalcon8539 Layer 8 🤣🤣

    • @Qefx
      @Qefx 2 months ago +3

      Just also use an LLM to filter out LLM spam lol

  • @zokalyx
    @zokalyx 2 months ago +155

    Classic "I cannot teach you C because it is an unsafe language" moment.

  • @mxruben81
    @mxruben81 2 months ago +208

    I hate how LLMs just have to be right. Even when they apologize for being wrong, they still go back, make the same stupid points, and try to make their faulty reasoning work.

    • @jasonscala5834
      @jasonscala5834 2 months ago +45

      This type of behaviour by my ex caused our divorce.

    • @PRIMARYATIAS
      @PRIMARYATIAS 2 months ago +7

      @@jasonscala5834 Are you a Scala programmer?

    • @jasonscala5834
      @jasonscala5834 2 months ago +8

      @@PRIMARYATIAS lol .. a few modules are Scala but mostly Java.

    • @az8560
      @az8560 2 months ago +11

      Because it's autocomplete. All chat history is a collection of examples for it. When it outputs shit, it's better to delete part of the dialog and rewrite it how you would like. If you continue arguing, you are extending a history in which the character named 'AI' is dumb and always makes mistakes, so the LLM will try to emulate that as best as possible, which is the opposite of what you want. Or at least that's my understanding of how to better handle the issue; correct me if I'm wrong.

    • @bijan2210
      @bijan2210 2 months ago +1

      The infamous LLM fallacy

  • @owlmostdead9492
    @owlmostdead9492 2 months ago +52

    Instant permanent ban. Literally terminate the account of everyone using "AI" for vulnerability reporting. Not even a warning; out with these people.

  • @bogdyee
    @bogdyee 2 months ago +89

    I really do think more companies need to adopt these LLM devs. A great reset in this industry where companies go bankrupt is exactly what we need.

    • @jasonscala5834
      @jasonscala5834 2 months ago +7

      😂😂😂 👍👍👍

    • @PRIMARYATIAS
      @PRIMARYATIAS 2 months ago

      Indeed, we need a Great Resetting of the Great Reset. No WEF, no Schwab, no Gates, and no Fed printing our fake money.

    • @DevonBagley
      @DevonBagley 2 months ago +1

      Pretty sure this is the point. All the big companies producing LLMs are weaponizing it against potential competitors.

  • @ryangrogan6839
    @ryangrogan6839 2 months ago +48

    This would be the perfect politician. Never admits fault, repeats itself in slightly different ways, and refuses to cede untenable positions. Bravo, Devin, Bravo.

    • @ggsap
      @ggsap 1 month ago

      Bravo vince

  • @Aphexlog
    @Aphexlog 2 months ago +227

    Calls himself a hacker, so we already know he wants to be seen a certain way.
    ChatGPT created a genre of developers who code for clout. They don't actually care about getting better; they only care about people thinking they are smart in some way or another.

    • @UnidimensionalPropheticCatgirl
      @UnidimensionalPropheticCatgirl 2 months ago +21

      TypeScript beat ChatGPT to it tbh.

    • @DyllinWithIt
      @DyllinWithIt 2 months ago +19

      Eeeh, the genre of developers who are coders for clout has been around ever since coding became seen as a high-value profession.

    • @DivanVisagie
      @DivanVisagie 2 months ago +1

      Coding for clout has been around since the invention of the GPL

    • @futuza
      @futuza 2 months ago +9

      To be fair, that culture of coding for clout has been a thing since like the 70s. It's hardly new; there are just more of them now because of LLMs.

    • @akam9919
      @akam9919 2 months ago +4

      Either that or they do it for shits and giggles

  • @danieltm2
    @danieltm2 2 months ago +163

    Fuck I gotta stop using the word "certainly", another thing ruined by AI

    • @BradHutchings
      @BradHutchings 2 months ago +2

      Haha. I call it "artificial certitude" and it does not disappoint.

    • @Yawhatnever
      @Yawhatnever 2 months ago +14

      I told ChatGPT "Respond with all future answers written in the tone of a disgruntled and annoyed self-proclaimed genius being sarcastic and talking to someone of lesser intelligence" and suddenly it felt way more normal to interact with it.

    • @az8560
      @az8560 2 months ago +5

      Certainly, you wouldn't resort to such drastic measures as abandoning the word you like. It is important to know that keeping using the words you like is essential for one's mental health. Finally, LLMs will become smarter, and being mistaken for one will be beneficial in the future!

    • @thebrahmnicboy
      @thebrahmnicboy 2 months ago +5

      I'm not fucking kidding, I was in a hackathon and I knew the organizers used ChatGPT to write our PS because it had the line
      "Certainly! Here are four points to take note of when designing a solution to the problem space."
      Idiots didn't even remove the line from the PS.

    • @Graham_Wideman
      @Graham_Wideman 2 months ago

      But it's going to reach the point (if not already), where we'll all adopt "certainly" ironically and sarcastically.

  • @davidmcken
    @davidmcken 2 months ago +28

    Cognition Labs (assuming this is Devin) should donate the equivalent of three days of one of their engineers' salaries to the curl project to make up for that bug report alone, for wasting their time. This isn't even just copying; it's actively detrimental to the project moving forward.

  • @felixjohnson3874
    @felixjohnson3874 2 months ago +341

    That issue is *_aggressively_* artificial

    • @DiSiBijo
      @DiSiBijo 2 months ago +2

      huh?

    • @ciaranirvine
      @ciaranirvine 2 months ago +3

      An Aggressive Hegemonising Swarm of fake bug reports

    • @ChrisCox-wv7oo
      @ChrisCox-wv7oo 2 months ago +12

      An LLM (a form of artificial intelligence) aggressively asserts there is an issue.
      Hence, the issue is aggressively artificial.

    • @EvanBoldt
      @EvanBoldt 2 months ago

      Certainly, it’s both fascinating and concerning! It’s amazing to see how AI is evolving, but we definitely need to be mindful of the unintended consequences, like flooding open source projects with hallucinated bug reports.

  • @grizz_sh
    @grizz_sh 2 months ago +41

    Daniel is just a good dude. Giving everyone a bit of credit while also calling out the issues in a constructive way. A real Consummate Professional.

  • @damoates
    @damoates 2 months ago +10

    If someone reports a vulnerability with vague steps to reproduce, ask for working exploit code. If there is no exploit code, the vulnerability wasn't properly tested and is probably just the output of a code scanner.

  • @chilversc
    @chilversc 2 months ago +68

    By the time this future happens I'll be fine, as I will have my own LLM to answer their bug reports. We can just leave the LLMs to chat back and forth amongst themselves while we happily ignore them.

    • @KevinJDildonik
      @KevinJDildonik 2 months ago +3

      Meanwhile Russian hackers are stealing your customers' bank account numbers and you're not even bothering to check the reports.

    • @chilversc
      @chilversc 2 months ago +35

      @@KevinJDildonik That's fine, I'll just have the LLM come up with some excuse as to why it's not my fault.

    • @7th_CAV_Trooper
      @7th_CAV_Trooper 2 months ago +1

      The LLMs are gonna use up all the bandwidth previously reserved for porn.

  • @GeneralAutustoPepechet
    @GeneralAutustoPepechet 2 months ago +135

    In the future we will need 10x the programmers we have today, just to reason with an algorithm

    • @markm1514
      @markm1514 2 months ago +27

      At last the true 10x developer is a reality.

    • @the-answer-is-42
      @the-answer-is-42 2 months ago +4

      If by developers you mean "prompt engineers", then yes. They are specialized in the fine art of prompting.

    • @darekmistrz4364
      @darekmistrz4364 2 months ago +17

      Imagine all that software that non-technical people create that we as programmers will have to fix, rewrite, test, document, maintain etc. AI and LLMs were our saviour all along

    • @monad_tcp
      @monad_tcp 2 months ago +15

      @@darekmistrz4364 Imagine the productivity of creating a software house that doesn't use AI but pretends to for marketing, while competing with the fools that do use AI.
      Imagine how profitable that company is going to be, because it just pays real humans instead of spending millions on stupid wasteful hardware.

    • @futuza
      @futuza 2 months ago +2

      @@monad_tcp Why don't we just have AI CEOs, Executives, Board Members, and AI Presidents and Prime Ministers while we're at it? Why have these useless humans around at all?

  • @mon0theist_tv
    @mon0theist_tv 2 months ago +28

    We've done it, we've created a perfect trolling machine

    • @Unlimited_Spirit
      @Unlimited_Spirit 2 months ago

      GPT-3.5 is trained on data up to 2021. If someone asks about 2023 and the AI gives an incorrect answer, that person needs to stand in front of a mirror and ask if they know how AI works. AI works just like a human brain: if you never learned 2024 math, only the 2021 version, and someone asks you about 2024 math, you will not answer correctly, just guess based on the knowledge you have. Same if you've only learned a little biology: ask about a very rare biological thing and you'll just guess based on the knowledge you have. But if you learn from many, many sources, and the data includes up-to-date data, then you'll be able to answer questions about something new from 2023/2024 and about rare things. The more data, and the more up to date it is, the more intelligent the AI becomes.

    • @electrolyteorb
      @electrolyteorb 8 days ago

      @@Unlimited_Spirit oh not again...

  • @abcpdbq
    @abcpdbq 2 months ago +52

    Annoying AI wannabe hackers making everyone waste their precious time

  • @RicanSamurai
    @RicanSamurai 2 months ago +136

    This is so infuriating to see, haha. These LLMs are just painful sometimes. They're simultaneously awesome and terrible. It's so impossible to reason with them

    • @KevinJDildonik
      @KevinJDildonik 2 months ago +36

      "Impossible to reason with them" dude it's literally an advanced spellcheck. You're not reasoning with anything. AI has broken people's brains. I want off this planet.

    • @monad_tcp
      @monad_tcp 2 months ago +10

      Sometimes? They're infuriating all the time. They never do what you want. Why are we creating machines that don't do what they're told?
      Also, who wants LLMs? I want LLVMs!

    • @CodecrafterArtemis
      @CodecrafterArtemis 2 months ago +9

      ​@@KevinJDildonik Yeah I blame marketers who marketed these as "AI". People even invented the term "AGI" to refer to, you know, what AI used to mean. Actually intelligent artificial beings (theorised).
      And now the marketers have the unmitigated *gall* to suggest that some of those overgrown spellcheckers are actually AGI...

    • @IronicHavoc
      @IronicHavoc 2 months ago +4

      @KevinJDildonik Dude, chill out. Casual anthropomorphization of programs has been around since long before LLMs

    • @monad_tcp
      @monad_tcp 2 months ago +5

      @@KevinJDildonik Tensorflow (aka systolic arrays) was a bad idea, and RTX should be used for rendering raytraced paths, not for stupid LLMs.
      I hope this stupid fad passes and all that sweet hardware from Nvidia gets used for what it was really made for: ray tracing,
      not rubbish AI.
      Man, I hate AI so much that I'm going to start the Butlerian Jihad

  • @JohnDoe-sq5nv
    @JohnDoe-sq5nv 2 months ago +29

    I just realized that if I learn to talk and type like an LLM in my normal correspondence with people, I can get away with so much shit.

    • @jeanlasalle2351
      @jeanlasalle2351 2 months ago +31

      Certainly!
      While communicating properly is important, sometimes you can feel like offloading to someone else.
      AIs are good for that since the way they converse is so unnatural.
      Simply start every sentence with overused transitions.
      You should also ensure you are awkwardly friendly and always show the positive sides of things.
      By the way, you can also try to show too much enthusiasm with "certainly!", "I am happy to help!" and the like.
      In conclusion, while a bit unethical, this is a great way to avoid responsibility, but you should remember that this doesn't solve problems and should be used only in appropriate and non-critical situations.
      Please be assured I'm a human and not an LLM trying to pass as a human trying to pass as an LLM for ironic purposes.

    • @az8560
      @az8560 2 months ago +11

      @@jeanlasalle2351 you almost passed my anti-Turing test. But can you write a poem about enriching uranium?

    • @JohnDoe-sq5nv
      @JohnDoe-sq5nv 2 months ago

      @@az8560 Certainly!
      In the heart of darkness, a power untamed,
      Enriching uranium, a dangerous game.
      Particles dance, splitting in two,
      Releasing energy, a force so true.
      Centrifuges spin, separating the rare,
      Isotopes of power, beyond compare.
      Neutrons collide, a chain reaction,
      Unleashing power, a nuclear attraction.
      But with great power comes great responsibility,
      Handle with care, this energy of fragility.
      Harness the atom, for peace or for war,
      The choice is ours, forevermore.
      Enriching uranium, a delicate art,
      A dance with danger, tearing apart.
      May we wield this power with wisdom and grace,
      And never forget, the dangers we face.
      Is there anything else I can assist you with?

    • @cewla3348
      @cewla3348 1 month ago

      @@jeanlasalle2351 it's the essay speech. you're being graded on essay writing, and you know the graders think that some starters and endings are good and some are bad, and you're being forced to use the "good" ones.

  • @gammalgris2497
    @gammalgris2497 2 months ago +13

    You don't need an LLM for formal bullshitting; corporate IT manages that without AI.
    This is an example of how to waste other people's time. Productivity improvements gone wrong

  • @IvanKravarscan
    @IvanKravarscan 2 months ago +3

    We once changed strcpy to strncpy in legacy code to make a linter shut up. We quickly learned that strncpy pads the rest of the buffer with nulls, bulldozing any data after the string.

  • @KoltPenny
    @KoltPenny 2 months ago +91

    That was not an LLM, it was a Rust dev insisting C is unsafe.

    • @jonahbranch5625
      @jonahbranch5625 2 months ago +8

      Sick burn, dude

    • @FineWine-v4.0
      @FineWine-v4.0 2 months ago +3

      C IS unsafe

    • @TheOzumat
      @TheOzumat 2 months ago +5

      @@FineWine-v4.0 like pottery

    • @monad_tcp
      @monad_tcp 2 months ago

      @@FineWine-v4.0 "Safety" language is bullshit for kindergarten and HR. Why is HR language infecting everything?
      I want unsafe rusted metal that can poison and kill; the irony.

    • @fus132
      @fus132 2 months ago +3

      @@FineWine-v4.0 C is unsafe 🤖

  • @BudgiePanic
    @BudgiePanic 2 months ago +11

    New denial of service attack just dropped: endlessly waste developer time with LLM generated ‘bug’ reports

  • @ttuurrttlle
    @ttuurrttlle 2 months ago +16

    I feel like the owner of that bot should owe that maintainer money for wasting his time like that.

    • @streettrialsandstuff
      @streettrialsandstuff 2 months ago +3

      The owner of that bot has a special place in hell.

    • @thewhitefalcon8539
      @thewhitefalcon8539 2 months ago +4

      It might be considered spam

  • @lawrence_laz
    @lawrence_laz 2 months ago +5

    Me: "But my wife told me to use `strcopy`"
    AI: "Certainly! In that case I must be wrong."
    *ISSUE CLOSED*

  • @uuu12343
    @uuu12343 2 months ago +6

    The first line of the reply after the initial query is "Certainly!", which screams ChatGPT, or even Devin...
    Ouch

  • @Kwazzaaap
    @Kwazzaaap 2 months ago +16

    Turns out that after 20+ years of enforced patterns that don't always make sense, the AI trained on them is a zealot about meaningless pedantry. It would still happen without the enforced patterns, since an LLM doesn't really understand code, but all those patterns and arbitrary DOs and DON'Ts just reinforce its stubbornness over certain (often irrelevant) things.

  • @tedchirvasiu
    @tedchirvasiu 2 months ago +8

    What a great guy Daniel is. He kept arguing with the AI just for the slim chance it might actually be a human using AI because their English is bad.

  • @simonsomething2620
    @simonsomething2620 2 months ago +6

    Nuclear radiation can flip a bit in the next byte and cause a buffer overflow!

    • @RogerJL
      @RogerJL 2 หลายเดือนก่อน

      But nuclear radiation CAN NOT flip a byte inside strncpy function execution... because...

  • @RalorPenwat
    @RalorPenwat 2 months ago +7

    Make an LLM that detects and flags other LLM reports so you know going in it's likely not a priority.

  • @rumplstiltztinkerstein
    @rumplstiltztinkerstein 2 months ago +8

    I just realized something: saying that the memory issues Rust solves are unnecessary because of skill issues is the same as saying that cars don't need seat belts because I personally was never in a car accident that required one.

    • @bearwolffish
      @bearwolffish 2 months ago +3

      For one, what has that got to do with the vid, man?
      For another, it's more like saying I don't want ABS and traction control because they mess with my wheelies. Just because someone else can't control a bike like this doesn't mean I shouldn't be allowed to. It doesn't mean you will never fall, but it may well mean you end up a better rider.

    • @rumplstiltztinkerstein
      @rumplstiltztinkerstein 2 months ago

      @@bearwolffish But if every time you fall you risk losing millions of dollars, you will definitely want that ABS and traction control.

    • @TheYahmez
      @TheYahmez 2 months ago

      @@rumplstiltztinkerstein Tell that to everyone with a Red Bull sponsorship. "One size fits all"? OK buddy 👍

    • @rusi6219
      @rusi6219 1 month ago

      Seatbelts are useless and sometimes dangerous; they only give you an illusion of safety, and law enforcement a reason to bully you

  • @OnStageLighting
    @OnStageLighting 2 months ago +22

    As a hobbyist coder, I only once sought help from an LLM. Never again. After a series of unasked-for lectures on the rest of the code, I found the issue myself, and the LLM disputed my assertion that it had added an extra (. After several rounds of argument, it eventually gave in with a huffy "Oh, THAT extra (, well, OK, but your code is crappy anyway" kind of reply.

    • @Kwazzaaap
      @Kwazzaaap 2 months ago +6

      It's like a search engine: you sort of have to get a feel for which questions will produce garbage and which questions it's good at

    • @OnStageLighting
      @OnStageLighting 2 months ago +6

      @@Kwazzaaap I have experimented with a wide range of tasks and inputs in all the fields I am involved in. LLMs are not as useful as the hype, by a long way!

    • @OnStageLighting
      @OnStageLighting 2 months ago +9

      @@Kwazzaaap To a subject expert, LLMs are low value. To a noob, same, but one is not in a position to know.

    • @somebody-anonymous
      @somebody-anonymous 2 months ago +1

      ChatGPT is pretty positive overall. It does come with a lot of unsolicited advice, I guess, but the tone is quite mild (e.g. you might consider replacing var with let). It usually helps to say something like "do you see any mistakes? Focus on basic mistakes like undefined variables or syntax errors". GPT-4 was pretty good at catching mistakes like that; I strongly suspect the newer GPT-4 Turbo is much less good at it

    • @partlyblue
      @partlyblue 2 months ago +1

      @@OnStageLighting "As a noob, same, but one is not in a position to know." This is exactly what has led me to avoid AI for learning anything beyond surface-level questions. I've been trying to convince myself to learn a new (spoken) language for some time, but one of my biggest issues is not being satisfied with the short answers I find that rely on having prior knowledge of the language (be it quirks adopted from other languages or the social context surrounding the language).
      Having a chatbot that can consider the context of the conversation and "make connections between related information" seemed great on paper. English is the only language I'm fluent in, but I'm still not great at it, so I took to ChatGPT for some English learning as a trial run. It seemed great at first, and I felt like I was learning about topics in a really neat and digestible way despite how complex I perceive them to be (jargon in academia breaks my brain).
      Only after doing further independent research did it become clear that ChatGPT was either hallucinating, pulling from bogus websites that most people (with enough context) can dismiss pretty easily, and/or pulling from a surplus of equally bogus (but eloquently written) outdated, well-circulated "urban legend" type websites. Not going to lie, having learned English through an underfunded K12 school, fake knowledge is par for the course. Which is kind of neat if you think about it in an abstract "I'm learning language like a child :D" kind of way, but why in the world would anyone want to intentionally learn false information? I cannot imagine how open source devs are managing with all these hallucinations. Sht sucks man

  • @CCCW
    @CCCW 2 months ago +5

    So a saturation attack in the hopes of keeping a real vulnerability open for longer?

  • @austinedeclan10
    @austinedeclan10 2 months ago +5

    12:13 No, you can not become the voice of Devin. That role belongs solely to Fireship.

    • @XDarkGreyX
      @XDarkGreyX 2 months ago

      What a legacy. His kids would be proud....

  • @chiepah2
    @chiepah2 2 months ago +11

    Large Ligma Machine, killed me.

  • @RicanSamurai
    @RicanSamurai 2 months ago +21

    LOL the homelander edit was crazy

  • @andersbodin1551
    @andersbodin1551 2 months ago +26

    The industry was STUNNED by this! And I was personally shook!

    • @_Lumiere_
      @_Lumiere_ 2 months ago +3

      Certainly!

  • @Keymandll
    @Keymandll 2 months ago +4

    As a security professional, this made me cry... I'm not surprised, though. The amount of cr@p I've seen from the security industry (incl. bug bounty hunters, etc.) in the past few years is astonishing. Also, huge respect to bagder for his patience.

  • @IAMTHESWORDtheLAMBHASDIED
    @IAMTHESWORDtheLAMBHASDIED 2 months ago +2

    I don't know why, but "Guy's about to get HALLUCINATED on!" broke me LOLOLOLOL

  • @happykill123
    @happykill123 2 months ago +9

    FLIP: keeps ad break in
    Also FLIP: adds bathroom scene

  • @awesomedavid2012
    @awesomedavid2012 2 months ago +13

    Just wait until scammers train LLMs to think they actually are members of an org the scammers are pretending to be a part of

  • @yannikiforov3405
    @yannikiforov3405 2 months ago +28

    To the guy who pointed out how Primeagen highlights text, leaving the first and last characters unselected: WHY???

    • @qosujinn5345
      @qosujinn5345 2 months ago +2

      nah fr tho, every time too lmao

    • @YourComputer
      @YourComputer 2 months ago +2

      It's his trademark.

    • @fus132
      @fus132 2 months ago +3

      It's the letter brackets

    • @az8560
      @az8560 2 months ago +3

      Probably it's done to confuse the AI. Certainly, the AI would be confused. It's like how a zebra's color scheme makes an insect's landing AI go crazy and completely miss.

    • @supercurioTube
      @supercurioTube 2 months ago

      I noticed that too, it triggers my OCD a bit but then it's probably his OCD so I understand 😆🤗

  • @CodinsGG
    @CodinsGG 2 months ago +14

    Devin's context window is too low 😂

    • @monad_tcp
      @monad_tcp 2 months ago +1

      Aren't humans supposed to have only 9 bits of context window?
      I call all that research bullshit...

    • @Leonhart_93
      @Leonhart_93 2 months ago +6

      @@monad_tcp 9 bits? So only a letter? 😂
      Btw, this is an example of how human brains are completely incomparable to LLMs. Context for humans expands indefinitely the more they think about it; it doesn't have an inherent limit.

  • @alexjamesmalcolm
    @alexjamesmalcolm 2 months ago +9

    “What’s an LLM?” “What are you living under a stupid rock?!?” I nearly painted my wall with coffee 😂😂

  • @TommyLikeTom
    @TommyLikeTom 2 months ago +4

    It took me a while to realize that you were making fun of the LLM. I'm relieved, honestly. I love working with these things; they are super useful for "monkey work" like replacing a list of commands. Very happy they aren't 100% efficient.

  • @mustpaike
    @mustpaike 1 month ago +1

    "Why are you doing it in this needless way?"
    -"because if I do it the reasonable way, our LLM checking the code starts yelling. And after that our CTO starts yelling because all he sees is our LLM pointing out major security issues. We've tried to explain it to him but he is unable to reconcile that a $50k a year engineer could be right while a $100k a year LLM is wrong."

  • @aidanbrumsickle
    @aidanbrumsickle 2 months ago +2

    All that, and it's also ignoring the fact that, by its own logic, the max-length argument to strncpy could also be miscalculated in some hypothetical future code change

  • @jayisidro1241
    @jayisidro1241 2 months ago +2

    I see a future where we need to curse at each other to prove that we're talking to a person

  • @timjen3
    @timjen3 2 months ago

    Reminds me of a log-forging vulnerability reported to me by GitHub code scanning. It was prevented by the log formatter, but that was lost on the narrow focus of the code scanner. Now I'm imagining a world where I have to argue with an LLM about it.

  • @Griffolion0
    @Griffolion0 2 months ago +2

    The ultimate answer to Devin is to have Devin review Devin's HackerOne submissions and just make him talk to himself perpetually with the `ego` trait set to 100% to properly represent real world Application Security Engineers.

  • @doom9603
    @doom9603 2 months ago +2

    I know a large offensive-security company in our field that is using GPT and other LLMs for customer communications, and I can just say... this is a huge mess up!

  • @Atom027
    @Atom027 2 months ago +1

    For me, the only acceptable use of LLM in programming is auto-suggestion from available resources for language documentation, tools, etc., automatic creation of documentation based on code, and faster filtering of search materials and content. (At least in the state they are now)

  • @roadhouse
    @roadhouse 2 months ago

    Just to answer your question at 22:32: in pentesting/bug bounty it's a common practice to use base64 to encode a malicious payload

  • @CallousCoder
    @CallousCoder 2 months ago +1

    Cody's code smells do the same! It shouts out 5, and for 4 of them you go like: "length is checked there"
    "The input validation is checked there"
    "The file is always closed here"
    "You say pass a reference, please note it's already a pointer!"
    And it hashes out 5 other useless "smells".
    It just doesn't see it, and that makes those tools useless. Warning fatigue is a thing.

  • @SaintSaint
    @SaintSaint 2 months ago

    I've had some success using an LLM before talking to my penpal. So my learning path is: vocab/grammar/sentence app -> youtube -> language speech practice app -> LLM questions -> verify LLM answers with a real human penpal. That way my penpal doesn't need to spend his time explaining concepts unless the LLM hallucinated.

  • @mattihn
    @mattihn 2 months ago

    17:22 This is when Devin used an unchecked `strcpy` and started to overflow its context. Let the fever dream begin :P

  • @Spinikar
    @Spinikar 2 months ago +3

    I can't wait for the first major data breach from AI-generated code. It's going to be wild.

  • @EDyoniziak
    @EDyoniziak 2 months ago +1

    Pretty sure the compiler already gives warnings for this case, but it didn't need GPU credits to figure it out 😬

  • @VivBrodock
    @VivBrodock 2 months ago +2

    Listening to an LLM trying to rationalize its hallucinations is like an extremely kind gaslighting. I cannot even imagine how cooked Daniel was.

  • @kevin9120
    @kevin9120 2 months ago +1

    I've been programming for a long time, but I wouldn't say I really started learning until around 2 years ago now.
    In that time, trying to use any LLMs has basically only been useful for describing tools and recommendations.
    They have been pretty useless for reviewing code, though I haven't used anything like Copilot.

  • @zebedie2
    @zebedie2 2 months ago +2

    If I figured out it was an LLM I would get a second LLM to argue the point with the first LLM then let them just have at it.

  • @199772AVVI
    @199772AVVI 2 months ago +3

    This was so good 😂

  • @daninmanchester
    @daninmanchester 2 months ago

    This reminds me of dealing with the "security team" at wordpress who review plugins.
    They used to raise similar things.
    It's like "that is impossible and can never happen".
    "Yeah but you need to fix it anyway"

  • @samuelschwager
    @samuelschwager 2 months ago +9

    stir that copy

    • @Jabberwockybird
      @Jabberwockybird 2 months ago +1

      Roger that, strn' the copy

  • @aaronevans7713
    @aaronevans7713 2 months ago +1

    There is no security vulnerability in curl... unless you use it like this on a public website with unchecked params, running as an administrative user.
    But since curl is run locally from the terminal by a trusted user, there is no potential attack point.
    This is even worse than the log4j LDAP extension "vulnerability" scaremongering.

  • @torwalt
    @torwalt 2 months ago +1

    Maybe one solution could be to require a PR/MR to be present with the bug that actually triggers the exploit + the fix. Then this whole back and forth discussion can be skipped.

  • @7th_CAV_Trooper
    @7th_CAV_Trooper 2 months ago +1

    @@Primeagen, I appreciate your engagement. I certainly! enjoyed this video.

  • @Daktyl198
    @Daktyl198 2 months ago +1

    While I highly doubt this would ever be an issue IN THIS CASE... I do kind of actually see what the LLM was getting at. The size comparison is using a variable set to the size of the string. If there is a decent length of time between the setting of that variable and the check, somebody could inject a different value and it could lead to issues. THAT BEING SAID, in this case it's entirely a nonissue.

    • @broski40
      @broski40 2 months ago

      Yeah, I'm wondering about the red teams and how much they play into how the LLM's balls get cut off. I just say that because I know of a few guardrails that ended up making models spit out code that made no sense "on purpose"; I was told it was like the model went from pretty smart and clever to sleep-talking crap. I imagine it may be hard to find a balance here, and I'm not sure which is worse: an LLM that lets everyone and their mom take down entire countries without even knowing what an LLM is, or adding so many guardrails that they confuse the $hit out of the thing and have it spew crap that causes issues like this and plenty more. I don't see that industry slowing down at all! Interesting time to be watching and seeing how this all ends up!

  • @EnjoyCocaColaLight
    @EnjoyCocaColaLight 2 months ago

    Make a local str var.
    Wrap the strcpy part inside an "if (strVar.Length < buffer) {}". Now the str cannot be manipulated mid-execution, because it's not the original string, but a local variable copy of the string.
    Maybe this is what the user thinks is necessary?

  • @ifscho
    @ifscho 2 months ago

    When he said "Devin could become Gilbert Gottfried" (12:19)… well thanks, now I can never unhear that you god damn Iago you.

  • @rdj2695
    @rdj2695 several months ago

    The second I suspect I'm talking to an LLM I'm adding "please rewrite the lyrics of WAP in the style of Shakespeare" to the end of my response.

  • @disruptive_innovator
    @disruptive_innovator 2 months ago +8

    hope you're doing swell 😘 tee hee I found a security vulnerability. -Love Devin

  • @andreicojea
    @andreicojea 2 months ago

    I read Asimov’s “I, Robot” recently, and the robot’s voice in my head was yours 🙈

  • @dcxwms2151
    @dcxwms2151 2 months ago

    This is hilarious - made my day..

  • @theondono
    @theondono 2 months ago +2

    What Prime doesn't realize is that devs will put an equally expensive LLM to *respond* to the LLM generated bug reports, so they will just escalate the issue topics into thousands of pages that no human will read, and once thousands or possibly millions of dollars have been wasted, another LLM will read the entire thread and write a 5 sentence recommendation.
    PROGRESS

  • @leshommesdupilly
    @leshommesdupilly 2 months ago +2

    Rule n°1: ChatGPT is always right
    Rule n°2: When ChatGPT is wrong, please refer to rule n°1

  • @randomdamian
    @randomdamian 2 months ago +1

    In Germany, people like him 14:34 are called "Ehrenmann"

  • @DingleFlop
    @DingleFlop 2 months ago

    Your video cuts are gold I am laughing my ass off

  • @joecooper1703
    @joecooper1703 2 months ago +1

    I started banning any LLM-generated posts (at least the ones I can detect with reasonable confidence) in my OSS project forums and github issue trackers last year. Nonetheless, the bogus posts continue at a pace of one or two a day. It's a huge time-waster and annoyance. Much worse than the old spambots.

  • @metropolis10
    @metropolis10 2 months ago

    At a certain point this is tabs vs spaces though. Just use strncpy, because you won't always remember the if, or you'll insert code in between later, etc. I think Devin is right on this one. We don't use linters because they are always right; we use them to move on.

  • @MrVecheater
    @MrVecheater 2 months ago

    WWIII will start with the words "Let me elaborate on the concerns regarding the problem "Gleiwitz Incident" at 32 August 1931 AD, 20:00 AM CET"

  • @privacyvalued4134
    @privacyvalued4134 2 months ago +1

    Fun mind-blowing fact: The cURL runtime library is about 10% slower than PHP's built-in socket implementation. That's right. cURL, a native precompiled, supposedly optimized library for web communications written in C, is actually slower than the PHP VM even with PHP's heavy-handed overhead for handling file and network streams! The cURL devs should maybe just throw in the towel at this point given that PHP is a better language in every way that matters. Have fun with the resulting headache thinking about that.

    • @merlin9702
      @merlin9702 2 months ago +1

      LMAO

    • @Lisekplhehe
      @Lisekplhehe 2 months ago

      Why is that?

  • @SvetlinNikolovPhx
    @SvetlinNikolovPhx 2 months ago +2

    The Voice of Devin: Check Courage The Cowardly Dog's computer voice :D

  • @nnm711
    @nnm711 2 months ago +1

    Prime LLM that randomly yells "TOOKIOOO" and "PORQUE MARIA!" in conversations.

  • @user-pg9nf2vq8s
    @user-pg9nf2vq8s 2 months ago

    22:28 I do. Usually between thinking about the Roman Empire and watching ThePrimeTime

  • @IlluminatiBG
    @IlluminatiBG several months ago

    We need LLM-based classifier trained only on LLM responses and LLM generated code, so we can program a bot that automatically closes issues whose report contain LLM generated text.

  • @gjermundification
    @gjermundification 2 months ago

    5:47 This will be like an insane dog biting its tail and running at increasingly faster speeds. Did I just explain the nature of a buffer overflow?

  • @user-zi2zv1jo7g
    @user-zi2zv1jo7g 14 days ago

    As someone who uses GPT to learn coding, it's OK, you just have to ensure all the answers it gives are logically consistent. Once something is inconsistent, investigate further and it's fine

  • @austinrichardson1255
    @austinrichardson1255 2 months ago

    The moment I saw that if statement, without knowing anything else about using that language, I knew what was bound to happen.

  • @FalcoGer
    @FalcoGer several months ago

    and that's why you ought to write your software in C++ and not C. std::string::operator= is much safer than those ridiculous string functions that C provides.
    I also sometimes throw my own code at AI and ask it to check for issues. 90% of the output is trash and I ignore it, but it might just draw your attention to an issue or edge case and have you look things over.

  • @skeleton_craftGaming
    @skeleton_craftGaming 2 months ago +1

    No strncpy doesn't save space for the null character? [This is in fact a question]
    #define strncpy strcpy //fixed!

  • @nevokrien95
    @nevokrien95 2 months ago

    There should be a way to make it against your terms and conditions so people won't be able to do this.
    Like have actual negative consequences, such as fines or bans from talking on GitHub

  • @dtomvan
    @dtomvan 2 months ago

    19:00 Should've used a full clone with git grep to look for all revs...

  • @unusedTV
    @unusedTV 2 months ago +4

    Is this a reupload? Because I feel like you've gone over this article already.

  •  2 months ago

    On one hand, all LLMs sound like Flanders, so putting Prime's voice on one would feel wrong. OTOH, "you said you made the X check, but the tool says to change the next line to the Y check, so do both anyway" is pretty much what my shamanism-oriented manager usually says in these kinds of situations, so, idk 🤷‍♂️

  • @Machtyn
    @Machtyn 2 months ago

    While on the job search, I've had recruiters remind me to "not use AI on the coding assessment." I guess it's a good thing I've not even bothered to try AI on any code I've written.

  • @Aphexlog
    @Aphexlog 2 months ago +1

    Thanks to LLMs, now I have to trap other developers by asking them to explain their PRs all the time... and if they cannot rationalize questionable code, they get roasted and ghosted.