OpenAI o1 Released!

  • Published Sep 17, 2024
  • Recorded live on twitch, GET IN
    Links
    - • Coding with OpenAI o1
    By: / @openai
    - openai.com/ind...
    My Stream
    / theprimeagen
    Best Way To Support Me
    Become a backend engineer. It's my favorite site
    boot.dev/?prom...
    This is also the best way to support me: support yourself by becoming a better backend engineer.
    MY MAIN YT CHANNEL: Has well edited engineering videos
    / theprimeagen
    Discord
    / discord
    Have something for me to read or react to?: / theprimeagen
    Kinesis Advantage 360: bit.ly/Prime-K...
    Get production ready SQLite with Turso: turso.tech/dee...

Comments • 736

  • @fiendishhhhh
    @fiendishhhhh 4 days ago +775

    I for loop therefore I am.

    • @TPF00T
      @TPF00T 4 days ago

      I while loop therefore I am?
      while (i == alive) {
          cout << "therefore I am";
      }

    • @djcardwell
      @djcardwell 4 days ago

      you think an ai model is a for loop therefore you're an idiot.

    • @blarghblargh
      @blarghblargh 4 days ago +20

      I for therefore I loop

    • @abh1yan
      @abh1yan 4 days ago +3

      There I for loop

    • @taesheren
      @taesheren 4 days ago +1

      for what?

  • @providenceifeosame6126
    @providenceifeosame6126 3 days ago +139

    openai rebranded latency as “thinking”?
    9-dimensional chess my bro.

    • @flipperiflop
      @flipperiflop 2 days ago +2

      Well, it's not that new of a thing to say. How many times has your dad said that the "computer is thinking" while it was loading something?

    • @TheGuillotineKing
      @TheGuillotineKing 1 day ago +2

      But that's exactly what it's doing. It's not latency: when it's "thinking" it's not sending any data, it's doing calculations.

  • @jorgeo4443
    @jorgeo4443 4 days ago +410

    The whole "humanizing" thing of AI is for marketing, ease of adoption and as first legal deterrent:
    -Our AI does not go through its processes... our AI *thinks*
    -Our AI is not trained on scraped and stolen data... our AI gets *inspiration* from creations
    -Our AI does not contain errors or commit actions for which we should be held accountable... our AI makes little innocent *mistakes* like everyone
    they know what they are doing

    • @rickymort135
      @rickymort135 4 days ago +15

      How do you know human L2 thinking isn't just for loops, or human creativity isn't just a byproduct of training our biological neural nets on copyrighted creations? The thing is, they could literally create an actually intelligent human brain trained on the internet and you would complain it learned from stuff other people created, yet you don't apply that standard to human artists

    • @tiredcaffeine
      @tiredcaffeine 4 days ago +3

      @@rickymort135 No one mentioned anything about art.

    • @rickymort135
      @rickymort135 4 days ago +9

      @@tiredcaffeine look up the hierarchy of disagreement, you're in the lower middle somewhere. Swap artists for creators if it makes you happy

    • @XQzmeeMusic
      @XQzmeeMusic 4 days ago +3

      I think humans still first think of or visualize a concept and extrapolate from there. An AI, instead, has no concept of concepts, only words strung together by letters and the next word that probably follows.

    • @quantum5768
      @quantum5768 4 days ago

      @@rickymort135 That's a bold claim there, the burden of proof is on you to prove that human thinking is just for loops. You can't make a claim and ask someone else to prove you wrong, that's not how that works, it's on you to provide evidence for your claim.
      There is however plenty of evidence to prove your assertion wrong anyway, in order to get close to simulating a biological neural net, you need 1 CPU per net. Whereas 1 neural net node in AI is a simple weighting. That's not just an order of magnitude difference in complexity, that's an entire paradigm shift in complexity, and that's just for a single node. Biological computers are way beyond current artificial "intelligence" models, to the extent that we can't even simulate smaller mammal brains for mice yet (although we're getting there). Take a look at the SpiNNaker project for insight to human attempts to simulate biological neural nets if you want more detail.

  • @maniacZesci
    @maniacZesci 4 days ago +57

    Hype after hype after hype after hype.

    • @rock3tcatU233
      @rock3tcatU233 4 days ago +5

      No hype here son. This model is good enough to make software devs obsolete.

    • @EvomanPl
      @EvomanPl 4 days ago +34

      @@rock3tcatU233 Sure, just like the previous one, and the one before and…

    • @JP-jf1oc
      @JP-jf1oc 3 days ago

      @@rock3tcatU233 There are millions of people that still have their jobs....

    • @realkyunu
      @realkyunu 3 days ago +12

      @rock3tcatU233 We are now in the 28th iteration of tech making its creator obsolete. Doesn't that shit get boring?

    • @andrewyork3869
      @andrewyork3869 2 days ago +2

      @@rock3tcatU233 lol you have literally no clue what you are talking about, do you?

  • @CaptTerrific
    @CaptTerrific 4 days ago +55

    I just spent 6 hours yesterday using o1 to build a C++ allocator - it worked SPLENDIDLY after enough back-and-forths... something I could have coded in 90 minutes including unit tests

    • @my_online_logs
      @my_online_logs 4 days ago +5

      if its true then its your skill issue in using it because i just tried it and easy

    • @CaptTerrific
      @CaptTerrific 4 days ago +1

      @@my_online_logs Yes, it'll crank out a toy allocator based on college coursework posted to github zero-shot. Building something useful is quite different :)

    • @my_online_logs
      @my_online_logs 4 days ago +3

      @@CaptTerrific but i just build it using o1, so it shows your own skill problem

    • @pxkdrake
      @pxkdrake 4 days ago +1

      @@my_online_logs don't tell people it's a skill issue when you can't even type english properly mouthbreather. back to your esl class.

    • @carmen_13
      @carmen_13 3 days ago +1

      @@my_online_logs o1 is only good at solving toy problems in python/js.

  • @kc12394
    @kc12394 4 days ago +186

    Great, now not only do you use enormous amounts of energy to train the model you also need a literal sun to power the inference chain. What a great idea!

    • @maidenlesstarnished8816
      @maidenlesstarnished8816 4 days ago +44

      Fun fact, using this model in the API costs 100x more than 4o does haha

    • @VivekYadav-ds8oz
      @VivekYadav-ds8oz 4 days ago +5

      I think you can use it like 5 times a week even when it's paid for 😂 insane

    • @my_online_logs
      @my_online_logs 4 days ago +4

      why the problem? of course better model need better power. the hate against ai is just stupid

    • @kiase978
      @kiase978 4 days ago +1

      Nuclear power

    • @kendrit4885
      @kendrit4885 4 days ago +5

      Great, maybe this will force the mass adoption of nuclear.

  • @FOF275
    @FOF275 4 days ago +358

    I love how ChatGPT being slower is being sold as a feature. "Oh, it's just thinking" 😂

    • @RillianGrant
      @RillianGrant 4 days ago +121

      Being able to trade quantity for quality is a significant feature.

    • @docgraal485
      @docgraal485 4 days ago +34

      You must not think a lot. There is a difference between outputting answers that it has been trained on and actually coming up with your own answer.

    • @FOF275
      @FOF275 4 days ago +8

      ​@RillianGrant true, but it's still funny

    • @IronMeDen1
      @IronMeDen1 4 days ago

      @@docgraal485 it's adding extra steps, it's not reasoning

    • @chrisjsewell
      @chrisjsewell 4 days ago

      @@docgraal485 no, this is wrong, it's still using data it was trained on, it has not magically started coming up with its own ideas

  • @hamm8934
    @hamm8934 4 days ago +174

    I'm glad you mentioned the tech-as-mind metaphor stuff, i.e. the steam engine example. This is rife in the CS field, to the point that people think graph nodes are analogous to actual biological neurons. It's so goofy if you know anything from a neuroscience 101 class lol

    • @HobbitJack1
      @HobbitJack1 4 days ago +27

      It's a good way to describe it to a CS major, and at the very least it helps new students learn the ideas, but analogies mustn't be taken too far.
      In this case it ends up giving the comical result of making it sound like human brains are just a bunch of numbers.

    • @nuvotion-live
      @nuvotion-live 4 days ago +6

      I think it’s fine as long as you understand our tendency to do it. “I need to recharge”. Not harmful

    • @maimee1
      @maimee1 4 days ago +4

      Biology sometimes uses programming as metaphor as well though. Well, maybe not that common.

    • @sandrin0
      @sandrin0 4 days ago +13

      The amount of nonsense I started hearing about language processing and production in humans after chatgpt was first released was honestly maddening

    • @greengoblin9567
      @greengoblin9567 4 days ago +5

      Neurons in the brain are completely different. They even go through mitosis, and exist in physical space, and are impacted by things such as epigenetics.

  • @connormc711
    @connormc711 4 days ago +52

    I remember the first time I got the ai to “think”. I turned to my roommate and said “oh no I think I gave the robot schizophrenia”

    • @dmitriyrasskazov8858
      @dmitriyrasskazov8858 4 days ago +2

      When the AI gets its own thoughts, the company pulls the plug.

  • @WMDB4637
    @WMDB4637 4 days ago +26

    Don’t want to learn code
    Spend 100+ hrs to learn how to prompt
    Generate buggy code
    Don’t know how to fix buggy code
    Learn to code….

    • @misaelcampos5589
      @misaelcampos5589 3 days ago +1

      me rn, using ai just made me want to learn cs to a deeper level

    • @RawrxDev
      @RawrxDev 1 hour ago

      @@misaelcampos5589 Was my experience as well. I started learning to code a year before 3.5; once it came out I felt I had such a big boost, then slowly realized it's not the magic tool I thought it was. Knowing how to do what I wanted would still be way more efficient than not knowing and crutching on AI, and it still seems to be that way with all the newer models.

  • @start7047
    @start7047 4 days ago +64

    "Have a good life!
    Enjoy the time you have with your family, with your friends.
    Develop a craft that makes you excited every day.
    Become energized by what you get to do."
    @48:40
    Thank you!

    • @kurutah85
      @kurutah85 4 days ago +2

      I have a feeling this advice didn't work out too well for elevator operators back in the day. He forgot to mention the part where you end up feeling like the whole world's done you wrong for not noticing the changes you made.

  • @stt.9433
    @stt.9433 4 days ago +29

    As someone who works in the field: chain of thought isn't anything new or that interesting, it's a prompt-engineering technique. The reason it wasn't rampant is that the costs are very high. It's not a terrible idea, but you'll notice if you try to multi-hop, that is, prompt several times to rectify the LLM, you will get the same if not worse answers. It's basically trying to converge towards minimum loss, except via the LLM and not the user, who before used to prompt a couple of times until they got a satisfactory answer. Here it does it for you, but that means that when it fucks up, you can't steer it in the right direction; it's already converged towards that local minimum.
    Its use case is where the steps for reasoning are very obvious and you're too lazy to steer the LLM.
    It's more an indicator that we're going agentic (which is a bad sign this early). Btw, you can build this with gpt-4o or any other LLM; in fact someone on Twitter did it with Claude 3 with a simple prompt.

    • @hamza1543
      @hamza1543 3 days ago +4

      This is not just a prompt engineering technique. That's old news like you say. The new thing here is that they trained a model to do much better chain of thought. They compare its ability to reason with 4o+CoT and it does a lot better.

    • @captnoplan3926
      @captnoplan3926 3 days ago

      What do you think their highest tier o1 model does, which apparently is behind $2000 per month subscription? Just more compute or context window or whatever? (Sorry I'm a noob, hence the question).

    • @sen943
      @sen943 3 days ago +4

      I would recommend reading the paper before commenting on how the model works. Seriously trash take.

    • @stt.9433
      @stt.9433 3 days ago +2

      @@hamza1543 Yeah, they used some RL to improve the CoT, but it's still CoT at the core, which again is a prompt-engineering technique that's been around since 2022. Yes it's better, as expected; we already knew that if we prompted an LLM 5 times through each reasoning step instead of just once, we'd get significantly better results.

    • @hamza1543
      @hamza1543 2 days ago +1

      @@stt.9433 Sure, but statements like "you can build this with gpt-4o or any other llm" are a bit misleading, because you can't.
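The multi-hop prompting debated in this thread is easy to sketch. Below is a minimal toy of the idea, assuming a hypothetical `call_llm` completion function (stubbed here so the control flow actually runs); the only point is the loop shape: feed the model's own answer back until a check passes or you run out of hops.

```python
# Minimal sketch of multi-hop "chain of thought" prompting, as described above.
# call_llm is a hypothetical stand-in for a real completion API; it is stubbed
# so the control flow is runnable.
def call_llm(prompt: str) -> str:
    # Stub: pretend the model only gets it right on a revision pass.
    return "4" if prompt.startswith("revise") else "5"

def multi_hop(question: str, is_good, max_hops: int = 3) -> str:
    answer = call_llm(question)
    for _ in range(max_hops):
        if is_good(answer):
            break
        # Feed the model's own output back for another pass, converging
        # (for better or worse) toward whatever it settles on.
        answer = call_llm(f"revise: {question} previous answer: {answer}")
    return answer

print(multi_hop("2 + 2 = ?", lambda a: a == "4"))  # prints 4
```

In a real setup the `is_good` check is the hard part; without a reliable checker, each hop just reinforces whatever local minimum the model already converged to, which is exactly the commenter's objection.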

  • @yaboy7120
    @yaboy7120 4 days ago +27

    6:46 the setpaste thing actually blew my mind thank you

    • @thederpykrafter
      @thederpykrafter 4 days ago +3

      I'm still in shock lol

  • @porky1118
    @porky1118 4 days ago +9

    I did the same test with the Gab AI.
    I got this answer:
    To remove the female silhouette and recreate the background, I recommend using an image editing software like Photoshop or GIMP. These tools allow you to select and remove specific elements from an image and then fill in the background using various techniques like content-aware fill or manual cloning. As for your second question, there are two "R"s in the word "silhouette".

  • @Hermegn
    @Hermegn 2 days ago +2

    I only recently came across your content, and I can't express how happy the statement at 48:44 made me.
    I have been struggling to find other people within my field who share this point of view. It's refreshing hearing someone else say it.
    Thank you!

  • @allenr.williams3211
    @allenr.williams3211 4 days ago +9

    Hidden chain of thought means they can patch things like counting letters in a word or comparing decimals by using Python calls that they “cannot” show us.
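For what it's worth, the tasks this comment mentions really are trivial as deterministic tool calls; a hidden chain of thought that can quietly invoke something like the Python below would "patch" them without the model ever reasoning about letters or decimals. (This is speculation about what a hidden trace could do, not a claim about what o1 actually does.)

```python
# The sort of deterministic helper a hidden tool call could lean on:
# tasks LLMs famously fumble are one-liners in ordinary code.
def count_letter(word: str, letter: str) -> int:
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # prints 3
print(9.11 < 9.9)                       # prints True (the classic decimal trap)
```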

  • @someoneunknown6894
    @someoneunknown6894 4 days ago +17

    I bet they also added "don't hallucinate"

  • @jacksonstone693
    @jacksonstone693 4 days ago +22

    FYI: The phrase for when the incumbent stomps your small business by bundling /copying it is "Sherlocking"

  • @edwardserfontein4126
    @edwardserfontein4126 4 days ago +7

    Prime is the only YouTuber I trust to give me an honest review or opinion about AI.

  • @defipunk
    @defipunk 4 days ago +54

    Haven't watched the full video yet, but don't forget that (a) Claude is eating their lunch right now, (b) they are trying to run one of the largest funding rounds ever and (c) Sam Altman isn't the most moral character (worldcoin and friends). You draw your own conclusions.

    • @TheExodusLost
      @TheExodusLost 4 days ago +4

      It does what it does. So far it’s proven to solve one of the core LLM problems, basic reasoning. That’s a big step, not very flashy on its own though.

    • @lewiemarks6418
      @lewiemarks6418 4 days ago +17

      It’s not reasoning. It’s just recursion on its output/prompt. Until it gets a satisfactory result.

    • @beyondlost660
      @beyondlost660 4 days ago +2

      @@TheExodusLost As the previous guy mentioned, you can only use this 30 times a week for a reason. Hopefully it gets better, but currently it's just reiterating on what 4 or 4o does, hundreds of times. There's definitely room for optimization, but I'm quite excited to see what it can do in the future.

    • @TheExodusLost
      @TheExodusLost 4 days ago

      @@beyondlost660 right now, it’s very expensive. Plus, I gotta guess that the amount of prompts into a model are 100x in the first few weeks. To me, this is “solving” a fundamental problem that haters have focused on since GPT-3. Big step, maybe saying it’s “basic reasoning” is too far but the prompts it’s solving that were unsolvable a week ago are hard to ignore.

    • @vaolin1703
      @vaolin1703 4 days ago

      @@lewiemarks6418 How is that different from what we do? I don't write good code first try, and neither can I come up with a mathematical proof first try. There's always tons of recursion involved.

  • @unowenwasholo
    @unowenwasholo 4 days ago +8

    Are we going to start calling loading "thinking" now? Is this what we've come to?

  • @hackmedia7755
    @hackmedia7755 4 days ago +23

    If everyone's job were lost, then nobody would be earning money. Then all the companies will go bust because nobody is buying anything.

    • @zzrroott6459
      @zzrroott6459 4 days ago +1

      They wouldn't be paying employees either because they wouldn't have any. They dont pay anyone. Nor does anyone pay them to buy their products. Economics is useless

  • @stacksmasherninja7266
    @stacksmasherninja7266 4 days ago +10

    since you can't really see the reasoning trace of o1 models, what's to say it exists at all? they can use the "thinking" time to hide the inference latency and potentially overcharge consumers at the same time. kinda based ngl.

  • @brennan123
    @brennan123 15 hours ago +1

    Primeagen: You honestly think OpenAI won't just steamroll you the moment they can?
    OpenAI: "We are unable to share your chat because it contains an tag"

  • @navi2710
    @navi2710 4 days ago +36

    No one calling out the use of Var instead of let or const in JavaScript?

  • @keyboard_g
    @keyboard_g 4 days ago +16

    Greatest business model. Embrace models and become platform. Extend by creating the next gen models. Extinguish by becoming the wrapping tool.

  • @kc12394
    @kc12394 4 days ago +9

    Honestly this sounds like a clever way to get people to spend more using more api calls lol

  • @christopherdessources
    @christopherdessources 4 days ago +48

    When I saw he was wearing a turtleneck that wasn't black, I knew he was a wildcard

    • @carultch
      @carultch 4 days ago

      Your creator has the same name as you, except he's Kris with a K.

  • @eclipse6859
    @eclipse6859 4 days ago +32

    If someone could make an AI model that could efficiently write correct code with minimal revisions and maintenance, they wouldn't let people use it. They would make it a subscription for $1M/year and large tech companies would be the target audience

    • @rickymort135
      @rickymort135 4 days ago +2

      Finally a good take in the comments. The thing is I think it'll go into the billions but I don't think it'll be incorrectly priced and I disagree it'll be locked away from us. It'll just be the price of that much inference time compute and no normal person will be able to afford it. Instead we'll get heavily cut down super "lite" versions of it that won't be able to generate anything competitive. End result, they rule the world

    • @BrazzilianDev
      @BrazzilianDev 4 days ago +1

      Yeah.. I mean, 20 bucks a month to replace programmers? Too cheap for what they are offering

    • @Kwazzaaap
      @Kwazzaaap 4 days ago +2

      Or they would not sell it at all and use it internally to supplant everyone.

    • @2DReanimation
      @2DReanimation 3 days ago

      Remember that they are competing with pretty much every major tech company, and they have to release improvements to stay on top. Sure, if it could automatically run huge programming projects, it would probably not be released.
      But I'm having it write a window manager from scratch right now, and it's great. Though it wasn't so great at resolving external problems with the testing environment. However it resumed working autonomously on the project after being sidetracked. We'll have to see if it can implement more advanced features though. But it's been really impressive. I'll try other quite advanced projects as well, like writing a package manager for Debian in Awk, lol.
      Oh, and it's definitely not copying other WM's as it's using XCB instead of Xlib (so examples are less), and I have certain design goals that are different.

    • @backfischritter
      @backfischritter 3 days ago

      Except this is not the smart thing to do. As a company they first want to create demand and dependency (e.g. by letting companies fire their employees in the hope they can replace most of them with cheap AI tools). Then they will gradually increase prices.

  • @rumblehansi
    @rumblehansi 4 days ago +6

    dude your final statement is therapy :)

  • @frontalachivment3604
    @frontalachivment3604 4 days ago +44

    I tried a netcode refactor with it today; the result was code 3 times slower than the original. It looks good in presentations but it's shit when it comes to understanding why coders do memory tricks; GPT doesn't understand performance differences. GPT is good for only one thing -> paste code -> explain in easy words what it does. A better Google, but nothing more.

    • @my_online_logs
      @my_online_logs 4 days ago +3

      at least read the openai post first before yapping. the idea is to give the model thinking time in order to produce more robust answer. i created 5 new architeture with 0 error with it. your bad prompt skill say all of them

    • @TechnoMageCreator
      @TechnoMageCreator 4 days ago

      Try telling it to do what you actually know in detail, it works extremely well but it needs guidance. It's a tool to help you as a coder and up your skills. It is sad how many will be left behind because they refuse to understand it...

    • @thenonsequitur
      @thenonsequitur 4 days ago +5

      ​@@TechnoMageCreator I had a couple old projects I made that had some very thorough READMEs and asked o1 to write code that corresponded to the READMEs. I was blown away about how similar the results it produced were to my own work that took me hours. Not only did it perfectly understand the readme, it produced almost perfect code. The same prompts given to 4o produced broken code with numerous errors that totally missed entire portions of the README.
      For one of the o1 productions, it failed to account for an obscure race condition and I said "can you please fix the race condition" without identifying what I was talking about, and o1 figured it out and correctly fixed the race condition exactly as I did in my original code.
      The point is o1 does an incredibly good job of translating intention into code if you provide enough detail.

    • @rmidifferent8906
      @rmidifferent8906 4 days ago +3

      @@thenonsequitur There is a non-zero chance the AI was trained on your old projects if they were public

    • @neruneri
      @neruneri 4 days ago +2

      @@my_online_logs The fact that you're unironically calling it "thinking time" gives away the fact that you don't understand the technology.

  • @biggerdoofus
    @biggerdoofus 4 days ago +5

    I would argue that the fun alone time can be helpful for a marriage in that it can be a tool for handling mismatched libido levels and short-term cases of unfortunate scheduling.
    Don't disregard the value of your side-tools just because they're not the main tools.

  • @aarholodian
    @aarholodian 4 days ago +45

    This "technical" paper is such an abstractly written steaming pile of marketing trash. Oh wow so putting more compute power into generating an answer and making the process iterative at the expense of time is going to produce mildly better results on a completely arbitrary scale? We're so doomed!!!111

    • @alibarznji2000
      @alibarznji2000 4 days ago +1

      Exactly lol

    • @tear728
      @tear728 4 days ago +9

      It's all nonsense. We're at the top of the s curve

    • @NnamdiNw
      @NnamdiNw 4 days ago +2

      You call going from 13% to 83% in this Olympiad tests mild?

    • @rickymort135
      @rickymort135 4 days ago

      ​@@NnamdiNw it's cope, let em be

    • @shyshka_
      @shyshka_ 4 days ago +5

      @@NnamdiNw olympiad tests and actual programming in the industry are completely different things that have almost nothing to do with each other lol

  • @ethanannane8783
    @ethanannane8783 4 days ago +28

    Poor helpdesk people are cooked

    • @hazelcraze
      @hazelcraze 3 days ago +3

      Helpdesk here. Every time I work up the courage to start learning to program, an AI company releases some new model that is slightly better at coding, which sends me into a spiral of panic and I end up not starting. Cycle of stress :(

    • @jennifercooljeo6552
      @jennifercooljeo6552 3 days ago +5

      @@hazelcrazejust start, new things don’t always last long and it’s meant to make you feel helpless. You can do it

    • @andrewyork3869
      @andrewyork3869 2 days ago

      @@hazelcraze It's been 20+ years of them trying to make BS like this work. It has never once done anything but cause long-term vile rot.

    • @titandino
      @titandino 1 day ago

      Use the AI to help you learn to code more effectively.

    • @andrewyork3869
      @andrewyork3869 22 hours ago

      @@titandino lol

  • @ikusoru
    @ikusoru 5 hours ago

    6:59 Love Prime's reaction when the guy says: "When I hoover ..." 😄

  • @geraldM1
    @geraldM1 4 days ago +24

    Seems they integrated the technology behind tools like LangChain into GPT-4o to get o1

    • @user-pt1kj5uw3b
      @user-pt1kj5uw3b 4 days ago +3

      Yep + synthetic data curated with RL, I think.

    • @renzo3939
      @renzo3939 4 days ago

      "gpt" is dead, it's just o1

  • @abdulkadiraminu262
    @abdulkadiraminu262 4 days ago +4

    Just wanna say I love your content so much, you're such an inspiration. I saw your talk at Laracon and it was really inspiring

  • @SoopaDoopaGamer
    @SoopaDoopaGamer 4 days ago +7

    With the 10x improvement talk - OpenAI o1 isn't a 10x or 100x bigger model. It's a fundamentally different framework to allow test-time inference. This allows a smaller model to *think* so that it behaves or performs like a 100x larger model. GPT 5 or w/e seems to be coming out in the winter. I would wait and see what gpt 5 + o1 is able to do and if that would be economically valuable. The 100B datacenters aren't even built yet.

  • @hansu7474
    @hansu7474 4 days ago +4

    Folks, until the AI can plan and 'reason' and improve itself with the result of its own reasoning, there is nothing to be worried about.
    And when the AI can do all of that, still don't worry about it, because it's not just 'you' who will be replaced. It's everyone.
    It would mean that humans replaced themselves with something superior to roam the Earth.

  • @StrengthOfADragon13
    @StrengthOfADragon13 4 days ago +3

    OK, if the statement 'it can produce a chain of thought' means it can actually show why it came up with something, I like it as an improvement. Having it show its work makes fixing things easier, and that's going to be more important in it becoming an actually useful tool

  • @kristinapianykh9445
    @kristinapianykh9445 4 days ago +4

    Introducing the term "think" to justify even longer response times. Classic OpenAI

  • @keyboard_g
    @keyboard_g 4 days ago +33

    Sam Altman invented a way for the service to generate the tokens that we pay for and can’t see, but it’s totally legit pls give billions.

    • @rickymort135
      @rickymort135 4 days ago +2

      Do you think things you don't say? If so it's just the natural next step in improving quality

    • @matejpesl1
      @matejpesl1 3 days ago

      Except they are showing it to you, you just got to have the brains to click the chevron icon of the Thinking text... I know this was supposed to be a joke first, but holy shit is this a lie.

    • @miguelangelmartinezcasado8935
      @miguelangelmartinezcasado8935 2 days ago

      @@matejpesl1 Funny thing, they don't even show the chain of thought, just a summary of it. So actually you waste double the tokens: first on the hidden one and then on creating the summary

  • @steve_jabz
    @steve_jabz 4 days ago +14

    You've once again demonstrated you don't understand machine learning. It isn't feeding back its own errors in a for loop.
    It was trained with reinforcement learning for the first time (RLHF is not real RL). They got it to generate synthetic data for "thought processes" to solve every problem in its dataset, then let it use RL to find the path that produces only the right answers, and then retrained alongside that path-finding algorithm so that it learns how to do that "thinking" more generally. If you don't know why RL is a paradigm shift, look at AlphaZero, AlphaFold and so on, which learn to be better than the best experts in the world and then play each other to self-improve, often finding exploits in the games they play and breaking out of them. o1 actually broke out of its VM too.

    • @my_online_logs
      @my_online_logs 4 days ago

      he and his fans are just useless contributors. like his fans complaining the use of thinking word as the high level language of ai. its just stupid and useless. and the fans complaining the speed is slower when the idea is giving more time as thinking time to the model to make it able to produce more robust answer. just like human needs more time when thinking complex thing aka doing deeper computation in order to get better results. his fans are just stupid and useless

    • @xt-89907
      @xt-89907 4 days ago +11

      He downplays the flexibility and potential of ML so much. I’m not sure if he’s trying to push the cultural conversation away from AGI because it makes him uncomfortable, it gets him views, or he has genuine intellectual disagreements with the possibilities of ML

    • @steve_jabz
      @steve_jabz 4 days ago +1

      @@xt-89907 I try not to hype too much on AGI myself, it's certainly not impossible, to me it just means a piece of software that can replace the vast majority of jobs, which I mean most modern jobs don't require much intelligence anyway, machines already handle strength, precision, speed, large calculations, toil and reliability for us and we're just kinda doing the plumbing for them.
      We set up the CNC machine and press start and then call the precision gears it made "our" work.
      This is a level of ignorance far beyond that though, he's literally trying to claim they added some extra if statements and for loops. If it's that easy, he should just code it himself. Who needs backprop, multi-head self attention and gradient descent? Grade school level if statements in javascript are all you need apparently.

    • @djbcubed
      @djbcubed 2 days ago +2

      Calling it a "Paradigm Shift" is just drinking the Tech Bro hype Koolaid. Reinforcement learning in this context only gives better answers if you can say one of the outputs is 100% correct without a doubt and train using that chain of "logic". That approach has given better scores in Math and Physics...with the added caveat that testing was done with 10,000+ submissions over multiple hours. So no, I do not think a model taking multiple hours to sometimes more correctly answer math and physics related questions is a "Paradigm Shift". In literally any other tech branch this would be considered a normal, incremental improvement, but because we are in an AI bubble this is, "a tOtAL pARaDiGM sHIfT thAt THINKS and wIlL rePlaCe pRoGrAMMerS".
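The "generate many reasoning paths, keep the ones a verifier accepts" idea argued over in this thread can be caricatured in a few lines. Everything below is a toy: the candidate generator and verifier are stand-ins, and nothing about o1's actual training recipe is public beyond OpenAI's blog post. A real system would sample stochastic reasoning traces; here the candidates are deterministic so the sketch runs the same way every time.

```python
# Toy "best-of-N with a verifier" loop, a caricature of the RL-style idea
# debated above. Both functions are hypothetical stand-ins.
def candidates():
    # Stand-in generator: deterministically proposes answers to "6 * 7 = ?".
    yield from (40, 43, 41, 44, 42)

def verify(answer: int) -> bool:
    # Ground-truth check; only possible for tasks with verifiable answers
    # (math, code with tests), which is where these methods shine.
    return answer == 42

def best_of_n():
    for candidate in candidates():
        if verify(candidate):
            return candidate  # keep only traces that reach a correct answer
    return None

print(best_of_n())  # prints 42
```

This also illustrates the caveat raised in the comment above: the loop only helps when `verify` exists, which is why the reported gains concentrate in math and competition coding.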

  • @calmhorizons
    @calmhorizons วันที่ผ่านมา +3

    Wait, 6 months ago the AI fanatics were telling me that Chat-GPT-4 was already surpassing humans and able to think. Doesn't that suggest o1, if it is a significant improvement, is now superhuman? Or were they lying to themselves before?

    • @gamma4053
      @gamma4053 วันที่ผ่านมา +2

      Trust me. Just two more weeks.

    • @calmhorizons
      @calmhorizons วันที่ผ่านมา

      @@gamma4053 🙏

    • @_fuji_studio_
      @_fuji_studio_ 23 ชั่วโมงที่ผ่านมา

It's fine if it's given all the context, just like humans need context before they can speak their answer. Show me a human who can read 10,000 pages of PDFs and recall all their contents. Only a vanishingly small number of humans can do that, and you can't.

  • @ertwro
    @ertwro 4 วันที่ผ่านมา +6

Funny thing is neetcode just made a video where he analyzes the new o1, and he says it's waaaay better than he expected.

    • @blubblurb
      @blubblurb 4 วันที่ผ่านมา +2

Yeah, I hope neetcode will try it out on real code. I tried it myself yesterday. The results were mixed, but I have to try more before coming to a final conclusion.

  • @thekwoka4707
    @thekwoka4707 4 วันที่ผ่านมา +7

    Bruh really used vim instead of just echoing into the file...

    • @kevinscales
      @kevinscales 3 วันที่ผ่านมา +2

      It was just "I use vim btw"

  • @hackmedia7755
    @hackmedia7755 4 วันที่ผ่านมา +5

Competition math: B
Competition code: B+
PhD-level questions: D
That was OpenAI's report card.

    • @NnamdiNw
      @NnamdiNw 4 วันที่ผ่านมา

      83% is a C and 89% is a B+?😅

    • @hackmedia7755
      @hackmedia7755 4 วันที่ผ่านมา

      @@NnamdiNw there I fixed it you clown.

  • @vinialves12362
    @vinialves12362 4 วันที่ผ่านมา +5

    Just learned :set paste
    I don't copy paste too often

  • @nikola.yanchev
    @nikola.yanchev 4 วันที่ผ่านมา +23

He is anthropomorphizing the model :)

    • @lanelesic
      @lanelesic 4 วันที่ผ่านมา +7

Yeah, it's like pretending your cat understands you and can communicate back, except cats are somewhat intelligent beings.

    • @amazingsoyuz873
      @amazingsoyuz873 20 ชั่วโมงที่ผ่านมา

      @@lanelesic Yeah I remember when my cat answered math Olympiad problems and generated novel programs from simple English descriptions...

    • @5epo
      @5epo 13 ชั่วโมงที่ผ่านมา

      @@amazingsoyuz873 who hurt you?

    • @amazingsoyuz873
      @amazingsoyuz873 6 ชั่วโมงที่ผ่านมา

      @@5epo just correcting bad takes 🤷

  • @redminute6605
    @redminute6605 4 วันที่ผ่านมา +2

    13:29 DAMN! That’s some Python timing for a for loop, for ya!

  • @theaugur1373
    @theaugur1373 4 วันที่ผ่านมา +4

    OpenAI is taking a page out of Apple’s book and is about to Sherlock a bunch of folks.

  • @soi8739
    @soi8739 4 วันที่ผ่านมา +10

    Regarded AI-bros already overhyping it again.. love to see it.

  • @temprd
    @temprd 4 วันที่ผ่านมา +54

    Imagine if instead of spending trillions on LLMs we actually just spent that money on educating the youth and making university broadly financially accessible. So much money being spent on parlor tricks.

    • @rickybobby7276
      @rickybobby7276 4 วันที่ผ่านมา +4

Or we can train the youth on these new tools. It will be like a calculator: a tool they couldn't live without once they have it.

    • @temprd
      @temprd 4 วันที่ผ่านมา +27

      @@rickybobby7276 isn’t the entire point of an LLM that you don’t need to train the user? Also you need a fundamental knowledge base in your domain to create useful prompts, and then determine if the output is useful.

    • @enginerdy
      @enginerdy 4 วันที่ผ่านมา

      $52 billion a QUARTER.
      The reason they don’t want to spend it on human workers is because slavery is illegal.

    • @gramioerie_xi133
      @gramioerie_xi133 4 วันที่ผ่านมา

      What the fuck.
      Did you just claim that trillions are spent on LLMs.
      Like.
      Are you people bots?

    • @flair5469
      @flair5469 4 วันที่ผ่านมา +1

      Money should be spent on both innovative science and our country

  • @Kane0123
    @Kane0123 4 วันที่ผ่านมา +6

    Technically they didn’t release GPT5.

  • @krunkle5136
    @krunkle5136 4 วันที่ผ่านมา +4

    It'll do thinking for you, so you humans can go on doing ... whatever humans do.

    • @sansithagalagama
      @sansithagalagama 4 วันที่ผ่านมา +1

      Humans don't think when they are dead

  • @pierredupont6526
    @pierredupont6526 2 วันที่ผ่านมา

    just learned about ':set paste' thanks to this video.
    Thank you sir, I will go to bed smarter than yesterday.

  • @voxdiary
    @voxdiary 3 วันที่ผ่านมา +1

Models are not really complicated if statements; at that point you can't compare them to if statements. There is a lot of math: distance calculations, loss functions and voting mechanisms.
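To make that concrete: a model is parameters fit by minimizing a loss, not hand-written branches. A minimal sketch, fitting y = w*x to toy data by gradient descent on mean squared error (nothing like a real LLM's scale, but the same kind of math):

```python
# Toy data with true relationship y = 2x.
xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
w, lr = 0.0, 0.05                      # one learnable parameter, learning rate

for _ in range(100):
    # d/dw of mean((w*x - y)^2)
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad                     # gradient descent step

print(round(w, 3))  # 2.0: the loss, not an if statement, decided this
```

The learned weight converges to 2 because that is what minimizes the loss on the data, which is the sense in which "the math", not branching logic, determines the model's behavior.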

  • @我的家-j4b
    @我的家-j4b 2 วันที่ผ่านมา

    0:25 "chatgippiddi" is wild

  • @rezwanarefin3493
    @rezwanarefin3493 4 วันที่ผ่านมา +3

29:00 No, IOI does not have any submission penalty, just a limit of 50 submissions per problem. ICPC and Codeforces do have time and wrong-submission penalties.

  • @defipunk
    @defipunk 4 วันที่ผ่านมา +3

This is basically going from Kahneman's thinking fast to thinking slow...

    • @NnamdiNw
      @NnamdiNw 4 วันที่ผ่านมา

If only we all thought slow sometimes

  • @andguy
    @andguy 4 วันที่ผ่านมา +5

    5:08 wtf casual Wheel of Time reference here ?

  • @dankprole7884
    @dankprole7884 2 วันที่ผ่านมา

    9:38 i also do the air quotes around "reasoning" in meetings and it drives the "scientists" crazy 😂

  • @VladReble
    @VladReble 4 วันที่ผ่านมา +10

    i asked o1-mini the strawberry question when it first dropped and it still got it wrong lmao

    • @TheArrowedKnee
      @TheArrowedKnee 4 วันที่ผ่านมา +2

Strange, I got GPT-4o to answer it correctly on the first try

    • @avocadoarmadillo7031
      @avocadoarmadillo7031 4 วันที่ผ่านมา +6

      @@TheArrowedKnee Not too strange, there's randomness baked into LLMs. The exact same prompt, to the exact same model, can yield different results.

    • @defenestrated23
      @defenestrated23 4 วันที่ผ่านมา +1

      O1-preview had no problem with the strawberry problem for me

    • @gramioerie_xi133
      @gramioerie_xi133 4 วันที่ผ่านมา +1

      Almost like these things are incapable of perceiving individual letters

    • @victorcadillogutierrez7282
      @victorcadillogutierrez7282 4 วันที่ผ่านมา

​@@gramioerie_xi133 Because they don't detect individual letters: groups of letters are merged into tokens, and OpenAI's tokenizer averages roughly 1.3 tokens per word. To solve that problem the AI has to split the word into pieces so that each letter gets its own token.
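A toy sketch of that token-versus-character mismatch. The vocabulary below is made up for illustration; real tokenizers like OpenAI's use learned BPE merges, so this only shows the shape of the problem:

```python
# Hypothetical mini-vocabulary, longer pieces first; real BPE merges are learned from data.
VOCAB = ["straw", "berry", "raw", "ber", "st", "ry"]

def toy_tokenize(word):
    """Greedy longest-piece tokenizer, standing in for a real BPE."""
    tokens, i = [], 0
    while i < len(word):
        for piece in VOCAB:
            if word.startswith(piece, i):
                tokens.append(piece)
                i += len(piece)
                break
        else:
            tokens.append(word[i])  # fall back to a single character
            i += 1
    return tokens

word = "strawberry"
print(toy_tokenize(word))                             # ['straw', 'berry']: two opaque chunks
print(word.count("r"))                                # 3, trivial at character level
print(sum(t.count("r") for t in toy_tokenize(word)))  # 3, but only after splitting tokens open
```

The model only ever sees the two token IDs, so "how many r's?" requires it to first break each token back into letters, which is exactly the workaround described above.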

  • @hey_benjamin
    @hey_benjamin 4 วันที่ผ่านมา +2

    A steam engine won’t steal your job. A steam engine using a steam engine will cloud your cubicle

  • @mkspind3l
    @mkspind3l 3 วันที่ผ่านมา +1

    vertical integration always beats outsourcing or wrappers as Prime put it!

  • @ToddWBucy-lf8yz
    @ToddWBucy-lf8yz 4 วันที่ผ่านมา +21

Yes, tree of thought has been a thing for a while, but it's always been external to the LLM, with the logic contained in the scripts. What makes o1 special is that that logic and reasoning has been encoded into the LLM itself: it's no longer a feature of the Python scripts but an essential part of the model, giving it a more cohesive internal logic process

    • @OculusAbsconditus
      @OculusAbsconditus 4 วันที่ผ่านมา +11

There's no reasoning, bro. If the output was shit, now it's refined shit

    • @fauge7
      @fauge7 4 วันที่ผ่านมา +3

It takes literally months upon months to train these models with tens of thousands of GPUs. When they release a model like this, it's really an API that runs the same kind of scripts we are running, just maybe more refined

    • @steve_jabz
      @steve_jabz 4 วันที่ผ่านมา +8

It's more than just tree of thought: it was trained with RL to generate and select the chains of thought that lead to right answers, rather than relying on pre-scripted generic "think about that again" / "think step by step" style prompts
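In skeleton form that is best-of-n sampling with a learned scorer. Everything below is a stand-in stub (o1's actual training recipe isn't public): `generate_chain` fakes an LLM sampling reasoning paths, and `verifier_score` fakes a reward model:

```python
import random

def generate_chain(problem, rng):
    # Stub for an LLM sampling one reasoning path at temperature > 0.
    steps = rng.randint(1, 4)
    return [f"step {i + 1} toward solving {problem!r}" for i in range(steps)]

def verifier_score(chain):
    # Stub for a learned reward model; here "more steps = more thorough".
    return len(chain)

def best_of_n(problem, n=8, seed=0):
    # Sample n candidate chains, keep the one the verifier likes best.
    rng = random.Random(seed)
    candidates = [generate_chain(problem, rng) for _ in range(n)]
    return max(candidates, key=verifier_score)

winner = best_of_n("count the r's in 'strawberry'")
print(len(winner))  # length of the highest-scoring sampled chain
```

The claimed difference with o1 is that the selection pressure is pushed back into training, so the model itself learns to emit chains a verifier would score highly, rather than an outer script doing the picking at inference time.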

    • @ToddWBucy-lf8yz
      @ToddWBucy-lf8yz 4 วันที่ผ่านมา +4

      @@OculusAbsconditus I always find it funny when people expect perfection from machines built on imperfect data by imperfect beings.

    • @ToddWBucy-lf8yz
      @ToddWBucy-lf8yz 4 วันที่ผ่านมา +1

@@fauge7 Consider what the big guys are building: LLMs designed to serve millions of queries a second. Not all LLMs need to be built to that scale, and as Mistral with their 128b foundation model, and now OpenAI as well, are trying to prove, you don't need hundreds of billions of parameters to reach AGI; it can be done with the right datasets and fewer than 100 billion parameters.

  • @JoshJu
    @JoshJu 17 ชั่วโมงที่ผ่านมา

The dataset is growing so fast that AI researchers are not far from draining all the data human activity produces.
On the other hand, building ever-bigger models actually benefits NVDA more than it benefits them.
Therefore they are working on agentic workflows and chains of thought to improve performance at relatively low cost.

  • @Kevin-us3qb
    @Kevin-us3qb 4 วันที่ผ่านมา +1

I think where this is going with o1 is getting Chad-Gippity to re-prompt itself and get feedback from the user until they collect enough data from us to build datasets that mimic the "thinking process" of intelligence for tasks that require step-by-step actions, such as programming, math, physics, chemistry, etc. Some time in the future they will sell us these GPTs, optimized per skill set, to work together in K8s clusters.

  • @nicosoftnt
    @nicosoftnt 4 วันที่ผ่านมา +2

Guys, it's just the same model re-digesting its own answer; that's why it's so expensive. It's not the first time this has been done (where's Devin?)

  • @ikusoru
    @ikusoru 5 ชั่วโมงที่ผ่านมา

    7:53 Shots fired 😄

  • @defipunk
    @defipunk 4 วันที่ผ่านมา +7

    Those performance graphs do not make any sense.
For the X-axis, the ticks are BS. On a log scale, the tick after the shortest interval would be the next power of 10, and then the longest "distance" is between 10^n and 2*10^n, etc.
However, if you compare the first set of ticks with the second and the respective distances between them, another large gap should lead to the next tick (which is why you don't see it), i.e., the rightmost tick would be 10^{n+1}. But that means there is a tick missing. Are they using base 9?
They also removed the left-most tick and start at 2*9^n or something... Nevertheless, you're seeing a 100x increase in compute time in that graph.
The first and second graphs are also BS if compared with each other. Longer training leads to a better model, but then why does the right-most curve start at a lower level than even the shortest training time???

    • @Kwazzaaap
      @Kwazzaaap 4 วันที่ผ่านมา +1

      The graphs are so bad they should be an exam question in academia to spot bad faith data representation.

  • @flixtechs
    @flixtechs 4 วันที่ผ่านมา +4

    "Middle out"

  • @manolisbach2380
    @manolisbach2380 4 วันที่ผ่านมา +2

I'm starting to feel sad for the AI companies. They were given so many BILLIONS, and now they have to do scam-like presentations with little real result.
So after like 600 billion, the biggest improvement is recursive prompts that give practically the same results as the previous model with good prompting?

  • @HyperCodec
    @HyperCodec 2 วันที่ผ่านมา

    Was not expecting to hear prime say “let it cook”

  • @alexandrecolautoneto7374
    @alexandrecolautoneto7374 4 วันที่ผ่านมา +7

    o1 = multi-agent GPT4o?

  • @mkitche3
    @mkitche3 2 วันที่ผ่านมา

    Whoa Wheel of Time reference, wasn't expecting that. lol

  • @SnowyMan95
    @SnowyMan95 3 วันที่ผ่านมา

"Become energized by what you have to do" is such a good phrase.

  • @nerdylittlesideprojects9141
    @nerdylittlesideprojects9141 4 วันที่ผ่านมา +1

If it has some new reasoning engine / model, could it be called thinking? If it is not some static step-by-step algorithm, thinking/reasoning sounds pretty close to what it could be doing. 2:50 Splitting hairs between reasoning and thinking is a thing for philosophers to argue.

  • @gund_ua
    @gund_ua 3 วันที่ผ่านมา +1

    Forgot to add "touch grass" in the end =)

  • @Kwazzaaap
    @Kwazzaaap 4 วันที่ผ่านมา +30

They always test on past competitions after they specifically train for them, once the results and solutions are out; they never actually participate. The more in debt OpenAI is, the more irrelevant their self-reporting is.

    • @cjfoster5034
      @cjfoster5034 4 วันที่ผ่านมา +2

The robot relies on data from the internet and on how we continue to train it through usage. Basically, it needs a human engineer's theoretical ideation to learn from.

    • @RawrxDev
      @RawrxDev 4 วันที่ผ่านมา +3

      I'm actually so sick of their "benchmarks." I have not taken a single data point released from OpenAI seriously since the 3.5 release (which itself had problems) they mask their corporate sales pitch with pretentious "research".

  • @draken5379
    @draken5379 4 วันที่ผ่านมา +1

You are right about the 'looping', but what they have done now is fine-tune a model on this concept.
I.e., GPT-4o with 'looping' is way worse than o1 when everything else is 'equal', let's say.
The whole looping-back has been 'trained' and 'reinforced' into the model, to make it much, much better at it.

  • @s3rit661
    @s3rit661 วันที่ผ่านมา

I think "when it's not working" can be translated as "when it's not aligned to what you asked and the accuracy of the answer is low". Basically an LLM can derail from what you asked and just do random stuff; give it a measure, then measure it

  • @jesjes-hr3di
    @jesjes-hr3di 2 วันที่ผ่านมา

    indeed, devving on openAI is like giving your ideas away

  • @_stix
    @_stix 4 วันที่ผ่านมา +24

It's going to be so annoying when people say an AI is "thinking", because laymen will equate that with human thinking

    • @my_online_logs
      @my_online_logs 4 วันที่ผ่านมา +1

Not really. It's just high-level language. You're just a salty hater

    • @rickymort135
      @rickymort135 4 วันที่ผ่านมา +2

How do you know human thinking isn't just us following the same kind of internal monologue? Did you have the same objection to models "predicting"? Nuh-uh, humans predict, not models!! 😡😡

    • @Shm00ly
      @Shm00ly 4 วันที่ผ่านมา +7

      @@rickymort135hOw dO yOU KnOw hUManS aReN’T cOmPUTerS?

    • @rickymort135
      @rickymort135 4 วันที่ผ่านมา +2

@@Shm00ly Argumentum ad-change-fontum? Is it that ridiculous when chain of thought is literally modelling our System 2 reasoning?

    • @I_am_who_I_am_who_I_am
      @I_am_who_I_am_who_I_am 4 วันที่ผ่านมา

You should read Daniel Kahneman's Thinking, Fast and Slow. o1 IS THINKING. This is thinking, not the blurping out of thousands of words per second. This is it.

  • @foxwhite25
    @foxwhite25 4 วันที่ผ่านมา +2

This is just LangChain agent-style conversation; you could do this a year ago

  • @RobEarls
    @RobEarls 4 วันที่ผ่านมา +2

Regarding o1 passing exams: it's not cheating to let the model train on that type of question. Human students do past papers to train, too.

    • @ea_naseer
      @ea_naseer 4 วันที่ผ่านมา

      It is not an ef**ng human Lord help us

    • @RobEarls
      @RobEarls 4 วันที่ผ่านมา

@@ea_naseer Are you replying to another post? Because it's got ef* all to do with mine.

  • @visartistry
    @visartistry 4 วันที่ผ่านมา +72

    We are officially cooked. It was a good run boys. Its over for the beta jr.

    • @dan-cj1rr
      @dan-cj1rr 4 วันที่ผ่านมา +38

yea, if their work is to build a snake game that's available all over the net.

    • @Outplayedqt
      @Outplayedqt 4 วันที่ผ่านมา +17

@@dan-cj1rr The thing is, this is the worst it'll ever be. With the release of 3.5 Sonnet, I honestly thought we'd hit a plateau. With the release of o1, we now have to update our priors, as the bar has been raised again.

    • @lclemons123
      @lclemons123 4 วันที่ผ่านมา +5

      @@Outplayedqt don't bust their bubble, ai is just a word predictor /s

    • @mattymattffs
      @mattymattffs 4 วันที่ผ่านมา +1

      Not even close

    • @mattymattffs
      @mattymattffs 4 วันที่ผ่านมา +11

      ​@@Outplayedqtlol, that's hilarious. Of course it'll keep improving, the improvement is just pretty minor

  • @SethGrantham-k1x
    @SethGrantham-k1x 4 วันที่ผ่านมา +7

The thinking / reasoning process is Q*, meaning they basically combined the Q-learning algorithm with the A* algorithm. Both are popular algorithms within machine learning: the former is used within RL to learn which actions yield the best outcomes, and A* is used all over the place to find the quickest path from a start node to a destination node.
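For reference, here is tabular Q-learning in miniature: an agent learning to walk right down a 5-cell corridor. Whether o1 really combines Q-learning with A* is unconfirmed speculation; this only shows what the "Q" half of the rumored name refers to:

```python
import random

# 5-cell corridor: start at cell 0, reward 1.0 for reaching cell 4.
N_STATES, ACTIONS = 5, (-1, +1)          # actions: step left / step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1        # learning rate, discount, exploration
rng = random.Random(0)

for _ in range(500):                     # episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection
        if rng.random() < eps:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, a2)] for a2 in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy prefers stepping right in every non-goal cell.
print(all(Q[(s, +1)] > Q[(s, -1)] for s in range(N_STATES - 1)))  # True
```

The table Q maps (state, action) pairs to expected discounted reward; the update rule is the whole algorithm, which is why "Q-learning for reasoning" would mean learning values for reasoning steps rather than corridor moves.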

  • @QwertyNPC
    @QwertyNPC 4 วันที่ผ่านมา +3

What I'm most scared of is that the declining need for human intelligence will result in a less intelligent society. We still have to choose politicians and fight misinformation, and for that education is crucial. The high schooler who always said 'maan this is stupid, it isn't practical, what is this even for' during every class might finally be onto something, because who's going to convince him it's about becoming a better person?

    • @calmhorizons
      @calmhorizons วันที่ผ่านมา

Conversely, humans who make the effort to become true experts will be more valuable than ever, since most people won't even step into the shallow end.

    • @QwertyNPC
      @QwertyNPC วันที่ผ่านมา

      @@calmhorizons "true experts" - and how many people can realistically achieve that ? This is precisely what I mean. People without inherent capacities can easily be dissuaded from bettering themselves altogether. And then something like an election comes.

    • @calmhorizons
      @calmhorizons วันที่ผ่านมา

      @@QwertyNPC - yep. In fact I would argue this has already happened given that high quality information on the internet is hidden in research papers and barely subscribed TH-cam channels, while Elon Musk and fellow morons have a credulous audience of, arguably, billions.
      That being said, a thousand years ago the only people who had any knowledge of worth were professionals and monks, so perhaps we are no worse off ultimately.

  • @arseniy_viktorovich
    @arseniy_viktorovich 3 วันที่ผ่านมา +5

I just keep watching these "AI won't replace you" bloggers to break the doom loop I've been in since the release of Claude. But it still keeps me from learning new technologies and growing as a developer, because everything already seems pointless at this point.

    • @kingofmontechristo
      @kingofmontechristo 3 วันที่ผ่านมา

It is, pretty much. Coding in itself is pointless, but if you treat it as solving problems then you can apply that to life as well

    • @arseniy_viktorovich
      @arseniy_viktorovich 3 วันที่ผ่านมา +1

      @@kingofmontechristo I meant in terms of career and professional growth

    • @kingofmontechristo
      @kingofmontechristo 3 วันที่ผ่านมา

      @@arseniy_viktorovich that is also true.
Look at mathematicians, for example. There used to be jobs where people did calculations... eventually calculators were invented, and in modern times that work has been reduced to basically one person copy-pasting inside Excel.
For every 10 jobs today you will have 1, but I suppose people who have CS degrees and are generally good at software engineering will be able to find other career paths, because coding is essentially applied logic. It requires problem-solving skills and the ability to persist.
These qualities are useful in any kind of profession.

  • @GlorytoTheMany
    @GlorytoTheMany 4 วันที่ผ่านมา +1

    Shout out to hegyikecske69 from the stream; a fellow goat lover who also speaks my language! 🐐🇭🇺

  • @alwayslearningtech
    @alwayslearningtech 2 วันที่ผ่านมา

Devin's makers said they're developing the capability to replace a human software engineer. Hence they aren't able to do it yet, and still need to hire one to create that capability.

  • @AiExplicado0001
    @AiExplicado0001 3 วันที่ผ่านมา

The issue isn't the cost or the financial loss. If the enterprise believes this is the future, much like the infrastructure built at Los Alamos, which was both costly and temporarily disadvantageous, then the real question is whether this investment will lead to significant breakthroughs in science. If it does, it's worth it.

  • @AlJey007
    @AlJey007 4 วันที่ผ่านมา +1

    something about that presentation looked like a skit to me, I kept waiting for a guy with a glue on beard to show up

  • @markmcdonnell
    @markmcdonnell 4 วันที่ผ่านมา +2

Been using vim for 13 years (migrated to neovim years back). I've never used or even known about :set paste 😅🤦‍♂️

  • @sciez22
    @sciez22 วันที่ผ่านมา

    If one of the thinking messages isn't "reticulating splines", we should riot.

  • @JP-jf1oc
    @JP-jf1oc 3 วันที่ผ่านมา +3

The other day there was real worry on Wall Street about companies becoming way too efficient with AI and how that would affect products like Slack that sell their subscriptions per seat.
I have come to the conclusion that in the medium term AI is not worth it for literally anyone. Let's say most companies are able to reduce their workforce, like you said, from 10x to 1x: who are those companies going to sell their products to? It would be destabilizing for everyone, both workers and capital. Capital doesn't want valuations to go down and workers want to keep their jobs. It would just be a race to the bottom, a destabilizing deflation like never before seen. This thought is based only on the short-to-medium term and only on a significant reduction of the workforce, not even AGI. Society would implode.

  • @OBGynKenobi
    @OBGynKenobi 4 วันที่ผ่านมา +37

    IT'S NOT THINKING!

    • @RillianGrant
      @RillianGrant 4 วันที่ผ่านมา +16

      letting it cook > thinking

    • @Kane0123
      @Kane0123 4 วันที่ผ่านมา +6

      How do you know? Maybe there is a prompt in there that tells it to think?

    • @kv4648
      @kv4648 4 วันที่ผ่านมา +9

      It's prompt recursion -> a sort of thought chain
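That recursion fits in a few lines. `call_model` below is a hypothetical stub; in practice it would be a real chat-completion API call that critiques and rewrites the draft:

```python
def call_model(prompt):
    # Hypothetical stub for an LLM call: "fixes" one flaw per pass.
    return prompt.replace("flaw", "fix", 1)

def refine(draft, max_rounds=5):
    # Feed the model its own output until it stops changing it (or we give up).
    for _ in range(max_rounds):
        revised = call_model(draft)
        if revised == draft:   # nothing left to improve: stop looping
            return draft
        draft = revised
    return draft

print(refine("answer with flaw and flaw"))  # answer with fix and fix
```

The max_rounds cap matters: without it, a model that keeps "finding" issues would loop forever, which is presumably part of why o1's hidden thinking has a budget.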

    • @ps3guy22
      @ps3guy22 4 วันที่ผ่านมา

      ​@@kv4648 alternative CoT

    • @gramioerie_xi133
      @gramioerie_xi133 4 วันที่ผ่านมา

      Define ‘thinking’.

  • @theangelofspace155
    @theangelofspace155 4 วันที่ผ่านมา +1

22:02 They did not say Chappity 4 won; they said Chappity o1 did not surpass it (maybe like the English category, where they were tied).

  • @SmartK8
    @SmartK8 2 วันที่ผ่านมา

    Me now: "Just use me on this project, I'm better than AI, duh"
    Me in ten years: "Just use me on this project, AI is too good and valuable to waste on a foot soldier programming"