AI Risks No One is Talking About

  • Published on Jan 27, 2025

Comments • 443

  • @teej_dv
    @teej_dv  13 days ago +79

    Thanks for all the nice comments :)
    People are asking how I made the presentation, it's using my plugin:
    github.com/tjdevries/present.nvim
    Also, if you want to support the channel, check out boot dev where I have one course done and am working on one for lua :)
    boot.dev/teej (promo code TEEJ for 25% off)

    • @teej_dv
      @teej_dv  13 days ago +23

      and yes, i heart my own pinned comments

    • @SuperGauravgautam
      @SuperGauravgautam 13 days ago

      @@teej_dv i approve this message

    • @Kane0123
      @Kane0123 12 days ago +2

      I didn’t see anyone asking, but now that you mention it - how do I install in VsCode? 😘

    • @RameshSutar-k3j
      @RameshSutar-k3j 12 days ago

      Son: Papa, tell me how the Great Wall of China was built???
      Father: For free, son

    • @numbershapes
      @numbershapes 12 days ago

      Based takes.
      😂 VS***d*

  • @gradycdenton
    @gradycdenton 13 days ago +373

    "AI will enslave us all!"
    "Meh"
    "AI will make Typescript the only programming language!"
    "Oh no!"

    • @JeremyAndersonBoise
      @JeremyAndersonBoise 12 days ago +5

      😂💀 literally

    • @Definesleepalt
      @Definesleepalt 12 days ago +2

      time to panic

    • @クールなビデオ
      @クールなビデオ 12 days ago +3

      No, brother, the man here did a real Marxist analysis of the tech sector and still thinks it's not capitalism; this stuff is incredible, honestly 😂😂

    • @Leonhart_93
      @Leonhart_93 11 days ago

      You just be thankful it's not just plain JS or Python.

    • @ZZWWYZ
      @ZZWWYZ 9 days ago

      ​@@クールなビデオ
      Describe capitalism
      Call it communism
      We're all gonna be slaves

  • @progste
    @progste 13 days ago +535

    The main risk in my opinion is that people trust AI as if it was intelligent.

    • @wbezs
      @wbezs 13 days ago +4

      This. However as well to some degree.

    • @nocodenoblunder6672
      @nocodenoblunder6672 13 days ago +10

      Well, it's not like it's completely dumb either, which some people make it out to be. "Oh, it's JUST predicting the next token, not a big deal."

    • @jboss1073
      @jboss1073 13 days ago +1

      as if it _were_

    • @therflash
      @therflash 13 days ago +44

      @@nocodenoblunder6672 But it IS literally just doing that.
      LLMs give you back modified versions of the text they've been trained on. What you're getting back is mostly the "average response" based on the statistical probabilities in the training data. They're extremely good at memorization.
      They do some limited reasoning, but it's very difficult to figure out whether the model is reasoning or whether it's just using memorized knowledge in the moment. On top of that, they're virtually unable to say "I don't know", and will just confidently make stuff up.
      Pair that with people thinking "the machine must know better", and it's a recipe for disaster.
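
      A minimal sketch of what "just predicting the next token" means here; the prompt and the probability table below are invented for illustration, not taken from any real model:

          # Toy next-token prediction: sample one continuation from a probability
          # distribution conditioned on the prompt. Real LLMs do this over a huge
          # vocabulary, one token at a time; the numbers here are made up.
          import random

          prompt = "The best web framework is"
          next_token_probs = {       # hypothetical probabilities
              "React": 0.62,         # the most common answer in the training data
              "Vue": 0.18,
              "Svelte": 0.08,
              "Django": 0.07,
              "something else": 0.05,
          }

          tokens = list(next_token_probs)
          weights = list(next_token_probs.values())
          print(prompt, random.choices(tokens, weights=weights, k=1)[0])
          # Most runs print "... React": the statistically average answer,
          # whether or not it is the best one for your problem.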

    • @InfiniteQuest86
      @InfiniteQuest86 13 days ago

      Lol I know. But the average American IQ is 98. There's just not much we can do about it.

  • @hank9th
    @hank9th 13 days ago +196

    One quote I read in "The Alignment Problem" really stuck with me:
    "AI/ML is a fantastic tool, provided the future looks like the past."
    If your goal is to change the status quo of some sector, tools built with ML are the absolute last thing you should reach for.
    The context in the book was a jurisdiction that used an ML system to determine whether to detain someone pre-trial, with the goal of removing human bias. In reality, they encoded human bias into the system and AMPLIFIED it, resulting in far more biased outcomes than before, while removing any mechanism for holding anyone accountable for the biases.
    In certain domains, AI/ML are a form of "bias laundering" whereby you preserve and expand human biases, but remove any reliable accountability mechanism.

    • @MrMysticphantom
      @MrMysticphantom 13 days ago +6

      I heard this in Dr Robert Miles voice

    • @mr.rabbit5642
      @mr.rabbit5642 12 days ago +4

      ​@@MrMysticphantomOh my god yes!

    • @Luclecool123
      @Luclecool123 11 days ago +1

      "The past is the best predictor we have for the future."

    • @morezombies9685
      @morezombies9685 4 days ago

      I mean, if you look at AlphaGo, this premise is false. The bot is known explicitly for unorthodox, inhuman approaches.

  • @Mantorp86
    @Mantorp86 13 days ago +104

    It all comes down to: “why learn math when there are calculators which do math better than you”.
    Life isn't about convenience; it's about learning.

    • @Flackon
      @Flackon 13 days ago +17

      Yeah, it seems people think that LLMs can do X in the same way there are people who think calculators can "do math", when calculators only do computations. The person using them is doing the math

    • @Kane0123
      @Kane0123 12 days ago +4

      Just the same way that we picked search engines, people are picking LLMs.
      Just like LLMs, search engines started out pure.
      The fact that they will become normal while “pure” and then stay normal after they become co-opted and altered to be financially viable / maximally profitable is the problem.
      The fact that most people don’t think critically enough to realise this is the root cause.

    • @Luis-bd9vt
      @Luis-bd9vt 6 days ago +2

      Yes, but simplifying AI to a calculator is like simplifying a human to a dog, so it's really a pointless comparison; calculators do as much math as a hammer builds a building.

  • @dezly-macauley
    @dezly-macauley 13 days ago +135

    I saw a post where some software engineers were talking about how their coding speed dropped tremendously when they tried coding without the AI code editor they had been using for months.
    Apparently they were struggling to remember simple things (and not just syntax), and in general they felt a loss of motivation to learn new things (the "accept the first AI answer" thing that TJ was talking about).
    Imagine paying a company to lobotomize you like this. Maybe I'm just old and grumpy. Maybe I'm just an ex-creative writer, turned programmer, who is pissed off at what tech (the last of 3 things I'm passionate about) is turning into.

    • @TMF149
      @TMF149 13 days ago +13

      Completely agree. I think AI just makes you too dependent on it, at least if you use it the way most people (and programmers) do.
      In the long term, if you use it for everything (or mostly everything) coding related, you might become a worse programmer than when you started and didn't have AI. Since you saw short-term improvements in coding speed, you commit to it and integrate it into your setup as if it were just another VSCode plugin.
      I also think this principle applies to way more than just programming. And of course, this benefits the companies that provide LLM-related services. Making you more or less forced to use their product is a great business model for them.

    • @pluto8404
      @pluto8404 13 days ago +6

      But if AI can take some mental burden off your brain, then that is good. Do you still do long division or just use a calculator? The whole premise of society is to stand on the shoulders of the people before you; you don't have to know everything, because you can't.

    • @SaHaRaSquad
      @SaHaRaSquad 13 days ago

      @@pluto8404 The difference is that a calculator still requires you to understand what you calculate, all the important decisions are done by you. But an LLM can (or can at least pretend to) do many things on top and that WILL make people think less about what they're doing. Some people already blindly trust Tesla's autopilot etc, literally sleeping in the driving car. You can bet they won't lose any sleep over accidents that indirectly affect others because they won't even consider it.
      The only upside I see is that people not using AI will have more job security in places that (need to) care about the result, but the negative consequences will definitely outweigh everything else. It will still be profitable for the quasi-monopolies though, so it will happen.

    • @creamOnDs
      @creamOnDs 12 days ago

      @@pluto8404 Comparing the simplicity of a repetitive task like long division to the complexity involved in writing code sounds ridiculous.
      Mental burden is necessary for writing code; if you don't want to think about making a website, for example, then use tools like WordPress.

    • @bkerzy46
      @bkerzy46 12 days ago

      @@TMF149 That can also be said of what people were doing before the AI boom. Most programmers today don't really want to learn to solve deep technical challenges, they just want to write code that "works". That's what makes languages like JS and Python so popular, along with the usage of huge frameworks that mostly do everything for us. People get too dependent on that too, and when those people are asked to do something a bit lower level they just can't, because they don't know how to solve challenges; they just know how to use the API they learned to get a job.

  • @GOTHICforLIFE1
    @GOTHICforLIFE1 13 days ago +133

    My biggest issue with AI right now is the push for it. It feels like the business world has an unhealthy obsession with AI, and as a result I suspect that we will push AI forward at the expense of moral and security concerns/issues.

    • @SaHaRaSquad
      @SaHaRaSquad 13 days ago +20

      The obsession comes from their desperation to recoup the massive costs for training the models. They *need* this to be successful.

    • @ccj2
      @ccj2 12 days ago +3

      I totally agree. I feel like if they truly believed the technology would be massively revolutionary, they could take their time to make sure they execute it to the best of their abilities.
      Instead, there's this massive pressure to get anything done as fast as possible because of an “AI Arms Race”. But it comes off to me as hype and grifting.

    • @toybeaver
      @toybeaver 12 days ago +2

      "Ok, my business is doing so good, and tech is the future! Let's give huge salary ranges to everybody bc we're that good!"
      "What? You're telling me an engineer is costing us 500k a year?? Whose idea is that?? If only we were able to cut engineers off...."
      "Ok, this new AI thingy may replace engineers. How much will it cost? Millions of dollars a year? That's awesome! We can afford it"
      "What do you mean we've lost billions of dollars??? No, we're too far in. We need to sell this anyway possible.... I have an idea"
      *CEO of says it will replace all engineers by the end of 2025*

    • @toybeaver
      @toybeaver 12 days ago

      That is, tech companies made bad decisions, want more money, had more bad ideas trying to fix the bad decisions, and now have this whole "end of engineers" fallacy in order to try to recover from spending billions of dollars.

    • @Luclecool123
      @Luclecool123 11 days ago +5

      Really feels like the good old internet bubble. It's really 1:1 what happened.

  • @steftrando
    @steftrando 12 days ago +32

    I like your style of presenting videos. It feels more like a conference talk and less like a YouTuber. Subscribed.

    • @teej_dv
      @teej_dv  12 days ago +7

      thanks! that's actually exactly the goal & vibe I'm aiming for :) even if it makes it sometimes a bit harder to get it all correct in a single take... haha

  • @paveljagos6589
    @paveljagos6589 13 days ago +61

    Quote: "LLMs don't seek truth." Exactly! Great point! Thank you!

    • @AntoninKral
      @AntoninKral 13 days ago +6

      Well, a lot of people don't seek truth either :) But 💯

    • @KaliTheCatgirl
      @KaliTheCatgirl 9 days ago +3

      "Language models deal with language, not knowledge" - paraphrased from No Boilerplate

  • @jamtart22
    @jamtart22 12 days ago +15

    The enshittification of LLMs via SEO is something I had not considered

  • @shApYT
    @shApYT 12 days ago +37

    "This is not about capitalism"
    Describes problems with capitalism in detail

    • @teej_dv
      @teej_dv  12 days ago +10

      I'd be interested to hear of your alternative system that prevents regulatory capture from occurring!

    • @sayven
      @sayven 12 days ago

      @@teej_dv Capitalism allows individual, non-democratically-legitimized organizations (e.g. corporations) to accumulate a lot of money, which is a major enabling factor for regulatory capture. And since there are systems that prevent organizations without democratic legitimacy from accumulating these amounts of money, this is in fact a problem of capitalism. Of course there is no perfect system that "prevents regulatory capture from occurring" with 100% certainty; if you're looking for that, then you aren't really looking for anything. But there are policies that could reduce the effectiveness/risk of regulatory capture, e.g. policies that:
      - Improve the control of the people over the government (e.g., allowing more than two parties (de jure or de facto), reducing barriers for referendums (including referendums with initiative from the people))
      - Democratize to some degree corporations of which the general public is a major stakeholder (e.g. democratically elected boards that have some degree of control)
      But regardless of what you think of these policies, capitalism includes certain mechanisms that facilitate regulatory capture in ways that are unique to capitalism. Pretending that the issue of regulatory capture is not linked to capitalism seems more ideologically than rationally motivated.

    • @cuzsleepisthecousinofdeath
      @cuzsleepisthecousinofdeath 12 days ago +4

      He described how gov regulations enable monopolies. Think about how patents work for a second: a government will back up the inventor in the court of law just because said inventor paid some fee to the gov some time ago. That's the essence of it.

    • @GlazedGoose
      @GlazedGoose 12 days ago +7

      right wingers always self reporting when they bring up capitalism lmfao

    • @alexeyayzin8512
      @alexeyayzin8512 12 days ago +9

      Precisely. When the state exists to protect the interest of capitalism we are in the trenches of capitalism, unfortunately.

  • @logsnroll
    @logsnroll 11 days ago +11

    Things learned from this video:
    1- VS Code is a curse word.
    2- Keep learning what you want to learn, don't worry about LLMs

  • @BitMonkeyJed
    @BitMonkeyJed 13 days ago +12

    This is a good take. The analogy to a degraded google search with “ads” is interesting.

  • @paryzfilip
    @paryzfilip 13 days ago +7

    I missed you TJ. We need more of you on YT.
    That take is so true and a bit scary. I've been coding for 7 years professionally, and around 12 years total, and I see so many new people being fascinated by LLMs without seeing the issues they bring 🙃

  • @Fishd1
    @Fishd1 13 days ago +16

    Good to hear some rational concerns aired... rather than just "ai ate my hamster" ... nice work.

  • @SantiagoCento
    @SantiagoCento 13 days ago +6

    I actually took notes from this video. It's very rewarding to see your thoughts validated by other people; I agree 100% with everything. Bias in LLMs is a huge concern.

  • @clarkio
    @clarkio 13 days ago +11

    Hey VS C**e fan here and I share your concerns. This definitely needs to be considered and discussed more. Good take 👍

    • @clarkio
      @clarkio 13 days ago +1

      Also, I've watched other videos from you which I liked, but I liked this style of video from you too.

    • @sealsharp
      @sealsharp 11 days ago

      Did you self-censor the word "code" or is that something different?

    • @clarkio
      @clarkio 10 days ago +1

      @@sealsharp I noticed TJ bleeped out his use of it in the video so I did this censoring as a joke in response

  • @baejisoozy
    @baejisoozy 13 days ago +12

    Problem solving never goes out of fashion. In the land of the blind proompt engineers (those who have outsourced their faculty of thinking) the guy with the one eye (one who actually understands the code) will be king.

  • @S1M0N38-yt
    @S1M0N38-yt 13 days ago +80

    The prompters are alchemists, the programmers are chemists.

    • @ninocraft1
      @ninocraft1 13 days ago +18

      but i want to turn shit code into gold 😂😂😂

    • @RobertFletcherOBE
      @RobertFletcherOBE 13 days ago +1

      a nugget of the purest green!
      th-cam.com/video/TkZFuKHXa7w/w-d-xo.html

    • @davixx1995
      @davixx1995 12 days ago +23

      prompters are astrologists, programmers are astronomers

    • @patricknelson
      @patricknelson 11 days ago +1

      Such a good quote.

    • @JOK120
      @JOK120 11 days ago

      Prompters are literally tech-priests praying to the machine spirit hoping it works.

  • @DCoolMan08
    @DCoolMan08 6 days ago +1

    Really love these types of videos. Keep it up!

  • @samvarcoe
    @samvarcoe 7 days ago

    Great video addressing some important and underrepresented points. I watched it via Prime's reaction so just stopping by to repay some engagement

  • @1data059
    @1data059 12 days ago +3

    I 100% agree: when something is abstracted away from your understanding, more and more companies will take advantage of it. Examples: getting your computer fixed, getting your car fixed, the plumber telling you that you need a new "special valve on your boiler".

  • @georgebeierberkeley
    @georgebeierberkeley 5 days ago

    Dude, you are on to something: the end of creativity, corruption of results by polluting data, boxing out competitors via govt manipulation, LLMs with programmed bias. All this is happening right now. Subscribed.

  •  13 days ago +2

    Loving these incursions into philosophy from you TJ.

  • @JosephLiaw
    @JosephLiaw 11 days ago +5

    It reminds me that the most accurate language to describe functionality is a programming language. Natural language has too much ambiguity.

  • @prashanthb6521
    @prashanthb6521 1 day ago

    Very valid concerns. I think it will go exactly the way you have predicted.

  • @PunmasterSTP
    @PunmasterSTP 13 days ago +6

    Accepting default answers seems like such a natural evolution in human laziness...

  • @LBCreateSpace
    @LBCreateSpace 3 days ago +1

    The beep for vs code killed me 😂

  • @lightlegion_
    @lightlegion_ 3 days ago

    You’re truly making a difference!

  • @jbellero
    @jbellero 12 days ago +1

    YES 🙌 TJ, this is what I've been talking about for the last 6 months with anyone who will listen. It's a movement towards homogeneity. Sure, we can be more specific in prompts to get more creative and dialed-in results, but ultimately the models will produce very similar results consistently, with little variation. So what does that mean for using it?
    This is every industry, by the way - marketing content, sales tools that auto chat with clients, anything we can use AI to produce. It will slowly coalesce into one big homogeneous output, with us accepting this reduced creativity.

  • @patneal_codes
    @patneal_codes 9 days ago +1

    Fantastic takes, especially the parallels between SEO & LLM relevance. I remember back in the day there was the ol' `position: absolute; left: -999999999px;` trick for hiding a bunch of content to help boost sites' search rankings. Only a matter of time before that sort of practice becomes commonplace in the LLM frontier.

  • @lanfeust06
    @lanfeust06 12 days ago +1

    What a great presentation. AI poisoning is something that isn't discussed enough. I always used the example of an attack from a government, but lots of companies have incentives (especially MS) to start creating biases toward a certain company or tech and monetize it, and I think that's the most likely direction LLMs take.
    I'm pretty certain ChatGPT believes in string theory, just because it got communicated about so much, not because it actually produced anything useful.

  • @Lorne_at_work
    @Lorne_at_work 12 days ago

    Love the vid. Lots of great points! I maintain a healthy skepticism of AI but am interested to see where it goes.

  • @ohchristusername
    @ohchristusername 13 days ago

    Really appreciate you articulating these thoughts TJ, there are tricky waters ahead to navigate with this type of technology!

  • @k2c2
    @k2c2 13 days ago +19

    My experience with LLMs is that when they work it's magical, and I'm truly impressed, but I don't think they will replace programmers anytime soon. The perspective nowadays is very often from programmers, who already know what they want to achieve, and they try to force the LLM to deliver. For me personally it's super annoying when I know what I want and I get wrong answers.

  • @ChrisGVE
    @ChrisGVE 13 days ago +3

    An excellent and balanced presentation. I've started a small project (a plugin for Neovim) with one version each for ChatGPT and Claude. So far, I'm more impressed by Claude than ChatGPT, but overall, we are not there yet, and we are at least one disruption away from AI taking programming jobs away.
    I also liked your interview of Mitchell Hashimoto with Prime, particularly the part about GPL vs. MIT licensing when using autocompletion/autosuggestions from AI.
    Thanks for sharing your thoughts!

    • @Originalimoc
      @Originalimoc 7 days ago

      Try to get to Elo 2700 first.

  • @hovat
    @hovat 13 days ago +8

    Yeah, not to comment three times in a row, but you're literally saying exactly what I've been expecting to happen right now. These LLMs are going to funnel non-engineers into all manner of services and tools that they don't understand or need. They're gonna rack up huge cloud infrastructure bills, and all manner of mistakes are going to happen outside of coding itself.

  • @jojotrujillo_o
    @jojotrujillo_o 13 days ago

    Thanks for sharing your thoughts and opinions on some things folks in the industry aren't really talking about when it comes to AI. Loved "small cabal" and "llmnop."

  • @spicybaguette7706
    @spicybaguette7706 12 days ago +3

    6:45 This risk is discussed in topics about systematic discrimination. You can see how AI (in general) can perpetuate biases. It makes predictions on the data that it already has, which is almost always biased. This is especially problematic with stuff like predictive policing.

  • @jelenv
    @jelenv 12 days ago

    Great points TJ, thanks for the enjoyable video. I have similar worries, hope the real future plays out better than that

  • @Jostradamus2033
    @Jostradamus2033 7 days ago +1

    Yeah, some people believe AI will remove the necessity for education. I believe the opposite: greater education will be needed to effectively use and evaluate the AI you are leveraging. And I believe your first point, about AI using the most likely solution based on trained data as opposed to the most effective solution, speaks to that. You don't want your boss not knowing how to do your job while telling you how to do it. Same goes for AI.

  • @Desi-qw9fc
    @Desi-qw9fc 11 hours ago

    For me, the biggest danger of AI is that it gets used to implement policy for which no one can be held responsible, because "the computer did it". We have seen this for years already; one example is Australia's 'robodebt' scheme, where a machine learning model told people that they needed to pay debts that didn't exist over money they didn't get, leading to at least 3 suicides. No one has gone to jail for making such a system.

  • @Atmos41
    @Atmos41 10 days ago

    I need this nvim present plugin though. Thank you for all of those awesome nvim ecosystem contributions!

  • @MattRockwell-f2m
    @MattRockwell-f2m 12 days ago

    TJ. You are spot on. Pragmatic way of thinking.

  • @arcrad
    @arcrad 12 days ago

    These are really good points. Very insightful!

  • @NoFkingUsername
    @NoFkingUsername 12 days ago +1

    Great video! Very interesting presentation.
    One more concern of mine: LLMs are black boxes so it'll be easy to just reject any accountability when the results are biased or bad.

  • @nikfp
    @nikfp 12 days ago

    I'm glad you are discussing LLMs perpetuating and amplifying bias, because I've been on this soapbox for years and I felt like the only one. Any system that creates a feedback loop into itself runs a high risk of reaching a homogeneous output, either by reaching stable equilibrium so the output remains the same, or by reaching runaway amplification and again the output remains the same. LLMs are far too complex for humans to understand - let alone control - the scope and tenor of the inputs. So the result is people using biased systems to write the next generation of training data that will create even more biased systems.
    You're talking about LLMs allowing React to crowd out other contenders, and cloud services gaining more of a monopoly. What I'm wondering is what happens when too many people start asking LLMs about political, moral, and ethical best practices and then feeding that back into the machine, and that bias starts to amplify. It quickly becomes "whatever was said the most wins above literally all else". If we aren't careful, it can erode a lot of our breadth of thought in roughly a generation.
    My view is this: Don't use LLMs as an excuse to stop thinking and challenging the status quo. It's a very sophisticated parrot. It can't create, have intuitive insights, etc. That's up to us, and we ignore that responsibility at our own peril.

  • @____2080_____
    @____2080_____ 11 days ago +1

    I can corroborate the conclusions being shared.
    Even when you give your LLMs specific instructions to use a particular technology stack, package, or other things, including giving them the precise documentation and examples, the large language model will default and start to hallucinate, producing incompatible code based on your standard Python libraries like NumPy and pandas.
    Don't let any of these things have version incompatibilities.
    On the bright side, going through these errors will get you to become an expert in actual coding really quickly, if you want to fix the problem.

  • @DaniloFavato
    @DaniloFavato 12 days ago

    Great point! Surely, with the current losses the LLM hosts are facing, there will be increasing pressure for this kind of sponsored suggestion, and the enshittification then starts.

  • @npc-drew
    @npc-drew 10 days ago

    Great content TJ, I already feel like I know you from Prime's videos.

  • @SydneyApplebaum
    @SydneyApplebaum 7 days ago

    You've got Casey's shtick down pretty good. Great points

  • @Skydleroo
    @Skydleroo 13 days ago

    I really like this content, great video!

  • @liquid2499
    @liquid2499 10 days ago

    You are spot on, and this is basically what the people in AI safety and ethics like Gebru et al. have been saying, and instead of listening, people just screamed that they were “woke” and thoroughly pushed them out of the conversation.

  • @tgriff007
    @tgriff007 12 days ago

    Before LLMs, there were books. And in many cases, everyone was learning from the same one. And then there were search engines. And social media algorithms.
    LLMs are an evolution of this, but also an antidote to it. It doesn't take long to figure out that just asking an LLM to write your code is a bad idea.
    But where I find LLMs really useful is in those moments where I think there's got to be a better way and wonder what if I tried this tool or that data structure. Being able to ask ChatGPT how I could approach an idea and getting back many options to try is a game changer.

  • @dziobusz
    @dziobusz 7 days ago

    Saying that AI systems are just predictors is a pretty good description. That is the technology at its heart even if the results can be absolutely fantastic.

    • @georgebeierberkeley
      @georgebeierberkeley 5 days ago

      Yes…but. Isn’t that what we humans are?

    • @dziobusz
      @dziobusz 5 days ago

      @@georgebeierberkeley I'd say so. At least whenever I've tried to reason about how humans think, my naive impressions are that we are just impressive pattern matching machines. The pattern matching is the linking of thought that eventually leads to an outcome and this seems to be pretty much at the heart of how neural networks and AI work. I of course don't know what I'm talking about :), but those are the impressions I have.

  • @joshuagermon2169
    @joshuagermon2169 11 days ago +3

    Top tier stuff - goes far beyond the classic doomer "AI taking all CS jobs"

  • @bigbabyjack
    @bigbabyjack 11 days ago

    Really interesting take. Thanks for sharing!

  • @kevin.malone
    @kevin.malone 7 days ago

    The easiest way to put this into a few words is "everything will be derivative".
    Not that nothing can be novel, but the novel things are just going to be a repackaging of existing thought in a way that is relevant to the context of a given task. Instead of using the existing information as building blocks to reach the next step forward in advancement, it stays at the same level, shifting things around.
    That results in finite advancement. You can advance until you have tried every single implementation of existing information. But you can't develop new information. But that's not going to be the case with models like O1. So far, it seems possible that test-time-compute methods may create new information.

  • @istyyyyyy
    @istyyyyyy 11 days ago

    If you know you know, excellent. 👏👏👏

  • @a-plans
    @a-plans 4 days ago

    The LLMs promoting products seems like the natural conclusion to me too. Especially since we're already seeing that with the inclusion of AI summaries in search, sites like Wikipedia see traffic down 20%+, and this trend will likely follow for other websites; the next logical step is to pay Google/Bing to promote your content in the summary.

  • @TylerHillery
    @TylerHillery 13 days ago +1

    I share very similar thoughts to yours. I too think AI will be completely disruptive in the industry, but I fear people will now mistakenly think they don't have to understand the code LLMs generate for them.
    Very similar to the "Copy Paste" dev we have always had but it will be amplified 100x.
    The easiest way to stand out from the crowd is to be the person who truly understands. You can still use LLMs to help in your understanding but you have to approach it with that mindset.
    The people who are going to be the most effective at using AI code assistant tools are the ones who still have a fundamental understanding of how the code that gets generated works.
    I am worried many people are seeing notable figures in tech state how much they love using these tools without realizing that they've had years and years of experience programming before LLMs were a thing. They have developed a level of intuition they can use when evaluating the output of LLMs.
    You are never going to build this intuition yourself if you blindly tab away.

  • @daltonyon
    @daltonyon 13 days ago +1

    It's very unusual to see TJ not making jokes about something; I feel strange watching and thinking: "This is TJ?"
    Great points, I think the same!!

  • @Qqquuaa
    @Qqquuaa 11 days ago

    Great take tj

  • @hivemind9643
    @hivemind9643 7 days ago

    I love your slide deck

  • @user-eg6nq7qt8c
    @user-eg6nq7qt8c 13 days ago

    Lots of great insights!

  • @rddavies
    @rddavies 13 days ago

    Agree 1000%. A lot of wishful and magical thinking going on amongst our "thinking class".

  • @ivanmaglica264
    @ivanmaglica264 6 days ago

    There was a Star Trek episode, where people were artists and all their needs were served by AI. Problem was, everybody forgot how it worked and it needed fixing. Forgot the name of the episode...

  • @aaronshahriari2171
    @aaronshahriari2171 13 days ago

    Teej with the elite takes!!!

  • @HoboGardenerBen
    @HoboGardenerBen 7 days ago

    I'm kinda excited about all this craziness, it might be enough to scare me into a mostly offline life. I'm feeling super creeped out by the level of invasion we have already. I'm still addicted to this website and the convenience of texting and online maps and portable gaming, but when I compare all that to what I lost it doesn't seem like a good trade. Everyone used to hang out, no one hangs out anymore. It's a special thing now, something scheduled. Some of that is just getting older and busier, but a lot is everyone spending all their free time online. I barely even read books anymore. I used to read all the time. Ebooks suck; they subject you to the rest of the distractions in a phone. A physical book feels restful, only words making a story in my inner world, no notifications. I want to de-tech, bring back boredom and stillness to my life. And yet here I am barfing my words into null-space :(

  • @alexrook5604
    @alexrook5604 12 days ago

    I think OS and competing commercial models will offset some of these concerns, but it's also good to highlight these concerns.

  • @sahebjotsingh6306
    @sahebjotsingh6306 12 days ago

    Great take, totally agree

  • @ComputationalAlchemist
    @ComputationalAlchemist 7 days ago

    Open Source seems like a nice hedge against some of this in the long run, at least so long as we keep finding ways to make training/running LLMs efficient enough to be used on end-user hardware.

  • @pooyaestakhry
    @pooyaestakhry 11 days ago

    I was thinking about this a few hours ago. I guess we would still have people who like to explore things, and also LLMs can now learn with fewer examples because it's a fine-tuning task for them rather than training from scratch.

  • @MrMysticphantom
    @MrMysticphantom 13 days ago

    You know, one of the things I've found handy in a weird way is to tell the LLM in detail what I want it to do architecturally and flow-wise... then modify it function by function, piece by piece... could I have coded it myself? Sure... but it actually lets me pilot more rather than drive, and think more about how I want things.... and then, even more important, document it throughout.... LLMs are so much better at that.... but again you're right, this entire thing depends on me having a certain level of expertise/internalization of best practices, pitfalls and tradeoffs etc

  • @lucasteo5015
    @lucasteo5015 10 days ago +1

    I still don't use any AI completion; I find it annoying to have a big block of lego code just magically pop in instead of me using the lego myself. I personally use it for asking about library stuff most of the time, to quickly get a rough idea of what APIs and features a tool provides, so just a good old documentation-reading use case; at least this is what I find it most useful for at work, and maybe for non-work things like electronics shopping, random fact checking, etc. Personally, privacy isn't a concern for me. If being a better tool requires exploiting somebody else or being exploited? I'll still use it as long as it is a good tool, and I have other, bigger concerns in life. I'm not gonna worry about it replacing people's jobs or wiping out humanity; those are not entirely out of reach, but not the kind of problem I can fix, thus I don't care.

  • @neniugrava
    @neniugrava 13 days ago +1

    I didn't think about the problems of enshittified AI...
    I think this is just another point in favor of my belief that vertical integration is inherently anti-competitive in the steady state, and so should be considered an anti-competitive practice legally. The current doctrine that it's fine as long as it's better for the consumer *right now* is not cutting it.
    The same thing is also true for loss-leading IMHO, and that is *definitely* something these companies are doing with AI tools. It's truly anti-competitive to offer any product at a loss, and we really ought to treat it as such. It makes it impossible to compete with large companies with deep pockets, even if you had a golden goose of a product.
    Of course I have no idea how we'd enforce any of this, but it would be a good start to force companies to break up into very focused domains when there is a clear bias, or even a potential for bias.
    All AI companies should be financially motivated to produce products with the best most accurate results for the consumer, not for some parent company or significant donor/shareholder.

  • @DomainObject
    @DomainObject 7 days ago

    That Typescript threat hit hard. Damn.
    We gotta do something fast.

  • @jamesc2810
    @jamesc2810 12 days ago

    It's helped me learn Rust, and helped me when I got stuck.

  • @zeal514
    @zeal514 12 days ago

    I am a firm believer that LLM devs need to have an overlap in proficiency between dev and psychology. All of the issues you've described are very human-like issues, and we are trying to get LLMs to respond in a human-like way. Unfortunately, humans aren't all good lol, we have bad traits, and we have traits that simply are and can go bad or good.
    For instance, we want LLMs to be able to see 100 buses, so that any other bus picture is then detectable. Well, every detection is going to be a hallucination; it's just that the hallucination is right. Big issue. In humans we call this the problem of perception, and the solution to it is that we assume. The problem is we can't know everything, it would be literally impossible. So we learn enough, and make guesses and assumptions about things. When these assumptions are correct, we can barely even tell they are assumptions, we just assume it's true (in other words, we believe our beliefs). When it's wrong, well, we can cling foolishly to our beliefs, or we can update them... So LLMs won't be able to eliminate hallucination in that sense, but it's worse. Humans have 5 senses, and we live in the objective world. LLMs have 1 sense, data/code/electricity, and they are digital. If an LLM assumes incorrectly that something is safe, well, there is no immediate feedback loop. If humans decide touching a hot stove is safe, we know instantly. And this problem may seem simple, but there are millions of tiny decisions made every day by both LLMs and humans, and they ripple, and we don't know the full consequences. It was less than a century ago that we thought asbestos was a miraculous building material.
    That said, it's here, and the only way through history is forward. I'm not a doomer, and I use LLMs daily myself. It's just really dangerous. Hell, look at the damage social media and the like button caused (like a 140% increase in teen girl depression, suicide and anxiety). It's something I don't think we approach with nearly enough caution. But perhaps that's for the best; if anyone gave it the caution it deserved, we might not even touch the tech at all.

  • @desuburinga
    @desuburinga 12 days ago

    That's the most original, based AI take I've ever heard anyone give about AI. In Teej we trust!

  • @milo3733
    @milo3733 11 days ago

    Wow, these are really important concerns I haven't seen people bring up before. I hope more see this

  • @josevargas686
    @josevargas686 11 days ago

    Need more Teej thoughts in my yt feed

  • @SanjayB-vy4gx
    @SanjayB-vy4gx 13 days ago +3

    Bro beeped vscode🤣

    • @teej_dv
      @teej_dv  13 days ago +1

      glad you appreciated that :)

    • @toyflish
      @toyflish 13 days ago

      recovered my brain from we-are-doomed-state into fun-state

    • @neniugrava
      @neniugrava 13 days ago

      Watch your language, man, there's junior engineers here 😂. You can't just mention ****de without censoring.

  • @playultimate
    @playultimate 11 days ago

    Best YouTuber out there

  • @amanharwara
    @amanharwara 11 days ago

    As much as I agree with the concerns you pointed out, it's probably good that AI suggests TypeScript over Ruby/Rails :P

  • @ranjithkumar-xt2zw
    @ranjithkumar-xt2zw 13 days ago

    I thought teej was gone for another 6 minutes, but he is back

  • @aaronabuusama
    @aaronabuusama 12 days ago

    3:42 When you build enough of these systems the answer is obvious: the libraries will be written with AI in mind. Real engineering with AI today consists of massaging the right prompt and context to get the best results. Codebases and libraries simply evolve into tools LLMs can use.

  • @jimhrelb2135
    @jimhrelb2135 11 days ago

    LMNOP is my favorite phrase in the most popular song

  • @256k_
    @256k_ 12 days ago

    TJ,
    I might not be the biggest fan of your content, but I respect the work you've done and the effort you've put into educating people.
    I wanted to say, as an AI doomer (as you named it), that this was a very well presented and nuanced presentation that underlines a looot of the issues I have with the current AI bubble right now. Thank you for that. Thank you for sharing this with your followers.

  • @darkreaper4990
    @darkreaper4990 12 days ago

    Great video

  • @zehph
    @zehph 13 days ago +1

    Gotta admit, I really didn't consider this inevitable commercial bias once these tools become the standard, as Google became in the past.
    Research should go into developing open source powerful alternatives to keep these balances in check, establishing a community managed system with transparent reports that help everyone understand how these tools get results.
    This level of power being guided solely by the interest of capital gain looks like a recipe for even more loss of agency over our lives.

  • @cariyaputta
    @cariyaputta 13 days ago

    I use Aider and APIs extensively (all free and unlimited, shout out to Gemini Experiment and Codestral) and I have to say that at the current state, AI coding is like guiding a very dumb junior developer: you have to extensively review their code and stop them midway if they start hallucinating on a long-context codebase (100k+ tokens). But they're pretty good as a sidekick for scaffolding and dirty work. The system prompt and prompting technique make all the difference tbh. With new technologies you sometimes have to give them the entire docs or at least the cheatsheet (in the case of Raylib, a 6-page cheatsheet is nothing) and they'll do somewhat fine. You still have to do all the heavy lifting tho, but that's what's fun about coding using AIs.

  • @ahmadaccino
    @ahmadaccino 13 days ago

    great vid dawg

    • @teej_dv
      @teej_dv  12 days ago +1

      thanks my man (send more funny edits, i will play them next time LOL)

  • @sewera.account
    @sewera.account 12 days ago

    Those LLMs will definitely push bias aligned with the issuing company's business goals. That's why every big company wants to create its own AI: they want to control the default narrative.
    They can also legitimize the output by citing sources that are also generated, which themselves cite generated sources, so fact-checking would require a depth of analysis no one can achieve.
    Your worries are correct but were presented too narrowly. All those things you mentioned, but applied to information in general: choosing your framework, reading vacuum cleaner reviews, finding out which movie to watch, ending with reading some research on public vs private healthcare, for instance.
    You can clearly see how companies would want to control the default narrative.

  • @hovat
    @hovat 13 days ago +1

    There’s gonna be so many people that start businesses with the outputs from LLMs and wind up in huge debt from networking fees and other shit

  • @JoseBrowne
    @JoseBrowne 13 days ago

    I'm glad there's at least some real competition in this space or this would be an even bigger concern.

  • @miguelperezpal
    @miguelperezpal 12 days ago

    really good point!

  • @LHCB6
    @LHCB6 12 days ago

    Great points

  • @robonator2945
    @robonator2945 10 days ago

    In the business we call this natural selection.

  • @vectorhacker-r2
    @vectorhacker-r2 12 days ago

    11:28 I am not worried about this part, because more and more hardware is capable of training small models and the knowledge to do so is already everywhere.

    • @redbrick808
      @redbrick808 12 days ago

      Right, but it’s not about hardware or knowledge, it’s about billionaires being able to convince congress (in the US) that these “unregulated” LLMs must be made illegal for safety reasons. I personally don’t think that’s a stretch at all.

    • @vectorhacker-r2
      @vectorhacker-r2 12 days ago

      @@redbrick808 Even if they manage to convince them, which I don't think they can given how tech-illiterate Congress is, one could argue something about the First Amendment or some such, and second, you really can't stop people from using these models privately anyway. They'll just download them all before they get banned, share them in secret, and run them on existing hardware. It's a really dumb take to think that they're even capable of banning them, because that would require controls that are unfeasible.