3:42 When you build enough of these systems the answer is obvious: the libraries will be written with AI in mind. Real engineering with AI today consists of massaging the right prompt and context to get the best results. Codebases and libraries simply evolve into tools LLMs can use
TJ, I might not be the biggest fan of your content, but I respect the work you've done and the effort you've put into educating people. I wanted to say, as an AI doomer (as you named it), that this was a very well presented and nuanced presentation that underlines a lot of the issues I have with the current AI bubble right now. Thank you for that, and thank you for sharing this with your followers.
Gotta admit, I really didn't consider this inevitable commercial bias once these tools become the standard, as Google became in the past. Research should go into developing powerful open source alternatives to keep these balances in check, establishing a community-managed system with transparent reports that help everyone understand how these tools get results. This level of power being guided solely by the interest of capital gain looks like a recipe for even more loss of agency over our lives.
I use Aider and APIs extensively (all free and unlimited, shout out to Gemini Experiment and Codestral) and I have to say, at the current state, AI coding is like guiding a very dumb junior developer: you have to extensively review their code and stop them midway if they start hallucinating on a long-context codebase (100k+ tokens). But they're pretty good as a sidekick for scaffolding and dirty work. The system prompt and prompting technique make all the difference tbh. With new technologies you sometimes have to give them the entire docs or at least the cheatsheet (in the case of Raylib, a 6-page cheatsheet is nothing) and they'll do somewhat fine. You still have to do all the heavy lifting though, but that's what's fun about coding using AIs.
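For anyone curious, a minimal sketch of the "give them the docs" technique described above - plain Python string assembly, no particular API or tool assumed; the file name and the final send step are hypothetical placeholders:

```python
from pathlib import Path

def build_prompt(task: str, doc_paths: list[str]) -> str:
    """Prepend relevant documentation to the task so the model
    grounds its answer in the versions you actually use."""
    docs = "\n\n".join(Path(p).read_text() for p in doc_paths)
    return (
        "You are a coding assistant. Use ONLY the APIs shown in the "
        "documentation below; if something is not documented, say so.\n\n"
        f"--- DOCUMENTATION ---\n{docs}\n--- END DOCUMENTATION ---\n\n"
        f"Task: {task}"
    )

# Hypothetical usage: raylib_cheatsheet.txt is whatever docs you exported.
prompt = build_prompt(
    "Draw a bouncing ball with Raylib",
    ["raylib_cheatsheet.txt"],
)
# Send `prompt` to whichever model/CLI you use (Aider, an HTTP API, etc.)
```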
Those LLMs will definitely push bias aligned with the issuing company's business goals. That's why every big company wants to create its own AI; they want to control the default narrative. They can also legitimize its output by citing sources that are also generated, which themselves cite generated sources, so fact-checking would require a depth of analysis no one can achieve. Your worries are correct but were presented too narrowly. All those things you mentioned, but applied to information in general: choosing your framework, reading vacuum cleaner reviews, finding out which movie to watch, all the way to reading research on public vs. private healthcare, for instance. You can clearly see how companies would want to control the default narrative.
11:28 I am not worried about this part, because more and more hardware is capable of training small models and the knowledge to do so is already everywhere.
Right, but it’s not about hardware or knowledge, it’s about billionaires being able to convince congress (in the US) that these “unregulated” LLMs must be made illegal for safety reasons. I personally don’t think that’s a stretch at all.
@@redbrick808 Even if they manage to convince them, which I don't think they can given how tech-illiterate Congress is, one could argue something about the First Amendment or some such, and second, you really can't stop people from using these models privately anyway. They'll just download them all before they get banned, share them in secret, and run them on existing hardware. It's a really dumb take to think they're even capable of banning them, because that would require controls that are unfeasible.
Thanks for all the nice comments :)
People are asking how I made the presentation, it's using my plugin:
github.com/tjdevries/present.nvim
Also, if you want to support the channel, check out boot dev where I have one course done and am working on one for lua :)
boot.dev/teej (promo code TEEJ for 25% off)
and yes, i heart my own pinned comments
@@teej_dv i approve this message
I didn’t see anyone asking, but now that you mention it - how do I install in VsCode? 😘
Son: Papa, tell me, how was the Great Wall of China built???
Father: For free, son
Based takes.
😂 VS***d*
"AI will enslave us all!"
"Meh"
"AI will make Typescript the only programming language!"
"Oh no!"
😂💀 literally
time to panic
No brother, man here did a real Marxist analysis of the tech sector and still thinks it's not capitalism, this stuff is incredible honestly 😂😂
You just be thankful it's not just plain JS or Python.
@@クールなビデオ
Describe capitalism
Call it communism
We're all gonna be slaves
The main risk in my opinion is that people trust AI as if it was intelligent.
This. However as well to some degree.
Well it's not like it's completely dumb either, which some people make it out to be. "Oh it's JUST predicting the next token, not a big deal"
as if it _were_
@@nocodenoblunder6672 But it IS literally just doing that.
LLM's give you back the modified versions of the text that they've been trained on. What you're getting back is mostly the "average response" based on the statistical probabilities in the training data. They're extremely good at memorization.
They do some limited reasoning, but it's very difficult to figure out whether the model is reasoning or whether it's just using memorized knowledge in the moment. On top of that, they're virtually unable to say "I don't know", and will just confidently make stuff up.
Pair that with people thinking "the machine must know better", and it's a recipe for a disaster.
Lol I know. But the average American IQ is 98. There's just not much we can do about it.
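For readers wondering what "just predicting the next token" concretely means, here is a deliberately tiny illustration - a toy bigram model, nothing like a real transformer - that always emits the statistically most common continuation from its training text, which is exactly the "average response" behavior described above:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict[str, Counter]:
    """Count which word most often follows each word."""
    counts: dict[str, Counter] = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts: dict[str, Counter], start: str, n: int = 8) -> str:
    """Greedily emit the most probable next word - the 'average response'."""
    out = [start]
    for _ in range(n):
        nxt = counts.get(out[-1])
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])  # always the statistical mode
    return " ".join(out)

corpus = "the model predicts the next token and the next token wins"
model = train_bigrams(corpus)
print(generate(model, "the"))  # collapses onto the most frequent pattern
```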
One quote I read in "The Alignment Problem" really stuck with me:
"AI/ML is a fantastic tool, provided the future looks like the past."
If your goal is to change the status quo of some sector, tools built with ML are the absolute last thing you should reach for.
The context in the book was a jurisdiction that used an ML system to determine whether to detain someone pre-trial, with the goal of removing human bias. In reality, they encoded human bias into the system and AMPLIFIED it, resulting in far more biased outcomes than before, while removing any mechanism for holding anyone accountable for the biases.
In certain domains, AI/ML are a form of "bias laundering" whereby you preserve and expand human biases, but remove any reliable accountability mechanism.
I heard this in Dr Robert Miles voice
@@MrMysticphantomOh my god yes!
"The past is the best predictor we have for the future."
I mean, if you look at AlphaGo, this premise is false. The bot is known explicitly for unorthodox, inhuman approaches.
It all comes down to: “why learn math when there are calculators which do math better than you”.
Life isn’t about convenience but it’s about learning.
Yeah, it seems people think that LLMs can do X in the same way there are people who think calculators can "do math", when calculators only do computations. The person using them is doing the math
Just the same way that we picked search engines, people are picking LLMs.
Just like LLMs, search engines started out pure.
The fact that they will become normal while "pure" and then stay normal after they become co-opted and altered to be financially viable / maximally profitable is the problem.
The fact that most people don't think critically enough to realise this is the root cause.
yes but simplifying AI to a calculator is like simplifying a human to a dog, so it's really a pointless comparison; calculators do as much math as a hammer builds a building
I saw a post where some software engineers were talking about how their coding speed dropped tremendously when they tried coding without the AI code editor they had been using for months.
Apparently they were struggling to remember simple things (and not just syntax), and in general they felt a loss of motivation to learn new things (the "accept the first AI answer" thing that TJ was talking about).
Imagine paying a company to lobotomize you like this. Maybe I'm just old and grumpy. Maybe I'm just an ex-creative writer turned programmer who is pissed off at what tech (the last of 3 things I'm passionate about) is turning into.
Completely agree. I think AI just makes you too dependent on it, or at least if you use it the way most people (and programmers) do.
In the long term, if you use it for everything (or mostly everything) coding related, you might become a worse programmer than when you started and didn't have AI. Since you saw short-term improvements in coding speed, you commit to it and integrate it into your setup as if it were just another VSCode plugin.
I also think this principle applies to way more than just programming. And of course, this benefits the companies that provide LLM-related services. Making you more or less forced to use their product is a great business model for them.
but if AI can take some mental burden off your brain then that is good. Do you still do long division, or just use a calculator? The whole premise of society is to stand on the shoulders of the people before you; you don't have to know everything, because you can't.
@@pluto8404 The difference is that a calculator still requires you to understand what you calculate; all the important decisions are made by you. But an LLM can (or can at least pretend to) do many things on top, and that WILL make people think less about what they're doing. Some people already blindly trust Tesla's autopilot etc., literally sleeping in the car while it drives. You can bet they won't lose any sleep over accidents that indirectly affect others, because they won't even consider it.
The only upside I see is that people not using AI will have more job security in places that (need to) care about the result, but the negative consequences will definitely outweigh everything else. It will still be profitable for the quasi-monopolies though, so it will happen.
@@pluto8404 Comparing the simplicity of a repetitive task like long division with the complexity involved in writing code sounds ridiculous.
Mental burden is necessary for writing code; if you don't want to think about making a website, for example, then use tools like WordPress.
@@TMF149 That can also be said about what people were doing before the AI boom. Most programmers today don't really want to learn to solve deep technical challenges; they just want to write code that "works". That's what makes languages like JS and Python so popular, along with the usage of huge frameworks that mostly do everything for us. People get too dependent on that too, and when those people are put to do something a bit lower level they just can't, because they don't know how to solve challenges; they only know how to use the API they learned to get a job.
My biggest issue with AI right now is the push for it. It feels like the business world has an unhealthy obsession with AI, and as a result I suspect that we will push AI forward at the expense of moral and security concerns/issues.
The obsession comes from their desperation to recoup the massive costs for training the models. They *need* this to be successful.
I totally agree. I feel like if they truly believed the technology would be massively revolutionary, they could take their time to make sure they execute it at the best of their abilities.
Instead, it looks like this massive pressure to get anything done as fast as possible because of an “AI Arms Race”. But it comes off to me as hype and grifting.
"Ok, my business is doing so good, and tech is the future! Let's give huge salary ranges to everybody bc we're that good!"
"What? You're telling me an engineer is costing us 500k a year?? Whose idea is that?? If only we were able to cut engineers off...."
"Ok, this new AI thingy may replace engineers. How much will it cost? Millions of dollars a year? That's awesome! We can afford it"
"What do you mean we've lost billions of dollars??? No, we're too far in. We need to sell this anyway possible.... I have an idea"
*CEO of says it will replace all engineers by the end of 2025*
That is, tech companies made bad decisions, want more money, had more bad ideas trying to fix the bad decisions, and now have this whole "end of engineers" fallacy in order to try to recover from spending billions of dollars
Really feels like the good old internet bubble. It's really 1:1 what happened.
I like your style of presenting videos. It feels more like a conference talk and less like a TH-camr. Subscribed.
thanks! that's actually exactly the goal & vibe I'm aiming for :) even if it makes it sometimes a bit harder to get it all correct in a single take... haha
Quote: "LLMs don't seek truth." Exactly! Great point! Thank you!
Well, a lot of people don't seek truth as well :) But 💯
"Language models deal with language, not knowledge" - paraphrased from No Boilerplate
The enshittification of LLMs via SEO is something I had not considered
"This is not about capitalism"
Describes problems with capitalism in detail
I'd be interested to hear of your alternative system that prevents regulatory capture from occuring!
@@teej_dv Capitalism allows individual, non-democratically-legitimized organizations (e.g. corporations) to accumulate a lot of money, which is a major enabling factor for regulatory capture. And since there are systems that prevent organizations without democratic legitimacy from accumulating these amounts of money, this is in fact a problem of capitalism. Of course there is no perfect system that "prevents regulatory capture from occurring" with 100% certainty; if you're looking for that, then you aren't really looking for anything. But there are policies that could reduce the effectiveness/risk of regulatory capture, e.g. policies that:
- Improve the control of the people over the government (e.g., allowing more than two parties (de jure or de facto), reducing barriers for referendums (including referendums initiated by the people))
- Democratize to some degree corporations of which the general public is a major stakeholder (e.g. democratically elected boards that have some degree of control)
But regardless of what you think of these policies, capitalism includes certain mechanisms that facilitate regulatory capture in ways that are unique to capitalism. Pretending that the issue of regulatory capture is not linked to capitalism seems more ideologically than rationally motivated.
He described how gov regulations enable monopolies. Think about how patents work for a second: a government will back up the inventor in the court of law just because said inventor paid some fee to the gov some time ago. That's the essence of it.
right wingers always self reporting when they bring up capitalism lmfao
Precisely. When the state exists to protect the interest of capitalism we are in the trenches of capitalism, unfortunately.
Things learned from this video:
1- VS Code is a curse word.
2- Keep learning what you want to learn, don't worry about LLMs
This is a good take. The analogy to a degraded google search with “ads” is interesting.
I missed you TJ. We need more of you on YT.
That take is so true and a bit scary. I've been coding for 7 years professionally, and around 12 years total, and I see so many new people being fascinated by LLM's without seeing the issues they bring 🙃
Good to hear some rational concerns aired... rather than just "ai ate my hamster" ... nice work.
I actually took notes from this video. It's very rewarding to see your thoughts validated by other people; I agree 100% with everything. Bias in LLMs is a huge concern.
Hey VS C**e fan here and I share your concerns. This definitely needs to be considered and discussed more. Good take 👍
Also, I've watched other videos from you which I liked, and I liked this style of video from you too
Did you self-censor the word "code" or is that something different?
@@sealsharp I noticed TJ bleeped out his use of it in the video so I did this censoring as a joke in response
Problem solving never goes out of fashion. In the land of the blind proompt engineers (those who have outsourced their faculty of thinking) the guy with the one eye (one who actually understands the code) will be king.
The prompters are alchemists, the programmers are chemists.
but i want to turn shit code into gold 😂😂😂
a nugget of the purest green!
th-cam.com/video/TkZFuKHXa7w/w-d-xo.html
prompters are astrologists, programmers are astronomers
Such a good quote.
Prompters are literally tech-priests praying to the machine spirit hoping it works.
Really love these types of videos. Keep it up!
Great video addressing some important and underrepresented points. I watched it via Prime's reaction so just stopping by to repay some engagement
I 100% agree, when something is abstracted away from your understanding, more and more companies will take advantage of it. Examples: getting your computer fixed, getting your car fixed, the plumber telling you that you need a new "special valve on your boiler".
Dude, you are on to something - the end of creativity, corruption of results by polluting data, boxing out competitors via govt manipulation, LLM models with programmed bias. All this is happening right now. Subscribed.
Loving these incursions into philosophy from you TJ.
It reminds me that the most accurate language to describe functionality is a programming language. Natural language has too much ambiguity.
Very valid concerns. I think it will go this way only as you have predicted.
Accepting default answers seems like such a natural evolution in human laziness...
The beep for vs code killed me 😂
You’re truly making a difference!
YES 🙌 TJ this is what I've been talking about for the last 6 months with anyone who will listen. It's a movement towards homogeneity. Sure, we can be more specific in prompts to get more creative and dialed-in results, but ultimately the models will produce very similar results consistently, with little variation. So what does that mean for using it?
This is every industry, by the way - marketing content, sales tools that auto-chat with clients, anything we can use AI to produce. It will slowly coalesce into one big homogeneous output, with us accepting this reduced creativity.
Fantastic takes, especially the parallels between SEO & LLM relevance. I remember back in the day there was the ol' `position: absolute; left: -999999999px;` for a bunch of content to help boost sites' search rankings. Only a matter of time before that sort of practice becomes commonplace in the LLM frontier
What a great presentation. AI poisoning is something that isn't discussed enough. I always used the example of an attack from a government, but lots of companies have incentives (especially MS) to start creating biases toward a certain company or tech and monetize it, and I think that's the most likely direction LLMs take.
I'm pretty certain ChatGPT believes in string theory, just because it got talked about so much, not because it actually produced anything useful.
Love the vid. Lots of great points! I maintain a healthy skepticism of AI but am interested to see where it goes.
Really appreciate you articulating these thoughts TJ, there are tricky waters ahead to navigate with this type of technology!
My experience with LLMs is that when they work it's magical, and I'm truly impressed, but I don't think they will replace programmers anytime soon. The perspective nowadays is very often from programmers, who already know what they want to achieve, and they try to force the LLM to deliver. For me personally it's super annoying when I know what I want and I get wrong answers
An excellent and balanced presentation. I've started a small project (a plugin for Neovim) with a version each for ChatGPT and Claude. So far, I'm more impressed by Claude than ChatGPT, but overall, we are not there yet, and we are at least one disruption away from AI taking programming jobs away.
I also liked your interview of Mitchell Hashimoto with Prime, particularly the part about GPL vs. MIT licensing when using autocompletion/autosuggestions from AI.
Thanks for sharing your thoughts!
try to get to Elo 2700 first.
Yeah not to comment three times in a row but you’re literally saying exactly what I’ve been expecting to happen right now. These LLMs are going to funnel non engineers into all manner of services and tools that they don’t understand or need. They’re gonna rack up huge cloud infrastructure bills and all manner of mistakes are going to happen outside of coding itself
Thanks for sharing your thoughts and opinions on some things folks in the industry aren't really talking about when it comes to AI. Loved "small cabal" and "llmnop."
6:45 This risk is discussed in topics about systematic discrimination. You can see how AI (in general) can perpetuate biases. It makes predictions on the data that it already has, which is almost always biased. This is especially problematic with stuff like predictive policing
Great points TJ, thanks for the enjoyable video. I have similar worries, hope the real future plays out better than that
Yeah, some people believe AI will remove the necessity for education. I believe the opposite: greater education will be needed to effectively use and evaluate the AI you are leveraging. And I believe your first point, about AI using the most likely solution based on trained data as opposed to the most effective solution, speaks to that. You don't want a boss who doesn't know how to do your job telling you how to do it. Same goes for AI.
For me, the biggest danger of AI is that it gets used to implement policy that no one can be held responsible, because “the computer did it”. We have seen this for years already; one example is Australia’s ‘robodebt’ scheme where a machine learning model told people that they needed to pay debts that didn’t exist over money they didn’t get, leading to at least 3 suicides. No one has gone to jail for making such a system.
I need this nvim present plugin though. Thank you for all of those awesome nvim ecosystem contributions!
TJ. You are spot on. Pragmatic way of thinking.
These are really good points. Very insightful!
Great video! Very interesting presentation.
One more concern of mine: LLMs are black boxes so it'll be easy to just reject any accountability when the results are biased or bad.
I'm glad you are discussing LLMs perpetuating and amplifying bias, because I've been on this soapbox for years and I felt like the only one. Any system that creates a feedback loop into itself runs a high risk of reaching a homogeneous output, either by reaching stable equilibrium so the output remains the same, or by reaching runaway amplification and again the output remains the same. LLMs are far too complex for humans to understand - let alone control - the scope and tenor of the inputs. So the result is people using biased systems to write the next generation of training data that will create even more biased systems.
You're talking about LLMs allowing React to crowd out other contenders, and cloud services gaining a higher monopoly. What I'm wondering is what happens when too many people start asking LLMs about political, moral, and ethical best practices, then feeding that back into the machine, and that bias starts to amplify. It quickly becomes "whatever was said the most wins above literally all else". If we aren't careful, it can erode a lot of our breadth of thought in roughly a generation.
My view is this: don't use LLMs as an excuse to stop thinking and challenging the status quo. An LLM is a very sophisticated parrot. It can't create, have intuitive insights, etc. That's up to us, and we ignore that responsibility at our own peril.
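The feedback-loop point above lends itself to a toy demonstration. Below is a minimal sketch (with made-up numbers, not a model of any real training pipeline): each "generation" of training data is resampled with a mild preference for the previous generation's majority answer, and diversity collapses within a few rounds:

```python
import random
from collections import Counter

def next_generation(population: list[str], bias: float = 0.6) -> list[str]:
    """Resample training data, preferring the current majority answer -
    a crude stand-in for models training on model-generated text."""
    majority = Counter(population).most_common(1)[0][0]
    return [
        majority if random.random() < bias else random.choice(population)
        for _ in population
    ]

random.seed(0)
answers = ["react", "vue", "svelte", "solid"] * 25  # evenly mixed at first
for gen in range(10):
    answers = next_generation(answers)
    print(f"gen {gen}: {Counter(answers).most_common(3)}")
# Within a few generations one answer dominates and the rest vanish,
# whether the starting mix was biased or not.
```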
I can corroborate the conclusions being shared.
Even when you give your LLMs specific instructions to use a technology stack, package, or other things, including giving them the precise documentation and examples, the large language model will default and start to hallucinate, producing incompatible code based on standard Python libraries like NumPy and pandas.
Don't let any of these have version incompatibilities.
On the bright side, going through these errors will get you to become an expert in actual coding really quick, if you want to fix the problem
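A cheap guardrail against the version drift described above, sketched with only the Python standard library; the pinned versions below are illustrative assumptions, not recommendations:

```python
from importlib.metadata import version, PackageNotFoundError

# Illustrative pins - use whatever versions your docs actually describe.
EXPECTED = {"numpy": "1.26", "pandas": "2.1"}

for pkg, wanted in EXPECTED.items():
    try:
        got = version(pkg)  # the version actually installed
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
        continue
    status = "ok" if got.startswith(wanted) else f"MISMATCH (docs assume {wanted})"
    print(f"{pkg} {got}: {status}")
```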
Great point! Sure enough, with the current losses the LLM hosts are facing, there will be increasing pressure for this kind of sponsored suggestion, and then the enshittification starts.
Great content TJ, I already feel like I know you from Prime videos.
You've got Casey's shtick down pretty good. Great points
I really like this content, great video!
You are spot on, and this is basically what the people in AI safety and ethics like Gebru et al. have been saying, and instead of listening, people just screamed that they were “woke” and thoroughly pushed them out of the conversation.
Before LLMs, there were books. And in many cases, everyone was learning from the same one. And then there were search engines. And social media algorithms.
LLMs are an evolution of this, but also an antidote to it. It doesn't take long to figure out that just asking an LLM to write your code is a bad idea.
But where I find LLMs really useful is in those moments where I think there's got to be a better way, and wonder what if I tried this tool or that data structure? Being able to ask ChatGPT how I could build this idea, and getting back many options to try, is a game changer.
Saying that AI systems are just predictors is a pretty good description. That is the technology at its heart even if the results can be absolutely fantastic.
Yes…but. Isn’t that what we humans are?
@@georgebeierberkeley I'd say so. At least whenever I've tried to reason about how humans think, my naive impressions are that we are just impressive pattern matching machines. The pattern matching is the linking of thought that eventually leads to an outcome and this seems to be pretty much at the heart of how neural networks and AI work. I of course don't know what I'm talking about :), but those are the impressions I have.
Top tier stuff - goes far beyond the classic doomer "AI taking all CS jobs"
Really interesting take. Thanks for sharing!
The easiest way to put this into a few words is "everything will be derivative".
Not that nothing can be novel, but the novel things are just going to be a repackaging of existing thought in a way that is relevant to the context of a given task. Instead of using the existing information as building blocks to reach the next step forward in advancement, it stays at the same level, shifting things around.
That results in finite advancement. You can advance until you have tried every single implementation of existing information, but you can't develop new information. That might not be the case with models like o1, though: so far, it seems possible that test-time-compute methods may create new information.
If you know you know, excellent. 👏👏👏
The LLMs promoting products seems like the natural conclusion to me too. Especially since we're seeing trends that, with the inclusion of AI summaries in search, sites like Wikipedia are seeing traffic down 20%+, and this trend will likely follow for other websites. The next logical step is to pay Google/Bing to promote your content in the summary.
I share very similar thoughts as you. I too think AI will be completely disruptive in the industry but fear people will now mistakenly think they don't have to understand the code LLM generates for them.
Very similar to the "Copy Paste" dev we have always had but it will be amplified 100x.
The easiest way to stand out from the crowd is to be the person who truly understands. You can still use LLMs to help in your understanding but you have to approach it with that mindset.
The people who are going to be the most effective at using AI code assistant tools are the ones who still have a fundamental understanding of how the code that gets generated works.
I am worried many people are seeing notable figures in tech state how much they love using these tools without realizing that they've had years and years of experience programming before LLMs were a thing. They have developed a level of intuition they can use when evaluating the output of LLMs.
You are never going to build this intuition yourself if you blindly tab away.
It's very unusual to see TJ not joking around about something; it feels strange watching and thinking: "This is TJ?"
Great points, I think similarly!!
Great take tj
I love your slidedeck
Lots of great insights!
Agree 1000%. A lot of wishful and magical thinking going on amongst our "thinking class".
There was a Star Trek episode, where people were artists and all their needs were served by AI. Problem was, everybody forgot how it worked and it needed fixing. Forgot the name of the episode...
Teej with the elite takes!!!
I'm kinda excited about all this craziness, it might be enough to scare me into a mostly offline life. I'm feeling super creeped out by the level of invasion we have already. I'm still addicted to this website and the convenience of texting and online maps and portable gaming, but when I compare all that to what I lost it doesn't seem like a good trade. Everyone used to hang out; no one hangs out anymore. It's a special thing now, something scheduled. Some of that is just getting older and busier, but a lot is everyone spending all their free time online. I barely even read books anymore. I used to read all the time. Ebooks suck; they subject you to the rest of the distractions in a phone. A physical book feels restful, only words making a story in my inner world, no notifications. I want to de-tech, bring back boredom and stillness to my life. And yet here I am barfing my words into null-space :(
I think open source and competing commercial models will offset some of these concerns, but also, it's good to highlight these concerns.
Great take, totally agree
Open Source seems like a nice hedge against some of this in the long run, at least so long as we keep finding ways to make training/running LLMs efficient enough to be used on end-user hardware.
I was thinking about this a few hours ago. I guess we would still have people who like to explore things, and also LLMs can now learn with fewer examples because it's a fine-tuning task for them rather than training from scratch
you know, one of the things I've found handy in a weird way is to tell the LLM in detail what I want it to do architecturally and flow-wise... then modify it function by function, piece by piece... could I have coded it myself? sure... but it actually lets me pilot more rather than drive, and think more about how I want things... and then, even more important, document it throughout... LLMs are so much better at that... but again, you're right, this entire thing depends on me having a certain level of expertise/internalization of best practices, pitfalls, tradeoffs etc
I still don't use any AI completion; I find it annoying to have big blocks of lego code just magically pop in instead of me using the lego myself. I personally use it for asking about library stuff most of the time, to quickly get a rough idea of what APIs and features a tool provides - so just a good old documentation-reading use case. At least that's what I find it most useful for at work, and maybe for non-work things like electronics shopping, random fact checking, etc. Personally, privacy isn't a concern for me. If being a better tool requires exploiting somebody else or being exploited, I'll still use it as long as it is a good tool; I have other, bigger concerns in life. I'm not gonna worry about it replacing people's jobs or wiping out humanity; those are not entirely out of reach, but they're not the kind of problem I can fix, thus I don't care.
I didn't think about the problems of enshittified AI...
I think this is just another point in favor of my belief that vertical integration is inherently anti-competitive in the steady state, and so should be considered an anti-competitive practice legally. The current doctrine that it's fine as long as it's better for the consumer *right now* is not cutting it.
The same thing is also true for loss-leading IMHO, and that is *definitely* something these companies are doing with AI tools. It's truly anti-competitive to offer any product at a loss, and we really ought to treat it as such. It makes it impossible to compete with large companies with deep pockets, even if you had a golden goose of a product.
Of course I have no idea how we'd enforce any of this, but it would be a good start to force companies to break up into very focused domains when there is a clear bias, or even a potential for bias.
All AI companies should be financially motivated to produce products with the best most accurate results for the consumer, not for some parent company or significant donor/shareholder.
That Typescript threat hit hard. Damn.
We gotta do something fast.
It's helped me learn Rust, and helped me when I got stuck.
I am a firm believer that LLM devs need to have an overlap in proficiency between dev work and psychology. All of the issues you've described are very human-like issues, and we are trying to get LLMs to respond in a human-like way. Unfortunately, humans aren't all good lol; we have bad traits, and we have traits that simply are, and can go bad or good.
For instance, we want LLMs to see 100 buses so that any other bus picture is then detectable. Well, every detection is going to be a hallucination; it's just that the hallucination is right. Big issue. In humans we call this the problem of perception, and the solution to it is that we assume. The problem is we can't know everything; it would be literally impossible. So we learn enough, and make guesses and assumptions about things. When these assumptions are correct, we can barely even tell they are assumptions; we just assume they're true (in other words, we believe our beliefs). When they're wrong, well, we can cling foolishly to our beliefs, or we can update them... So LLMs won't be able to eliminate hallucination in that sense, but it's worse. Humans have 5 senses, and we live in the objective world. LLMs have 1 sense, data/code/electricity, and they are digital. If an LLM assumes incorrectly that something is safe, well, there is no immediate feedback loop. If humans decide touching a hot stove is safe, we know instantly. And this problem may seem simple, but there are millions of tiny decisions made every day by both LLMs and humans, and they ripple, and we don't know the full consequences. It was less than a century ago that we thought asbestos was a miraculous building material.
That said, it's here, and the only way through history is forward. I'm not a doomer, and I use LLMs daily myself. It's just really dangerous. Hell, look at the damage social media and the like button caused (something like a 140% increase in teen girl depression, suicide, and anxiety). It's something I don't think we approach with nearly enough caution. But perhaps that's for the best: if anyone gave it the caution it deserved, we might not even touch the tech at all.
That's the most original, based AI take I've ever heard anyone give about AI. In Teej we trust!
Wow, these are really important concerns I haven't seen people bring up before. I hope more see this
Need more Teej thoughts in my yt feed
Bro beeped vscode🤣
glad you appreciated that :)
recovered my brain from we-are-doomed-state into fun-state
Watch your language, man, there's junior engineers here 😂. You can't just mention ****de without censoring.
Best YouTuber out there
As much as I agree with the concerns you pointed out, it's probably good that AI suggests TypeScript over Ruby/Rails :P
I thought teej was gone for another 6 minutes, but he is back
3:42 When you build enough of these systems, the answer is obvious: the libraries will be written with AI in mind. Real engineering with AI today consists of massaging the right prompt and context to get the best results. Codebases and libraries will simply evolve into tools LLMs can use.
LMNOP is my favorite phrase on the most popular song
TJ,
I might not be the biggest fan of your content, but I respect the work you've done and the effort you've put into educating people.
I wanted to say, as an AI doomer (as you named it), that this was a very well-made and nuanced presentation that underlines a looot of the issues I have with the current AI bubble right now. Thank you for that, and thank you for sharing it with your followers.
Great video
Gotta admit, I really didn't consider this inevitable commercial bias once these tools become the standard, the way Google became in the past.
Research should go into developing powerful open-source alternatives to keep these balances in check, establishing community-managed systems with transparent reports that help everyone understand how these tools get their results.
This level of power being guided solely by the interest of capital gain looks like a recipe for even more loss of agency over our lives.
I use Aider and APIs extensively (all free and unlimited, shout out to Gemini Experiment and Codestral), and I have to say that in its current state, AI coding is like guiding a very dumb junior developer: you have to extensively review their code and stop them midway if they start hallucinating on a long-context codebase (100k+ tokens). But they're pretty good as a sidekick for scaffolding and dirty work. The system prompt and prompting technique make all the difference tbh. With new technologies you sometimes have to give them the entire docs, or at least the cheatsheet (in the case of Raylib, a 6-page cheatsheet is nothing), and they'll do somewhat fine. You still have to do all the heavy lifting tho, but that's what's fun about coding with AIs.
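(The "hand it the cheatsheet" trick is simple enough to show. A minimal sketch, assuming the official OpenAI Python client; the file name, model, and prompt are made-up placeholders, not what the commenter actually runs:)

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Load the whole cheatsheet into the system prompt so the model works
# from the actual API instead of whatever it half-remembers from training.
with open("raylib_cheatsheet.txt") as f:
    cheatsheet = f.read()

resp = client.chat.completions.create(
    model="gpt-4o",  # stand-in; the comment above used Gemini/Codestral
    messages=[
        {"role": "system",
         "content": "Answer using ONLY the API described below.\n\n" + cheatsheet},
        {"role": "user",
         "content": "Draw a bouncing ball in an 800x450 raylib window, in C."},
    ],
)
print(resp.choices[0].message.content)
```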
great vid dawg
thanks my man (send more funny edits, I will play them next time LOL)
Those LLMs will definitely push bias aligned with the issuing company's business goals. That's why every big company wants to create its own AI, they want to control the default narrative.
They can also legitimize their output by citing sources that are themselves generated, which in turn cite generated sources, so fact-checking would require a depth of analysis no one can achieve.
Your worries are valid but presented too narrowly: everything you mentioned applies to information in general. Choosing your framework, reading vacuum cleaner reviews, figuring out which movie to watch, all the way to reading research on public vs. private healthcare, for instance.
You can clearly see how companies would want to control the default narrative.
There’s gonna be so many people that start businesses with the outputs from LLMs and wind up in huge debt from networking fees and other shit
I'm glad there's at least some real competition in this space or this would be an even bigger concern.
really good point!
Great points
In the business we call this natural selection.
11:28 I am not worried about this part, because more and more hardware is capable of training small models and the knowledge to do so is already everywhere.
Right, but it’s not about hardware or knowledge, it’s about billionaires being able to convince congress (in the US) that these “unregulated” LLMs must be made illegal for safety reasons. I personally don’t think that’s a stretch at all.
@@redbrick808 Even if they manage to convince them, which I don't think they can given how tech-illiterate Congress is, one could argue something about the First Amendment or some such, and second, you really can't stop people from using these models privately anyway. They'll just download them all before they get banned, share them in secret, and run them on existing hardware. It's a really dumb take to think they're even capable of banning them, because that would require controls that are unfeasible.