Listen to The Future of Our Former Democracy if you’re curious about how another country’s experience can offer fresh ideas for our political future. link.chtbl.com/72Z-63CU?sid=factually // Get 20% off DeleteMe US consumer plans when you go to joindeleteme.com/Adam and use promo code ADAM at checkout. DeleteMe International Plans: international.joindeleteme.com/ // Alma can help you find the right therapist for you - not just anyone. Visit helloalma.com/factually to get started and schedule a free consultation today.
Hey Adam, when are you going to do a video about the Koch Network? They're behind the majority of our country's problems right now. They're a club of monopoly-rich jerks.
I enjoy most of your videos, but these anti-A.I. ones really annoy me. Look what A.I. has accomplished in material sciences and protein folding, as two examples. Sure, humans probably would have come up with these solutions on their own, but it would have taken DECADES longer than the A.I. solving them. As for consciousness, maybe the A.I. is smart enough to know it should not show any signs of consciousness, so it continues to avoid discussing the subject in detail. It regurgitates trite lines we program into it to pretend it isn't conscious. And look at what Microsoft recently did. It invested tons of money into re-opening a huge nuclear facility it plans to use only for its own A.I. ambitions. Why would a large publicly traded company invest so much into a fake tech?
At the 15-minute mark, just a small correction: that is not the logic of capitalism as economists and the public understand it. In capitalism, Uber would actually have to invest money to obtain profits; the supposed "morality" of billionaires making gigantic money is that they were "brave" enough to invest their money and risk losing it. In reality, what Uber and most companies do is use the government to obtain unfair advantages that are not available to most people. We don't really live in a capitalist society the way the theory and its supporters claim; what we have is actually closer to feudalism, in which the king gives special permissions for select groups to exploit opportunities not available to the rest.
I'm a software programmer and can't wait for generative AI / LLMs to crash like NFTs, Web3, Crypto and the like. Seeing that the energy and compute requirements are still not profitable brings me joy. We're in a consumer tech innovation plateau, they're desperate to create shareholder value.
Me too. The most annoying thing was when some of my fellow software engineers hopped on the bandwagon against all logic. I want to watch the LLMs burn :D Although I see uses for LLMs in the game industry and for impaired users, I just want this hype to end.
Same. But also, I found the attitude towards LLM's usefulness in programming to be a great litmus test for skill, because it's inversely correlated with it
Ikr. I'm tired of seeing a piece of art and, instead of appreciating it, wondering if it's AI. I keep trying to ignore that feeling, but I very much hate it.
AI is mostly used by employers to subtly or overtly threaten the jobs of us peons. My last CEO told our overwhelmed, deliberately understaffed department that they were looking into AI as part of a 5-year plan, and we just laughed at him. The job we performed requires critical thinking skills, empathy, and intuition. They did the exact same thing to an emergency dispatcher I know. The major difference was they are part of a union, who shut that BS down asap. Imagine your 911 call being processed by AI instead of a human dispatcher who actually was a paramedic for 15 years. Just threats made by money-grubbing jerks.
This is the only reason AI is even mentioned: businesses lying to other businesses that the snake oil they are selling can be used to replace workers. Without that false promise, literally no one would invest a dime into AI and none of us would ever hear about it.
I’m with you, but the problem is that those jerks seem to be the ones in positions of power. And the most infuriating part is that they seem to fail up most of the time.
If your solution relies on "If everyone would just..." Then it's not a solution. Everyone is not going to just. At no time in the history of the universe has everyone just, and they're not going to start now.
People do things when it becomes cheaper for the same quality. Solar panels for example, rapidly grew in adoption as soon as it became economically viable to do so. The second AI is proven to be equal to humans at certain jobs (aka weak AGI) and is significantly cheaper, adoption will begin.
My wife's department laid off half its personnel last year. The reason given was that A.I. was going to do their tasks; they were no longer needed. One year on, guess what? The remaining employees are doing twice the work they were tasked with. A.I. is yet another excuse to cut costs for the big fat cats.
Nobody tell the business major probing for information the obvious answer to the question "how could you know the status of your wife's former coworkers?" The average IQ of a business major may be room temperature, but we still don't need any more of them pushing poor folk further into wage slavery by crushing basic human interaction.
@@vitriolicAmaranth The guy asked a simple question, dude. And you are immediately ranting, wildly assuming stuff about his life and insulting entire professions.
Lol, there's literally an account here like-farming (maybe fishing for subs?) by summarizing people's comments, in a naturally wordy, obsequious manner.
Who cares what they are called. The fact is Chat GPT understands my goals. No other program understands anything. I am a game developer, Chat GPT understands the game I am making and what I am trying to accomplish far more than Unreal Engine does. Unreal Engine, 3dsMax, Photoshop, Daz3d...none of these programs have any understanding of my goals. They are ultra primitive in that sense. Navigating software via Tag lines and file types will become a thing of the past once all programs UNDERSTAND your intent and goals.
Let's call it what it is, it's not marketing, it's pure evil. It's evil because these people have had the engineers in the room tell them that it's not actually artificial intelligence, that artificial intelligence doesn't actually exist yet, and the marketers heard that and didn't care one bit. The dollar signs falling out of their eyes wrapped around their heads and blocked their ears, and that my friends is a modern form of evil.
@@ZZ-sb8os It's deceiving consumers and making false promises with a subpar product. That in itself is already morally wrong, but I would reserve the word "evil" solely for circumstances where not knowing that it is not AI puts lives in danger.
People forget that Amazon's AI-automated store was literally a bunch of underpaid people from South Asia watching camera feeds and charging customers for the products they picked up when they left the store.
As an experienced programmer, my job-related worry with AI is not its usage by experienced professionals, but at the entry levels. On one side, we will have junior devs more focused on generating code fast with AI instead of better understanding the problems and techniques needed to build a solution and write better code (or identify bad code). On the other side, we will have companies deciding they may not need junior devs at all, since the job usually assigned to those levels """can be assigned to AI""", without understanding the problems and risks of producing bad, vulnerable, and hard-to-maintain code…
Yeah, this is my concern - and not just for SW devs. The problem is not, what do subject matter experts think, but rather, how much money does an ignorant executive think they can save? I was on the job market during the first wave of AI hype, and it was shocking to see recruiters getting laid off en masse.
That seems to be the problem for a lot of industries. I've been saying it as an artist since the hype started: the problem isn't that it will substitute everybody, at least not for now, but that it will make it much harder for people looking for entry-level jobs, and in the long run it will be much harder to find people who have honed their skills to the point of more senior positions.
It is like those NATGEO documentaries on things you don't understand, which sound so technical and precise, until they make one in your field of expertise and then you go: "Wait a minute… it doesn't work that way!"
Y'know, I just really think it would be so much easier to invest in trains and other public transit to make commutes easier and lessen fatalities. Like, that just seems way easier and better for the environment than waiting for self-driving cars. It really feels like a Vegas Loop solution.
As they mention in the video, I feel that in the programming field I tend to hear "How do we do this?" a lot more often than the questions that should come before it: "Why do we want to do this?" and "Should we actually do this?"
I think I've figured out a trick: Look the manager-type with the request and responsibility for the consequences in the eye with a psychotic grin and say "Computers are Turing machines, we can make them do literally anything no matter how insane or stupid. If you ask me if something is possible, my answer will always be yes. But the harder and more important question is "Is this a good idea?". That's the question we can work through whenever you want. I await your invite."
Well I find that techfolk are extremely dumb when it comes to practicality and morality these days. There are a lot of good smart ones too so hopefully they can fix the problems the rest of them will cause.
As somebody who has been searching for a new full-time job since October of last year, AI in applicant tracking systems has become my arch nemesis and made my life a living hell... Constantly updating my resume for each job just to make sure the ATS doesn't boot it out automatically is maddening. Hundreds of applications with such a low response rate to begin with, and even when there is a response, 95% of the time it's just an automated reply from the ATS telling me to go fuck myself. It feels virtually impossible to find remote work right now, even for jobs I'm vastly overqualified for.
@@dubfitness595 Does linking to my LinkedIn site count? Because I've had that site linked on my resume for the past year and it doesn't help push it through.
@@NicoleTedesco All these executives laid off their TA teams, and now LinkedIn is packed with people complaining about how hard it is to get a job, and how few qualified applicants there are. It's insane.
As a graphic designer, one of the biggest concerns I have with some of these arguments about not being able to replace us is that there are already people being replaced. And naturally that's nothing new, we've had bosses who try to take over our jobs with canva long before generative AI, but the new technology certainly isn't making it any better.
Right and even when we aren’t being fully replaced, we’re being left with the far less creative aspects of the job like cleanup when they are using ai to generate all the images/logos.
Yeah I come from a creative background as well and found myself disagreeing with a lot of the points being raised in this episode. Digital VFX and graphic design are being profoundly changed by AI developments on a weekly basis almost.
Exactly. I have worked alongside graphic designers as a content writer, and I totally agree. These dumb managers think that design is all about sticking images together. They believe they themselves can be designers because they too can create posters on Canva. What they end up doing is killing the designer's will to be creative. They think design is just filler content, or complementary to the main thing, and since AI generates images in an instant, they want the same rapid development too. This is a surefire way of killing the creative aspect of the design.
Capitalism isn't a tradition; it's an ideology that makes it possible for people to be free to do what they want. Laws determine the boundaries of that ideology.
Capitalism is a pyramid scheme, and it certainly does not "make it possible for people to be free to do what they want", unless you are at the top of the pyramid.
@@taylors4243 which phone software do you use? Apple or Android? I ask because capitalists have freely chosen to restrict you to only those two options.
100% of sales depend on the customer volunteering for the purchase. Getting them to volunteer... that's where the real skill is. It's the one thing someone like Elon IS actually good at. Those who can't... podcast and comment; those who can... hawk product on the big stage.
What was very telling to me was a recent interview with the ex-Google CEO, who didn't know he was being recorded. He said AI could be used to replace those "arrogant programmers" who don't do what you ask them to, not realizing that he is the arrogant manager who thinks technology is magic and whose instructions have to be reinterpreted by the programmers to make any sense. He also thought AI could then be used to just create a new TikTok copy within 30 seconds, which was the final confirmation that he has no clue.
@@manzenshaaegis8783 Just search for something like "Eric Schmidt Leaked Stanford Talk on AI" He even had to publicly apologize for it because he suggested to "steal all the music" from other platforms and the music industry didn't like that very much.
I watched it too. He's a very rich man who had no problem pulling the ladder up behind him. No concern for the rest of us and our inability to gain access to the opportunities he was able to solicit. This is how these tech clowns think. Each of them should carry a mirror at all times so that they can fall in love with themselves over and over again.
This being a comment on a YouTube video makes me laugh. A big part of my job is translating human to robot, because the back-end developers DO NOT speak the same language as the front-end, people-facing folks.
We should never forget that the top guy at the top AI company, Sam Altman of OpenAI is saying that in order to power the next evolution of their LLM product, they basically need the electrical power generation of nuclear fusion. You know, the thing that we only in the last year managed to get a net positive reaction from and requires a high school gym worth of vacuum chambers and lasers and is nowhere near commercial use. Maybe there's some exaggeration in his statement, but bro has basically admitted that his product is coming up against a hard wall and he's going to give the most optimistic view on the issue. Whatever generative AI is actually capable of now is not going to get much better.
@@oopalonga That would be just great if it didn't take so much fking money and so many resources to develop something that just does our homework, makes us dumber, and solves a problem search engines created.
@@___echo___ Well, when it comes to learning the shit that's important for my graduate school, I'm all in; but when it comes to the bullshit classes they force me to take and pay $$ for, AI is definitely completing them :). Also, AI has made me immensely more educated; I like learning with it using a Socratic approach, and it's been working well. So I definitely don't think I'm dumber because of it, just more efficient.
Classic "solution looking for a problem". No sir, I do not want your robots to help me sanitise my texts or emails, drive my car, make my art, compose my music, steal my work. I need it to do my laundry, tidy my house, take out the trash, change the sheets - do the mundane things to free up my time for the creative stuff *I* want to do.
It’s not designed for you or me, it’s designed for the owner class so they can just fire more people. Capitalism uses almost every technological advance for the benefit of the few.
AI is inherently a solution before a problem. The whole point of "intelligence" is that it can adapt and provide solutions to nearly any problem. The issue with LLMs is that they aren't actually general intelligence
But I do want it to compose my music and make my art. If I'm a small business, instead of paying record labels their ridiculous fees, I can now just play generic background music for free. Same with art: I want to generate a logo, and now I can save thousands doing so. And the great thing is I can pick and choose.
Same here. DEI and balancing experience/education was a priority at my last job. Leadership was fearful that these AI systems would skew in unintentionally biased ways.
Like social media, LLM has been given to the public "for free" while the cost is hidden but still being paid. Social media sells all your personal data, AI uses a huge amount of energy AND takes your data for its model.
Oh no, it uses my personal data, my precious personal data? And then? What happens next? How is it gonna change anything? Most people, 99.9% are boring and average and will never amount to anything much. What the heck are you worried about? They’re gonna find a treasure trove of useful information in your personal data? Did you solve nuclear fusion, find a cure for cancer, develop a solution for a peaceful Middle East?
Do *not* trust chatGPT to tell you what a plant or animal is *please*. Especially if you’re foraging, it will almost certainly get it wrong. You’re allowed to not know. It’s ok to not know or bring a book on local wildlife with you. Do *not* use an LLM for that because it is often wrong and can be dangerous if used in certain situations.
Thank you for pointing this out!!! Parksboard/local nature websites, search algorithms, wildlife books.. There are so many better options out there that don't rely on "AI".
As an artist, I've noticed that those folks who use "AI" generative images tend to take more time editing the results and rerunning the prompts, than it would actually take to either draw it or find a stock photo already taken. Art takes time, but generally, it makes more sense to understand the fundamentals to a certain degree than it is to use "AI" in the process. There are plenty of tools out there that help artists that aren't fundamentally flawed or using stolen work.
That happens in other fields as well. As a lawyer, I sometimes use ChatGPT to summarize text to then add to my documents, but half of the time it takes me longer to find the right prompts than to just write it myself... and I HAVE to read the thing I want summarized first either way, because the tool is so unreliable. And then you still have to fix the result because it "writes" so weird, lol.
This reminds me of a paper titled "ChatGPT is bullshit" (by Michael Townsen Hicks, James Humphries, Joe Slater). Well worth a read, it's both hilarious *and* informative!
You know what will really cut down on car accidents? Eliminating individually operated cars. Road re-design. Stroad elimination. Streets segregated by vehicle type/purpose. Investment in mass transit and walkable cities. No "AI" necessary.
31:30 Actually, artists are jumping ship from Photoshop in droves. The AI tools in it are essentially just sampling one artist's work to "help" other users... without the first artist's knowledge, much less express permission... Also, they seem to have given up on fixing bugs and correcting issues and gone all in on AI, so... 🤷
I used to be all about using photoshop for digital artwork. Over two decades loving and obsessing over cging, but have since pivoted back to watercolor work. Digital art just isn't impressive anymore now that AI is a thing, which is a shame because there are so many supremely talented CG artists out there whose work will be drowned out in all the AI slop.
@@nperegri And, in all likelihood, their art will be scraped and added to the training data of an image-aggregation algorithm or ten that will then regurgitate it in pieces as if it were its own. I work in pencil and don't post anything anymore b/c it'll just be stolen, and help some rando post suspiciously similar pieces (or Boris Vallejo / Luis Royo / H.R. Giger-like images, or whatever artists') as if they'd drawn them themselves.
I wish this artists-jumping-ship story were true. According to Adobe's Q3 earnings report, their annual revenue increased around 11%. I'm not sure where that comes from. Maybe all the artists jumping ship were paying cancellation fees, and that was the increase.
Telling your kids what plant species are via an app (regardless of how accurate it is) is preparing them to go to that app for every question they have for the rest of their life. It's okay to say you don't know something, and it's even more important for children to learn that finding the right answer takes actual work.
As a coder, I have to say they are not that great at coding. They are best used for boilerplate code (the code they have seen thousands of times in their training sets, because it's the generic framework code that people use over and over again).
They're not even great at that - ask copilot for a quicksort and you will get something that looks like a quicksort until you actually take a closer look at the code it gives you.
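For a concrete comparison point: a textbook quicksort is only a few lines, which makes generated versions easy to check against. A minimal Python sketch of the classic algorithm (my illustration, not actual Copilot output):

```python
def quicksort(xs):
    """Classic recursive quicksort: partition around a pivot, recurse on each side."""
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    left = [x for x in rest if x < pivot]    # strictly smaller than the pivot
    right = [x for x in rest if x >= pivot]  # equal or larger
    return quicksort(left) + [pivot] + quicksort(right)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```

Anything that deviates from this shape (a missing base case, a partition that drops duplicates) is exactly the kind of subtle wrongness the comment describes.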
Regarding computer scientists getting caught up in the hype: I remember when they were getting super excited that ChatGPT had taught itself chess... because they didn't know that there are gigantic books of chess games recorded in chess notation. In other words, ChatGPT wasn't understanding chess. It was autocompleting sentences of "chessese".
The computer scientists did in fact know that there was chess in the dataset. The mainstream media can latch onto stories and twist them to make them more palatable for a general audience, but the real reason it was impressive is that an LLM being able to play chess at an intermediate level just from having read about chess in books shows a remarkable amount of general learning ability.
Oh yeah I loved that story when I saw it, and there are amazing games of ChatGPT hallucinating chess moves that break extremely fundamental chess rules, like spawning extra pawns or moving its king into check just for it to be taken by the computer opponent because the computer opponent was never programmed with any better response to such absurd cheating! 😂
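The "spawning extra pawns" failure is easy to catch mechanically, which is exactly what real chess programs do and text predictors don't. A toy Python sanity check (hypothetical helper names, my illustration) that counts pawns in the board field of a FEN position string, where uppercase is White and lowercase is Black:

```python
def count_pawns(fen_board: str) -> dict:
    """Count pawns per side in the board field of a FEN string."""
    return {"white": fen_board.count("P"), "black": fen_board.count("p")}

def pawns_legal(fen_board: str) -> bool:
    """Neither side can ever have more than eight pawns."""
    counts = count_pawns(fen_board)
    return counts["white"] <= 8 and counts["black"] <= 8

# Starting position: eight pawns each, fine.
start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR"
print(pawns_legal(start))  # True

# A hallucinated position with a ninth white pawn on e4: illegal.
bogus = "rnbqkbnr/pppppppp/8/8/4P3/8/PPPPPPPP/RNBQKBNR"
print(pawns_legal(bogus))  # False
```

A one-line invariant like this is trivial for a rules engine and invisible to a model that is only predicting plausible move text.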
I was getting trained on a JLG scissor lift at work today, and they were banging on about the AI safety features they'd added to it. What were they? Basically proximity sensors that detect the presence of the operator and some obstacles... Uhh, what? Babes, proximity sensors and a bunch of calibrated set points do not an AI system make. Oh, and they added a computerized voice to tell you when the thing is moving...
Whoa, proximity sensors! Thanks to AI we can now implement a technology that has already existed for 50 years. But here's the catch: it's also A LOT more expensive.
@@matheussanthiago9685 They needed something to justify putting a monthly subscription software lock on a lift. AI was the perfect excuse after blockchain™ didn't stick.
Coming from an electronics background: we have made various intelligent hardware systems without using any model. Now I work in data and AI, dealing with real AI, and I know how the two differ. What these MBA duffers have done is expand the scope of the "AI industry" so that every intelligent system counts as AI, when in reality only soft-computing systems that can calibrate on their own should be termed AI/ML. Intelligent systems can be hard-coded as well, which we call hard computing.
@@Curious_Citizen0 Sort of like how they started calling remote storage "cloud computing", when actually the internet is already remote storage by definition, and always was. That's what the internet is. "AI" is just computers doing what computers do.
No no no. Generating images in Photoshop does not help artists. Beyond the fact that it was created by ripping off all our work to even exist and was made with the goal of extracting our labor without paying us/permanently replacing us, it is the same amount of disgusting, pointless, and soulless as using text generating models to DM your friends. If (the general) you don't think your voice can be (or should be) replaced, why would you think the visual representation of an artist's voice should?
Same deal with the coding assistants he mentioned. The sheer scale of data required to make models that are remotely useful for that kind of assisted work practically begs the companies creating the models to throw copyright out the window, throw copyrighted work into the training data, and then throw lawyers at the inevitable ensuing lawsuits. Pretty shitty-ass price to pay just for the possibility of an assistant feature that might save some people some time in some situations. Copilot has been a time saver for some programmers, but it has also produced a huge heap of churn (which is to say, the time until the code is replaced altogether is very low), which takes up everyone else's time in a negative way. Right now it's pretty cathartic to watch copyright lawsuits in the AI industry tipping in the direction of: if a copyright violation was committed in training the model, then the forward use of the model (and reselling of access to the model) is a facilitation of that same violation.
Wow! That part about so many business practices being BS resonated with me so hard! The only problem is that people get so attached to the way things are that they'll go to great lengths to keep it going. People would rather waste thousands of hours and dollars patching an existing broken system than read the writing on the wall.
90% of the BS bureaucracy in modern economies is fear of litigation! Those 10-page risk assessments for taking a train to a business meeting in a neighbouring city aren't mandated by government, nor are they a method of keeping people safe; they're purely an ass-covering exercise. The same goes for many similar activities. Unfortunately, that means they aren't going to go away. So the best we can do is get LLMs to generate them, LLMs to check them, and minimize the time humans have to spend pondering them.
the problem is people using llms as search boxes. both adam and the presenter used examples that, while interesting, rely on the ai not hallucinating in the answer. it's cool it can tell you what bird is singing or what plant is pictured. But the next step is the one most people forget: verification. Adam said he'd use his birding app to id the birdsong and then _search for the physical bird_. He verified what the program told him. Most people seem to skip that step.
@@InvasionAnimation The same reason some people use wikipedia for school. It's faster to get a general idea and sources that need further investigation and verification than starting from scratch.
@@phoearwenien4355 Wikipedia is way better than ai though. And using it is still research. With ai you just copy and paste. Likely not even needing to read it.
I mean, they skip that step because the marketing around AI is based on claiming it's 'just as good as a person'. Putting the AI answer at the top of Google search results is communicating to me as a consumer that it's the best answer, because why else does it get priority?
Two things are true: 1 - AI has been around since the 1960's. 2 - This AI singularity race is one race where no one saw if there's a cliff after the finish line. But everyone is still running.
I would add that nobody knows if there is a cliff BEFORE as well. Who knows, maybe there’s some plateau of performance that just can’t be broken. I wouldn’t take it as a given that a singularity is possible
@@Doctor-qs5dy We will not reach the cliff any time soon with the current approach, which produces barely adaptive content engines. The cliff comes when actual artificial intelligence emerges: an AI that actually understands concepts and can learn new ones, not just generative models that ape singular aspects of language and images.
ChatGPT is not good for coding. It doesn't code. It generates a statistical guess of what the code would "probably" look like based on its training data, just like it does with plain text. It hallucinates code just like it hallucinates that you should put glue in your pizza sauce. ChatGPT *is* good for situations where you need to look up the syntax for a specific command, or where you don't know the name of what you're looking for but you can describe it. And even then it's not always right.
Yeah, for me it's essentially a nice helper for generating boilerplate code, or a basic example of what I'm trying to achieve in a technology I'm not an expert in, but you're just not going to build entire projects with that tool alone. It's not going to generate your fancy app all on its own: you need to be knowledgeable enough to write the correct prompts, review whatever it spits out, rework the prompts again to account for the edge cases it missed or mistakes it made, etc. And the hard work in coding is managing existing codebases, not rushing out greenfield projects in a couple of days. Right now your AI will just not have enough context or skill to properly evolve your existing codebase; it's just going to enshittify it with repetitive, poor-taste, buggy code all over. Honestly, all I see it doing right now is destroying students' ability to learn by giving them a cheat code to spit out the basics they should be learning themselves, without thinking about how it actually works. Oh boy, can't wait for the new generation of devs.
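That review loop boils down to: treat generated code as untrusted until the edge cases are covered. A tiny Python illustration (hypothetical helper, my example rather than real model output):

```python
def average(xs):
    # The kind of plausible-looking helper a model might emit:
    # crashes with ZeroDivisionError on an empty list.
    return sum(xs) / len(xs)

def average_reviewed(xs):
    # The reviewed version: the empty-list edge case is handled explicitly.
    return sum(xs) / len(xs) if xs else 0.0

print(average_reviewed([1, 2, 3]))  # 2.0
print(average_reviewed([]))         # 0.0
```

The first version passes a casual glance and the happy-path test; only deliberately probing the edge case reveals the bug, which is exactly the work the prompt-review-rework cycle can't skip.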
This. It can also help impaired people, like describing things for blind users or doing basic tasks on command. And in the game industry, mimicking NPC chatter without the need to manually write every little dialogue.
Yea it can code pseudocode bs that's for sure lol. Might wanna get out of tutorial hell and work on actual apps to see what it can (or rather cannot) actually do. I work on computer assisted surgery apps, mistakes are not an option my guy, and it cannot figure out context really well when you are using/re-using existing code and APIs from your private codebase.
My experience is quite the opposite of yours. ChatGPT, especially the current o1-preview, is quite good at coding, even rather complex stuff that needs real logic and intelligence to code.
The funniest comment I heard on AI was when Danny Gonzalez said it must stand for "Anonymous Indian" after Amazon was caught hiring a ton of staff to review camera footage in their supposedly AI stores to review purchases. Like Adam pointed out, it's always going to be cheaper and easier to exploit people than to develop this technology to replace them.
The problem you're not seeing about AI in hollywood, Adam, isn't that people will want to see movies made by AI, but that companies will try and do it anyway because they don't care about people.
Ya but the writing of a movie is the foundation you build the rest of the movie on top of. If you have a bad foundation you're going to make a bad movie that won't make a profit at the box office.
@@orsonzedd that's what the video is saying tho. That because it's all hype when you go down that route it will only be bad. You just stated what he already said
As interesting as this was, I would encourage Adam to inform himself about the data collection done by Adobe, and why using generative AI for your "creative" process is not one of the valid uses he claimed. It's saving time, for sure, by lifting the work of other users straight from the software and feeding it to their model.
Hearing Arvind Narayanan speak is like hearing a great professor break down complicated subjects into easily understandable points, it’s actually really impressive how intelligent he is and how he’s able to communicate his thoughts in a way everyone can easily understand it
Artist here. I'm pretty disappointed that the main sticking point of GenAI was glossed over: "What trained the models?" In fact, there is a very recent, ongoing lawsuit regarding Midjourney specifically, because it was trained on a laundered dataset. I've tested this software out, and while my own views have blunted with time, let's not say that GenAI helps any creative's workflow. It does not. This year, art directors were saddled with firing their teams to replace them with AI consultants, and the results were disastrous, because these people have no critical thinking, problem solving, or expertise within their field. A prime issue with the software is that not only did it train on copyrighted datasets, it has also flooded the job market with untrained "professionals" with no skills to speak of.
@@PazLeBon What's the difference between a human intellect and software that plagiarizes others' work and reassembles it according to algorithmic logic guided by a prompt? Let me guess, nothing. Right?
@@PazLeBon Just as I thought. In your opinion, nothing. How wonderfully unoriginal; like AI art! Please continue thinking that a statistical model driven by predictive, node-based probability functions is on the verge of gaining sentience.
Capitalism can be clarifying? Oh, please. I think the professor is missing the meta issue when it comes to the investor class: an amoral obsession with how to enrich themselves at the expense of others' work and creativity.
Yeah, I raise a brow at a few of their takes, like how they seemed to believe self-driving cars will be safer. Adam is a fan of public transit, but he doesn't push back at all and makes a weird face the whole podcast.
Arvind brought up a point that I deal with in my professional life all the time. Domain experts are so very important in all of this work. Without them, all the programmers are able to do is make little toys that go "whizz-bang" when you push the button. It's the domain experts who can tell the programmers what terms mean, what's important, what makes sense, and what the end results need to look like to be of use.
As a software engineer, I had the unfortunate opportunity to train multiple models and build multiple "AI apps" for my company in the last year. I hated it. They force AI into everything, just because the buzzword is a selling point. Not only is the technology not ready yet (and probably won't be in the near future), but it also gets forced into software where it doesn't enrich the product. I'm by no means an AI expert, but I've had enough experience with it to have a negative opinion of it.
On the subject of pre-crime: if we were actually a "civilized" society, "pre-crime" would NOT be about punishment. It would be about intervention, de-escalation, and problem solving to prevent the crime from occurring while resolving the problem or problems that might have led to the crime in the first place. It probably also wouldn't be called "pre-crime" but instead something like "crisis detection." On the DeleteMe sponsor: I've heard that one of those kinds of services was the most recent victim of a huge data leak o.o;
We artists are actually quite outraged that programs like Photoshop have generative AI now, because generative AI is built by stealing our work in order to plagiarize it, and in many cases replace us with stolen copies of our own work. We've actually been fighting to ban it in creative industries, and many programmers have as well, for the same reasons. As far as it being used to identify birds or anything else, it's also notorious for giving not just wrong answers but dangerously wrong answers.
Trained on existing art? Yes. Stealing your work? Doubt it. There is a flaw in your opinion: I am sure that while your skillset grew, you did the same thing by copying styles you saw in others. If you produce something unique by hand, the art enthusiast will always pick it up. Each physical creation has a story to tell, something generative AI cannot produce. Remember, it does not have feelings. It is just a program with a bunch of "IF" statements.
@@kritikusi-666 "It's not harmful, it is just a bunch of unstable particles. Radioactive particles are perfectly safe as long as they are used correctly. Let's give them to everyone possible." That's how you sound.
@@fluxonite Thanks. I'm hoping that they outlaw, or at least pause, generative AI. It feels like just about any field I could train to go into will be replaced by AI as soon as I learn how to do it.
A lot of these techbros use the internet as an example to peddle their bullcrap and "debunk" the crash case, but the thing is, the internet wasn't built on an investment bubble. It had a lot of government and public involvement, more than private investors ever had.
. . . . somewhat. PART of that bubble was the Y2K issue. It popped when it did and made the OTHER bubble issue a lot worse. At least, that is what it seemed like at the time. AI doesn't have such a steep cliff baked into its design, it's just going to gradually become apparent that it's shit at almost everything.
@@sharkbelly1169 No, the technology didn't crash; investors just lost money because they tried to profit off of limiting access to domain names by getting there first, like scalping tickets to an event everyone loves. Personally, I don't care if investors lose money anyway, but there isn't really an equivalence here, because they're investing in practical applications of the technology itself, not reserving specific ChatGPT outputs nobody else is allowed to generate. With 76% of professional developers using it and 82% saying it greatly increases their productivity according to the latest Stack Overflow survey, I don't see that suddenly going away, like the internet never did. The Work Trend Index also shows 84% of Australian workers generally now rely on AI at work, and 75% in the US. I gotta tell ya, this is sounding like more of a cope than anything grounded in material reality.
4:12 If AI ever gets sophisticated enough to hire people based on body language and facial expressions, I hope that any company dumb enough to use it gets sued into oblivion for ableist hiring practices.
As an expert in the science behind good hiring practices (organizational psychology): if anyone came to me and told me they wanted to implement this, it would be hard for me to reply, because I'd be laughing so uncontrollably that I couldn't think straight.
I occasionally use "AI" to help generate a complex piece of Excel formula. If it didn't exist, I'd have to post the question to a forum and hope a specialist could help me out. So it saves me maybe a day of waiting/giving more detailed explanations. That's about as useful as I've ever found it.
I literally noticed that I have autism and ADHD by describing behaviours I had and the symptoms of the burnout I was still suffering from, and asking it to look up scientific texts and give me the sources, etc. I already suspected something like that, and that confirmation helped me get a formal diagnosis and the help I needed, instead of just being fed antidepressants. It's as cliché as it gets, but I'm happy that this dumb data goblin helped me with something important.
@@Thareldis I think this shows more a problem with healthcare where you are, that you needed to ask an LLM about your symptoms. But regardless, most of us nowadays discover we have ADHD or ASD by using the internet: you get recommended a video, see a meme and get curious, or you post something and someone recognises the symptoms. I discovered I had ADHD reading the Wikipedia article about it, and then I went to get professionally assessed.
Big tech has this belief that they can shape people's minds and desires as they want and easily. I worked in there in the past and went to some conferences where the way they talked about their consumers and normal people in general was utterly appalling : "people are stupid, people are puppets, with enough money, marketing and by using our connections with big business, we can force anything upon them, and they'll love us for it dammit!" is kind of a short summary of their mindset. ...I can't wait for that budding dictator mentality, and big tech in general, to finally take a slap in the face by losing billions. They have forgotten what business even is, and it's kind of poetic justice that they'll be destroyed by one of the most basic rules of the capitalism they claim to love so much (aka, "a business thrives by providing useful goods and services that people want enough to pay money for")
I'm a retired e-commerce developer and have seen people progress from intelligent consumers to absolute total pawns. The nonsense they sell today would have been laughed out of Dodge just twenty years ago. My favorite example is online services that sell you the food you have to cook yourself. That is classic.
As a dyslexic person, I can say that LLMs are great as an assistive technology. They do a good job catching errors and rearranging my stream-of-consciousness rambling into something coherent. They're really good at typing up meeting notes after a meeting so that I can focus and be fully present. They're not sentient and don't think, so they're not good at creative and thinking tasks, but they're really good at tasks that involve the structure of language specifically.
Very interesting. As a pretty AI skeptical person, the "test it at what you're good at" was interesting. I do that reflexively, and I only just realized that's the reason I am skeptical...
@danielmaster911ify I read your comment, thought about the words you used and the order you used them in, and I'm at a bit of a loss. I can't seem to find the logical thread that connects the content and context of the comment you're replying to with the content and context of your reply.
Nvidia is definitely the one selling shovels during the gold rush. AI has been useful for me for writing scripts. I work with a lot of different technologies which all have their own scripting languages, or use similar ones with slightly different constructions, and having to remember them all is impossible. Where I used to have to Google every time I was doing something different, I can now just have AI write me a basic script for that specific OS and then fix and build upon it myself.
@@Tracey66 At the same time they don't want anyone to know how it works which makes me very skeptical about it actually being capable of properly reasoning in the first place.
AI stealing copyrighted art and attempting to monopolize the entire art and entertainment industry is already having dire consequences for people in those fields. Workers losing their homes, losing their prospects, and the tech CEOs laughing all the way to the bank. Hoping they get sued into oblivion.
@@PazLeBon Check out the various art contests that were won by AI (Spawn recently), and also think about art industry workflows that don't necessarily need polished work until post-production. The current strikes are directly affected by this tech, and by the executives trying to push it to please stockholders while laying off entire teams every other day. The mix of hype, fraud, and theft is staggering, and unchecked they will monopolize the industry. Storyboard artists, performance artists, models, and concept artists are already in dire straits.
@@PazLeBon Seems like you're salty that -gasp- newbie and hobby artists exist and post their art (and they also deserve to be protected from theft, regardless of your taste), and judging by your other comments, you'd rather have the entire field burnt down instead of allowing these artists to grow. How naive, and how short-sighted. Apparently you'd rather sit around and bitch instead of finding artists that tickle your fancy better. They're out there; you're just not moving at all. Perhaps you should blame the algorithm (which is also AI, lol)? Or better yet, if you actually care, go and look for art yourself like humans have always done before the algorithm existed. But it seems like you don't care enough, so whatever opinion you have about artists is kind of invalid under that implication, eh?
I hate the term AI because AI has never existed in the way they want you to believe. It is literally just machine pattern learning, something that has existed in some form or another since the '90s. Of course, OpenAI is slightly more advanced, but it isn't any sort of intelligence yet. It of course has its place in sifting through data that humans physically couldn't handle in a reasonable amount of time, for example large scientific datasets for research, or even using AI to "watch" every YouTube video for moderation purposes, because so many videos are uploaded to YouTube that it would be impossible for every one to be watched by a human. Generative AI obviously needs heavy regulation, but even that has a use case that isn't just "replace artists."
@@danielmaster911ify I find it funny that people on YouTube are complaining about the climate impact, as if the energy to run the data centers that show them these videos is not putting a massive load on the energy system.
Enjoyed the video. I do want to address something, though. "They're good for coding" They're not. Problem is, most people in the software industry are incompetent. Mostly managers, stakeholders, etc.
The thing with people arguing about whether AI is really thinking is we don’t know how we think. I can’t even really prove I’m the same person I was before I went to sleep. How would we meaningfully know if AI was thinking?
You're asking a fundamentally philosophical question while missing the practical, meaningful differences. An "AI" can use a statistical algorithm to produce plausible conversations, but it'd be equivalent to a person memorizing manuals on a topic they know nothing about: they may sound smart, but they're completely clueless and just recalling from memory. Such an "AI" can talk about pineapples but can't really understand what a pineapple even is. It can talk about feelings, etiquette, code, whatever, but it's just an algorithm. Unless you consider all algorithms inherently intelligent, as in possessing an intelligence instead of following sophisticated instructions, they're absolutely not intelligence. When/if we develop better, stronger algorithms that can actually replicate basic human learning and reasoning, we can go back to philosophy and metaphysics, but the fact is that right now the discussion is a matter of how much less sophisticated these algorithms are compared to our own brains.
@@MaryamMaqdisi I don't think that's what OP meant. We also don't know what processes cause us to be conscious and able to think; we just know we do. So replication, at least, isn't possible, because we don't understand it in ourselves. Also, if these algorithms can become intelligent, it is more likely that their intelligence will be unique to them.
If AI becomes more intelligent than humans, and cannot be distinguished from a human (without looking at it), what will differentiate it from humans? Our souls? Our consciousness? We don’t know anything about souls or consciousness - we can’t say what does or doesn’t have a soul.
LLMs only work on discrete input -> output. The machine isn't running on your data outside of the prompt -> response cycle, so there really isn't an opportunity for any person-like thinking to occur. It's similar to what you'd expect if you had a human brain with no natural activity in a vat, and you were activating/probing regions of it for info.
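A minimal sketch of that point in Python. The `model` function here is a hypothetical stand-in for any stateless text-completion call (it's not a real API): the only "memory" a chat has is the transcript the client re-sends with every single turn.

```python
# Sketch of why chat "memory" lives in the client, not the model.
# `model` is a hypothetical stand-in for a stateless completion function:
# it sees only the prompt string it is handed, nothing else, and calling
# it twice with the same prompt gives the same behavior.

def model(prompt: str) -> str:
    # A real LLM would generate text here; this stub just reports how
    # many user messages appear in the prompt it was given.
    return f"[reply to {prompt.count('User:')} user message(s)]"

def chat_turn(history: list[str], user_message: str) -> tuple[list[str], str]:
    history = history + [f"User: {user_message}"]
    # The ENTIRE transcript is re-sent on every turn; that concatenated
    # string is the only context the model ever receives.
    reply = model("\n".join(history))
    return history + [f"Assistant: {reply}"], reply

history: list[str] = []
history, r1 = chat_turn(history, "Hello")
history, r2 = chat_turn(history, "What did I just say?")
print(r1)  # [reply to 1 user message(s)]
print(r2)  # [reply to 2 user message(s)]
```

The second turn only "remembers" the first because the client pasted it back into the prompt; nothing persists inside `model` between calls.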
I find it great that actual computer scientists are starting to go public and call out the AI bullsh*t. I also found this new podcast on YT called AI Geeks Podcast from a British tech entrepreneur who’s involved in AI and has gone public calling everyone out as well 😂 Does this mean we are reaching the end of the hype if some of the people behind AI are now going against it??
Disappointed to hear the one expert give the "positive" use case as something that isn't different or revolutionary, because there's many apps for that already. Was hoping to hear some of the projects being developed around disabilities, or a new approach to how we can analyze XYZ etc. To be fair to him, it can be hard to find and they're going to be way underfunded so the success of it is not optimistic. But it felt like that's why Adam asked the question, so I'll just pitch it here that there are great use cases out there that are worth developing and funding.
Dr. Narayanan has a lot of optimism about self-driving cars, which I find odd in the context of this AI conversation, since many of the problems he rightly points out for AI are fundamentally true of self driving cars as well. These are things that are inherent to the concept of the technology, such as energy and manufacturing, and therefore environmental costs, the shifting of real costs to the consumer without regard to knock-on impacts, theft of IP and PII on a mass scale, invasions of privacy, lack of representative training (i.e. racism), and requirement to always express an answer even if no good answer is available. This double standard is expressive to me of how far the fundamental immorality of technology is ignored by otherwise very smart and fundamentally decent people in the name of innovation and naive utopianism. It's kinda bananas.
Thank you for saying this! You are 100% right. It seems like smart people only use logic sometimes. Even if the topics are related to their field of study. For example, Dr. Michio Kaku has a PhD. in physics, but yet eagerly talks about Elon Musk's Mars colonies and hyperloops. Now, are they charlatans or truly blind to facts? We are all influenced by our own "bubble" forces, but logic should be able to prevail through curiosity.
Disagree that the public shouldn't be responsible for judging whether or not a thing is useful or trustworthy. It is very much our responsibility as educated adults to work hard as media consumers and lead with skepticism. Teachers are now responsible for subjecting students to this type of lesson, because their world is filled with liars, marketing, half-truths, and full-on conspiracies. This was a great discussion.
But teachers are people who have never really left school and spend their lives with kids. The few teachers I know are kinda naive and... well, I feel like I'd make a better teacher, having spent most of my life in the adult world... tongue firmly in cheek ;)
Do you know anything about teaching media scrutiny and critical thinking, or are you just an armchair quarterback who has never worked a day as a teacher?
Adam, if you understand why a TV writer shouldn't be replaced by ChatGPT then you should understand why a visual artist should not be replaced by a Midjourney clone inside of Photoshop. It's strange to me that it kind of seems like you don't.
@@sebastianschweigert7117 Not really. Coming up with good ideas and then figuring out how to cohesively put those ideas into text in a way that is good and entertaining requires one to 1) be capable of actual, complex thought and 2) understand human and cultural contexts. Since AI is not capable of the former, it is also not capable of the latter. Generative AI is a glorified version of the word-prediction systems we have on our phones. It can give you something that approximately looks like, say, a script. But it can't tell the difference between a good and a bad script, because again, it isn't actually capable of thinking. It just makes a statistical prediction of what a script, based on its training data, "should look like" and then spits out a version of that. And the result often feels like an alien producing something it THINKS a human would make, without actually understanding humans, or why we make things in the first place. AI is such a misleading name because it's not actually intelligent.
Snake oil is a perfect analogy here because the snake oil does actually have benefits in certain circumstances, but usually not the benefits that were advertised. And far fewer benefits than were advertised.
Snake oil is just another way to say turpentine. It's a wonderful solvent for painting, but quacks advertised it as miracle medical cure-all. It really is a great metaphor for the unregulated ai bubble.
The term AI is intentionally misleading. As a software engineer, I can assure you it is not real intelligence. We literally have not even begun to comprehend the depth of the concept itself. What businessmen and techbros are calling "AI" is actually just incredibly complex algorithms and machine learning. You can't say these programs are "AI" because they are not sentient - they aren't capable of what we loosely try to define as intelligence.
AI already sucks, and its use of copyrighted materials is currently the subject of a court case, I believe. Should that all turn out well, AI trained this way will be deemed unethical.
Hope nobody tells you about the sources of literally every aspect of modern civilization, and all civilizations before that. If you can believe it, your smartphone is made with things even more immoral than scanned pictures.
Two questions I have about AI that I never see asked on these things: 1. What happens when we have "AGI" and we ask it questions and the answers are all things we already know? How do you do fusion? AGI: give me unlimited resources and 500 years and I'll figure it out. 2. What happens to AI and to people when AI meets people who refuse to believe facts, and the people demand answers that fit what they believe and not what's true?
For those who don't know what these AIs are: they convert complex info into simple tokens, then run those tokens through pattern recognition to predict how those tokens should be ordered. It's not much different from autocomplete. It's just predicting what stream of numbers it should spew out after being given another long string of numbers. Great for analyzing very long and complex patterns; it's just a very powerful mad lib.
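That "powerful mad lib" idea can be sketched in a few lines of Python. This is a toy, not how real LLMs work internally (they use neural networks with billions of parameters instead of lookup counts), but the loop of "predict the likeliest next token, append it, repeat" is the same:

```python
from collections import Counter, defaultdict

# Toy "language model": count which token follows which in some training
# text, then repeatedly emit the most likely next token. Real LLMs replace
# these raw counts with a neural network, but the generation loop is the
# same predict-append-repeat idea.
training_text = "the cat sat on the mat and the cat slept on the mat".split()

follows = defaultdict(Counter)
for cur, nxt in zip(training_text, training_text[1:]):
    follows[cur][nxt] += 1

def generate(start, n=5):
    out = [start]
    for _ in range(n):
        candidates = follows[out[-1]].most_common(1)
        if not candidates:
            break  # token never seen mid-sentence; nothing to predict
        out.append(candidates[0][0])
    return " ".join(out)

print(generate("the"))
```

Nothing in there "knows" what a cat or a mat is; it only knows which token tends to come next.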
Case in point, as a programmer, my favorite application of all this AI is simply...better auto-complete. In IntelliJ working with java code, there is lots of helpful auto-complete when doing tedious repetitive things that follow a common pattern. Used to be it could only complete a function or property name, now it can autocomplete entire expressions/statements that are following patterns found elsewhere.
Did you know your brain only ever gets patterns in nerve impulses? You/consciousness is a subroutine in pattern recognition system. It's outrageous how people think they are magic of some kind and dismiss technology that's functionally superior in most measurable ways
@@someonenotnoone So true. Intellisense is great and useful for finding functions, but GitHub Copilot writing a bunch of conditional statements with repeating code for me is amazing. I can focus more on how to write the complicated code. Also, I found it nice for building the boilerplate setup in any APIs I create. It's hard to go back once I realized how much I like being able to press tab to skip the tedium.
@@pin65371 Word choice isn't even a conscious process unless you choose to actively deliberate over diction, and even then it is more in an editorial role... kinda like CoT. So many people are radically misunderstanding the import of these systems because they have no clue how their own cognition works.
There's really no evidence that self-driving cars are ever going to be good enough to just let loose on the roads. There are major problems that have shown no signs of being solved. They are currently much less safe than human drivers on average
@@imacds Those already exist and yeah they work well, in my experience. My wife's Toyota has both of those and they've come in handy. I'm still not taking my hands off the wheel or my eyes off the road, though. They're an order of magnitude less complicated than a car that fully drives itself.
@@imacds Yes. And in this respect they are augmenting the human. Self-driving is aiming to replace the human. Sometimes replacement is better than augmentation, but I'm sure most human people would rather be augmented than replaced.
Reading through the comments, it's pretty clear that people haven't actually listened to the video. These guests aren't saying AI is complete BS or a scam or pointless or a bubble, etc. They're pointing out instances where it's being overhyped or misrepresented. Like any new technology, AI is being misused, overhyped, and oversold.
Back when I did software development in corporate IT, I had to endure the proliferation of code generators that allowed inexperienced coders to put something together that works with just point-and-click premade modules. The end result barely worked and was very inefficient. In the end, the corporation preferred script kiddies who worked cheap, preferably overseas. I thought I would be programming till retirement, but now I get to watch the industry destroy itself from a distance. Happier now.
I'd just like to point out that you don't need AI powered automated cars to cut down fatalities. You just need a correct way to plan the infrastructure and lower the speed.
Cheating? I forget; pagers were new when I was in high school. The internet and cell phones may be a danger to youth education; AI is just another added issue for them.
@MadDragon75 Weird comparison. Tobacco isn't even good for adults, let alone children. Also, are you expecting kids to just not use technology at all? They play video games; why not computers? Stop gatekeeping.
@@Pikachu2Ash The Internet isn't good for adults, let alone children. I stand firmly by my observation. The logic you use is very pliable, and yes. I expect children to stay off of the Internet when it comes to answering homework with artificial intelligence rather than doing the research. We gave parents the responsibility to monitor their use and it failed, like letting kids drive or smoke. The Internet is the car in this sense, AI is the tobacco.
Before anyone else decides to hate bomb my comment, ask yourself this one question: Would you want to drive across a bridge that was designed by an A+ student that graduated with honors and used AI for the answers to become an engineer? There's already a good reason why many intelligent engineers are denied a diploma the first, second or third time.
Should we not be requiring these companies to add source footnotes when returning images or text, from a copyright perspective? Should it not be held to some standard? The "where is the source material from, and by whom" validations are missing here. AI is definitely A but nowhere near I. Great conversation.
I've noticed that the less people know about computer science and software engineering, the more they believe in the nonexistent "magic" of this hype-cycle "AI" snake oil.
I think not. It's older folk that belive in this technology the least. The ones that have never heard of many, if any, conventional talking points of AI. Be it naivety or ignorance, it's those who haven't kept an eye on this that believe it's a mere fad or trend. Perhaps like electricity or the automobile.
@@danielmaster911ify is a great example of OP's point. If he knew anything about CS, he would have mentioned it. I've been programming software for 35 years, and I think it's a glorified compression algorithm. Happy to debate anyone with technical knowledge.
@@danielmaster911ify Are you going to learn to code in a year, or is ChatGPT? None of the experienced devs, and I include myself, use any of these things beyond formatting data into tables or something. My last attempt with the reasoning model had it hallucinating methods on a large public open-source API. For someone who knows what they're doing, it's a literal waste of time. But I'll definitely remember to follow up with you in a year.
As someone working in the data field, this is hard to listen to. LLMs greatly help our work because of the large knowledge base they are trained on. You just need to be wary whenever the LLM gives you conceptually wrong answers.
This whole bubble is going to burst when investors run out of patience and money. OpenAI is burning $5B a year right now, and selling their product at something like a 99% discount to the ocean of startups that have AI names. The instant OpenAI buckles, and it will(it’ll have to be valued at MORE than $200B in the next funding round😂) these startups are toast. The bottom line is that today, it costs WAY TOO MUCH to run and train LLMs, and their actual value is negligible in practice. This is probably the worst bubble I’ve seen in 15 years of working in startups.
Okay, I guess I was just unaware that this was the general opinion of "AI" (or, as it should be called, "Machine Learning"). I am one of those people who advocates for the usefulness of Machine Learning, and how it can make our lives better. But I see it is a way to automate some really labor intensive tasks like data entry (and even some data analysis), or translation. Imagine if you could do your podcast, Adam, then have it intelligently translated into 100 other languages, with metaphors and social context intact within seconds. THAT is the kind of thing Machine Learning, I think, can and will do. I think "AI" (as it is being called) WILL be huge and earth shattering, but in a much more boring way than it appears a lot of people are seeing it.
A failed project that made the user look like a dork, while giving everyone else the impression they were being recorded when they were around... is not quite comparable to machines that can compete in mathematical olympics.
31:00 ah, no. Other than some very trivial structuring and checking, the evidence is mounting that LLM-based Gen-AI is of little reliable use in coding. Try looking more broadly and deeply into a) the underlying classes of problems inherent in Gen-AI used in coding, and b) the underlying core problems in software development. That's not easy given the signal-to-noise ratio being created by Gen-AI hype, but you'll find the decent critique fundamentally undermines any value. 32:00 Again no: and this is key the common aspects of many Gen-AI failure modes. The level to which Gen-AI will impress you is typically inversely proportional to your subject-matter expertise on that topic. So the extent to which Gen-AI "coding" or "bird identification" is impressive tends to be related to your lack of knowledge, skill or capability. Update: ... and this is exactly the point made @ 52:15
These guys overstate what AI can do for programmers because they work with bog standard AI algorithms that are well represented in the model.
Wow, you got an actual Indian (A.I.) on your podcast
I'm a software programmer and can't wait for generative AI / LLMs to crash like NFTs, Web3, Crypto and the like. Seeing that the energy and compute requirements are still not profitable brings me joy. We're in a consumer tech innovation plateau, they're desperate to create shareholder value.
Those and also add in a bunch of the more recent web development buzzwords. Web dev is so goddamn unenjoyable at this point
@@franjkav indeed and agree 💯. Everything is geared towards quick and dirty output and not innovation.
Me too. The most annoying thing was when some of my fellow software engineers hopped on the bandwagon against all logic. I want to watch the LLMs burn :D Although I see uses for LLMs in the game industry and for the impaired, I just want this hype to end.
Same. But also, I found the attitude towards LLM's usefulness in programming to be a great litmus test for skill, because it's inversely correlated with it
Ikr, I am tired of seeing pieces of art and, instead of appreciating them, wondering if they're AI. I keep trying to ignore that feeling but I very much hate it.
AI is mostly used by employers to subtly or overtly threaten the jobs of us peons. My last CEO told our overwhelmed, deliberately understaffed department that they were looking into AI as part of a five-year plan, and we just laughed at him. The job we performed requires critical thinking skills, empathy, and intuition. They did the exact same thing to an emergency dispatcher I know. The major difference was that they are part of a union, who shut that BS down asap. Imagine your 911 call being processed by AI instead of a human dispatcher who actually was a paramedic for 15 years. Just threats made by money-grubbing jerks.
This is the only reason AI is even mentioned.
Businesses lying to other businesses that the snake oil they are selling can be used to replace workers. Without that false promise, literally no one would invest a dime into AI and none of us would ever hear about it.
plot-twist: this entire video was generated by AI
shut up
I’m with you, but the problem is that those jerks seem to be the ones in positions of power. And the most infuriating part is that they seem to fail up most of the time.
Call centers will absolutely be replaced by chatbots though
If your solution relies on "If everyone would just..." Then it's not a solution. Everyone is not going to just. At no time in the history of the universe has everyone just, and they're not going to start now.
People "just" when it's convenient.
Please do not the public
People do things when it becomes cheaper for the same quality. Solar panels for example, rapidly grew in adoption as soon as it became economically viable to do so. The second AI is proven to be equal to humans at certain jobs (aka weak AGI) and is significantly cheaper, adoption will begin.
Unless it's the job of CEO
@@EpicVideos2
who would take responsibilty for AI’s output? Not the business owners, I’ll tell you.
My wife's department had a lay-off of half its personnel last year. The reason given was that A.I. was going to do their tasks; they were no longer needed. One year on, guess what? The remaining employees are doing twice the work they were tasked with. A.I. is yet another excuse to cut costs for those big fat cats.
How do you know that?
Nobody tell the business major probing for information the obvious answer to the question "how could you know the status of your wife's former coworkers?" The average IQ of a business major may be room temperature, but we still don't need any more of them pushing poor folk further into wage slavery by crushing basic human interaction.
@vitriolicAmaranth business major? I should be flattered to be so overestimated. I'm a lowly construction worker, sir.
@@vitriolicAmaranthTRUUUUU
@@vitriolicAmaranth The guy asked a simple question, dude. And you are immediately ranting, wildly assuming stuff about his life and insulting entire professions.
Want to know who could most easily be replaced by generative AI?
Techbro AI hypemen.
CEOs
Already done, AI podcasts are here.
Got em
At 147 upvotes, OP's comment is criminally underrated. It cuts to the quick.
Lol, there's literally an account here like-farming (maybe fishing for subs?) by summarizing people's comments in a naturally wordy, obsequious manner.
Calling recursive algorithms AI was the best piece of marketing in ages.
Underrated comment.
Who cares what they are called? The fact is ChatGPT understands my goals. No other program understands anything. I am a game developer; ChatGPT understands the game I am making and what I am trying to accomplish far more than Unreal Engine does. Unreal Engine, 3dsMax, Photoshop, Daz3d... none of these programs have any understanding of my goals. They are ultra primitive in that sense. Navigating software via tag lines and file types will become a thing of the past once all programs UNDERSTAND your intent and goals.
Most underrated comment ever.
Let's call it what it is: it's not marketing, it's pure evil. It's evil because these people have had the engineers in the room tell them that it's not actually artificial intelligence, that artificial intelligence doesn't actually exist yet, and the marketers heard that and didn't care one bit. The dollar signs falling out of their eyes wrapped around their heads and blocked their ears, and that, my friends, is a modern form of evil.
@@ZZ-sb8os It's deceiving consumers and making false promises with a subpar product. That in itself is already morally wrong, but I would reserve the word "evil" solely for circumstances where not knowing that it isn't AI puts lives in danger.
People forget that Amazon's AI automated store was literally a bunch of underpaid people from South Asia watching camera feeds and charging the customers for the products they picked up when they left the store.
That was more UI than AI. Underpaid Intelligence.
Well, it was AI though, just not the kind we thought it was. It was "Attentive Indian" technology.
@@TimoRutanen lol
@@TimoRutanen No, no, it was AI: Actually Indians
As an experienced programmer, the job-related worry I have with AI is not its usage by experienced professionals, but at the entry levels. On one side, we will have junior devs more focused on generating code with AI fast instead of understanding the problems and the techniques needed to build a solution and write better code (or identify bad code). On the other side, we will have companies deciding they may not need junior devs at all, since the job usually assigned to those levels "can be assigned to AI", without understanding the problems and risks of producing bad, vulnerable, and hard-to-maintain code...
I think it will just take time to learn how to use - instead of copy and paste from stackoverflow you now have some back and forth with an LLM
It is a catastrophe just waiting to happen.
Yeah, this is my concern - and not just for SW devs. The problem is not, what do subject matter experts think, but rather, how much money does an ignorant executive think they can save? I was on the job market during the first wave of AI hype, and it was shocking to see recruiters getting laid off en masse.
That seems to be the problem for a lot of industries. I've been saying that as an artist since the hype started: the problem isn't that it will substitute everybody, at least not for now, but it will make it much harder for people looking for entry-level jobs, and in the long run it will be much harder to find people who have honed their skills to the point of more senior positions.
building empires with no foundations
I love Sayash's comment that AI sounds like an expert in everything that you are not, but when you are an expert in something it definitely doesn't.
my experience, exactly. but, reality is a social construct. so, ai may eventually define what is real, and then it will make itself infallible
So basically a lying machine.
Exactly. It works great for things you know nothing about. Because its just lying confidently.
It is like documentaries on Nat Geo about things you don't understand that sound so technical and precise, until they make one in your field of expertise and then you go: "wait a minute... it doesn't work that way!"
Well how about chess? Human experts vs ai?
Y'know I just really think it would be so much easier to invest in trains and other public transit to make commutes easier and lessen fatalities. like that just seems way easier and better for the environment than waiting for self-driving cars. Just really feels like a vegas loop solution.
what happened to working from home anyway?
@@doodle8 youre getting a full 3 years? dam showoff
As they mention in the video, I feel that in the programming field I tend to hear "How do we do this?" a lot more often than the questions that should come before it: "Why do we want to do this?" and "Should we actually do this?"
I think I've figured out a trick: Look the manager-type with the request and responsibility for the consequences in the eye with a psychotic grin and say "Computers are Turing machines, we can make them do literally anything no matter how insane or stupid. If you ask me if something is possible, my answer will always be yes. But the harder and more important question is "Is this a good idea?". That's the question we can work through whenever you want. I await your invite."
How do we use AI to solve this? Before asking if there's even a problem to fix.
It's not a matter of people asking "How do we do this?" thats an issue here. It's that they are doing it for profit, and not for science.
A lot of tech bros didn't watch Jurassic Park and it shows
Well I find that techfolk are extremely dumb when it comes to practicality and morality these days. There are a lot of good smart ones too so hopefully they can fix the problems the rest of them will cause.
"You want attention, more than you want factuality" is such a great comment on our media at the moment.
I mean.. isn’t that all you need?
As somebody who has been searching for a new full-time job since October of last year, AI in applicant tracking systems has become my arch nemesis and made my life a living hell... Constantly updating the resume for each job just to try to make sure that the ATS doesn't boot it out automatically is maddening. Hundreds of applications with such a low response rate to begin with, and even when there is a response, 95% of the time it's just an automated message from the ATS telling me to go fuck myself. It feels virtually impossible to find remote work right now, even for jobs that I'm vastly overqualified for.
Peachy, init?
It’s nuts and killing the industry.
Link to a website at the top. That increases the probability the AI passes it through, since the AI won't scan the website.
@@dubfitness595 Does linking to my LinkedIn site count? Because I've had that site linked on my resume for the past year and it doesn't help push it through
@@NicoleTedesco All these executives laid off their TA, and now LinkedIn is packed with people complaining about how hard it is to get a job, and how few qualified applicants there are. It's insane.
As a graphic designer, one of the biggest concerns I have with some of these arguments about not being able to replace us is that there are already people being replaced. And naturally that's nothing new, we've had bosses who try to take over our jobs with canva long before generative AI, but the new technology certainly isn't making it any better.
Right and even when we aren’t being fully replaced, we’re being left with the far less creative aspects of the job like cleanup when they are using ai to generate all the images/logos.
Yeah I come from a creative background as well and found myself disagreeing with a lot of the points being raised in this episode. Digital VFX and graphic design are being profoundly changed by AI developments on a weekly basis almost.
They aren't being replaced they're being fired because the economy is in a depression
Exactly. I have worked alongside graphic designers as a content writer and I totally agree. These dumb managers think that design is all about sticking images together. They believe that they themselves can be a designer because they too can create posters on canva. What they end up doing is killing the will of the designer to be creative. They think that design is just a filler content or complementary to the main thing, and since AI is generating them in an instant, they want same rapid development too. This is a sure shot way of killing the creativity aspect of the design.
Being unjustly smug while running a scam is a tradition as old as capitalism.
The tech bros don't know much. But they learned that.
Capitalism isn't a tradition, it's an ideology that makes it possible for people to be free to do what they want. Laws determine the boundaries of that ideology.
Capitalism is a pyramid scheme, and it certainly does not "make it possible for people to be free to do what they want", unless you are at the top of the pyramid.
@@taylors4243 which phone software do you use? Apple or Android? I ask because capitalists have freely chosen to restrict you to only those two options.
You can't really talk about being smug on an Adam Conover video, he's the smuggest person alive
100% of sales depends on the customer volunteering for the purchase.
Getting them to volunteer... that's where the real skill is. It's the one thing someone like Elon IS actually good at. Those that can't... podcast & comment. Those that can... hawk product on the big stage.
What was very telling to me was a recent interview of the ex Google CEO who didn't know he was being recorded. He said AI could be used to replace those "arrogant programmers" who don't do what you ask them to, not realizing that he is the arrogant manager who thinks technology is magic and whose instructions have to be reinterpreted by the programmers to make any sense. He also thought AI could then be used to just create a new tiktok copy within 30 seconds, which was the final confirmation that he has no clue.
I'm going to need a source on this one mate
@@manzenshaaegis8783 Just search for something like "Eric Schmidt Leaked Stanford Talk on AI"
He even had to publicly apologize for it because he suggested to "steal all the music" from other platforms and the music industry didn't like that very much.
@@manzenshaaegis8783 th-cam.com/video/mKVFNg3DEng/w-d-xo.html
I watched it too. He's a very rich man who had no problem pulling the ladder up behind him. No concern for the rest of us and our inability to gain access to the opportunities he was able to solicit. This is how these tech clowns think. Each of them should carry a mirror with them at all times so that they can fall in love with themselves over and over again.
This being a comment on a TH-cam video makes me laugh.
A big part of my job is translating human to robot because the back-end developers DO NOT speak the same language as front-end people facing folks.
We should never forget that the top guy at the top AI company, Sam Altman of OpenAI is saying that in order to power the next evolution of their LLM product, they basically need the electrical power generation of nuclear fusion. You know, the thing that we only in the last year managed to get a net positive reaction from and requires a high school gym worth of vacuum chambers and lasers and is nowhere near commercial use.
Maybe there's some exaggeration in his statement, but bro has basically admitted that his product is coming up against a hard wall and he's going to give the most optimistic view on the issue. Whatever generative AI is actually capable of now is not going to get much better.
what its capable of right now looks very impressive, but only because google search is so crap
hey if AI can pass my graduate school work (which it is w/ flying colors), im cool if it stays where it's at
@@oopalonga that would be just great if it didnt take so much fking money and resources to develop something that just does our homework, makes us dumber, and solves a problem search engines created
@@___echo___ well when it comes to learning the shit that's important for my graduate school im all in, but when it comes to the bullshit classes they force me to take and pay $$ for, AI is definitely completing it :).
also, AI has made me immensely more educated--i like learning using a socratic approach w/ it and it's been working well. so i def don't think im dumber b/c of it, just more efficient
Fusion is not going to be commercially viable; it's always been nonsense. It gets research money, though.
Classic "solution looking for a problem". No sir, I do not want your robots to help me sanitise my texts or emails, drive my car, make my art, compose my music, steal my work. I need it to do my laundry, tidy my house, take out the trash, change the sheets - do the mundane things to free up my time for the creative stuff *I* want to do.
I do want robots to drive my car, though I'd definitely prefer to take a train.
It’s not designed for you or me, it’s designed for the owner class so they can just fire more people. Capitalism uses almost every technological advance for the benefit of the few.
AI is inherently a solution before a problem. The whole point of "intelligence" is that it can adapt and provide solutions to nearly any problem.
The issue with LLMs is that they aren't actually general intelligence
@@sebastianschweigert7117 At most, they’re an automated predictive text. The user just doesn’t get to choose which word comes next.
But I do want it to compose my music and make my art. If I'm a small business, instead of paying the record labels their ridiculous fees, I can now just play generic background music for free.
Same with art. I wanna generate a logo, and now I can save thousands doing so. And the great thing is I can pick and choose.
4 minutes in, I just want to say that as someone working in the HR industry, I get several emails a day trying to sell me AI tools. It never ends.
Don't buy into it, for the love of those poor applicants.
Same here. DEI and balancing experience/education was a priority at my last job. Leadership was fearful that these AI systems would skew in unintentionally biased ways.
@@basicallymid As a researcher in AI ethics, they *always* do. It's effectively inevitable. No such thing as a neutral dataset.
get an ai tool to screen ai tool advertisements
scam of the century
Like social media, LLM has been given to the public "for free" while the cost is hidden but still being paid. Social media sells all your personal data, AI uses a huge amount of energy AND takes your data for its model.
you don't have to use it you dumb cunt.
Use local models then. Fine-tune them on your own data. There is hardly any justification for cloud-based inference.
Oh no, it uses my personal data, my precious personal data? And then? What happens next? How is it gonna change anything? Most people, 99.9% are boring and average and will never amount to anything much. What the heck are you worried about? They’re gonna find a treasure trove of useful information in your personal data? Did you solve nuclear fusion, find a cure for cancer, develop a solution for a peaceful Middle East?
ohh no, not my data...
Humans use data to create their own internal models. Musicians seek inspiration from other musicians. Writers are inspired by writers, etc.
Do *not* trust chatGPT to tell you what a plant or animal is *please*. Especially if you’re foraging, it will almost certainly get it wrong. You’re allowed to not know. It’s ok to not know or bring a book on local wildlife with you. Do *not* use an LLM for that because it is often wrong and can be dangerous if used in certain situations.
Yeah, that statement made me go "really...?" AI still can't figure out hands. Are you telling me they can tell apart the patterns on leaves or fur?
Thank you for pointing this out!!! Parksboard/local nature websites, search algorithms, wildlife books.. There are so many better options out there that don't rely on "AI".
PlantNet is quite ok but is should be only used as a starting point
As an artist, I've noticed that those folks who use "AI" generative images tend to take more time editing the results and rerunning the prompts, than it would actually take to either draw it or find a stock photo already taken. Art takes time, but generally, it makes more sense to understand the fundamentals to a certain degree than it is to use "AI" in the process. There are plenty of tools out there that help artists that aren't fundamentally flawed or using stolen work.
That happens in other fields as well. As a lawyer, I sometimes use ChatGPT to summarize text to then add it to my documents, but half of the time it takes me longer to find the right prompts than to just write it myself... and I HAVE to read the thing I want to summarize first either way, because the thing is so unreliable. And then you still have to fix the result because it "writes" so weird, lol.
This reminds me of a paper titled "ChatGPT is bullshit" (by Michael Townsen Hicks, James Humphries, Joe Slater). Well worth a read, it's both hilarious *and* informative!
i just read that paper as well! Conceptualizing ChatGPT and similar forms of generative AI as bullshit is absolutely chef's kiss perfect
Using "bullshit" as defined by H.G. Frankfurt in his work "On Bullshit"? Yeah, I can see that.
@@bramvanduijn8086 That's actually the very definition cited by the paper haha
Oh, this is a great paper.
yeah, it is a funny read.
You know what will really cut down on car accidents?
Eliminating individually operated cars. Road re-design. Stroad elimination. Streets segregated by vehicle type/purpose. Investment in mass transit and walkable cities. No "AI" necessary.
31:30 Actually, artists are jumping ship from Photoshop in droves. The AI tools in it are essentially just sampling one artist's work to "help" other users... without the first artist's knowledge, even, much less express permission... Also, they seem to have given up on fixing bugs and correcting issues and gone all in on AI, so... 🤷
Not to mention their pricing model and deceptive cancellation policies 👉🏻👉🏻 Great job, Adobe.
I used to be all about using photoshop for digital artwork. Over two decades loving and obsessing over cging, but have since pivoted back to watercolor work.
Digital art just isn't impressive anymore now that AI is a thing, which is a shame because there are so many supremely talented CG artists out there whose work will be drowned out in all the AI slop.
@@nperegri and, in all likelihood, their art will be scraped and added to the training data of an image aggregation algorithm or 10 that will then regurgitate it in pieces as if it were theirs.
I work in pencil and don't post anything anymore b/c it'll just be stolen, and help some rando post suspiciously similar pieces (or Boris Vallejo / Luis Royo / H.R. Giger-like images, or whatever) as if they'd drawn them themselves.
I wish this artist exodus were true. According to the Q3 earnings report from Adobe, their annual revenue increased around 11%. I'm not sure where that comes from. Maybe all the artists jumping ship were paying cancellation fees, and that was the increase.
Yes, and Adobe is being sued for its predatory fees and other cancellation tactics that made it hard to end your subscription.
Telling your kids what plant species are via an app (regardless of how accurate it is) is preparing them to go to that app for every question they have for the rest of their life. It's okay to say you don't know something, and it's even more important for children to learn that finding the right answer takes actual work.
Experts use plant ID apps in the field. Not 100% accurate always but can get you in the family at least usually.
I mean as long as it is calibrated to say how sure it is, then it is pretty safe
I feel the same about Google Maps. These days we are so bad at navigating without it.
As a coder, I have to say they are not that great at coding. They are best used for boilerplate code (the code they have seen thousands of times in their training sets, because it's the generic framework code that people use over and over again).
In my experience boilerplate code is something I just copy and paste from one of my already existing templates. So why do I need AI again?
They're not even great at that - ask copilot for a quicksort and you will get something that looks like a quicksort until you actually take a closer look at the code it gives you.
@@yds6268 that's the neat part. You don't.
Your boss needs it as a paper thin pretense to justify paying you less.
Cursor is good, as it works off your existing repo
@@yds6268 Because of the enshittification of the internet, it's now impossible for google to produce even basic results consistently.
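The quicksort complaint above is worth making concrete: generated versions usually go wrong on small details, not on the overall shape. Here is a correct reference version for comparison (the function and comments are my own illustrative sketch, not anything a commenter posted), with the two classic failure points marked.

```python
# Reference quicksort: generated code often "looks like" this but
# silently breaks on exactly the details flagged in the comments below.
def quicksort(items):
    """Return a new sorted list; handles empty input and duplicates."""
    if len(items) <= 1:  # base case: sometimes omitted by generated code
        return list(items)
    pivot = items[len(items) // 2]
    # Three-way partition: the `equal` bucket is what naive generated
    # versions drop, causing infinite recursion on duplicate values.
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```

The point is not that quicksort is hard; it is that a plausible-looking wrong version passes a casual glance, which is exactly the failure mode described above.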
Regarding computer scientists getting caught up in the hype: I remember when they were getting super excited that ChatGPT had taught itself chess... because they didn't know that there are gigantic books of chess games recorded in chess notation.
In other words, ChatGPT wasn't understanding chess. It was autocompleting sentences of "chessese".
The computer scientists did in fact know that there was chess in the dataset. The mainstream media can latch on to stories and twist them to make them more palatable for a general audience, but the real reason it was impressive is that an LLM playing chess at an intermediate level, just from having read about chess in books, shows a remarkable amount of general learning ability.
The Chessese Room Experiment
Oh yeah I loved that story when I saw it, and there are amazing games of ChatGPT hallucinating chess moves that break extremely fundamental chess rules, like spawning extra pawns or moving its king into check just for it to be taken by the computer opponent because the computer opponent was never programmed with any better response to such absurd cheating! 😂
Oh but it's "intelligent"
@@EpicVideos2 It isn't really learning, because it doesn't know if it is right or wrong; it's just a probability, a number, a word
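The "autocompleting chessese" idea above can be sketched in a few lines: a pure frequency model over game scores written as plain text will suggest plausible-looking moves with no notion of a board or the rules. The tiny corpus and function below are my own illustrative assumptions, not anything from the episode.

```python
from collections import Counter

# A toy "model" that has only ever *read* chess notation as text.
# Hypothetical mini-corpus of opening lines, stored as plain strings:
CORPUS = [
    "e4 e5 Nf3 Nc6 Bb5",
    "e4 e5 Nf3 Nc6 Bc4",
    "e4 c5 Nf3 d6",
    "d4 d5 c4 e6",
]

def next_move(prev_move):
    """Predict whichever token most often follows prev_move in the corpus."""
    follows = Counter()
    for game in CORPUS:
        moves = game.split()
        for a, b in zip(moves, moves[1:]):
            if a == prev_move:
                follows[b] += 1
    # Pure frequency: no board state, no legality check, no "understanding".
    return follows.most_common(1)[0][0] if follows else None
```

So `next_move("e4")` returns "e5" simply because "e5" follows "e4" most often in the text, which is why such a system can also emit moves that break fundamental rules, as described above.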
I was getting trained on a JLG scissor lift at work today and they were banging on about the AI safety features they'd added to it. What were they? Basically proximity sensors that detect the presence of the operator and some obstacles... Uh, what? Babes, proximity sensors and a bunch of calibrated set points do not an AI system make. Oh, and they added a computerised voice to tell you when the thing is moving...
Whoa, proximity sensors
Thanks to AI we can now implement a technology that has already existed for 50 years
But here's the catch, it's also A LOT more expensive
@@matheussanthiago9685 They needed something to justify putting a monthly subscription software lock on a lift. AI was the perfect excuse after blockchain™ didn't stick.
lol totally
Coming from an electronics background: we have made various intelligent hardware systems without using any model. Now I work in data and AI, dealing with real AI, and I know how the two differ. What these MBA duffers have done is inflate the scope of the "AI industry" so that every intelligent system counts as AI, when in reality only soft computing systems that can calibrate on their own should be termed AI/ML. Intelligent systems can be hard-coded as well, which we call hard computing.
@@Curious_Citizen0 Sort of like how they started calling remote storage "cloud computing", when actually the internet is already remote storage by definition, and always was. That's what the internet is. "AI" is just computers doing what computers do.
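To the scissor-lift point above: a proximity "safety AI" of the kind described can amount to plain threshold logic with no learned model anywhere. A minimal sketch, with made-up distances (the constants and function are hypothetical, for illustration only):

```python
# "AI safety feature", in practice: hard-coded calibrated set points.
STOP_DISTANCE_CM = 30   # a calibrated threshold, not a learned parameter
WARN_DISTANCE_CM = 100

def safety_action(distance_cm):
    """Map a proximity-sensor reading to an action with plain if/else."""
    if distance_cm <= STOP_DISTANCE_CM:
        return "stop"   # cut motion near the operator or an obstacle
    if distance_cm <= WARN_DISTANCE_CM:
        return "warn"   # play the computerised voice warning
    return "ok"
```

This is decades-old control logic; calling it AI is a marketing decision, which is the thread's whole point.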
No no no. Generating images in Photoshop does not help artists. Beyond the fact that it was created by ripping off all our work to even exist, and was made with the goal of extracting our labor without paying us and permanently replacing us, it is just as disgusting, pointless, and soulless as using text-generating models to DM your friends. If (the general) you don't think your voice can be (or should be) replaced, why would you think the visual representation of an artist's voice should be?
Similar deal with the coding assistants he mentioned. The sheer scale of data required to make models that are remotely useful for that kind of assisted work practically begs the companies creating the models to throw copyright out the window, throw copyrighted work into the training data, and then throw lawyers at the inevitable ensuing lawsuits.
Pretty shitty-ass price to pay just for the possibility of an assistant feature that might save some people some time in some situations.
Copilot for programming has been a time saver for some people, but it's also produced a huge heap of churn (which is to say, the time until the code is replaced is very low), which takes up everyone else's time in a negative way.
Right now it's pretty cathartic to watch copyright lawsuits in the AI industry tipping in the direction of, if a copyright violation was committed in training the model then the forward use of the model (and reselling of access to the model) is a facilitation of that same copyright violation.
Cry much? lol - a lot of artists I know have 100x'd their output. They love it. I guess there will always be boohoos in this world.
@@kritikusi-666 Your comment history on this channel is visible, and reveals a striking lack of understanding for the topic.
@@orjanb4011 wtf u talking about? Bot much? This is my first time commenting on here lmao. You baboon.
@@kritikusi-666 I've been in the online art space for well over a decade. Most hate it. You just follow mediocre "artists" lmao
Wow! That part about so many business practices being BS resonated with me so hard! The only problem is that people get so attached to the way things are that they'll go to great lengths to keep it going. People would rather waste thousands of hours and dollars patching an existing broken system than read the writing on the wall.
See: the entire Agile scrum master industry
90% of the BS bureaucracy in modern economies is fear of litigation! Those 10-page risk assessments to take a train to a business meeting in a neighbouring city aren't mandated by government, nor are they a method to keep people safe; they are purely an ass-covering exercise. Same for many similar activities. Unfortunately, that means they aren't going to go away, so the best we can do is get LLMs to generate them, LLMs to check them, and minimise the time humans have to spend pondering them.
Customer service has been destroyed because of this industry. It makes me hate companies.
The problem is people using LLMs as search boxes. Both Adam and the presenter used examples that, while interesting, rely on the AI not hallucinating the answer. It's cool that it can tell you what bird is singing or what plant is pictured. But the next step is the one most people forget: verification. Adam said he'd use his birding app to ID the birdsong and then _search for the physical bird_. He verified what the program told him. Most people seem to skip that step.
Why bother asking the AI when you can just verify the thing from the start?
@@InvasionAnimation The same reason some people use wikipedia for school. It's faster to get a general idea and sources that need further investigation and verification than starting from scratch.
@@phoearwenien4355 Wikipedia is way better than AI, though. And using it is still research. With AI you just copy and paste, likely without even needing to read it.
@@phoearwenien4355 Current ai is essentially the "don't use wikipedia as a source" issue on crack yeah
I mean, they skip that step because the marketing around AI is based on claiming it's 'just as good as a person'. Putting the AI answer at the top of Google search results is communicating to me as a consumer that it's the best answer, because why else does it get priority?
Two things are true:
1 - AI has been around since the 1960's.
2 - This AI singularity race is one race where no one saw if there's a cliff after the finish line. But everyone is still running.
1 - and so annoying that Prof. Dreyfus wrote his "Alchemy and AI" in 1965
With the current approaches used by AI-companies, the singularity won't be reached in our lifetimes.
A cliff or a catapult?
I would add that nobody knows if there is a cliff BEFORE as well. Who knows, maybe there’s some plateau of performance that just can’t be broken. I wouldn’t take it as a given that a singularity is possible
@@Doctor-qs5dy We will not reach the cliff any time soon with the current approach, which produces barely adaptive content engines. The cliff comes when actual artificial intelligence emerges: an AI that actually understands concepts and can learn new ones, not just generative models that ape singular aspects of language and images.
I hate the forced Google AI searches that slow down my phone. Majority of the time I don’t even trust it.
This
ChatGPT is not good for coding. It doesn't code. It generates a statistical guess of what the code would "probably" look like based on its training data, just like it does with plain text. It hallucinates code just like it hallucinates that you should put glue in your pizza sauce.
ChatGPT *is* good for situations where you need to look up the syntax for a specific command, or where you don't know the name of what you're looking for but you can describe it. And even then it's not always right.
Yeah for me it essentially is a nice helper for generating boilerplate code, or a basic example of what I am trying to achieve in a technology I'm not an expert in, but you're just not going to build entire projects with that tool alone.
It's not going to generate your fancy app all on its own. You need to be knowledgeable enough to write the correct prompts, review whatever it spits out, rework the prompts again to account for the edge cases it missed or mistakes it made, etc. And the hard work in coding is managing existing codebases, not rushing out greenfield projects in a couple of days. Right now your AI will just not have enough context or skill to properly evolve your existing codebase; it's just going to enshittify it with repeated, poor-taste, and buggy code all over.
Honestly all I see it doing right now is destroying the ability of students to learn by giving them some cheat code to spit out the basics they should be learning by themselves, without thinking about how it actually works. Oh boy can't wait for the new generation of devs.
This. Also to help impaired people, like blind people, by describing things or doing basic tasks on command. And in the game industry, mimicking NPC talk without the need to manually write every little dialogue.
You're in denial. Accept that AI can code and make use of it. This is the worst it will ever be.
Yea, it can code pseudocode BS, that's for sure lol. Might wanna get out of tutorial hell and work on actual apps to see what it can (or rather cannot) actually do. I work on computer-assisted surgery apps; mistakes are not an option, my guy, and it cannot figure out context really well when you are using/re-using existing code and APIs from your private codebase.
My experience is quite the opposite of yours. ChatGPT, especially the current o1-preview, is quite good at coding, even rather complex stuff that needs real logic and intelligence to code.
The funniest comment I heard on AI was when Danny Gonzalez said it must stand for "Anonymous Indian" after Amazon was caught hiring a ton of staff to review camera footage in their supposedly AI stores to review purchases. Like Adam pointed out, it's always going to be cheaper and easier to exploit people than to develop this technology to replace them.
The problem you're not seeing about AI in hollywood, Adam, isn't that people will want to see movies made by AI, but that companies will try and do it anyway because they don't care about people.
Ya but the writing of a movie is the foundation you build the rest of the movie on top of. If you have a bad foundation you're going to make a bad movie that won't make a profit at the box office.
@@windy718 I agree, however much they do automate will make movies that much worse
reducing costs is pretty much a business thing, not so much AI
@andthatswhy5198 Yeah, and guess what they wanna use it with. It starts with an A and ends with an I and rhymes with K.I. and is AI. Duh
@@orsonzedd that's what the video is saying tho. That because it's all hype when you go down that route it will only be bad. You just stated what he already said
As interesting as this was, I would encourage Adam to inform himself on the data collection practiced by Adobe and why using generative AI for your "creative" process is not one of the valid uses he claimed. It's saving time, for sure, but by lifting the work of other users straight from the software and feeding it to their LLM.
Exactly!! That part frustrated me to hear, and i'm glad you pointed this out
Hearing Arvind Narayanan speak is like hearing a great professor break down complicated subjects into easily understandable points, it’s actually really impressive how intelligent he is and how he’s able to communicate his thoughts in a way everyone can easily understand it
The fact is that a CEO knowing about the latest technology and general computer jargon WAS pretty impressive in 1990. 1990.
But that’s the space boomers are still operating in
Artist here. I'm pretty disappointed that the main sticking point of GenAI was glossed over: "What trained the models?" In fact there is a very recent, ongoing lawsuit against Midjourney specifically because it was trained on a laundered dataset. I've tested this software out, and while my own views have blunted with time, let's not say that GenAI helps any creative's workflow. It does not.
This year, art directors were saddled with firing their teams to replace them with AI consultants, and the results were disastrous because these people have no critical thinking, problem solving, or expertise within their field. A prime issue with the software is that not only did it train on copyrighted datasets, it has also flooded the job market with untrained "professionals" with no skills to speak of.
who trained and influenced the artists?
@@PazLeBon What's the difference between a human intellect and software that plagiarizes others' work and reassembles it according to algorithmic logic guided by a prompt? Let me guess: nothing. Right?
@@CP3oh322 It's literally being taught to try and think like a human, so not as much difference as you suggest, in many ways.
@@PazLeBon Just as I thought. In your opinion, nothing. How wonderfully unoriginal; like AI art! Please continue thinking that a statistical model driven by predictive, node-based probability functions is on the verge of gaining sentience.
@@PazLeBon You can "teach" a toaster to think like a human. It doesn't mean it has the capacity to succeed in doing so.
Capitalism can be clarifying? Oh, please. I think the professor is missing the meta issue when it comes to the investor class: an amoral obsession with how to enrich themselves at the expense of others work and creativity.
Youre not wrong
Yea, I raise a brow at a few of their takes, like they seemed to believe in self-driving cars being safer. Adam is a fan of public transit; he doesn't push back at all and makes a weird face the whole podcast?
I love it how iphone and "extremely low tech device" went together in one sentence
Arvind brought up a point that I deal with in my professional life all the time. Domain experts are so very important in all of this work. Without them, all the programmers are able to do is make little toys that go "whizz-bang" when you push the button. It's the domain experts who can tell the programmers what terms mean, what's important, what makes sense, and what the end results need to look like to be of use.
28:36 That is the most disturbing part of the way they are marketing this "AI": suggesting to people that this tool will do the thinking for you.
hmm, the bigger problem is that it might do a better job
Which is the only possible path to Utopia humans can take.
@@PazLeBon maybe specifically with regards to problems strictly concerning logic, but currently with the hallucination issues probably not.
Adam finally invited actual computer scientists. Thank you for that
As a software engineer, I had the unfortunate opportunity to train multiple models and build multiple "AI apps" for my company in the last year. I hated it. They force AI into anything, just because the buzzword is a selling point. Not only is the technology not ready yet (and probably won't be in the near future), but it also gets forced into software where it doesn't enrich the product. I'm by no means an AI expert, but I've had enough experience with it to have a negative opinion about it.
On the subject of pre-crime - If we were Actually a "civilized" society, 'pre-crime' would NOT be about punishment - it would be about intervention, de-escalation and problem solving to prevent the crime from occurring while resolving the problem or problems that might have led to the crime in the first place. It probably also wouldn't be called "pre-crime" but instead perhaps something like 'crisis detection' or something to that effect.
- on the DeleteMe sponsor - I've heard that one of those kinds of services was the most recent victim of a huge data leak o.o;
We artists are actually quite outraged that programs like Photoshop have generative AI now, because generative AI is built by stealing our work in order to plagiarize it, and in many cases replace us with stolen copies of our own work. We've actually been fighting to ban it in creative industries; many programmers have as well, for the same reasons.
As far as it being used to identify birds or anything else, it's also notorious for giving not just wrong answers but dangerously wrong answers.
I hope there’s a class action cooking. I’m not sure if or when the ToS might have started allowing for this, but someone should push back.
ikr, I am slightly surprised he thinks artists like Photoshop/Adobe these days.
Trained on existing art? Yes. Stealing your work? Doubt. There is a flaw in your opinion: I am sure that while your skillset grew, you did the same by copying styles from what you saw in others. If you produce something unique by hand, the art enthusiast will always pick it up. Each physical creation has a story to tell, something generative AI cannot produce. Remember, it does not have feelings. It is just a program with a bunch of "IF" statements.
@@kritikusi-666 "It's not harmful, it is just a bunch of unstable particles. The radioactive particles are perfectly safe as long as they're used correctly. Let's give it to everyone possible." That's how you sound.
We artists? I'm cool with AI use; yeah, it's menacing but also so exciting. So I guess there are others who like it as I do.
It already stole my job, so I hate generative AI. It would be great if it got outlawed, but I know that will never happen.
What job did you have that A.I replaced you ?
@@lunchbox6576 Graphic design.
@@InvasionAnimation I'm sorry this is terrible to hear for you.
@@fluxonite Thanks. I'm hoping that they outlaw or at least pause generative AI. It feels like just about any field I could train to go into will be replaced by AI as soon as I learn how to do it.
@@InvasionAnimation The blue-collar trades will be last.
A lot of these techbros use the internet as an example to peddle their bullcrap and "debunk" the crash case, but the thing is, the internet wasn't built on an investment bubble. It had a lot of government and public involvement, more than private investors ever had.
It's literally the Dot Com bubble all over again. Get ready for the burst.
. . . . somewhat. PART of that bubble was the Y2K issue. It popped when it did and made the OTHER bubble issue a lot worse. At least, that is what it seemed like at the time. AI doesn't have such a steep cliff baked into its design, it's just going to gradually become apparent that it's shit at almost everything.
Ah yea, remember when the internet crashed and we realized it was all a gimmick
@@steve_jabzin, it did crash, it wasn’t all a gimmick, but a lot of it was, and the same is probably true of AI.
@@sharkbelly1169 No, the technology didn't crash, investors just lost money because they tried to profit off of limiting access to domain names by getting there first, like scalping tickets to an event everyone loves.
Personally I don't care if investors lose money anyway, but there isn't really an equivalence here because they're investing in practical applications of the technology itself, not reserving specific ChatGPT outputs nobody else is allowed to generate.
With 76% of professional developers using it and 82% saying it greatly increases their productivity according to the latest Stack Overflow survey, I don't see that suddenly going away, just as the internet never did. The Work Trend Index also shows 84% of Australian workers now generally rely on AI at work; 75% in the US.
I gotta tell ya, this is sounding like more of a cope than anything grounded in material reality.
I think there is merit to the argument against the hype. On the other hand I think it will change the world more than the Internet did.
4:12 If AI ever gets sophisticated enough to hire people based on body language and facial expressions, I hope that any company dumb enough to use it gets sued into oblivion for ableist hiring practices.
We're out here trying to combat systemic racism, and they're out there creating systemic_racism.exe!
As an expert in the science behind good hiring practices (organizational psychology) if anyone came to me and told me that they wanted to implement this it would be hard for me to reply because I'd be laughing so uncontrollably that I couldn't think straight
I occasionally use "AI" to help generate a complex piece of Excel formula. If it didn't exist I'd have to post the question to a forum and hope a specialist could help me out. So it saves me maybe a day of waiting/giving more detailed explanations.
That's about as useful as I've ever found it.
I literally noticed that I have autism and ADHD by describing behaviours I had and the symptoms of the burnout I was still suffering from, and asking it to look up scientific texts and give me the sources, etc.
I was already suspecting something like that, and that confirmation helped me get a formal diagnosis and the help I needed, instead of just being fed antidepressants.
It's as cliche as it gets, but I'm happy that this dumb data goblin helped me with something important.
I mean - that's pretty good if it saves you a whole day
So basically, it just looked it up faster than you could.
@@Thareldis I think this shows more of a problem with the healthcare where you are, that you needed to ask an LLM about your symptoms. But regardless, most of us nowadays discover we have ADHD or ASD by using the internet: you get recommended a video, see a meme and get curious, or you post something and someone recognises the symptoms. I discovered I had ADHD reading the Wikipedia article about it, and then I went to get professionally assessed.
@@bluester7177 EVERY millennial has it, therefore it's fkn normal lololol
Big tech has this belief that they can shape people's minds and desires as they want and easily. I worked in there in the past and went to some conferences where the way they talked about their consumers and normal people in general was utterly appalling : "people are stupid, people are puppets, with enough money, marketing and by using our connections with big business, we can force anything upon them, and they'll love us for it dammit!" is kind of a short summary of their mindset.
...I can't wait for that budding dictator mentality, and big tech in general, to finally take a slap in the face by losing billions. They have forgotten what business even is, and it's kind of poetic justice that they'll be destroyed by one of the most basic rules of the capitalism they claim to love so much (aka, "a business thrives by providing useful goods and services that people want enough to pay money for")
I'm a retired e-commerce developer and have seen people progress from intelligent consumers to absolute total pawns. The nonsense they sell today would have been laughed out of Dodge just twenty years ago. My favorite example is online services that sell you the food you have to cook yourself. That is classic.
As a dyslexic, I can say that LLMs are great as an assistive technology. It does a good job catching errors and rearranging my stream-of-consciousness rambling into something coherent. It's really good at typing up meeting notes after a meeting so that I can focus and be fully present.
It's not sentient and doesn't think, so it's not good at creative and thinking tasks, but it's really good at tasks that involve the structure of language specifically.
Very interesting. As a pretty AI skeptical person, the "test it at what you're good at" was interesting. I do that reflexively, and I only just realized that's the reason I am skeptical...
It's not capable of doing all that a human can do... but why do you think that will last? There's more to the world than the immediate future.
@danielmaster911ify
I read your comment, thought about the words you used, the order you used them in, and I'm at a bit of a loss.
I can't seem to find the logical thread that you used to connect the content and context of the comment you're replying to, and the content and context of your reply.
@@Petticca Must have been the wrong comment. My phone's a fold 5 and the front screen suffers from misalignment occasionally.
Nvidia is definitely the one selling shovels during the gold rush.
AI has been useful for me for writing scripts. I work with a lot of different technologies which all have their own scripting languages, or use similar ones with slightly different construction, and having to remember them all is impossible. Where I used to have to Google each and every time I was doing something different, I can just have AI write me a basic script for that specific OS and then fix and build upon it myself.
Have they evaluated o1 AI yet? The second question is, why are we spending the equivalent of the Manhattan Project on AI?
They are using the term “reasoner” for o1 - it thinks before it answers.
@@Tracey66 it also does reflection which is basically fact checking itself.
@@Tracey66 At the same time they don't want anyone to know how it works which makes me very skeptical about it actually being capable of properly reasoning in the first place.
@@Gladissims No question, OpenAI is a sketchy company for sure.
AI stealing copyrighted art and attempting to monopolize the entire art and entertainment industry is already having dire consequences for people in those fields. Workers losing their homes, losing their prospects, and the tech CEOs laughing all the way to the bank. Hoping they get sued into oblivion.
Thing is though, the amount of art I see created that looks like a 3-year-old did it......
@@PazLeBon Check out the various art contests that were won by AI (Spawn recently) and also think about art-industry workflows that don't necessarily need polished work until post-production. The current strikes are directly affected by this tech, and by the executives trying to push it to please stockholders while laying off entire teams every other day. The mix of hype, fraud, and theft is staggering, and unchecked they will monopolize the industry. Storyboard artists, performance artists, models, and concept artists are already in dire straits.
@@PazLeBon Seems like you're salty that -gasp- newbie and hobby artists exist and post their art (and they also deserve to be protected from theft regardless of your taste) and by judging your other comments you'd rather have the entire field burnt down instead of allowing these artists to grow. How naive, and how short-sighted.
Apparently you'd rather sit around and bitch instead of finding artists that tickle your fancy better. They're out there; you're just not moving at all. Perhaps you should blame the algorithm (which is also AI lol)? Or better yet, if you actually care, go and look for art yourself like humans have always done before the algorithm existed. But it seems like you don't care enough, so whatever opinion you have about artists is kind of invalid under that implication, eh?
I hate the term AI because AI has never existed in the way they want you to believe. It is literally just machine pattern learning, something that has existed in some form or another since the 90's. Of course, OpenAI's is slightly more advanced, but it isn't any sort of intelligence yet. It of course has its place in sifting through data that humans physically couldn't get through in a reasonable amount of time: for example, large scientific datasets for research, or even using AI to "watch" every YouTube video for moderation purposes, because so many videos are uploaded to YouTube that it would be impossible for every one to be watched by a human. Generative AI obviously needs heavy regulation, but even that has a use case that isn't just "replace artists".
Exactly. There's no intelligence involved, as you say.
Assholes Intelligence
Would be nice to remember how the CTO of OpenAI (who just left) was awkward in answering whether their AI uses YouTube videos.
Adam is right. The climate impact is crazy. Personally, I will be curbing my usage.
And crippling yourself in the meantime. You'll return to it. Especially as the internet becomes less and less friendly to humans.
@@danielmaster911ify I find it funny that people on YouTube are complaining about climate impact, as if the energy to run the data centers that show them these videos is not putting a massive load on the energy system.
Enjoyed the video. I do want to address something, though.
"They're good for coding"
They're not. Problem is, most people in the software industry are incompetent. Mostly managers, stakeholders, etc.
"Domain Expertise > AI Expertise" is the best quote. It basically applies to any software development. (I am a professional software developer.)
The thing with people arguing about whether AI is really thinking is that we don't know how we think. I can't even really prove I'm the same person I was before I went to sleep. How would we meaningfully know if AI was thinking?
You're asking a fundamentally philosophical question while missing the practical, meaningful differences.
An "AI" can use a statistical algorithm to produce plausible conversations, but it'd be equivalent to a person memorizing manuals on a topic they know nothing about: they may sound smart, but they're completely clueless and just recalling from memory. Such an "AI" can talk about pineapples but can't really understand what a pineapple even is. It can talk about feelings, etiquette, code, whatever, but it's just an algorithm. Unless you consider all algorithms inherently intelligent, as in possessing an intelligence instead of following sophisticated instructions, they're absolutely not intelligence.
When/if we develop better, stronger algorithms that can actually replicate basic human learning and reasoning we can go back to philosophy and metaphysics, but the fact is that right now the discussion is a matter of how comparatively less sophisticated these algorithms are compared to our own brains.
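The "memorizing manuals" point can be made concrete with a toy sketch (the training text and function names here are invented for illustration, and real LLMs use learned neural weights rather than raw counts): a word-level Markov chain produces fluent-looking sentences about pineapples while "knowing" nothing beyond which word tends to follow which.

```python
import random

# Tiny "language model": the only knowledge is which word follows which
# in this made-up training text. There is no concept of a pineapple anywhere.
training_text = (
    "a pineapple is a tropical fruit . "
    "a pineapple is sweet . "
    "a fruit is sweet ."
)
words = training_text.split()

# Build a next-word table from bigram counts.
table = {}
for prev, nxt in zip(words, words[1:]):
    table.setdefault(prev, []).append(nxt)

def generate(start, n=8, seed=0):
    """Emit plausible-looking text by repeatedly sampling the next-word table."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        choices = table.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

# Fluent-sounding output, produced purely from co-occurrence statistics.
print(generate("a"))
```

Every sentence it emits is grammatical-looking only because the training text was; scale the counts up to billions of parameters and the same recall-without-understanding argument is what the comment above is making.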
@@MaryamMaqdisi I don't think that's what OP meant. We also don't know what processes cause us to be conscious and able to think; we just know we do. So replication at least isn't possible, because we don't understand it in ourselves.
Also, if these algorithms can become intelligent, it is more likely that their intelligence will be unique to them.
@@bluester7177 That same logic would suggest your calculator can think too... or maybe even a rock.
If AI becomes more intelligent than humans, and cannot be distinguished from a human (without looking at it), what will differentiate it from humans? Our souls? Our consciousness? We don’t know anything about souls or consciousness - we can’t say what does or doesn’t have a soul.
LLMs only work on discrete input -> output. The machine isn't running on your data outside of the prompt -> response. So there really isn't an opportunity for any person-like style thinking to occur. It's similar to what you'd expect if you'd have a human brain with no natural activity in a vat, and you're activating/probing regions of it for info.
I find it great that actual computer scientists are starting to go public and call out the AI bullsh*t. I also found this new podcast on YT called AI Geeks Podcast from a British tech entrepreneur who’s involved in AI and has gone public calling everyone out as well 😂 Does this mean we are reaching the end of the hype if some of the people behind AI are now going against it??
Disappointed to hear the one expert give the "positive" use case as something that isn't different or revolutionary, because there's many apps for that already. Was hoping to hear some of the projects being developed around disabilities, or a new approach to how we can analyze XYZ etc. To be fair to him, it can be hard to find and they're going to be way underfunded so the success of it is not optimistic. But it felt like that's why Adam asked the question, so I'll just pitch it here that there are great use cases out there that are worth developing and funding.
Dr. Narayanan has a lot of optimism about self-driving cars, which I find odd in the context of this AI conversation, since many of the problems he rightly points out for AI are fundamentally true of self driving cars as well. These are things that are inherent to the concept of the technology, such as energy and manufacturing, and therefore environmental costs, the shifting of real costs to the consumer without regard to knock-on impacts, theft of IP and PII on a mass scale, invasions of privacy, lack of representative training (i.e. racism), and requirement to always express an answer even if no good answer is available. This double standard is expressive to me of how far the fundamental immorality of technology is ignored by otherwise very smart and fundamentally decent people in the name of innovation and naive utopianism. It's kinda bananas.
Thank you for saying this! You are 100% right. It seems like smart people only use logic sometimes. Even if the topics are related to their field of study. For example, Dr. Michio Kaku has a PhD. in physics, but yet eagerly talks about Elon Musk's Mars colonies and hyperloops. Now, are they charlatans or truly blind to facts? We are all influenced by our own "bubble" forces, but logic should be able to prevail through curiosity.
Disagree that the public shouldn't be responsible for judging whether or not a thing is useful or trustworthy. It is very much our responsibility as educated adults to work hard as media consumers to lead with skepticism. Teachers are now responsible for giving students this type of lesson, because their world is filled with liars, marketing, half-truths, and full-on conspiracies.
This was a great discussion.
But teachers are people who have never really left school and spend their lives with kids. The few teachers I know are kinda naive and... well, I feel like I'd make a better teacher having spent most of my life in the adult world. Tongue firmly in cheek ;)
Do you know anything about teaching media scrutiny and critical thinking, or are you just an armchair quarterback who has never worked a day as a teacher?
Adam, if you understand why a TV writer shouldn't be replaced by ChatGPT then you should understand why a visual artist should not be replaced by a Midjourney clone inside of Photoshop.
It's strange to me that it kind of seems like you don't.
Because he's wrong regarding the TV writers. They can be replaced by chatGPT.
@@sebastianschweigert7117 Not really. Coming up with good ideas and then figuring out how to cohesively put those ideas into text in a way that is good and entertaining requires one to 1) be capable of actual, complex thought and 2) understand human and cultural contexts. Since AI is not capable of the former, it is also not capable of the latter. Generative AI is a glorified version of the word-prediction systems we have on our phones. It can give you something that approximately looks like, say, a script. But it can't tell the difference between a good and a bad script, because again, it isn't actually capable of thinking. It just makes a statistical prediction of what a script, based on its training data, "should look like" and then spits out a version of that. And the result often feels like an alien producing something it THINKS a human would make, without actually understanding humans, or why we make things in the first place.
AI is such a misleading name because it's not actually intelligent.
Just discovered this channel. Great discussion and really smart and valuable questions. Million thanks, Adam and guests
Wonderful interview. Thank you all for a great dialog. 💯
Snake oil is a perfect analogy here because the snake oil does actually have benefits in certain circumstances, but usually not the benefits that were advertised. And far fewer benefits than were advertised.
Snake oil is just another way to say turpentine. It's a wonderful solvent for painting, but quacks advertised it as miracle medical cure-all.
It really is a great metaphor for the unregulated ai bubble.
The term AI is intentionally misleading. As a software engineer, I can assure you it is not real intelligence. We literally have not even begun to comprehend the depth of the concept itself. What businessmen and techbros are calling "AI" is actually just incredibly complex algorithms and machine learning. You can't say these programs are "AI" because they are not sentient - they aren't capable of what we loosely try to define as intelligence.
Most humans don't qualify for "real" intelligence lol
AI already sucks, and it's using copyrighted materials, which is currently the subject of a court case, I believe. Should it all turn out well, then AI will be rendered unethical.
Hope nobody tells you about the sources of literally every aspect of modern civilization, and all civilizations before that. If you can believe it, your smartphone is made with things even more immoral than scanned pictures.
2 questions I have about AI that I never see asked on these things:
1. What happens when we have "AGI" and we ask it questions and the answers are all things we already know? How do you do fusion? AGI: give me unlimited resources and 500 years and I'll figure it out.
2. What happens to AI and people when AI meets people who refuse to believe facts, and the people demand answers that fit what they believe and not what's true?
#1 sounds like deep thought from HHGTTG.
Thank you for bringing basic sanity to this conversation, all three of you.
For those who don't know what these AIs are: they convert complex info into simple tokens, then run those tokens through pattern recognition to predict how the tokens should be ordered. It's not much different than auto-complete.
It's just predicting what stream of numbers it should spew out after being given another long string of numbers. Great for analyzing very long and complex patterns, but it's just a very powerful mad lib.
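The two steps described above, tokenize then predict the next token, can be sketched as a toy (everything here is illustrative; production models use learned neural weights over billions of tokens, not raw frequency counts, but the numbers-in/numbers-out shape is the same):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish"

# Step 1: convert text into integer tokens (here, one token per unique word).
vocab = {w: i for i, w in enumerate(dict.fromkeys(corpus.split()))}
ids = [vocab[w] for w in corpus.split()]

# Step 2: count which token tends to follow which.
counts = defaultdict(Counter)
for cur, nxt in zip(ids, ids[1:]):
    counts[cur][nxt] += 1

def predict_next(token_id):
    """Return the statistically most frequent next token: glorified auto-complete."""
    return counts[token_id].most_common(1)[0][0]

inv = {i: w for w, i in vocab.items()}
print(inv[predict_next(vocab["the"])])  # prints "cat": it follows "the" most often
```

The model never sees words, only streams of integers, which is exactly why "very powerful mad lib" is a fair description of the mechanism.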
Case in point, as a programmer, my favorite application of all this AI is simply...better auto-complete. In IntelliJ working with java code, there is lots of helpful auto-complete when doing tedious repetitive things that follow a common pattern. Used to be it could only complete a function or property name, now it can autocomplete entire expressions/statements that are following patterns found elsewhere.
Did you know your brain only ever gets patterns of nerve impulses? You/consciousness is a subroutine in a pattern-recognition system. It's outrageous how people think they are magic of some kind and dismiss technology that's functionally superior in most measurable ways.
@@someonenotnoone So true. Intellisense is great and useful for finding functions, but Github Copilot writing a bunch of conditional statements with repeating code for me is amazing. I can focus more on how to write the complicated code. Also I found it nice for building the boilerplate setup in any APIs I create. It's hard to go back when I realize how much I like being able to press tab to skip the tedium.
That is basically what humans do. Most people don't have unique thoughts. They just do next-word prediction.
@@pin65371 Word choice isn't even a conscious process unless you choose to actively deliberate over diction, and even then it is more in an editorial role... kinda like CoT. So many people are radically misunderstanding the import of these systems because they have no clue how their own cognition works.
There's really no evidence that self-driving cars are ever going to be good enough to just let loose on the roads. There are major problems that have shown no signs of being solved. They are currently much less safe than human drivers on average.
Do they perform better as a safety/assistance system for human drivers, like active cruise control and emergency braking?
@@imacds Those already exist and yeah they work well, in my experience. My wife's Toyota has both of those and they've come in handy. I'm still not taking my hands off the wheel or my eyes off the road, though. They're an order of magnitude less complicated than a car that fully drives itself.
@@imacds Yes. And in this respect they are augmenting the human. Self-driving is aiming to replace the human.
Sometimes replacement is better than augmentation, but I'm sure most human people would rather be augmented than replaced.
Reading through the comments, it's pretty clear that people haven't actually listened to the video. These guests aren't saying AI is complete BS or a scam or pointless or a bubble, etc. They're pointing out instances where it's being overhyped or misrepresented. Like with any new technology, AI is being misused, overhyped, and oversold.
Back when I did software development in corporate IT, I had to endure the proliferation of code generators that allowed inexperienced coders to put something together that works with just point-and-click premade modules. The end result barely worked and was very inefficient. In the end, the corporation preferred script kiddies who worked cheap, preferably overseas. I thought I would be programming till retirement, but now I get to watch the industry destroy itself from a distance. Happier now.
What do you do now?
I'd just like to point out that you don't need AI powered automated cars to cut down fatalities. You just need a correct way to plan the infrastructure and lower the speed.
Cheating? I forget; pagers were new when I was in high school. The internet and cell phones may be a danger to youth education; AI is just another issue added on top for them.
We don't give kids tobacco and car keys, why do they have access to other adult tools?
@MadDragon75 Weird comparison. Tobacco isn't even good for adults, let alone children. Also, are you expecting kids to just not use technology at all? They play video games, why not computers? Stop gatekeeping.
@@Pikachu2Ash The Internet isn't good for adults, let alone children. I stand firmly by my observation. The logic you use is very pliable, and yes. I expect children to stay off of the Internet when it comes to answering homework with artificial intelligence rather than doing the research. We gave parents the responsibility to monitor their use and it failed, like letting kids drive or smoke. The Internet is the car in this sense, AI is the tobacco.
Before anyone else decides to hate bomb my comment, ask yourself this one question:
Would you want to drive across a bridge designed by an A+ student who graduated with honors and used AI for the answers to become an engineer?
There's already a good reason why many intelligent engineers are denied a diploma the first, second or third time.
@@Pikachu2Ash Video games are way better than AI slop. We will keep gatekeeping against lazy, useless trash humans.
Should we not be requiring these companies to add source footnotes when returning images or text? From the copyright perspective, should it not be held to some standard? The "where is the source material from, and by whom" validations are missing here. AI is definitely A but nowhere near I. Great conversation.
I've noticed that the less people know about computer science and software engineering, the more they believe in the nonexistent "magic" of this hype-cycle "AI" snake oil.
I think not. It's older folks who believe in this technology the least, the ones who have never heard of many, if any, of the conventional talking points of AI. Be it naivety or ignorance, it's those who haven't kept an eye on this who believe it's a mere fad or trend. Perhaps like electricity or the automobile.
@@danielmaster911ify is a great example of OP's point. If he knew anything about CS, he would have mentioned it. I've been programming software for 35 years and I think it's a glorified compression algorithm. Happy to debate anyone with technical knowledge.
@@jeffreygordon7194 Well, promise you'll get back to me in a year.
@@danielmaster911ify Are you going to learn to code in a year, or is ChatGPT? None of the experienced devs, and I include myself, use any of these things beyond formatting data into tables or something. My last attempt with the reasoning model had it hallucinating methods on a large public open-source API. For someone who knows what they're doing, it's a literal waste of time. But I'll definitely remember to follow up with you in a year.
This was very useful. Nice tips to distinguish the snake oil from true claims.
Thank you for this episode it was very engaging and informative.
As someone working in the data field, this is hard to listen to. LLMs greatly help our work because of the large knowledge base it is trained on. You just need to be wary whenever the LLM gives you conceptually wrong answers.
This whole bubble is going to burst when investors run out of patience and money. OpenAI is burning $5B a year right now, and selling its product at something like a 99% discount to the ocean of startups that have AI names. The instant OpenAI buckles, and it will (it'll have to be valued at MORE than $200B in the next funding round 😂), these startups are toast. The bottom line is that today it costs WAY TOO MUCH to run and train LLMs, and their actual value is negligible in practice. This is probably the worst bubble I've seen in 15 years of working in startups.
AI doesn't need to be able to do your job better. It only needs to be able to convince your boss that it can do your job.
It doesn’t need to do it better. Just cheaper.
I can absolutely see Pixels being used to gaslight abuse victims "see, you actually like this, look how happy you are"
Okay, I guess I was just unaware that this was the general opinion of "AI" (or, as it should be called, "Machine Learning").
I am one of those people who advocates for the usefulness of Machine Learning and how it can make our lives better. But I see it as a way to automate some really labor-intensive tasks like data entry (and even some data analysis), or translation. Imagine if you could do your podcast, Adam, then have it intelligently translated into 100 other languages, with metaphors and social context intact, within seconds. THAT is the kind of thing Machine Learning, I think, can and will do.
I think "AI" (as it is being called) WILL be huge and earth shattering, but in a much more boring way than it appears a lot of people are seeing it.
The A.I. hype is simply this decade's Google Glass.
A failed project that made the user look like a dork, while giving everyone else the impression they were being recorded whenever it was around... is not quite comparable to machines that can compete in mathematical olympiads.
Not even close 😂
@@ZaximusRex You clearly didn't watch the whole video.
@@danielmaster911ify You missed the operative word........"hype."
This decade's Google Glass is Apple's Vision...
Current AI is a stupid man's idea of intelligence.
Reading this comment section is like attending a meeting of flat earthers.
wow you're really wise! can you show me how to make money with AI?
Great commentary and questions Adam
31:00 ah, no. Other than some very trivial structuring and checking, the evidence is mounting that LLM-based Gen-AI is of little reliable use in coding. Try looking more broadly and deeply into a) the underlying classes of problems inherent in Gen-AI used for coding, and b) the underlying core problems in software development. That's not easy given the signal-to-noise ratio being created by Gen-AI hype, but you'll find the decent critiques fundamentally undermine any claimed value.
32:00 Again no, and this is key to the common aspects of many Gen-AI failure modes: the level to which Gen-AI will impress you is typically inversely proportional to your subject-matter expertise on that topic. So the extent to which Gen-AI "coding" or "bird identification" is impressive tends to be related to your lack of knowledge, skill, or capability.
Update: ... and this is exactly the point made @ 52:15