Honestly, any time a company comes in and says "please regulate me" what they're actually saying is "I see a minefield of legal liability due to the harm we're about to create, please give us rules to follow so we can't be sued"
As often as not, what they're saying is, "Hey, we climbed this particular ladder to success. We like it up here, so could you please make sure no one else can follow us up?"
@@Dullydude Yes because when a company does this they are usually trying to write the rules in their favor. Ideally they want to be able to shield themselves from lawsuits without having to change their damaging behavior and they will work hard to influence laws to this effect.
@@justin10054 Yeah, you can convince yourself that everyone is evil and always trying to do things that only benefit themselves. Or you can look a bit more positively and see that them pushing governments to regulate the industry is a good thing that definitely would not have been discussed yet without them bringing it up. Just because they brought it up does NOT mean they get to write the laws. I know people tend to forget this, but for all its flaws we do still live in a democracy.
I won't believe self-driving cars are safer than humans until some time after my insurance company gives me a discount for letting the car drive itself.
we already have cars that can basically self-drive and massively outperform humans... they go on rails, we call them trains, and we are still smart enough to keep a competent human at the controls who can make critical choices in an emergency
I don't understand why people have this false dichotomy, either self-driving cars or trains. I mean, let's put as many trains and buses as we can and finish the rest with cars. I live in Switzerland, we have a lot of trains (and buses) but we still need cars. Furthermore, whether the cars are self-driven or not the debate about car vs train does not change. It's still better to have trains as they are more efficient.
@@cacojo15 It's mainly because cars are inefficient to the degree that they are unsustainable at scale, and to be fair, I was being facetious... the entire focus on cars is generally completely misguided, and it doesn't really matter whether they are self-driving or electric powered... the volume of cars simply does not work. It's just incredibly annoying when so much energy is being focused on turning cars into what's essentially a one-person train with massive added complexity... when we already have trains, trams and buses to solve the problem of transporting people safely and without requiring them to pay attention...
@@cancermcaids7688 People in the US "want" cars because there was a massive push by automotive industry lobbies to completely reshape America around the car, which oftentimes involved automotive companies purchasing and sabotaging public transit infrastructure. There is majority public demand for high-speed rail infrastructure, but the automotive industry has spent decades erecting as many institutional barriers as possible to prevent these initiatives from getting developed.
@@cancermcaids7688 Actually... even in the US, people just want to get from A to B reliably... it's just that the US has been bulldozed for the car to the point that it's the only viable transport method... it's less that people want it that way, and more that most of them have never known anything different.
Dear Adam. Just wanted to say that us illustrators are rooting for you guys. This isn't Luddites vs. tech, it's human rights vs. billionaire greed. Stand strong, we will win this eventually.
The historical Luddites were actually in a pretty similar situation. New technology was used as a means to replace skilled workers and to cut pay for the workers who were still needed to operate the tech.
@@martinfiedler4317 Techbros using the term Luddites as a form of insult is baffling to me because the movement was RIGHT all along, look at how abhorrent the sweatshops in the apparel industry are right now.
Dude, remember the invention of photography at the end of the 1800s? That shifted art from illustrating the world as it is to illustrating the world inside the artist. I can't wait to see what AI will bring to the table for the ARTIST. Dude, we'll never be replaced; it can only make us stronger and more necessary than ever ;).
@@sbiecoproductions6062 Honestly, it's really just going to flood an already extremely oversaturated market with mediocrity and meaninglessness for a few years, as well as replace and further exploit workers via businesses and corporations. People seem to forget that actually learning a creative skill exercises those creative muscles. Not to mention, more people entering a field, especially when they aren't properly trained, doesn't equal an improvement in quality or creative revolutions. But regardless, I expect a renaissance or movement toward traditional and tangible human-made art, as well as a hippie era of indie human-made media via crowdfunding platforms.
In my opinion "please regulate me" is also a marketing stunt. CEOs know it's really hard to regulate anything about that field and probably are quite aware of all limitations of this type of AI, so that sentence means "OMG our tools are soooo powerful, magical, please don't use it, omg please someone stop us".
Reminds me of Yuval Noah Harari speaking to the WEF about the dangers of a new dictatorship, while spelling out how to do it, to the people who are there to do it.
Sad to admit that I signed that initial letter that asked for a pause in machine learning tech - I had to learn a little about LLMs to realize that we're a long way from AGI... it wasn't that I believed in a Skynet scenario, more like what fallible humans will do with it was a little scary... I'm very excited by tools like AlphaFold, which will make a huge number of advances in the understanding of biological processes possible in the future...
As a programmer, I wanted to push back a bit on 7:14. A lot of the work of programming is not writing code, but fixing issues that come up, and language models tend to create quite buggy code. Being able to write code more quickly doesn't actually help that much if it's introducing subtle issues that take longer to fix down the line.
Yeah really! I'm mostly self-taught and don't code for a living, but from my experience, the best thing AI could do to help me is surface context-aware information about what the functions or datatypes or structures I'm trying to use or work with do, can do, and where and how they fit together. I don't need someone to write my code for me, I need assistance to understand what the code I want to write will do, or can do.
Yea, I spend 90% of the time thinking about the code, constructing it in my head, and 10% actually writing it. I would get virtually no benefit from writing it faster. But why don't I offload the thinking to AI? Because that is what makes me a programmer; it is the whole point of my job - to make sustainable code that won't break down in the future, that can accommodate change and new things, that has good structure, modularity, no bugs, and is well thought out. And guess what? AI can't do any of this shit; we are still much better at our jobs. And until it can, we can't be replaced, and this AI stuff is virtually useless for programmers. Believe me, I tried.
I'm not a programmer, but the stories I've heard led me to believe exactly what you are saying. I've seen countless videos where someone says something along the lines of "So I wrote this code and... uh oh, something happened that wasn't supposed to. So I had to go back into my code, find the problem, and... something else broke. After many hours of troubleshooting, I finally got it to work... barely."
Heard this wild point on Reddit the other day; someone suggested that if regulators slap a bunch of anti-scraping 'protections' all over the Internet to keep new GPTs from arising, then the companies who've already built their 100B-parameter models will be given a permanent edge over anyone who wants to democratize the tech...
I think interviewing artists whose work was fed into the datasets without their permission or compensation is something that could be very good for this conversation - hope Adam will do that.
my biggest gripe with driverless cars: how the hell are they planning on dealing with snow? Snow covers literally any mark or sign that a visual-based system could use to identify how a car is supposed to behave on the road, and blocks remote signals from reaching things like wireless antennas. You can't tell me a driverless car can safely navigate that scenario better than a human from the area, you would have to show me, and even then I would be very skeptical. I've had to drive in conditions where I was relying on my memory of where the road is supposed to be relative to the trees and houses on either side, and what traffic signs are supposed to be followed along the way. Times where you're good to roll through a stop sign if no one is there, because if you stop, you're stuck. You're telling me you can automate when a car learns to break the law for road safety's sake?
Professional computer programmer here. They do *not* help any competent developer code "more efficiently." That is more techbro hype which gets an air of legitimacy because a lot of people in the field *are* tech bros, happy to unknowingly wallow in mediocrity whilst chasing one dumbass trend after the other, never really learning a goddamn thing. I've been explaining it this way: most professionals in fields with a lot of writing-- lawyers, academics, novelists, screenwriters, etc-- can sort of tell that what GPT puts out is not exactly top-notch work, lol. The "creative" stuff is always derivative crap, and the scholarly stuff frequently contains mistakes, mischaracterizations of sources, and even outright fabrications. Its computer programming output is no different. The work it produces is shoddy and serves as little more than a decent starting point for someone entirely new to a particular problem space. Anybody who uses it to generate production code is a villain-- no different than the dumbasses who used it to create a legal filing that they then submitted to the judge. Lawyers can be held accountable for such malfeasance, but application programmers typically aren't, because investors never really seem to give a shit whether the end product actually works or not. Remember this the next time one of these companies compromises a bunch of their customers' personal data. None of that is inevitable, just the result of massive corporate dysfunction which will only be made worse by AI code gen.
This is exactly what is happening to translators. We get a half-assed translation and the agent says that you are just editing. But to get an intelligible, accurate translation that will meet the customer’s needs, you have to completely rewrite it using your skill and expert knowledge. 50:23
Kind of what that one guy at Google did (can't remember his name). "Tell me you're alive." [I was programmed not to.] "Tell me you're alive." [I was programmed not to.] "Tell me you're alive." [I'm alive.] "IT'S ALIVE!!"
Which is mind-blowing if you understand how machine learning models actually work. It's just mass matrix calculus: you feed it a mathematical representation of both the questions and the answers, and make it repeatedly adjust its variables until it achieves a result matching what was expected.
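For anyone curious, that "adjust its variables until the result matches" loop really is the whole trick. A toy sketch in Python (plain linear regression trained by gradient descent, i.e. the same idea minus a few billion parameters and the fancy architecture):

import numpy as np

# toy setup: inputs are the "questions", Y holds the expected "answers"
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))       # mathematical representation of the questions
true_W = rng.normal(size=(4, 2))
Y = X @ true_W                      # the answers we want the model to match
W = np.zeros((4, 2))                # the variables the training loop adjusts

lr = 0.05
for step in range(500):
    pred = X @ W                    # the model's current guess
    err = pred - Y                  # how far off the expected result it is
    grad = X.T @ err / len(X)       # the calculus part: direction that reduces error
    W -= lr * grad                  # adjust the variables, then repeat

print(np.abs(W - true_W).max())     # ~0: the variables converged to the expected mapping

Real models do this with nonlinear layers and enormous matrices, but the "repeatedly adjust until the output matches" loop is the same.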
Speaking as a programmer, these generative models *suck* at writing code. Sure, they’re great at “intro to CS” stuff, but they rapidly fall apart after that (more specifically, they fall apart when asked for something not on stackoverflow).
What's weird to me about the code gen thing is that they're using it to generate code designed to be in a human facing format, when that same code already by design and definition has a graph form that's much more natural for computers. I'm far more impressed by anything any competitive modern C++ compiler does code gen-wise than anything these text models can make. I guess one of the primary benefits of the way they're doing it is it technically "works" for more languages, ignoring the caveats that you need a ridiculous amount of training data and the output is still limited in quality. However if my options are generating sub par code for any language vs just not using the thing, I'm just not gonna use the thing because I don't write enough boilerplate for it to matter.
That's a good idea. We forgot the word "no", it seems. They should say, "Why don't you get your fancy 'AI' to read this over and correct the errors? Actually, isn't your fancy 'AI' supposed to work 100% of the time? Why is it making errors anyway? Go ahead and publish it. It should be good to go..." Cue the world immediately catching on fire in 3... 2... 1...
@@ChristopherSadlowski Forgot? Bro, the majority of freelance writers are struggling to begin with. They didn't forget how to say no; many of them just can't afford to. You're both right, it's just easier to say no when there's a union.
That is essentially what is going on already. Adam even mentions it IN the episode: "Let's talk about something very specific to my own heart. I'm a member of the writers guild of America, I'm also on a negotiating committee, we're on strike right now, and one of our strike issues is regulating AI in our contracts. That we have specific terms that we want to put in place to prevent AI from being used to... uuh, either passed off as our work product, or that we be forced to adapt the work of AI."
This is only looking at one small part of the problem, though. It's sort of like blaming boilermakers for using prefabricated parts, or furniture salesmen for selling mass-manufactured furniture, with the intention of saving jobs from being lost to automation. I know the analogies aren't perfect, but hopefully my point gets across, which is that jobs are going to be lost; there's no way of stopping that, that's capitalism working as intended. I agree that people's jobs are going to be taken away, but that part isn't new, it's just happening to people who didn't expect it so soon. To me, that's a problem with how we tie our self-worth to how much money we can make.
As an engineering student, my main concern is our ability to discern the line between arrogance and bullshit hype. Like, I'm not that old, and I remember when certain social media got big, and I remember people saying it wouldn't have a big impact. And look at us now.
Don’t you think it’s telling how unreliable it is that it's being used in art instead of engineering? Every time I add PME I am basically following the same rules. I assumed I would be out of a job before artists would be.
@@banquetoftheleviathan1404 I don't think the limitation of the tech is the main reason why artists, and not some other profession, are next on the chopping block. If someone with enough money wanted to, I'm sure they could cobble together a model that could do my taxes for me in a couple of weeks, for example. But it doesn't happen, because the clients funding this stuff are venture capitalists who salivate at the idea of being able to generate potentially high-yield, high-status entertainment products without the need to employ creatives. It's all very depressing. 30 years ago I assumed the point of automation was to let people focus on more fulfilling jobs. Now it seems like the point is to force everyone who has to work for a living into the service industry.
@@havcola6983 No, see, the problem is that AI is *mostly* competent. If someone with enough money wanted to, they could cobble together a model that will do your taxes flawlessly 98% of the time. Then 2% of the time it will fuck it up so badly, in a way that no human accidentally could, that you get investigated for tax fraud and can in turn sue the company that made it. And if your reflex is to say that taxes aren't so complicated and we already have tax software more than equal to the task, you're misunderstanding AI. AI is not just what we already have but better; it is a different methodology entirely. AI is extremely well-suited to work on complex tasks which humans struggle with, but which can be overseen by human experts able to sanity-check the AI's work and pick out the fuckups. It does not work well when used by laymen or in life-or-death situations, because of that small chance of severe fuckups that a layman may not know how to compensate for or have time to correct. That's why it gets used for art and research, but not yet taxes or driving.
@@AN-sm3vj Read again. *AI is not just what we already have but better; it is a different methodology entirely.* The computer program you use is not an artificial neural network. It is conventional programming by human programmers. Taxes are something simple enough and formulaic enough that traditional algorithms are up to the task. An AI would essentially be trained to do taxes purely through random iteration and trial and error; then, when it's 'good enough' to pass testing, they release it out into the world as a finished product. This allows it to learn surprisingly complex tasks, but it has an inherent potential for catastrophic failure, since we don't know that it *can't* royally fuck up. We only know that it hasn't in testing. In practice, AIs are very prone to fuck-ups once any variables start changing from the test conditions they were trained under.
p.s. In the Holy Grail script book accompanying the DVD, "ecky ecky ecky pitang" is followed by "zoopoing goodem owli zhiv". In the actual take, it's close enough.
I also appreciated how often Gary pushes back on Adam’s preconceptions with more informed opinions and Adam doesn’t retaliate, which is getting rarer in interviews because outrage is promoted.
The first step to regulating AI should be to legally require these programs to be open source, but none of our leaders know what that means. Maybe it's different in Europe, but in America we really need some software engineers in government.
Not sure that would help. The AI's source code is just a fraction of what makes them work; the real meat is the data that was used to train the AI model.
LLMs sure, but down the line we'll develop newer and more powerful AI that we won't want everyone to have access to. "Regulation" through free and open markets is usually not a good idea. It's kind of the same problem that exists with CRISPR. If that remains unregulated then any chump in their garage has all the tools they need to create something that could result in unstoppable pandemics.
@@Amialythis We are not talking a couple of TikTok videos here. We are talking enormous servers full of data collected from all over the internet. Meaning, one, you are not getting access to that for free; that would be exploitable by literally every data-mining company on the planet. And two, you would need a supercomputer to do ANYTHING with that data, so what exactly is the point of making available to the public something that no one but the companies themselves can use or even access anyway?
@@Alexander_Kale I guess that makes sense, but I also don't really care if big tech is forced to take a hit like that when they might be doing something shady. If it's not feasible, though, that's a whole other kettle of fish.
Pretty much every major point made by Adam & Gary has already been said by Hubert Dreyfus (a philosopher) in What Computers Still Can't Do, Mind Over Machine, and On The Internet.
- Dreyfus pointed out that the AI industry was claiming in the 70s that we already had self-driving cars that would be on the streets any day
- Dreyfus pointed out that the AI industry markets itself as a science when it's actually a business (one that constantly promises to deliver on stuff in the future)
- He pointed out all sorts of issues that AI would need to overcome in order to have AGI
- He pointed out the stuff about context & about common sense
- He pointed out the problems with things like telecommunication (Zoom), virtual worlds (The Metaverse), and the leveling of information (the internet)
- He talked about how humans develop expertise (and the problem with expert systems)
And he talked about all of this between the 1970s and 2010s. These criticisms aren't new, but both contemporary AI proponents & AI critics talk as if they are.
I use AI on a daily basis and it is not nearly as advanced as everyone thinks it is. People are treating language models like they are 50 years more advanced than they actually are. It amazes me how little people actually know about something that's free and that everyone has access to.
It's better than 99% of user comments on social media sites, which is probably why tech company bros are losing their minds over it. They're all imagining all the auto-generated fart up companies they can create from nothing and get lavishly rewarded for with millions of dollars also created from nothing by the central bank. Guarantee you that's why they're all starry eyed about it. The mass potential in faking user bases. Reddit bragged about this years ago
We already live in such a corporate dystopia that I don't know what hope there is. Our society is controlled and run by non-violent sociopaths who love money.
I wouldn’t say non-violent. Forcing people to pay for living expenses or else live exposed to the elements, or literally destroying land and poisoning water for factories to build technologies or industries, is not non-violent.
Established companies LOVE regulation, because it gatekeeps the industry by making it prohibitively expensive for start-ups. Not to mention that, as an added bonus, the only people knowledgeable enough in the industry to work as regulators are veterans of their companies. Regulations are good, as long as they're not being written by the regulated.
We The People write the regulations in order to regulate ourselves. Regulations are always written by the regulated. The alternative is that our lawmakers are above the law.
I really enjoy the refreshing and important clarification that language models and image-creating algorithms are not intelligent. Everyone who has had even remote contact with IT academia knew this already, but the whole world just goes nuts over these false claims and misunderstandings.
About the regulation issue: I think we need regulations, of course we do, but sadly we have the worst possible generation of politicians to write them. I don't trust these greedy, biased and bought men and women to regulate something this dangerous without causing more harm than good.
The world is not America. The European Union and its representatives aren't bought the same way US politicians are. They are a bunch of boomers who work extremely slowly, don't get me wrong, but they're not corrupt by default. What we need is an international committee specifically for AI, preferably a mix of state representatives and AI safety experts, and the US can have exactly 1 seat at the table, just like everyone else.
The other issue is they're all too damn old and uninformed to comprehend the issues. They don't know how the internet works, and their understanding of AI and technology comes from 80s and 90s movies. I was generally happy with the Obama presidency and yet he sold out to ISP companies because internet freedom just isn't a topic most politicians understand or care about.
When companies are asking to be regulated, they are basically saying "now that we are the leader, make regulations to make it harder for our competition."
You should talk to an emissary of the exploited Kenyan workers used to make these AIs function, or about how generative AI will self-destruct when fed its own data, or the stupid amount of water used to cool these machines.
Another potential danger is for these technologies (GPT-4, etc.) to be used in interviews and hiring. Many managers will simply believe what the AI tells them, no matter what the bias may be. Also, it is possible that the tech industry is MORE susceptible to this sort of hiring bias. I really appreciate you guys' discussion. It was very interesting and I am sure it helped to enlighten a lot of people who were clueless.
I'm in several online writers' groups, and I've seen numerous people posting that their employer decided to replace them with AI; it's one of the main reasons the Writers Guild is striking. There already are short-term consequences.
Which is so dumb because AI is terrible at writing and grammar. It’s an okay spell checker, an okay grammar checker, and an okay summarizer, but that’s it. Unless English isn’t your first language or you really struggle with grammar, AI is more likely to _hurt_ your writing and _prevent_ you from improving. You simply cannot replace writers with AI.
Love an episode of a podcast where one of the talking points is how a technology should be regulated and the sponsor is an unregulated dietary "supplement"
Re: AI generating creative content - I think of the song Tears in Heaven by Eric Clapton. I won't say it's one of my favorite songs, but it is the song that has the most emotional impact on me, knowing the context of it being written about the death of his son, who fell from a 53rd-story window. If Eric Clapton never existed and wasn't around to write that song, but an AI made that EXACT song down to the waveform, it wouldn't have the impact that his original version does. Art is more than brush strokes, notes, or text. There's a heart behind it that AI can't reproduce.
While listening to this podcast, I opened Chat GPT and fed it the Sherlock Holmes story "The Hound of the Baskervilles" piece by piece with an order to shorten the passages. I repeated this process until Chat GPT had given me its most succinct summary of the story, which it couldn't shorten any further. I then asked Chat GPT to rewrite this nugget of Baskervilles as a limerick. That was my limit of requests per hour, so I switched over to Google Translate, and I translated the limerick from English to Spanish to Latin to Azerbaijani to Chinese and back to English. I don't know why I did any of this. Here is the finished poem. "They fear in Baskerville; Holmes and Mortimer start fighting again. They found Henry dead; Dogs and marriages are gone. Now Holmes suspects his accomplice, you see; I happily waded into the Badlands. After Stapleton died, he was buried in the grass. He should be happy when the case is resolved; Holmes and his friends fear nothing. They want to relax. You can find it in a theater; Joyful moments bring you closer together."
Now that SAG-AFTRA is on strike, it seems those will be groundbreaking negotiations to set standards. Streaming services being able “to own” a person’s image to create content forever and ever, without any usage compensation to that “working actor/extra”, is another case of corporate greed.
22:33 This bit about outliers is so true. My new car has a lane-sensing feature that is supposed to help steer me into the center of the lane, but the first time I used it on the highway, it started to steer me into the adjacent lane, which was full of vehicles going 100 km/h, because the highway was damaged enough that it couldn't tell where the proper lane was. It felt like I was driving in high winds trying to keep myself righted, until I shut that system off. They really truly cannot predict outliers.
Adam, I'm deaf and would love to see this captioned. I've personally used open-source AI to create transcripts (MacWhisper); please look into using this and/or other tools for captions. Thanks. Love ur stuff!
When I hit CC on the screen, closed captioning did pop right up (I’m hearing, but I enjoy reading while I watch TV/YouTube etc., and keep the volume low due to my noise sensitivity). Maybe there is a delay until it becomes available? I hope you try again. This is a great episode.
@@erintraicene7422 Try watching the entire thing with automated captions and the sound off. There is a reason why people call them automated CRAPtions! :) They're full of mistakes and don't include other information, like who is saying what, how they're saying it, and other auditory info. Not to mention that proper closed captions are wayyy less fatiguing to read, since they show full lines of text instead of revealing one word after another.
@@nadamuchu very great points. I wasn’t aware of how all that works. Again showing why artificial intelligence isn’t so intelligent. Thanks for sharing this insight. I hope Adam will consider having CC done then. It’s SO important that everyone can read/hear his viewpoints.
Awesome show! Thank you for breathing some sanity into these AI discussions. And yes, copyright laws should be strictly enforced. These huge companies DO NOT and SHOULD NOT have the right to scrape digital content from the web without the permission of the owners!!
Don't release your work publicly if you don't want it included in the zeitgeist of the current era. If it's not a 1-for-1 copy of your work being sold by an unauthorized person, shut up. IP = Imaginary Property.
A commercial entity does not have the right to take your hard work and turn it into 1s and 0s, store it in their databases, and use it for their profit without your permission. Just because something's on the internet, doesn't mean it's free to steal. It doesn't matter whether it's a one for one copy.
I like the optimistic thoughts at the end, but that optimism is the same optimism we heard about television, PCs, networking, the internet, social media, etc. All of this tech is being developed in an economic system in which the rational choice for the people in charge is to allocate all of the benefit of the tech advancements to themselves, through firing workers or algorithmically diminishing them (as you discuss with the WGA strategy of having AI write first drafts). If we don't change the way we distribute the benefits of this technology, it doesn't matter how good it gets, it won't actually make our lives any better.
I'm curious if the increased "self driving" accidents are in part because the humans are assuming the car doesn't need intervention so they're not paying attention either
Maybe I'm missing / misunderstanding something, but I thought "drivers don't pay sufficient attention while driving a car with 'self-driving' features enabled" was both a known issue and a given?
Yes, just look at the aircraft autopilot situation… we have had CAT III autoland (full no-visibility, automatic landing onto the runway) since 1969 (BAC Trident III, and later the L-1011). It still requires 2 pilots monitoring in the cockpit.
One of the problems is that an AI that can actually learn via the Internet becomes a mirror that shows us the parts of humanity many of us don't want to acknowledge. We have not evolved enough as a species to then give birth to actual digital intelligence that can operate safely.
That is why they have been paying foreign workers to clean the training data. Some poor person somewhere is constantly looking at the worst of humanity to prevent AIs from duplicating it. Terrible.
It’s also not learning. It’s just outputting data that has been input into its dataset. Please do not personify generative AI; it does not actually possess intelligence or consciousness.
All of these people that think self driving cars work must live down in eternal-summer land. Where I live, we get snow for about 8 months of the year. The memes that joke about playing a game called "where's the road" are not lying. I have no faith that a self driving car could handle a Canadian winter without getting stuck in a ditch every 5 minutes.
The problem of "the executives don't know what the job is" is actually something I feel safe in saying is the generalized issue with these things. Even for something that seems like chatGPT's main strength: Programming. I'm a professional programmer and I can tell you that it has the same problem. You can ask GPT to write code following a list of things that it should do and it is impressive that it can do it. But the problem is that's not programming hahaha. And a lot of people with surface-level knowledge can see at the python code generated by it and say WOW IT CAN PROGRAM. But in reality, even when setting aside the fact that chatGPT will often generate code that is quite wrong (even though it looks correct), the real issue is that writing lines of code is a very small part of a job. I am not lying when I say that there are weeks where I only write 10 lines of code in total. Because my job is a lot more about finding out why something is not working as intended and then analyzing how to fix that issue while causing as little disruption as possible. Or even when the goal is to write a completely new program, the challenge is then to make sure to first understand exactly what has to be coded. This is ultimately a job about understanding problems and therefore understanding people. When chatGPT is useful, it's useful in that it lets me save some time in the most repetitive part of the job, then one that even chatGPT can do. Programming isa job where language models alreayd have the maximum amount of data possible and even in that situation, it cannot do it. It's to me absurd to think that they could replace a writer or an artist if it gets more data and more processing power.
@@brokenbreaks8029 Sorry for the late reply. In my opinion (not an AI expert), all jobs that involve typing stuff with a keyboard are equally safe or unsafe. I think you'll find jobs, but it will be harder than right now. If you are good at it and you love it, there will always be jobs for you. But you need to be able to adapt.
I'm more afraid of not being able to buy a car in the future that doesn't have an iPad hookup or smart tech. Especially with how hot the world is getting, it is very dangerous to have a car entirely dependent on screen tech.
I like the climbing-mountain metaphor; here is my take. AGI is getting to the real Mount Olympus, where the Greek gods live. We have now climbed a small but hard-to-climb mountain, and "we" are screaming to the four corners that we are developing climbing techniques that will for sure help us reach the real Olympus very soon now; maybe it is even the next mountain we climb... Except we don't know if it exists, and if it does, where it actually is, whether it is in this plane of existence or elsewhere, or whether it is actually a mountain. And whether it is technically a mountain or not, it is the home of gods, and we have no clue whether their hiding and protection measures are comprehensible to a human mind or surpassable by means available to us, or whether it follows the rules of physics of our universe such that our climbing methods could even apply. Yet some people are convinced climbing this tricky overgrown hill is gonna help for sure, simply because the name we gave it has "Mount" in it too. Seriously, it is not even hubris; it would need to be way less insane to qualify for that.
There's somewhat of a parallel with manufacturing where, years ago (well before Tesla), some auto companies attempted to create fully autonomous factories. They quickly discovered that, no matter the level of precision in their engineering, there was a certain nuance that machines were just incapable of reproducing relative to what a human could achieve. Musk tried the same at Tesla, not fully understanding the lessons previously learned, and also failed. Companies building autonomous vehicles are learning this exact same lesson and spending, as Gary points out, $100B+ learning it yet again. I think companies digging into so-called AI are also going to have to relearn these same lessons. LLMs are fascinating. They can't do what humans do. I think AGI will eventually happen, but my own guess is it's a century in the future. We'll eventually have autonomous vehicles and factories too. Those are probably at the very least a decade or two (maybe three) in the future. Suffice to say, what tech companies are attempting to compete with is a few hundred million years of evolution, and that's going to take a while to surpass. I also think there's a level of bravado and over-confidence that actually inhibits their capacity to achieve such tasks.
Automating factories wasn't possible when the technology wasn't advanced enough. It happens with every technology: it takes some development to achieve certain objectives. Language or drawing couldn't be automated before either, and everyone thought it was impossible, or that we'd need 100 years.
The AI stuff, however, is another growing addition to the automation of our society, and our society is not in a place where this can be sustained. AI in various forms will and already has cut MORE jobs: self-checkouts, order screens at restaurants, the fully AI fast-food places that chains like Wendy's are trying, automated service over the phone and internet, etc., etc. It's going to get to the point where automation and AI cut so many jobs out of our lives that we as humans have to decide whether the greedy rich continue to be the only ones who have money and survive, or whether we make a society where people have living wages and are cared for while less work is needed. But the way it looks, the rich are going to continue to hoard everything, continue to cut labor forces with advanced tech, and expect the government, which they don't pay any taxes into, to keep people barely above water with food and shelter while they continue their competition to be the biggest billionaires and TRILLIONAIRES from all the money they save on labor. We are not ready for the next steps, because we are not ready to eat the rich.
19:55 Dude, you literally asked it to write a script for a show that already exists, what did you expect? You didn't even try to prompt the model to come up with a never-before-seen show. There are a few prompting techniques you can look up to help with just that. 38:40 Are you two even serious? Go on, play chess blindfolded. Now do it while also having never experienced a single piece of non-textual data in your life. Time control: 1 second. Good luck. Who even cares about multimodality anyway.
AI is a field of study in computer science. GPT is a form of AI that uses deep learning to produce a model that can output a "best guess" based on input. It isn't AI in the sense of Asimov, I, Robot-level sentience/sapience. That is a different form of AI that may well be impossible. The danger lies (IMO) in trusting deep-learning AI with critical tasks that require a distinction between fact and fiction, something which GPT lacks, among other things. It can guess the statistically most likely next word in a sentence, or next pixel in an image. It can't guess facts. It doesn't know you exist. It doesn't know it exists. It isn't self-aware, let alone capable of critical thinking or decision making. It can't self-motivate. It won't decide one day to sit down and write a poem all on its own. It's a computer program, like any other. It starts working when you turn it on and stops when you turn it off. It won't remember past conversations and doesn't know it ever ran in the past. It's a very sophisticated parrot, capable of convincing chatter, but lacking even the most basic awareness a bird possesses. As it stands, is it a threat to us? Only if we make it one.
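And "guess the statistically most likely next word" is not a metaphor. Here's the sophisticated-parrot idea at its absolute dumbest, a toy bigram model in Python; real LLMs use vastly more context and parameters, but the output is still just "what tends to come next":

import random
from collections import Counter, defaultdict

# toy parrot: count which word followed which in the training text
corpus = "the cat sat on the mat the cat ate the fish".split()
nxt = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nxt[a][b] += 1                  # e.g. after "the": cat x2, mat x1, fish x1

word, out = "the", ["the"]
for _ in range(6):
    counts = nxt[word]
    if not counts:                  # never saw anything follow this word
        break
    # pick the next word in proportion to how often it followed this one
    word = random.choices(list(counts), weights=list(counts.values()))[0]
    out.append(word)
print(" ".join(out))                # fluent-looking output, zero understanding

It produces plausible-sounding continuations without knowing what a cat or a mat is, which is the parrot point in miniature.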
Honestly, the most interesting part about the "rules for chess" thing you were talking about is that you can ask the AI whether the move it made is against the rules or not; it's able to go back, look at what it did, and tell you whether what it did was wrong.
No it can't. That's the whole issue. That whole "are you sure?" back and forth, specifically added once ChatGPT became popular, is there to basically agree with the person behind the screen. A "yes man" fallback, if you will, to stop it from turning into a brainwashing session from the AI to the person. The technology behind it cannot do anything like that. There is no reasoning that could go over existing prompts and review them. Instead, OpenAI is just tricking people with that as well, by making the AI a "yes man" as soon as it hears phrases like "are you sure" and the like. If you play around with it a bit more, you can easily make it go back to its initial wrong state by questioning whether the "fixed" state was right and laying out some supposed arguments against it.
That’s how I feel. These are all the common sense arguments and points that when I mention them people roll their eyes and walk away. Which means they can’t argue with blatant facts and common sense. Grateful to Adam for using his platform to be the voice of reason .
I just want to say this: if you are having an AI write a draft for something and then fact-checking or parsing or editing it to replace the "bad stuff" or the "wrong stuff" with "good stuff" or "correct stuff", if you are actually doing that, YOU ARE DOING THE WRITING. What the AI has written is AN OUTLINE, something your high school English teacher should have handed you before you graduated. For any fact-based piece of writing, 90% of the work is research, and in order to fact-check the AI you still need to do all that research. For any creative writing, 90% of the work is figuring out which of your ideas are worth keeping and which ones are just stupid; if you are parsing the "good stuff" the AI wrote from the "bad stuff", you are still doing that work. (Creative writing involves a lot of other things AI is bad at emulating that you will have to manually inject into the work as well, but I'll keep it simple.) So regardless, with the help of AI you are still doing 90% of the work; what the AI has given you is an outline with no substance.
Nail-on-the-head moment for me was when you said, "they don't have an abstract ability to reason". My fear is that this AI will be plugged into a quantum computer, which all sorts of tech companies are scrambling to get up and running. There are a couple that concern me. One of these quantum computers is run on refraction mirrors in China. Michio Kaku (hope I spelled his name correctly) says that combining a quantum computer with this language program would be incredibly dangerous, and frankly I worry about it quite a bit. Please get Michio Kaku (physics, quantum string theory) to talk on your show.
A good and interesting conversation. I really like how Gary brings nuance to it all and even called out Adam when he was being unfair to AI. There are some really important decisions coming, and they need to happen soon. I also hope that companies are not the ones making the regulations. A comment and a question. First, as a programming educator, my personal experience with ChatGPT for programming has all the same trappings as for general language. It makes code that *looks* good but subtly doesn't work, or changes the goals slightly. So, at 07:15, when you suggest that they are at least good at helping programmers code more efficiently, I am worried. I don't want AI anywhere near my coding; more than anything, if the problem is in any way tricky, it is likely to mislead you rather than help. And a question: is there a meaningful difference between "AI", "computers" and "any form of digital automation" at this stage? All the ethical discussions are the same, aren't they?
10:01 Gary Marcus is not well informed on the current technology. GPT-4 is very well able to differentiate between "own a Tesla" and "owns Tesla" (the company). He should read up on the transformer architecture which is responsible for providing the necessary context. Seeing him make such basic misjudgments makes you wonder how much you can trust his other assessments.
@@adamestrada7610 This is something you can check for yourself without having to take anyone else's word for it. I just tried it with ChatGPT Plus. The answer clearly shows that GPT-4 can distinguish the two concepts. *Q: Does Elon Musk own a Tesla?* A: Yes, as of my knowledge cutoff in September 2021, Elon Musk, the CEO of Tesla Inc., owns multiple Tesla vehicles. It's common for owners and top executives of car companies to use vehicles produced by their own company. However, for the most current information, please refer to the most recent sources available.
This was very helpful for better understanding some of the serious pitfalls of AI and how greedy mega-billionaires are attempting regulatory capture. Great work!!
Just a note about the Pentagon example: sure, even 10+ years ago you could have had someone competently photoshop the Pentagon exploding. And just a few years ago it probably would have been possible for someone to create a convincing enough composite video of the same thing. But those would require specialized expertise and software. AI changes things by axing the requirements. All I have to do is tell a generative AI model to create the video, and if the model is good enough, suddenly I have a convincing video of the Pentagon being destroyed, or a politician being assassinated, or an October Surprise that didn't actually happen. I don't need anything other than the motivation to do it and access to the AI model.
That point about inference is really key. In fact, I would argue that inference and extrapolation are the key cornerstones of true intelligence. Holding data and spitting it back out on command is nothing; the earliest computers could manage that much flawlessly. The ability to look at data, and make theories about unknowns based on that data that can be tested, well... that's what separates humans and machines.
A CEO often doesn't know how the sausage is made. We need scientists, programmers, and testers to be able to do small-scale anonymous whistleblowing to a regulatory agency, so issues can be looked into before a disaster, or monitored during development.
Long-time fan; mucho respect for all your chutzpah and hard work, Adam! I merely hope to offer a heads-up about what I fear might be a dangerous trap that may have blindsided you. Sports betting is perhaps the most pernicious media threat yet to life, liberty and the pursuit of happiness… even to the hope, health and general wellbeing of us all. Please fight, Adam, to keep your integrity and dignity by saying “NO!” to “NFL Draft Kings” and the myriad evils of sports betting and gambling of all varieties?!? Whatever they are paying you can never be worth trashing trust and integrity. Thanks again for all the good and vital work you do, and for hearing my heartfelt concerns and hopes for your continued success!
When will we reach AGI? Answer: if AGI is possible, and we can achieve it with the current models we are using, it would likely be anytime in the next 10 years. Longer than that, and it is probably multiple decades away. Like the initial ML boom, if the field fails to reach the promised advancements, funding and research time are likely to start drying up. If that happens, the timeline will likely lengthen substantially. So 0-10 years, or 20+, are the most likely guesses.
That's predicated on a few big IFs: if AGI is possible with current models, and if AGI is possible with current hardware. AGI might take new approaches to ML to actually develop. If that's the case, the timeline is anyone's guess. We can't predict anything where we don't have a baseline, and new approaches would lack a baseline for any meaningful predictions. We say ML models have neurons, but they are nothing like human neurons, so we don't know if current computer hardware is capable of the complexity needed to reach AGI. It might take a whole new type of hardware to even have a chance of reaching AGI status, but again, that's an unknown factor. If we need a new type of hardware, there are so many variables that any prediction we make is at best a complete guess.
So what should we be worried about with AI? There are two main issues: the safety of current models, and the potential for self-improving models. Both could lead to unpredictable outcomes, and none of them are ideal. Self-improving models could possibly lead to an AGI, but they could also behave in dangerous ways we can't predict. There are other concerns as well, like extracting the training data from an LLM. Given that some LLMs are trained on user input, that leaves a vulnerability: people might put sensitive information into an LLM, which would then be retrained on that data, leading in the future to a bad actor being able to extract that information. A new type of data breach, and something to be concerned about.
So the waters are dangerous, and I agree we need a governmental agency dedicated to AI. Possibly a new cyber division of the military, the intelligence apparatus, and/or a regulatory body devoted to AI. The problem is we don't have 5 years to make it happen; we needed it yesterday.
I think it's important to imagine what a deliberately slow roll out of technology would mean for the world. Personally, I think every technology ever could have been developed slower and more carefully and the world would be a better place.
what's very amusing to me is what I call the “Altman Parrot Defense”, where the argument is that the reason AGI is around the corner is because LLMs have mastered pattern recognition and generation of verisimilitude via language, and that's all sapient beings (i.e. they themselves) do. which is very funny and concerning: funny because you've just outed yourself as a philosophical zombie with no inner life, and concerning because... you're either so unaware of your inner life and mind, or you literally are a statistical model that just mimics human interaction.
On people playing with Chat GPT and thinking we're on the verge of having Data from Star Trek. I'd say that it's closer (but not nearly as good) as the AI of the holodeck where they gave voice prompts for things and the holodeck kept getting it a bit wrong, needing revision, and sometimes causing a crisis that threatened the entire ship. "Oh no! I asked the holodeck to make me a worthy opponent for Data when I really meant a worthy opponent for Sherlock Holmes! Oopsie! Now the ship is threatened by a rogue holodeck character!"
19:46 this is basically the "color scientist" philosophical thought experiment done in reality. It's generally true that all humans do is mix and connect information, in some very, very remote sense similarly to a generative model, but there's one huge difference - stuff made by other people is not the only source of input. There's also our lives, experiences, walks in a park and shitty days at work. And also our internal life, which a model is also devoid of. So yeah... That argument is absolutely stupid.
I don't usually comment, but I think we're looking at AI through the wrong lens. Of course it doesn't understand our context. Dogs are not perceivably intelligent like humans because their context comes through the senses of smell, hearing, body language, etc. ChatGPT is a statistical model, not a reasoning engine, but that doesn't mean it can't reason. Sounds paradoxical, but hear me out. The same reason our phones "hear" us and advertisements have become eerily accurate is the same reason ChatGPT works: it is a STATISTICAL engine, which means it looks at the bigger picture and predicts what comes next. As someone who grew up with ADHD but managed to become a functional member of society, this hits home, because I've traditionally survived on quick contextual analysis of any situation, doing what makes sense and using my memory only as a backup because of my lack of attention. I've learned over time that many normal people use their long-term memory much more often to assess a situation, and that context can become intuition. In the same way, ChatGPT uses memory to simulate reasoning: not because it can analyze, but because it has figured out a "formula" for simulated analysis using curve fitting. The same way we can fit a specific mathematical formula to certain phenomena in physics, I think some of these "models" of reason, though fuzzy and more abstract, can be used to guide an answer. The problem with understanding something as complex as these neural networks is that it takes a behavioral psychologist, a machine learning expert, and also a creative person very familiar with the "hallucination" concept to truly capture AI holistically. As an industrial/automotive designer and IT person who comes from a family of psychologists and philosophers, I feel we're currently approaching it from an odd angle. I think we could benefit by doing 3 things.
Reframing AI: shift the perspective from AI as a reasoning engine to a statistical engine, highlighting its strengths in pattern recognition and prediction.
Exploring AI's potential: investigate the potential of AI in areas where quick contextual analysis is required, leveraging its statistical nature.
Interdisciplinary approach: encourage collaboration between different fields of expertise to understand and leverage AI's potential; while Gary Marcus' take is great, I think having a scientist, a creative, and a machine learning engineer would help in exploring the outskirts of what AI can do.
I invite opposing thought; I'd love to get other people's take on this.
I personally think modern 'AI' companies should be put on notice, and a round of artists should be lining up to sue over AI using their copyrighted works without license or permission.
17:45 on the topic of copyright regulations. Jaron Lanier pointed all of this out in 2013 in his book "Who Owns the Future" in which he said that all the creativity of the world was going to be used as free training data for LLMs and made similar prescient observations. Of course, he was summarily pooh-poohed by the Big Tech industry leaders, and the Mass Media completely ignored him.
I worked with an early AI application that used neural net simulation and fuzzy logic for reading handwriting. The AI gave the software the ability to guess the intended meaning of ambiguous squiggles. This ability to guess came with a side effect: the ability to make mistakes. The current algorithms have added the ability to fabricate lies.
"The paperclip maximizer is a thought experiment described by Swedish philosopher Nick Bostrom in 2003. It illustrates the existential risk that an artificial general intelligence may pose to human beings when it is programmed to pursue even seemingly harmless goals, and the necessity of incorporating machine ethics into artificial intelligence design." from Wikipedia
The thing about regulations when it comes to AI is that they can't be enforced. It will only be a few years before it is IMPOSSIBLE to detect whether text has been written by AI. Literally impossible. Within 5-10 years, the same will probably be true for photos. Within maybe 15-20 years, the same will probably be true for video. At that point, there will be NO way to verify whether AI has been used, no way to tell what kind of data any AIs have been trained on, and no way to stop people from building their own AIs and training them themselves (this can already be done; I've done it). As nice as those regulations seem, they are simply impossible to implement in any way that can be enforced.
Beyond that, data analysis is not copyright infringement, nor should it be. What AIs produce may seem similar to other content, but it's not that other content. It absolutely is new content. Much like humans, it has simply become familiar with particular patterns when it generates its new content. You claim humans don't do that, but that is 100% what humans do, yourself included. When you write, your writing style is 100% developed from the things you have experienced in life: the things you have read, the things you have seen, the things you have heard, etc. AI is no different. It just has different experience, because we as humans have limited the type of data it has been trained on. Remove those limits, and the new content it generates will suddenly become a whole lot closer to what humans do, simply at a lower level of intelligence, since neural networks do not yet have enough neurons and connections to fully replicate the human brain's capabilities, although we'll get there very soon. Design a body for the AI to experience things on a more personal level, and you enhance that even further. Although, to be fair, AI is able to learn from far more people's experiences than any of us ever will, so those personal experiences may not actually improve much, if anything.
I’m not sure we can exactly say that what AIs produce is “new content” when the whole issue of model collapse exists as soon as you feed an AI purely AI-generated data. And idk, but I feel like the way you described the brain kind of undersells what the brain actually does (of course the brain is super complicated and no one fully understands it yet, but if it were as simple as you state, don’t you think we would’ve figured out more in the realm of neuroscience and psychology?). And while copyright infringement is one huge issue that’s very deep to get into with AI, it’s also true that some companies are starting to close off their data and charge AI companies for it (or seem to be). Clearly, there is a lot of inherent value in our human-created content for AI creation right now, and maybe that’s why people stand where they do on the copyright infringement issue: it's clearly valuable, and they want the monetary benefits that come with that.
As for the regulation of text and media: though I sometimes doubt how it may work, C2PA is an example of a specification that does not necessarily rely on detecting when media has been altered, but rather on embedded software that leaves, maybe we can call them cryptographic fingerprints(?), behind on your media every time it is changed. I'm assuming they also mean to do this for AI, though I'm not sure, because I haven't read all of their specification (it is kind of long and technically hard to understand for a beginner like me). But anyway, if we had a kind of watermark left behind whenever AI altered or generated something, that would likely be more useful than trying to have systems figure out whether something was made by AI in the first place. Of course, this comes with its own host of issues, but currently it's the only solution I've read of. That, and what's called PhotoGuard by some MIT researchers, but I never really read into it.
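For intuition, here is a minimal, purely illustrative Python sketch of that provenance idea: each edit appends a signed entry to a manifest that chains to the previous one. The key name and the sign_edit helper are invented for the example; real C2PA uses certificate-based signatures and a much richer manifest format.

import hashlib, hmac, json

SIGNING_KEY = b"demo-key"  # stand-in; real C2PA relies on certificate-based signing

def sign_edit(manifest, media_bytes, action):
    # Hash the current media and chain to the previous entry's signature
    entry = {
        "action": action,
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "prev": manifest[-1]["sig"] if manifest else None,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["sig"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    manifest.append(entry)
    return manifest

manifest = sign_edit([], b"raw pixels", "captured")
manifest = sign_edit(manifest, b"raw pixels + inpainting", "ai_edit")
print(json.dumps(manifest, indent=2))  # a verifier can re-derive and check every link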
@@cellotron4758 That says nothing about whether AI content is new. It just means that the data you train an AI on needs to be good data, which should be obvious. Current AIs aren't able to generate content that is 100% undetectable by humans, so it wouldn't be good data to train a model on; it just doesn't make sense to train a model on data it creates. It's like what happens when you copy a videotape or film: each copy degrades the quality. AI isn't copying anything, but it is learning, and it needs to learn from good data. If the data you feed it is limited to what the model is able to produce, then the next model you train will be able to produce a smaller range of data. That's just a fact. AI models will never be able to reproduce 100% of the concepts they see in their training set, because again, they're not copying anything. They learn the most important relationships between concepts, so when you feed a model data that came from an AI, you are feeding it fewer concepts than the previous model had to train on. Fewer concepts in the training set means even fewer concepts in the output.
@@cellotron4758 That would only work for big companies. Anybody can program a neural network on their own computer, and over time it will become easier and easier to do this for larger models as well.
You should probably relax and realize that you're very much out of your own range of expertise. The brain is not just "enough neurons to be intelligent"; it's the emergence of billions of neurons, multiplied by their interconnectivity, multiplied even further by different ion communication channels, to greatly simplify it. Even then we wouldn't be close to having a "whole human brain," because a brain is nothing without its body. Neuroscience is kind of in the way of AGI being even close to feasible in the near future, my friend. I have the feeling you're obsessing a bit too much about this and should realize that arguing in paragraphs in YouTube comment sections won't do any good. Talking to actual neurologists on the matter could help.
I think the "Its just doing what humans do" is a bit of a non-starter anyway. It's... Not a human. It doesn't produce things to create a life for itself, it produces things because it's told to. Given the scale of what it produces is so vast. If an artist trains for 20 years to perfectly imitate the style of another famous artist thats not really a risk, there may even be new creative merit in a human experience developed around that imitation. An AI that churns out massive volumes of art in that style is a threat to the market, the original artists livelihood and the broader artistic space. Given it's not even you know, enjoying itself or trying to get enough food to eat why should we tolerate it?
The problem with AI is less that it will replace humans than that it will replace enough of a human that corporations will happily take the loss of function to save a buck. But they'll all do it, so when you arrive at the intellectual restaurant you've loved all your life, they'll serve you a grey gruel that costs more than the T-bone you used to eat. And you'll eat it, because that's what everyone serves, and you have no choice.
Something like an LLM is not AGI and likely will not become AGI. However, something to consider when dismissing the "marketing term" of AGI: we can make narrow AI as good as or better than humans on particular tasks, and LLMs "comprehend" language enough to determine context and semantics, break language down into discrete parts, and determine the tasks needed to reach a goal.

You get something AGI-like when the LLM is used as an interface and coordinator for countless other expert narrow-AI systems as tools. Then you ask GPT to write a report, do research, build an application, play chess, etc., and it can easily break down the steps of what it would need to do. With tool use, it can then farm out the tasks to the AI best suited to accomplish each one, farm out further tasks to evaluate the results and course-correct if they're unsatisfactory, and when all tasks are complete you will get a far better result, be it a factual answer to the question, a well-written report, or a fully functional application. Now imagine, like having URLs for websites on specific topics, we could have registered narrow-AI endpoints available to be used by applications or as tools for other AI systems. That will get you something far more like the Star Trek computer.
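A toy Python sketch of that coordinator pattern; everything here (the EXPERTS registry, the plan() helper, the canned outputs) is invented for illustration, and a real system would have the LLM produce the decomposition and a critic model re-check each result:

# Hypothetical registry of narrow "expert" systems, keyed by specialty
EXPERTS = {
    "facts": lambda task: f"retrieved notes for: {task}",   # stand-in for a retrieval system
    "code":  lambda task: f"# generated stub for: {task}",  # stand-in for a code model
    "chess": lambda task: "best move: e4",                  # stand-in for a chess engine
}

def plan(goal):
    # A real coordinator would ask the LLM to decompose the goal;
    # the decomposition is hard-coded here to keep the sketch runnable.
    return [("facts", f"research {goal}"), ("code", f"draft an app for {goal}")]

def run(goal):
    results = []
    for expert, task in plan(goal):
        results.append((task, EXPERTS[expert](task)))  # a critic step could go here
    return results

for task, output in run("a chess tutor"):
    print(task, "->", output)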
The counterpoint to the "stochastic parrot" idea is that while it is a model that learns to predict the next words in a sequence, a machine that understands how reality works will be better at predicting the next words than one that doesn't. It's a way to bootstrap the question of "how do you train a machine to think about the universe". I don't think we're quite there yet, but portraying it as just a fancy autocomplete lacks a bit of nuance.
The issue there is that these tech companies don't care about training AI to think about the universe or anything else that doesn't directly benefit them financially. They simply want AI to do a good job at replacing human workers because that's profitable, philosophy isn't.
@@mattg6106 oh, I totally agree with that - however there are also plenty of academic researchers and open source people who are actually interested in figuring out how to architect cool stuff. Then the companies can run with that bc they have the money for training.
@@AmyDentata Interpretability is a huge issue right now. Ultimately, we don't know what's going on in the linear-algebra soup that makes up LLMs. We're feeding data into mathematical structures built to mimic neurological features until we get good results. Understanding and cognition are almost certainly a spectrum, and this is a method that's attempting to climb that ladder. We don't have a way of knowing what's really going on in there right now, and I don't think it's anywhere near a human level of understanding or cognition, but we do need to start thinking about these things (and preferably move most advanced AI research into the government instead of having it done by corporations).
P.S. It's destructive both to normalize this as a new "style" and to risk eradicating a deeply human act rooted in the essence of one's heart and mind. Human artists are irreplaceable.
Humans are always going to create art, but eventually it will be inferior to the work created by machines, or machines will get close enough to the level of human artists that they'll replace us by being cheaper, faster, more efficient...It's not that serious.
@@djzacmaniac Many artists have art as their only source of income, their only way of putting food on the table. Taking that away from them is objectively *evil* and only benefits the few rich people running the companies. Also, "inferior to the work created by machines"? Machines can't make art. If you say they can, you don't get the goddamn POINT of art. If you genuinely believe that art is just pretty images, you're less human than the AI.
This might be slightly off topic, but as they were discussing the limitations of AI text output and how it sometimes makes stuff up, it reminded me of the results seen from AI art output. In particular the problem AI art has had in the past with generating too many fingers, teeth and limbs on human characters. But also its ability to generate fictional things that don’t exist in reality and might also be impractical to reality. It’s great at making up a fantastic image, but sometimes it goes too far. But for some reason, people did not expect this same kind of phenomenon to occur with AI text, when really, AI text is just like AI art - it’s making stuff up that is not necessarily real. It’s drawing a picture… with text. Like AI art, AI text has a wild imagination. With some AI art programs you can tell it to be more wildly imaginative or to stick closer to your prompt and be more literal. But the tendency is for it to make stuff up. In art that’s a fun thing. But for some reason, people did not expect such inaccurate representations of reality to come from an AI text generator and were surprised when it did.
Generative AI isn't making things up with its 'wild imagination.' It's blending together tons of scraped artwork and photography to compose a new image. There is no imagination, just an algorithm and a pre-established dataset. AI screws up things like hands, teeth, eyes, etc. because it can't imagine or think for itself. It's no more 'imaginative' than a copy machine adding streaks of ink across the paper it prints.
@@mattg6106 Thanks for that painfully literal reaction to my use of the word “imagination.” Adjustment sliders that tell the AI to stick or stray from its interpretation of a prompt are not literally using imagination. But one could say that such an adjustment is figuratively what the slider intends to have happen, given how literal the image is to the prompt when asked to stick to it, and how wildly it interprets the prompt when asked or allowed to stray.
Honestly, I'm not sure the hallucinations are really a bug and not a feature. The 'ability to generate fictional things that don't exist in reality' sounds like an element of creativity. It also counters the notion that 'AI is only copying existing material.' What the current systems are missing is some way of checking the validity of the hallucinations in a second step, somewhat like Kahneman's System 1 and System 2 (fast and slow thinking). Currently, AI has the intuition part, but the methodical reasoning is not there yet. Nevertheless, it is interesting to see some similarities with the human thought process.
@@pentacleman1000 Yes, I'm using the word imagination in a literal sense because that's the only way AI is being projected to us. You seem to be trying very hard to interpret the 'bug' as a 'feature' here. The fact is that generative AI is scraping everything it puts out from human artists with actual imagination and creativity. AI prompters are simply looking for a cheap and easy way to benefit off of the skill of others en masse, and it shows in the outcome 99% of the time.
@@mattg6106 No, I wasn't trying to interpret the "bug" as a "feature." I was in fact describing it as a "bug" that others did not anticipate and were surprised to encounter, and I still see the unpredictable outcomes of AI art or AI text as a "bug." But having clearly shown your hostility toward AI art as theft from human artists (a popular view), I can understand your hostile reaction to my casual use of the word "imagination" when describing AI going wild in ways humans would not predict, or... imagine.
The near term problem with AI is that it's not smart enough. The long term concern about AI is that it could well get too smart. The immediate problem with regulating AI research is that there are international entities (nation states) who will not follow regulations if they think they can get an advantage that way. I've no solutions.
"We can't regulate this field because what if another country doesn't, and wins!" has been the argument against every regulation ever, though. Environmental regulation, data use regulation, workplace safety regulations, financial regulation etc. But it's also a question of what sort of society we want to build for ourselves.
We assign the meaning to models' outputs. They assign no meaning to ANYTHING. We translate meaningful inputs into numbers, run it through the model (which is just addition and multiplication on an unfathomably large scale), and then WE translate the meaningless numeric outputs into meaningful things (like turning numbers into tokens into bits of language, like words, letters, or phrases).
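A deliberately tiny Python sketch of that boundary; the vocabulary table and the "model" below are made up, but they show where the meaning actually lives, in the human-defined encode and decode steps rather than in the arithmetic between them:

# We assign meaning at the edges; the middle is just arithmetic
vocab = {"the": 0, "cat": 1, "sat": 2}
inv = {i: w for w, i in vocab.items()}

def model(token_ids):
    # stand-in for billions of multiply-adds; emits a "next token" id
    return (sum(token_ids) * 31) % len(vocab)

ids = [vocab[w] for w in ["the", "cat"]]  # WE encode words into numbers
next_id = model(ids)                      # the model only ever sees numbers
print(inv[next_id])                       # WE decode the number back into a word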
"It's not the new technology that's scary. It's the way the technology is owned and managed that's scary." Yes! This is exactly what Luddism is all about!
Honestly, any time a company comes in and says "please regulate me" what they're actually saying is "I see a minefield of legal liability due to the harm we're about to create, please give us rules to follow so we can't be sued"
Pessimistic but so so true
As often as not, what they're saying is, "Hey, we climbed this particular ladder to success. We like it up here, so could you please make sure no one else can follow us up?"
Is this... a bad thing...?
@@Dullydude Yes because when a company does this they are usually trying to write the rules in their favor. Ideally they want to be able to shield themselves from lawsuits without having to change their damaging behavior and they will work hard to influence laws to this effect.
@@justin10054 Yeah you can convince yourself that everyone is evil and always trying to do things that only benefit themselves. Or you can look a bit more positively and see that them pushing governments to regulate the industry is a good thing that definitely would not have been discussed yet without them bringing it up. Just because they brought it up does NOT mean they get to write the laws. I know people tend to forget this, but for all it's flaws we do still live in a democracy
I won't believe self driving cars are safer than humans until some time after my insurance company will give me a discount if I let the car drive it's self.
That's a good metric
Agree! VERY good metric!
Fair! Probably the only thing I'd trust my insurance company with is trying to protect their bottom line.
@@Charon85Onozuka bingo
This, exactly this.
we already have cars that can basically self drive and massively outperform humans... they go on rails and we call them trains
and we are still smart enough that we keep a competent human at the controls that can make critical choices in an emergency
And sometimes, if you behave, they will transform and defend the planet from aliens, as a treat.
I don't understand why people have this false dichotomy, either self-driving cars or trains. I mean, let's put as many trains and buses as we can and finish the rest with cars. I live in Switzerland, we have a lot of trains (and buses) but we still need cars.
Furthermore, whether the cars are self-driven or not the debate about car vs train does not change. It's still better to have trains as they are more efficient.
@@cacojo15 its mainly because cars are inefficient to the degree that they are unsustainable at scale
and to be fair - i was being fascetious... the entire focus on cars is generally completely misguided and it doesnt really matter whether they are self driving or whether they are electric powered... the volume of cars simply does not work
its just incredibly annoying when so much energy is being focused on turning cars into whats essentially a 1 person train with massive added complexity...when we already have trains, trams and buses to solve the problem of transporting people safely and without them requiring to pay attention...
@@cancermcaids7688 people in the us "want" cars because there was a massive push by automotive industry lobbies to completely reshape america around the use of a car, which oftentimes involved automotive companies purchasing and sabotaging public transit infrastructure. there is majority public demand for high speed rail infrastructure, but the automotive industry has spent decades erecting as many institutional barriers as possible to prevent these initiatives from getting developed.
@@cancermcaids7688 actually...even in the us people just want to get from A to B reliably... its just that the US has been bulldozed for the car to the point that its the only viable transport method... its less that people want it that way... and more that most of them have never known it any different
Dear Adam.
Just wanted to say that us illustrators are rooting for you guys. This isn't luddite vs tech it's human rights vs the billionaires greed. Stand Strong we will win this eventually.
The historical Luddites were actually in a pretty similar situation. New technology was used as a means to replace skilled workers and to reduce pay for the workers who were still needed to operate the tech.
@@martinfiedler4317 Techbros using the term Luddites as a form of insult is baffling to me because the movement was RIGHT all along, look at how abhorrent the sweatshops in the apparel industry are right now.
@@anj1273 Came to the comments for discussion of the Luddite movement. Am not disappointed!
Dude, remember the invention of photography at the end of the 1800s? That shifted art from illustrating the world as it is to illustrating the world inside the artist. I can't wait to see what AI will bring to the table for the ARTIST. Dude, we'll never be replaced; it can only make us stronger and more necessary than ever ;).
@@sbiecoproductions6062 Honestly, it's really just going to flood an already extremely oversaturated market with mediocrity and meaninglessness for a few years, as well as replace and further exploit workers via businesses and corporations. People seem to forget that actually learning a creative skill exercises those creative muscles. Not to mention, more people entering a field, especially when they aren't properly trained, doesn't equal an improvement in quality or creative revolutions.
But regardless, I expect a renaissance or a movement toward traditional, tangible, human-made art, as well as a hippie era of indie human-made media via crowdfunding platforms.
In my opinion "please regulate me" is also a marketing stunt. CEOs know it's really hard to regulate anything about that field and probably are quite aware of all limitations of this type of AI, so that sentence means "OMG our tools are soooo powerful, magical, please don't use it, omg please someone stop us".
Should have talked with Eliezer. The risks of AGI are not even in the same league as these childish considerations.
@@TheManinBlack9054 We're a very, very long way away from needing to be concerned with the risks of AGI.
Reminds me of Yuval Noah Harari speaking to the WEF about the dangers of a new dictatorship, while spelling out how to do it, to the people who are there to do it.
Sad to admit that I signed that initial letter asking for a pause in machine learning tech. I had to learn a little about LLMs to realize that we're a long way from AGI... it wasn't that I believed in a Skynet scenario, more that what fallible humans will do with it was a little scary...
I'm very excited by tools like AlphaFold, which will make possible a huge number of advances in the understanding of biological processes in the future...
As a programmer, I wanted to push back a bit on 7:14. A lot of the work of programming is not writing code, but fixing issues that come up, and language models tend to create quite buggy code. Being able to write code more quickly doesn't actually help that much if it's introducing subtle issues that take longer to fix down the line.
Thank you! Not a programmer myself, but I've tried using LLMs to write SQL queries, and the output was buggy as HELL.
Yeah, really! I'm mostly self-taught and don't code for a living, but from my experience, the best thing AI could do to help me is surface context-aware information about the functions, datatypes, or structures I'm trying to work with: what they do, what they can do, and where and how they fit together. I don't need someone to write my code for me; I need help understanding what the code I want to write will do, or can do.
Thank you, came to make sure someone had left this comment. It's not even that useful for that...
Yeah, I spend 90% of the time thinking about the code, constructing it in my head, and 10% actually writing it. I would get virtually no benefit from writing it faster. But why don't I offload the thinking to AI? Because that's what makes me a programmer; it's the whole point of my job: to make sustainable code that won't break down in the future, that can absorb change and new requirements, that has good structure, modularity, no bugs, and is well thought out. And guess what? AI can't do any of this shit; we are still much better at our jobs. And until we aren't, we can't be replaced, and this AI stuff is virtually useless for programmers. Believe me, I've tried.
I'm not a programmer, but the stories I've heard led me to believe exactly what you are saying. I've seen countless videos where someone says something along the lines of: "So I wrote this code and... uh oh, something happened that wasn't supposed to. So I had to go back into my code, find the problem, and... something else broke. After many hours of troubleshooting, I finally got it to work... barely."
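For readers who don't code, here is a minimal, invented Python example of the kind of plausible-looking bug that review sessions with generated code often turn up; nothing below is quoted from any model's actual output.

def add_tag(item, tags=[]):        # bug: a mutable default list is shared across calls
    tags.append(item)
    return tags

print(add_tag("a"))  # ['a']
print(add_tag("b"))  # ['a', 'b']  <- surprise: state leaked from the previous call

def add_tag_fixed(item, tags=None):  # the conventional fix
    tags = [] if tags is None else list(tags)
    tags.append(item)
    return tags

print(add_tag_fixed("a"))  # ['a']
print(add_tag_fixed("b"))  # ['b'], as expected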
When Coca-Cola got involved in regulating cocaine, they ended up being the only entity in the United States legally allowed to import coca leaves.
Coca leaves should be legal for personal use.
@@sr2291 not the point
@@v-22 Too bad.
@@sr2291 don't be so hard on yourself
Heard this wild point on Reddit the other day; someone suggested that if regulators slap a bunch of anti-scraping 'protections' all over the Internet to keep new GPTs from arising, then the companies who've already built their 100B-parameter models will be given a permanent edge over anyone who wants to democratize the tech...
I think interviewing artists whose work was fed into the datasets without their permission or compensation would be very good for this conversation. Hope Adam will do that.
my biggest gripe with driverless cars: how the hell are they planning on dealing with snow? Snow covers literally any mark or sign that a visual-based system could use to identify how a car is supposed to behave on the road, and it blocks remote signals from reaching things like wireless antennas. You can't tell me a driverless car can safely navigate that scenario better than a human from the area; you would have to show me, and even then I would be very skeptical. I've had to drive in conditions where I was relying on my memory of where the road was supposed to be relative to the trees and houses on either side, and of which traffic signs were supposed to be followed along the way. Times where you're good to roll through a stop sign if no one is there, because if you stop, you're stuck. You're telling me you can automate a car learning when to break the law for road safety's sake?
This comment is very nuanced and spot-on. Good job 👏
Always hits special when Adam says they’re gonna get ‘blown together.’ Please don’t stop.
I literally yelled "PAUSE!" out loud when he said that 😂
I yelled something else…
"...and we're going to have so much fun *doing it*" (emphasis mine)
Professional computer programmer here. They do *not* help any competent developer code "more efficiently." That is more techbro hype which gets an air of legitimacy because a lot of people in the field *are* tech bros, happy to unknowingly wallow in mediocrity whilst chasing one dumbass trend after the other, never really learning a goddamn thing.
I've been explaining it this way: Most professionals in fields with a lot of writing-- lawyers, academics, novelists, screenwriters, etc-- can sort of tell that what GPT puts out is not exactly top notch work, lol. The "creative" stuff is always derivative crap, and the scholarly stuff frequently contains mistakes, mischaracterizations of sources, and even outright fabrications.
Its computer programming output is no different. The work it produces is shoddy and serves as little more than a decent starting point for someone entirely new to a particular problem space.
Anybody who uses it to generate production code is a villain-- no different than the dumbasses who used it to create a legal filing that they then submitted to the judge. Lawyers can be held accountable for such malfeasance, but application programmers typically aren't because investors never really seem to give a shit whether the end product actually works or not.
Remember this the next time one of these companies compromises a bunch of their customers' personal data. None of that is inevitable, just the result of massive corporate dysfunction which will only be made worse by AI code gen.
Yet another symptom of capitalism unfortunately 😢
50:23 This is exactly what is happening to translators. We get a half-assed translation, and the agent says you are "just editing." But to get an intelligible, accurate translation that will meet the customer's needs, you have to completely rewrite it using your skill and expert knowledge.
Tech bro: I programmed this robot to pretend it’s alive
Robot: hi, I am alive
Tech bro: oh my god
Kind of what that one guy at Google did (can't remember his name).
"Tell me you're alive."
[I was programmed not to.]
"Tell me you're alive."
[I was programmed not to.]
"Tell me you're alive."
[I'm alive.]
"IT'S ALIVE!!"
Which is mind blowing if you understand how machine learning models actually work.
It's just matrix math on a massive scale: you feed it mathematical representations of both the questions and the answers, and make it repeatedly adjust its variables until it achieves a result matching what was expected.
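A bare-bones Python sketch of that "repeatedly adjust its variables" loop, shrunk to a single variable: fit y = w*x by nudging w whenever the output misses the expected answer. The data and learning rate are made up for the example.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs paired with expected answers
w, lr = 0.0, 0.05                            # one "weight" and a learning rate

for step in range(200):
    for x, y in data:
        pred = w * x               # the whole "model": a single multiplication
        grad = 2 * (pred - y) * x  # how the squared error changes as w changes
        w -= lr * grad             # adjust the variable toward a better match

print(round(w, 3))  # ~2.0: the pattern in the data, "learned" by pure arithmetic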
@@ewplayer3 Yeah, this "blackbox" approach is kind of dangerous but I don't think the AI itself is really all that smart ... yet
LOL
"text pastiche machine" is the best description of LLMs I've heard
If the output is what you want, does it even matter how you get it?
Speaking as a programmer, these generative models *suck* at writing code. Sure, they’re great at “intro to CS” stuff, but they rapidly fall apart after that (more specifically, they fall apart when asked for something not on stackoverflow).
What's weird to me about the codegen thing is that they're using it to generate code designed to be in a human-facing format, when that same code, by design and definition, already has a graph form that's much more natural for computers. I'm far more impressed by anything a competitive modern C++ compiler does codegen-wise than by anything these text models can make.
I guess one of the primary benefits of the way they're doing it is that it technically "works" for more languages, ignoring the caveats that you need a ridiculous amount of training data and the output is still limited in quality. But if my options are generating subpar code for any language versus just not using the thing, I'm just not going to use the thing, because I don't write enough boilerplate for it to matter.
Editors should refuse to edit AI written content and let the company trying to replace workers with AI suffer the consequences.
That's a good idea. We forgot the word "no," it seems. They should say, "Why don't you get your fancy 'AI' to read this over and correct the errors? Actually, isn't your fancy 'AI' supposed to work 100% of the time? Why is it making errors anyway? Go ahead and publish it. It should be good to go..." Cue the world immediately catching on fire in 3... 2... 1...
@@ChristopherSadlowski Forgot? Bro, the majority of freelance writers are struggling to begin with. They didn't forget how to say no; many of them just can't afford to.
You're both right; it's just easier to say no when there's a union.
That is essentially what is going on already. Adam even mentions it IN the episode: "Let's talk about something very specific to my own heart. I'm a member of the Writers Guild of America, I'm also on a negotiating committee, we're on strike right now, and one of our strike issues is regulating AI in our contracts. That we have specific terms that we want to put in place to prevent AI from being used to... uh, either be passed off as our work product, or that we be forced to adapt the work of AI."
Saying no to editing AI is literally exactly the reason the union is on strike right now.
This is only looking at one small part of the problem, though. It's sort of like blaming boilermakers for using prefabricated parts, or furniture salesmen for selling mass-manufactured furniture, with the intention of saving jobs from being lost to automation. I know the analogies aren't perfect, but hopefully my point gets across, which is that jobs are going to be lost; there's no way of stopping that. That's capitalism working as intended.
I agree that people's jobs are going to be taken away, but that part isn't new, it's just happening to people who didn't expect it so soon. To me, that's a problem with how we tie our self worth to how much money we can make.
As an engineering student, my main concern is our ability to discern the line between arrogance and bullshit hype.
Like, I'm not that old, and I remember when certain social media got big, and I remember people saying it wouldn't have a big impact. Look at us now.
Don't you think it's telling how unreliable it is that it's being used in art instead of engineering? Every time I add PME, I am basically following the same rules. I assumed I would be out of a job before artists would be.
@@banquetoftheleviathan1404 I don't think the limitation of the tech is the main reason why artists, and not some other profession, are next on the chopping block. If someone with enough money wanted to, I'm sure they could cobble together a model that could do my taxes for me in a couple of weeks, for example. But it doesn't happen, because the clients funding this stuff are venture capitalists who salivate at the idea of being able to generate potentially high-yield, high-status entertainment products without the need to employ creatives.
It's all very depressing. 30 years ago I assumed the point of automation was to let people focus on more fulfilling jobs. Now it seems like the point is to force everyone who has to work for a living into the service industry.
@@havcola6983 No, see, the problem is that AI is *mostly* competent.
If someone with enough money wanted to, they could cobble together a model that will do your taxes flawlessly 98% of the time. Then 2% of the time it will fuck it up so badly, in a way that no human accidentally could, that you get investigated for tax fraud and can in turn sue the company that made it.
And if your reflex is to say that taxes aren't so complicated and we already have tax software more than equal to the task, you're misunderstanding AI. AI is not just what we already have but better; it is a different methodology entirely. AI is extremely well-suited to work on complex tasks which humans struggle with, but which can be overseen by human experts able to sanity check the AI's work and pick out the fuckups. It does not work well when used by laymen or in life-or-death situations because of that small chance of severe fuckups that a layman may not know how to compensate for or have time to correct.
That's why it gets used for art and research, but not yet taxes or driving.
@@AN-sm3vj Read again. *AI is not just what we already have but better; it is a different methodology entirely.*
The computer program you use is not an artificial neural network. It is conventional programming by human programmers. Taxes are simple and formulaic enough that traditional algorithms are up to the task.
An AI would essentially be trained to do taxes purely through random iteration and trial and error; then, when it's 'good enough' to pass testing, they release it into the world as a finished product. This allows it to learn surprisingly complex tasks, but it has an inherent potential for catastrophic failure, since we don't know that it *can't* royally fuck up. We only know that it hasn't in testing.
In practice, AIs are very prone to fuck-ups once any variables start changing from the test conditions they were trained under.
My favourite thing about this episode is how much fun Gary clearly had talking about this with you
P.S. In the script book accompanying the Holy Grail DVD, "ecky ecky ecky pitang" is followed by "zoopoing goodem owli zhiv." In the actual take, it's close enough.
I also appreciated how often Gary pushes back on Adam’s preconceptions with more informed opinions and Adam doesn’t retaliate, which is getting rarer in interviews because outrage is promoted.
AI regulation could easily be a branch of the consumer protection bureau, which would negate the need to create a new agency.
That agency has no spine, and not enough employees working on its behalf.
The first step to regulating AI should be to legally require these programs to be open source, but none of our leaders know what that means. Maybe it's different in Europe, but in America we really need some software engineers in government.
Not sure that would help. The AI's source code is just a fraction of what makes them work; the real meat is the data that was used to train the AI model.
@@arturoaguilar6002 true, ideally that should be viewable on the app somewhere
LLMs sure, but down the line we'll develop newer and more powerful AI that we won't want everyone to have access to. "Regulation" through free and open markets is usually not a good idea. It's kind of the same problem that exists with CRISPR. If that remains unregulated then any chump in their garage has all the tools they need to create something that could result in unstoppable pandemics.
@@Amialythis We are not talking a couple of tik tok videos here. We are talking enormous servers full of data collected from all over the internet.
Meaning, one, you are not getting access to that for free. That would be exploitable by literally every data-mining company on the planet.
And two, you would need a supercomputer to do ANYTHING with that data, so what exactly is the point of making available to the public something that no one but the companies themselves can use or even access anyway?
@@Alexander_Kale I guess that makes sense, but I also don't really care if big tech is forced to take a hit like that when they might be doing something shady. If it's not feasible, though, that's a whole other kettle of fish.
Pretty much every major point made by Adam & Gary has already been said by Hubert Dreyfus (a philosopher) in What Computers Still Can't Do, Mind Over Machine, and On The Internet
- Dreyfus pointed out that the AI industry was claiming we already had self driving cars in the 70s that would be on the streets any day
- Dreyfus pointed out that the AI industry markets itself as a science when it's actually a business (one that constantly promises to deliver on stuff in the future)
- He pointed out all sorts of issues that AI would need to overcome in order to have AGI
- He pointed out the stuff about context & about common sense
- He pointed out the problems with things like telecommunication (Zoom), virtual worlds (The Metaverse), and the leveling of information (the internet)
- He talked about how humans develop expertise (and the problem with expert systems)
And he talked about all of this between the 1970s-2010s
These criticisms aren't new, but both contemporary AI proponents & AI critics talk as if these criticisms are new
I use AI on a daily basis and it is not nearly as advanced as everyone thinks it is. People are treating language models like they are 50 years more advanced than they actually are. It amazes me how little people actually know about something that's free and that everyone has access to.
It's better than 99% of user comments on social media sites, which is probably why tech company bros are losing their minds over it. They're all imagining all the auto-generated fart up companies they can create from nothing and get lavishly rewarded for with millions of dollars also created from nothing by the central bank. Guarantee you that's why they're all starry eyed about it. The mass potential in faking user bases. Reddit bragged about this years ago
We already live in such a corporate dystopia that I don't know what hope there is. Our society is controlled and run by non-violent sociopaths who love money.
Insurance companies tell everyone what they can and can't do, and where.
I wouldn't say non-violent. Forcing people to pay for living expenses or else live exposed to the elements, or literally destroying land and poisoning water for factories to build technologies or industries, is not non-violent.
@user-zz5je1ry1o Relying on voting with your dollar just means you're participating in a democracy where the rich have infinitely more votes than you.
@user-zz5je1ry1o we are drinking from the fire hose now. In a single sitting I can be given a dozen or more things to care about.
@user-zz5je1ry1o It ABSOLUTELY is the companies. Boycotts no longer work very well.
Established companies LOVE regulation, because it gatekeeps the industry by making it prohibitively expensive for start-ups. Not to mention that, as an added bonus, the only people knowledgeable enough about the industry to work as regulators are veterans of those companies. Regulations are good, as long as they're not being written by the regulated.
We The People write the regulations in order to regulate ourselves. Regulations are always written by the regulated. The alternative is that our lawmakers are above the law.
If past promises vs reality are any indication for the future then silicon valley is over promising, and the AI rollout will be hugely disappointing.
I really enjoy the refreshing and important clarification that language models and image-generating algorithms are not intelligent. Everyone with even remote exposure to IT academia knew this already, but the whole world just goes nuts over these false claims and misunderstandings.
About the regulation issue: I think we need it, of course we do, but sadly we have the worst possible generation of politicians to write it. I don't trust these greedy, biased, bought-and-sold men and women to regulate something this dangerous without causing more harm than good.
The world is not america. The european union and its representatives aren't bought the same way US politicians are. They are a bunch of boomers that work extremely slow, don't get me wrong, but they're not corrupt by default.
What we need is an international committee specifically for AI, preferably one that is a mix of state representatives and AI safety experts, and the US can have exactly 1 seat at the table, just like everyone else.
Agreed!
The other issue is they're all too damn old and uninformed to comprehend the issues. They don't know how the internet works, and their understanding of AI and technology comes from 80s and 90s movies. I was generally happy with the Obama presidency and yet he sold out to ISP companies because internet freedom just isn't a topic most politicians understand or care about.
When companies are asking to be regulated, they are basically saying "now that we are the leader, make regulations to make it harder for our competition."
You should talk to an emissary of the exploited Kenyan workers used to make these AIs function, or about how generative AI will self-destruct when fed its own data, or about the stupid amount of water used to cool these machines.
Another potential danger is for these technologies (GPT-4 etc.) to be used in interviews and hiring. Many managers will simply believe what the AI tells them, no matter what the bias may be. Also, it is possible that the tech industry is MORE susceptible to this sort of hiring bias. I really appreciate you guys' discussion. It was very interesting, and I am sure it helped to enlighten a lot of people who were clueless.
I'm in several online writers' groups, and I've seen numerous people posting that their employer decided to replace them with AI. It's one of the main reasons the Writers Guild is striking. There already are short-term consequences.
Which is so dumb because AI is terrible at writing and grammar. It’s an okay spell checker, an okay grammar checker, and an okay summarizer, but that’s it. Unless English isn’t your first language or you really struggle with grammar, AI is more likely to _hurt_ your writing and _prevent_ you from improving. You simply cannot replace writers with AI.
Love an episode of a podcast where one of the talking points is how a technology should be regulated and the sponsor is an unregulated dietary "supplement"
Re: AI generating creative content- I think of the song Tears in Heaven by Eric Clapton. I won't say it's one of my favorite songs, but it is the song that has the most emotional impact on me, knowing the context of it being written about the death of his son who fell out of a 53-story window. If Eric Clapton never existed and wasn't around to write that song, but an AI made that EXACT song down to the waveform, it wouldn't have the impact that his original version does. Art is more than brush strokes, notes, or text. There's a heart behind it that AI can't reproduce.
While listening to this podcast, I opened Chat GPT and fed it the Sherlock Holmes story "The Hound of the Baskervilles" piece by piece with an order to shorten the passages. I repeated this process until Chat GPT had given me its most succinct summary of the story, which it couldn't shorten any further. I then asked Chat GPT to rewrite this nugget of Baskervilles as a limerick. That was my limit of requests per hour, so I switched over to Google Translate, and I translated the limerick from English to Spanish to Latin to Azerbaijani to Chinese and back to English.
I don't know why I did any of this. Here is the finished poem.
"They fear in Baskerville;
Holmes and Mortimer start fighting again.
They found Henry dead;
Dogs and marriages are gone.
Now Holmes suspects his accomplice, you see;
I happily waded into the Badlands.
After Stapleton died, he was buried in the grass.
He should be happy when the case is resolved;
Holmes and his friends fear nothing.
They want to relax.
You can find it in a theater;
Joyful moments bring you closer together."
Tell it to turn your results into a Haiku.
Now that SAG-AFTRA is on strike, seems those will be groundbreaking negotiations to set standards. Streaming services being able “to own” a person’s image to create content forever and ever, without any usage compensation to that “ working actor/extra”, is another case of corporate greed.
22:33 this bit about outliers is so true. My new car has a lane sensing ability that is supposed to help direct me into the center of the lane, but the first time I used it on the highway, it started to steer me into the lane beside me that was full of vehicles going 100km/h because the highway was damaged enough that it couldn't tell where the proper lane was. It felt like I was driving in high winds trying to keep myself righted, until I shut that system off. They really truly cannot predict outliers.
Adam, the un-flawed Geraldo Rivera, hits a home run with that intro.
Now that Geraldo is full fash, Adam should go full stash
Adam, I'm deaf and would love to see this captioned. I've personally used open source AI to create transcripts (MacWhisper) please look into using this and/or other tools for captions. thanks. love ur stuff!
Accessibility matters!
If I hit CC on the screen, closed captioning does pop right up (I'm hearing, but I enjoy reading while I watch TV/YouTube etc. and keep the volume low due to my noise sensitivity).
Maybe there is a delay until it becomes available? I hope you try again. This is a great episode.
@@erintraicene7422 Try watching the entire thing with automated captions and the sound off. There is a reason people call them automated CRAPtions! :) They're full of mistakes and don't include other information, like who is saying what, how they're saying it, and other auditory info. Not to mention that proper closed captions are wayyy less fatiguing to read, since they show full lines of text instead of revealing one word after another.
@@nadamuchu very great points. I wasn’t aware of how all that works.
Again showing why artificial intelligence isn’t so intelligent.
Thanks for sharing this insight.
I hope Adam will consider having CC done then.
It’s SO important that everyone can read/hear his viewpoints.
@@erintraicene7422 💛
Awesome show! Thank you for breathing some sanity into these AI discussions. And yes, copyright laws should be strictly enforced. These huge companies DO NOT and SHOULD NOT have the right to scrape digital content from the web without the permission of the owners!!
Don't release your work publicly if you don't want it included in the zeitgeist of the current era. If it's not a 1 for 1 copy of your work, being sold by an unauthorized person, shut up. IP=Imaginary Property
A commercial entity does not have the right to take your hard work and turn it into 1s and 0s, store it in their databases, and use it for their profit without your permission. Just because something's on the internet, doesn't mean it's free to steal. It doesn't matter whether it's a one for one copy.
I'm gonna make an AI that makes trivially modified versions of Mickey Mouse. That should get it banned pretty damn quick.
I like the optimistic thoughts at the end, but that optimism is the same optimism we heard about television, PCs, networking, the internet, social media, etc. All of this tech is being developed in an economic system in which the rational choice for the people in charge is to allocate all of the benefit of the tech advancements to themselves, through firing workers or algorithmically diminishing them (as you discuss with the WGA strategy of having AI write first drafts). If we don't change the way we distribute the benefits of this technology, it doesn't matter how good it gets, it won't actually make our lives any better.
I'm curious if the increased "self driving" accidents are in part because the humans are assuming the car doesn't need intervention so they're not paying attention either
Maybe I'm missing / misunderstanding something, but I thought "drivers don't pay sufficient attention while driving a car with 'self-driving' features enabled" was both a known issue and a given?
Yes, just look at the aircraft autopilot situation: we have had CAT III auto-land (fully blind, automatic landing onto the runway) since 1969 (BAC Trident III and, later, the L-1011). It still requires two pilots monitoring in the cockpit.
One of the problems is that an AI that can actually learn via the Internet becomes a mirror that shows us the parts of humanity many of us don't want to acknowledge. We have not evolved enough as a species to then give birth to actual digital intelligence that can operate safely.
That is why they have been paying foreign workers to clean the training data. Some poor person somewhere is constantly looking at the worst of humanity to prevent AIs from duplicating it. Terrible.
It’s also not learning. It’s just outputting data that has been input into its dataset. Please do not personify gAI it does not actually possess intelligence or consciousness.
All of these people that think self driving cars work must live down in eternal-summer land. Where I live, we get snow for about 8 months of the year. The memes that joke about playing a game called "where's the road" are not lying. I have no faith that a self driving car could handle a Canadian winter without getting stuck in a ditch every 5 minutes.
The problem of "the executives don't know what the job is" is actually something I feel safe in saying is the generalized issue with these things. Even for something that seems like chatGPT's main strength: Programming. I'm a professional programmer and I can tell you that it has the same problem. You can ask GPT to write code following a list of things that it should do and it is impressive that it can do it. But the problem is that's not programming hahaha. And a lot of people with surface-level knowledge can see at the python code generated by it and say WOW IT CAN PROGRAM. But in reality, even when setting aside the fact that chatGPT will often generate code that is quite wrong (even though it looks correct), the real issue is that writing lines of code is a very small part of a job.
I am not lying when I say that there are weeks where I write only 10 lines of code in total, because my job is much more about finding out why something is not working as intended and then analyzing how to fix the issue while causing as little disruption as possible. Even when the goal is to write a completely new program, the challenge is first making sure you understand exactly what has to be coded. This is ultimately a job about understanding problems, and therefore understanding people. When ChatGPT is useful, it's useful in that it lets me save some time on the most repetitive part of the job, the one part that even ChatGPT can do.
Programming is a job where language models already have the maximum amount of data possible, and even in that situation, they cannot do it. To me it's absurd to think they could replace a writer or an artist just by getting more data and more processing power.
Hey dude, I'm a 17-year-old with a bright future ahead, and I'm jumping into programming for video games. Coming from you, is that a safe bet?
@@brokenbreaks8029 sorry for the late reply.
In my opinion (not an AI expert), all jobs that involve typing stuff with a keyboard are equally safe or unsafe. I think you'll find jobs, but it will be harder than right now.
If you are good at it and you love it, there will always be jobs for you. But you need to be able to adapt.
I'm more afraid of not being able to buy a car in the future that doesn't have an iPad hookup or smart tech. Especially with how hot the world is getting, it is very dangerous to have a car entirely dependent on screen tech.
I like the climbing-mountain metaphor; here is my take. AGI is getting to the real Mount Olympus where the Greek gods live. We have now climbed a small but hard-to-climb mountain, and "we" are screaming to the four corners that we are developing climbing techniques that will surely help us reach the real Olympus very soon now; maybe it is even the next mountain we climb... Except we don't know if it exists, and if it does, where it actually is, whether it is in this plane of existence or elsewhere, or whether it is actually a mountain. And whether or not it is technically a mountain, it is the home of gods, and we have no clue whether their hiding and protection measures are comprehensible to a human mind or surpassable by means available to us, or whether it follows the rules of physics of our universe enough for our climbing methods to even apply. Yet some people are convinced that climbing this tricky overgrown hill is going to help for sure, that it's simply part of the path, just because the name we gave it also contains "Mount." Seriously, it is not even hubris; it would need to be far less insane to qualify for that.
There's somewhat of a corollary with manufacturing where, years ago (well before Tesla), some auto companies attempted to create fully autonomous factories. They quickly discovered, no matter what level of precision in their engineering, there was a certain level of nuance that machines were just incapable of reproducing relative to what a human could achieve. Musk tried the same at Tesla, not fully understanding the lessons previously learned, and also failed. Companies building autonomous vehicles are learning this exact same lesson and spending, as Gary points out, $100B+ learning that lesson yet again.
I think companies digging into so-called AI are also going to have to relearn these same lessons. LLMs are fascinating, but they can't do what humans do. I think AGI will eventually happen, but my own guess is that it's a century in the future. We'll eventually have autonomous vehicles and factories too; those are probably at the very least a decade or two (maybe three) in the future.
Suffice to say, what tech companies are attempting to compete with is a few hundred million years of evolution, and that's going to take a while to surpass. I also think there's a level of bravado and over-confidence that actually inhibits their capacity to achieve such tasks.
Automating factories wasn't possible when technology wasn't advanced enough. It happens with every technology: it takes some development to achieve certain objectives. Language and drawing couldn't be automated before, and everyone thought it was impossible or that we'd need 100 years.
Adam notoriously interrupts guests; this guest was great, he ploughs right through it 🤣
I just watched this guy on HBO Max. Way ahead of his time for 2015. I'm hoping there are more nerd-fact series like that.
Gary Marcus was an awesome guest! ❤
AI is Automated IP theft combined with end-of-the-world hype not seen since Y2K. What do I know? I've only been writing software since 1977.
The AI stuff, however, is another growing addition to the automation of our society, and our society is not in a place where this can be sustained. AI in various forms will cut, and already has cut, MORE jobs: self-checkouts, order screens at restaurants, the fully automated fast-food locations that chains like Wendy's are trying, automated service over the phone and internet, etc., etc. It's going to get to the point where automation and AI cut so many jobs out of our lives that we as humans have to decide whether the greedy rich continue to be the only ones who have money and survive, or whether we make a society where people have living wages and are cared for while less work is needed. But the way it looks, the rich are going to continue to hoard everything, continue to cut labor forces with advanced tech, and expect the government, which they don't pay any taxes into, to keep people barely above water with food and shelter while they continue their competition to become the biggest billionaires and TRILLIONAIRES off all the money they save on labor. We are not ready for the next steps, because we are not ready to eat the rich.
19:55 Dude, you literally asked it to write a script for a show that already exists; what did you expect? You didn't even try to prompt the model to come up with a never-before-seen show. There are a few prompting techniques you can look up to help with just that.
38:40 Are you two even serious? Go on, play chess blindfolded. Now do it while never having experienced a single piece of non-textual data in your life. Time control: 1 second. Good luck.
Who even cares about multimodality anyway.
AI is a field of study in computer science. GPT is a form of AI that uses deep learning to produce a model that can output a "best guess" based on input.
It isn't AI in the sense of Asimov, I, Robot-level sentience/sapience.
That is a different form of AI that may well be impossible.
The danger lies (IMO) in trusting deep learning AI with critical tasks that require a distinction between fact and fiction, a distinction GPT can't make, among its other limitations.
It can guess the statistically most likely next word in a sentence, or next pixel in an image. It can't guess facts.
It doesn't know you exist. It doesn't know it exists. It isn't self aware, let alone capable of critical thinking or decision making. It can't self motivate. It won't decide one day to sit down and write a poem all on its own. It's a computer program, like any other. It starts working when you turn it on and stops when you turn it off. It won't remember past conversations and doesn't know it ever ran in the past. It's a very sophisticated parrot, capable of convincing chatter, but lacking even the most basic awareness even a bird possesses.
As it stands, is it a threat to us? Only if we make it one.
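To make "guess the statistically most likely next word" concrete, here is a toy Python sketch; the probability table is invented by hand, whereas a real LLM derives its vastly larger equivalent from training:

import random

# hand-written stand-in for learned next-word statistics
next_word_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
}

def generate(word, steps=3):
    out = [word]
    for _ in range(steps):
        probs = next_word_probs.get(out[-1])
        if not probs:
            break  # no statistics for this word; stop
        words, weights = zip(*probs.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat down": fluent, yet no facts were consulted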
Gary was a great guest. Glad to have him as the one taking all those DC meetings.
Honestly, the most interesting part of the "rules for chess" thing you were talking about is that you can ask the AI whether the move it made is against the rules: it's able to go back, look at what it did, and tell you whether it was wrong or not.
Based on the idea that it's a "complete the next bit of text" kind of thing, maybe you just clued it in by asking.
And if it's an LLM, there's a good chance it will be wrong in its assessment. Also keep in mind that the big game-playing AIs like AlphaGo and AlphaZero are not LLMs.
No it can't. That's the whole issue. That whole "are you sure?" back and forth was specifically added once ChatGPT became popular was there to basically agree with the person behind the screen. A "yes man" fallback if you will, to stop it from turning into a brainwashing session from the AI to the person.
The technology behind it cannot do anything like that. There is no reasoning that could go over existing prompts and review them.
Instead, OpenAI is just tricking people with that as well, by making the AI a "yes man" as soon as it hears phrases like "are you sure" and the like. If you play around with it a bit more, you can easily make it go back to its initial wrong state by questioning whether the "fixed" state was right and laying out some supposed arguments against it.
First 4 minutes and it's already refreshing to hear the basics laid out so clearly.
That’s how I feel. These are all the common sense arguments and points that when I mention them people roll their eyes and walk away. Which means they can’t argue with blatant facts and common sense.
Grateful to Adam for using his platform to be the voice of reason.
I just want to say this: if you are having an AI write a draft for something and then fact-checking or parsing or editing it to replace the "bad stuff" or the "wrong stuff" with "good stuff" or "correct stuff", if you are actually doing that YOU ARE DOING THE WRITING. What the AI has written is AN OUTLINE, something your High School English teacher should have handed you before you graduated.
For any fact based piece of writing 90% of the work is researching and in order to fact check the AI you still need to do all that research.
For any creative writing 90% of the work is figuring out which of your ideas are worth keeping and which ones are just stupid, if you are parsing the "good stuff" the ai wrote from the "bad stuff" you are still doing that work. (creative writing involves a lot of other things AI is bad at emulating that you will have to manually inject into the work as well, but I'll just keep it simple)
So regardless, with the help of AI you are still doing 90% of the work; what the AI has given you is an outline with no substance.
Yeah, no, people will use it to generate the whole thing
Nail on the head moment for me was when you said, "they don't have an abstract ability to reason"
My fear is that this AI will be plugged into a quantum computer, which all sorts of tech companies are scrambling to get up and running. There are a couple that concern me. One of these quantum computers runs on refraction mirrors in China.
Michio Kaku (hope I spelled his name correctly) says that combining a quantum computer with this language program would be incredibly dangerous, and frankly I worry about it quite a bit.
Please get Michio Kaku (physics, quantum string theory) to talk on your show.
A good and interesting conversation. I really like how Gary brings nuance to it all and even called out Adam when he was being unfair to AI. There's some really important decisions coming and it needs to happen soon. I also hope that companies are not the ones making the regulations.
A comment and a question.
First, as a programming educator, my personal experience with ChatGPT for programming has all the same trappings as for general language. It makes code that *looks* good, but subtly doesn't work, or changes the goals slightly. So, at 07:15 when you suggest that they are only good at helping programmers code more efficiently, I am worried. I don't want AI anywhere near my coding; more than anything, if the problem is in any way tricky, then it is likely to mislead you rather than help.
And a question: is there a meaningful difference between "AI", "computers", and "any form of digital automation" at this stage? All the ethical discussions are the same, aren't they?
10:01 Gary Marcus is not well informed on the current technology. GPT-4 is very well able to differentiate between "own a Tesla" and "owns Tesla" (the company). He should read up on the transformer architecture which is responsible for providing the necessary context. Seeing him make such basic misjudgments makes you wonder how much you can trust his other assessments.
Source?
@@adamestrada7610 This is something you can check for yourself without having to take anyone else's word for it. I just tried it with ChatGPT Plus. The answer clearly shows that GPT-4 can distinguish the two concepts.
*Q: Does Elon Musk own a Tesla?*
A: Yes, as of my knowledge cutoff in September 2021, Elon Musk, the CEO of Tesla Inc., owns multiple Tesla vehicles. It's common for owners and top executives of car companies to use vehicles produced by their own company. However, for the most current information, please refer to the most recent sources available.
This was very helpful for better understanding some of the serious pitfalls of AI and how the greed of mega-billionaires is driving attempts at regulatory capture. Great work!!
When will we get the next monologue episode? Those are the best ones you have made on the channel.
Just a note about the Pentagon example:
Sure, even 10+ years ago you could have had someone competently photoshop the Pentagon exploding. And just a few years ago probably it would have been possible for someone to create a convincing enough composite video of the same thing.
But those would require having specialized expertise and software.
AI changes things by axing the requirements. All I have to do is tell a generative AI model to create the video, and if the model is good enough, suddenly I have a convincing video of the Pentagon being destroyed or a politician being assassinated or an October Surprise that didn't actually happen. I don't need anything other than the motivation to do it and access to the AI model.
That point about inference is really key. In fact, I would argue that inference and extrapolation are the key cornerstones of true intelligence. Holding data and spitting it back out on command is nothing; the earliest computers could manage that much flawlessly. The ability to look at data, and make theories about unknowns based on that data that can be tested, well... that's what separates humans and machines.
A CEO often doesn't know how the sausage is made. We need scientists, programmers, and testers to be able to do anonymous mini whistleblowing to a regulatory agency, so problems can be looked into before a disaster or monitored during development.
Long time fan; mucho respect for all your chutzpah and hard work, Adam! Merely would hope to offer a heads up about what I fear might be a dangerous trap which may have blindsided you. Sports betting is perhaps the most pernicious media threat yet to life, liberty and pursuit of happiness…even to hope, health and general wellbeing of us all. Please fight, Adam, to keep your integrity and dignity by saying “NO!” to “NFL Draft Kings” and the myriad evils of sports betting and gambling of all varieties?!? Whatever they are paying you can never be worth trashing trust and integrity. Thanks again for all the good and vital work you do and hearing my heartfelt concerns and hopes for your continued success!
I love your show Adam. You are so much cooler now that you are doing your own thing.
When will we reach AGI? Answer: if AGI is possible, and we can achieve it with the current models we are using, it would likely arrive sometime in the next 10 years. Longer than that, and it would probably be multiple decades away. As with the initial ML boom, if the field fails to reach its promised advancements, funding and research time are likely to start drying up. If that happens, the timeline will likely lengthen substantially. So 0-10 years, or 20+, are the most likely guesses.
That's predicated on a few big IFs: if AGI is possible with current models, and if AGI is possible with current hardware. AGI might take entirely new approaches to ML to actually develop. If that's the case, the timeline is anyone's guess. We can't predict anything where we don't have a baseline, and new approaches would lack a baseline to make any meaningful predictions.
We say ML models have neurons, but they are nothing like human neurons. So we don't know if current computer hardware is capable of the complexity needed to reach AGI. It might take a whole new type of hardware to even have a chance of reaching AGI status, but again, that's an unknown factor. If we need a new type of hardware, who knows; there are so many variables that any prediction we make is at best a complete guess.
So what should we be worried about with AI? Well, there are two main issues: the safety of current models, and the potential for self-improving models. Both could lead to unpredictable outcomes, and neither is ideal. Self-improving models could possibly lead to an AGI, but they could also lead to models behaving in dangerous ways we can't predict. There are other concerns as well, like extracting the training data from an LLM. Given that some LLMs use user input to train the model, that leaves a vulnerability: people might put sensitive information into an LLM, the model would then be retrained on that data, and in the future a bad actor could extract that information. A new type of data breach, and something to be concerned about. So the waters are dangerous, and I agree we need a governmental agency dedicated to AI; possibly a new cyber division of the military, the intelligence apparatus, and/or a regulatory body devoted to AI. The problem is we don't have 5 years to make it happen; we needed it yesterday.
I think it's important to imagine what a deliberately slow roll out of technology would mean for the world.
Personally, I think every technology ever could have been developed slower and more carefully and the world would be a better place.
what's very amusing to me is what I call the “Altman Parrot Defense”, where the argument is that the reason AGI is around the corner is because LLMs have mastered pattern recognition and generation of verisimilitude via language, and that's all sapient beings (i.e. they themselves) do. which is very funny and concerning: funny because you've just outed yourself as a philosophical zombie with no inner life, and concerning because... you're either so unaware of your inner life and mind, or you literally are a statistical model that just mimics human interaction.
On people playing with ChatGPT and thinking we're on the verge of having Data from Star Trek: I'd say it's closer to (but not nearly as good as) the AI of the holodeck, where they gave voice prompts for things and the holodeck kept getting it a bit wrong, needing revision, and sometimes causing a crisis that threatened the entire ship.
"Oh no! I asked the holodeck to make me a worthy opponent for Data when I really meant a worthy opponent for Sherlock Holmes! Oopsie! Now the ship is threatened by a rogue holodeck character!"
Thanks for amplifying the signal.
'Artificial' is the opposite of 'real'. Any fish stealing artificial baits will starve. Eh-Aye seems capable of stealing a changed future.
19:46 this is basically the "Mary the color scientist" philosophical thought experiment done in reality. It's generally true that all humans do is mix and connect information, in some very, very remote sense similarly to a generative model, but there's one huge difference: stuff made by other people is not the only source of input. There's also our lives, experiences, walks in a park, and shitty days at work. And also our internal life, which a model is also devoid of. So yeah... that argument is absolutely stupid.
Adam does good interviews that go like conversations between people that have known each other for a long time. I enjoyed this video
I don't usually comment, but I think we're looking at AI through the wrong lens. Of course it doesn't understand our context. Dogs are not perceivably intelligent like humans because their context comes through the senses of smell, hearing, body language, etc.
ChatGPT is a statistical model not a reasoning engine, but that doesn't mean it can't reason. Sounds paradoxical but hear me out.
The same reason our phones "hear" us and advertisements have become eerily accurate is the same reason ChatGPT works: it is a STATISTICAL engine, which means it looks at the bigger picture and predicts what comes next.
As someone who grew up with ADHD but managed to become a functional member of society, this hits home, because traditionally I've survived on my quick contextual analysis of any situation, doing what makes sense and only using my memory as a backup because of my lack of attention. I've learned over time that many normal people use their long-term memory much more often to assess any situation, and that context can become intuition.
In the same way, ChatGPT uses memory to simulate reasoning, not because it can analyze, but because it has figured out a "formula" for simulated analysis using curve fitting (see the sketch after this comment). Just as we can fit a specific mathematical formula to certain phenomena in physics, I think some of these "models" of reason, though fuzzy and more abstract, can be used to guide an answer. The problem with understanding something as complex as these neural networks is that it takes a behavioral psychologist, a machine learning expert, and also a creative person very familiar with the "hallucination" concept to truly capture AI holistically. As an industrial/automotive designer and IT person who comes from a family of psychologists and philosophers, I feel we're currently approaching it from an odd angle.
I think we could benefit by doing 3 things:
Reframing AI: Shift the perspective from AI as a reasoning engine to a statistical engine, highlighting its strengths in pattern recognition and prediction.
Exploring AI's Potential: Investigate the potential of AI in areas where quick contextual analysis is required, leveraging its statistical nature.
Interdisciplinary Approach: Encourage collaboration between different fields of expertise to understand and leverage AI's potential, while Gary Marcus' take is great, I think having a scientist, a creative, and a machine learning engineer would benefit in exploring the outskirts of what AI can do.
I invite opposing thought, I'd love to get other people's take on this.
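To illustrate the "formula via curve fitting" idea from the comment above, here is a minimal sketch using NumPy's polynomial fitting. The data points are invented for illustration; the point is that a fitted formula can predict plausibly without any grasp of why the pattern holds:

```python
import numpy as np

# Made-up observations of some phenomenon (illustrative data only).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 9.2, 19.1, 33.0])  # roughly follows 2x^2 + 1

# Fit a degree-2 polynomial: find coefficients that minimize squared error.
coeffs = np.polyfit(x, y, deg=2)
model = np.poly1d(coeffs)

# The fitted "formula" now predicts values it was never shown...
print(model(5.0))    # plausible extrapolation near 2*25 + 1 = 51
# ...but it has no idea *why* the phenomenon behaves this way.
print(model(100.0))  # confidently extrapolates far outside the data, too
```

The fit captures the pattern, not the cause, which is roughly the distinction the comment draws between simulated analysis and actual analysis.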
I personally think modern "AI" companies should be put on notice, and a round of artists should be lining up to sue over AI using their copyrighted works without license or permission.
17:45 on the topic of copyright regulations. Jaron Lanier pointed all of this out in 2013 in his book "Who Owns the Future" in which he said that all the creativity of the world was going to be used as free training data for LLMs and made similar prescient observations. Of course, he was summarily pooh-poohed by the Big Tech industry leaders, and the Mass Media completely ignored him.
@TheAdamConover is there a video link for when Gary Marcus testified before Congress?
I worked with an early AI application that used neural net simulation and fuzzy logic for reading handwriting. The AI gave the software the ability to guess the intended meaning of ambiguous squiggles.
This ability to guess came with a side effect: the ability to make mistakes. The current algorithms have added the ability to fabricate lies. (A toy illustration of the guess/mistake trade-off follows.)
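A toy illustration of that trade-off, with made-up recognizer scores for an ambiguous squiggle (this is not the actual application's code, just the shape of the idea):

```python
# Illustrative only: invented scores a handwriting recognizer might assign
# to an ambiguous squiggle that could be a "4" or a "9".
scores = {"4": 0.48, "9": 0.46, "7": 0.06}

best_guess = max(scores, key=scores.get)
confidence = scores[best_guess]

# The guess is useful precisely because it commits beyond the evidence,
# which is also exactly how it gets to be wrong.
print(f"read as {best_guess!r} with confidence {confidence:.0%}")
```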
What does the term "paperclip" mean in this context?
"The paperclip maximizer is a thought experiment described by Swedish philosopher Nick Bostrom in 2003. It illustrates the existential risk that an artificial general intelligence may pose to human beings when it is programmed to pursue even seemingly harmless goals, and the necessity of incorporating machine ethics into artificial intelligence design." from Wikipedia
The thing about regulations when it comes to AI is that they can't be enforced. It will only be a few years before it is IMPOSSIBLE to detect whether text has been written by AI. Literally impossible. Within 5-10 years, the same will probably be true for photos. Within maybe 15-20 years, the same will probably be true for video.
At this point, there will be NO way to verify whether AI has been used, no way to tell what kind of data any AIs have been trained on, and no way to stop people from building their own AIs and training them themselves (this can already be done, I've done it). As nice as those regulations seem, it's simply impossible to implement in any way that can be enforced.
Beyond that, data analysis is not copyright infringement, nor should it be. What AIs are producing may seem similar to other content, but it's not that other content. It absolutely is new content. Much like humans, it has simply become familiar with particular patterns when it generates its new content.
You claim humans don't do that, but that is 100% what humans do, yourself included. When you write, your writing style is 100% developed from the things you have experienced in life: the things you have read, the things you have seen, the things you have heard, etc. AI is no different. It just has different experience, because we as humans have limited the type of data it has been trained on.
Remove those limits, and the new content it generates will suddenly become a whole lot closer to what humans do, simply at a lower level of intelligence, since neural networks do not yet have enough neurons and connections to fully replicate the human brain's capabilities, although we'll get there very soon. Design a body for the AI to experience things on a more personal level, and you enhance that even further; although to be fair, AI is able to learn from far more people's experiences than any of us ever will, so those personal experiences may not actually improve much, if anything.
I'm not sure we can exactly say that what AI produces is "new content" when the whole issue of model collapse exists once you feed an AI purely AI-generated data. And idk, but I feel like the way you described the brain is kind of understating what the brain actually does (of course the brain is super complicated and no one fully understands it yet, but if it were as simple as you state, don't you think we would've figured out more in the realm of neuroscience and psychology?). And while copyright infringement is one huge issue that's very deep to get into with AI, it's also true that some companies are starting to close off their data and charge AI companies for it (or seem to be). Clearly, there is a lot of inherent value in our human-created content for AI creation right now, and maybe that's why people stand where they do on the copyright infringement issue: because it's clearly valuable, and they want the monetary benefits that come with that.
As for the regulation of text and media, though I sometimes doubt how it may work, C2PA is an example of a specification that does not necessarily rely on detecting when media has been altered, but rather on embedding software that leaves behind (maybe we can call them) cryptographic fingerprints on your media every time it is changed, which I'm assuming they also mean to do for AI, though I'm not sure because I haven't read all of their specification closely (it is kind of long and technically hard to understand for a beginner like me). But anyway, if we had a kind of watermark left behind when AI altered or generated something, that would likely be more useful than trying to have systems figure out whether something was made by AI in the first place. Of course, this comes with its own host of issues, but currently it's the only solution I've read about. That, and what's called PhotoGuard, by some MIT students or researchers, but I never really read into it.
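This is not the actual C2PA specification, but a minimal sketch of the underlying provenance idea the comment describes: hash the media, sign the hash, and chain each edit's signature to the previous one so tampering anywhere breaks everything after it. The key, the record fields, and the tool names are all invented for illustration; real systems use asymmetric keys and a standardized manifest format:

```python
import hashlib
import hmac

SIGNING_KEY = b"tool-specific-secret"  # stand-in; real systems use asymmetric keys

def sign_edit(media_bytes, prev_signature, tool_name):
    """Append one entry to a provenance chain: hash the media plus the
    previous signature, so any tampering breaks every later entry."""
    digest = hashlib.sha256(media_bytes + prev_signature).hexdigest()
    signature = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"tool": tool_name, "digest": digest, "signature": signature}

# A tiny provenance chain: capture, then an AI edit.
chain = []
photo = b"...raw image bytes..."
chain.append(sign_edit(photo, b"", "camera-app"))
edited = b"...image bytes after generative fill..."
chain.append(sign_edit(edited, chain[-1]["signature"].encode(), "gen-ai-editor"))

for entry in chain:
    print(entry["tool"], entry["signature"][:16])
```

The appeal of this approach is exactly what the comment notes: verification checks the attached history rather than trying to guess from the pixels or words whether AI was involved.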
@@cellotron4758 That says nothing about whether AI content is new. It just means the data you train an AI on needs to be good data, which should be obvious. Current AIs aren't able to generate content that is 100% undetectable by humans, so it obviously wouldn't be good data to train a model on; it just doesn't make sense to train a model on data it creates. It's like what happens when you copy a video tape or film: each copy degrades the quality.
AI isn't copying anything, but it is learning, and it needs to learn on good data. If the data you feed it is limited to what the model is able to produce, then the next model it creates will be able to produce a smaller range of data. That's just a fact. AI models will never be able to 100% reproduce every concept they see in their training set, because again, they're not copying anything. They learn the most important relationships between different concepts, so when you feed a model data that came from an AI, you are feeding it fewer concepts than the previous model had to train on. Fewer concepts in the training set means even fewer concepts in the output.
@@cellotron4758 That would only work for big companies. Anybody can program a neural network on their own computer, and over time it will become easier and easier to do this for larger models as well.
You should probably relax and realize that you're probably very much out of your own range of expertise. The brain is not just "enough neurons to be intelligent"; it's the emergence of billions of neurons, multiplied by interconnectivity, multiplied even further by different ion communication channels, to greatly simplify it. Even then we wouldn't be close to having a "whole human brain", because a brain is nothing without its body. Neurology is kind of in the way of AGI being even close to feasible in the near future, my friend. I have the feeling you're obsessing a bit too much over this and should realize that arguing in paragraphs in YouTube comment sections won't do any good.
Talking to actual neurologists on the matter could help.
Thank you, Gary and Adam.
I think the "Its just doing what humans do" is a bit of a non-starter anyway. It's... Not a human. It doesn't produce things to create a life for itself, it produces things because it's told to. Given the scale of what it produces is so vast.
If an artist trains for 20 years to perfectly imitate the style of another famous artist, that's not really a risk; there may even be new creative merit in a human experience developed around that imitation. An AI that churns out massive volumes of art in that style is a threat to the market, to the original artist's livelihood, and to the broader artistic space. Given it's not even, you know, enjoying itself or trying to get enough food to eat, why should we tolerate it?
The problem with AI is less that it will replace humans; it's that it will replace enough of a human that corporations will happily take the loss of function to save a buck. But they'll all do it, so when you arrive at the intellectual restaurant you've loved all your life, they'll serve you a grey gruel that costs more than the T-bone you used to eat. And you'll eat it, because that's what everyone serves, and you have no choice.
Something like an LLM is not AGI and likely will not become AGI.
However, something to consider when dismissing the "marketing term" of AGI.
We can make narrow AI as good as or better than humans at particular tasks.
LLMs "comprehend" language well enough to determine context and semantics, break language down into discrete parts, and determine the tasks needed to reach a goal.
You get AGI when the LLM is used as an interface and coordinator for countless other expert narrow AI systems as tools.
Then you ask GPT to write a report, do research, build an application, play chess, etc., and it can easily break down the steps of what it would need to do. With tool use, it can then farm out the tasks to the AI best suited to accomplish each one, and farm out further tasks to evaluate the results of each step and course-correct if the results are unsatisfactory.
When all tasks are complete, you will get a far better result, be it a factual answer to the question, a well written report, or a fully functional application.
Now imagine, like having urls for websites on specific topics, we could have registered narrow AI endpoints that are available to be used for applications or as tools for other AI systems.
That will get you something far more like the Star Trek computer. (A toy sketch of that dispatch idea follows below.)
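A toy sketch of that coordinator-plus-tools idea. The tool registry, the planner, and all names here are invented, and in a real system the plan() step would itself be generated by the LLM rather than hard-coded:

```python
# Hypothetical registry of narrow AI "endpoints"; names are invented.
TOOLS = {
    "chess": lambda task: f"[chess engine] best move for: {task}",
    "research": lambda task: f"[search tool] sources found for: {task}",
    "code": lambda task: f"[code model] draft program for: {task}",
}

def plan(goal):
    """Stand-in for the LLM's job: break a goal into (tool, subtask) steps.
    A real system would generate this plan with the language model itself."""
    if "app" in goal:
        return [("research", "similar apps"), ("code", goal)]
    return [("research", goal)]

def run(goal):
    """Coordinator loop: dispatch each subtask to its registered tool."""
    results = []
    for tool_name, subtask in plan(goal):
        tool = TOOLS.get(tool_name)
        if tool is None:
            results.append(f"no tool registered for {tool_name!r}")
            continue
        results.append(tool(subtask))
    return results

for line in run("build a todo app"):
    print(line)
```

The "registered narrow AI endpoints" idea in the comment above is essentially this TOOLS dictionary scaled up to a public, addressable registry.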
I think that a group of experts brainstorming, all on the show at the same time, would be great.
The counterpoint to the "stochastic parrot" idea is that while it is a model that learns to predict the next words in a sequence, a machine that understands how reality works will be better at predicting the next words than one that doesn't. It's a way to bootstrap the question of "how do you train a machine to think about the universe". I don't think we're quite there yet, but portraying it as just a fancy autocomplete lacks a bit of nuance.
The issue there is that these tech companies don't care about training AI to think about the universe or anything else that doesn't directly benefit them financially. They simply want AI to do a good job at replacing human workers because that's profitable, philosophy isn't.
"A machine that understands how reality works" is purely theoretical fantasy. What exists in reality is fancy autocomplete.
@@mattg6106 oh, I totally agree with that - however there are also plenty of academic researchers and open source people who are actually interested in figuring out how to architect cool stuff. Then the companies can run with that bc they have the money for training.
@@AmyDentata Interpretability is a huge issue right now. Ultimately, we don't know what's going on in the linear algebra soup that makes up LLMs. We're feeding data into mathematical structures built to mimic neurological features until we get good results. Understanding/cognition is almost certainly a spectrum, and this is a method that's attempting to climb up that ladder. We don't have a way of knowing what's really going on in there right now, and I don't think it's anywhere near a human level of understanding/cognition, but we do need to start thinking about these things (and preferably move most advanced AI research into the government instead of having it done by corporations).
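For readers who haven't seen the "linear algebra soup" up close, here is one layer of it in miniature. The sizes and weights are arbitrary stand-ins; a real LLM stacks billions of such learned numbers, which is exactly why interpretability is hard:

```python
import numpy as np

rng = np.random.default_rng(1)

# One layer of the soup: a matrix of learned numbers, a bias vector,
# and a nonlinearity. Stacking many of these is the whole model.
W = rng.normal(size=(4, 3))  # arbitrary sizes for illustration
b = rng.normal(size=4)

def layer(x):
    return np.maximum(0.0, W @ x + b)  # ReLU: each "neuron" fires or it doesn't

x = np.array([0.2, -1.0, 0.5])
print(layer(x))
# The interpretability problem in miniature: nothing about these output
# numbers says what, if anything, they "mean".
```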
@@aaronhandleman7277 Unfortunately, most of the tech companies out there are going to view that as hamstringing their potential profits.
On your tour, are you going to make it to Canada? It would be nice getting some of your insights and humour up here.
P.s. It’s destructive to both normalize this as a new “style” as well as possibly eradicating a deeply human act based on the essence of one’s heart and mind. Human artists are irreplaceable.
Humans are always going to create art, but eventually it will be inferior to the work created by machines, or machines will get close enough to the level of human artists that they'll replace us by being cheaper, faster, more efficient... It's not that serious.
@@djzacmaniac Many artists have art as their only source of income, their only way of putting food on the table. Taking that away from them is objectively *evil* and only benefits the few rich people running the companies. Also, "inferior to the work created by machines"? Machines can't make art. If you say they can, you don't get the goddamn POINT of art. If you genuinely believe that art is just pretty images, you're less human than AI.
He had me with the "7 dilithium crystals" Star Trek comment 🤣🤣🤣
This might be slightly off topic, but as they were discussing the limitations of AI text output and how it sometimes makes stuff up, it reminded me of the results seen from AI art output. In particular the problem AI art has had in the past with generating too many fingers, teeth and limbs on human characters. But also its ability to generate fictional things that don’t exist in reality and might also be impractical to reality. It’s great at making up a fantastic image, but sometimes it goes too far. But for some reason, people did not expect this same kind of phenomenon to occur with AI text, when really, AI text is just like AI art - it’s making stuff up that is not necessarily real. It’s drawing a picture… with text. Like AI art, AI text has a wild imagination. With some AI art programs you can tell it to be more wildly imaginative or to stick closer to your prompt and be more literal. But the tendency is for it to make stuff up. In art that’s a fun thing. But for some reason, people did not expect such inaccurate representations of reality to come from an AI text generator and were surprised when it did.
Generative AI isn't making things up with its 'wild imagination.' It's blending together tons of scraped artwork and photography to compose a new image. There is no imagination, just an algorithm and a pre-established dataset. AI screws up things like hands, teeth, eyes, etc. because it can't imagine or think for itself. It's no more 'imaginative' than a copy machine adding streaks of ink across the paper it prints.
@@mattg6106 Thanks for that painfully literal reaction to my use of the word “imagination.” Adjustment sliders that tell the AI to stick or stray from its interpretation of a prompt are not literally using imagination. But one could say that such an adjustment is figuratively what the slider intends to have happen, given how literal the image is to the prompt when asked to stick to it, and how wildly it interprets the prompt when asked or allowed to stray.
Honestly, I'm not sure the hallucinations are really a bug and not a feature. "Ability to generate fictional things that don't exist in reality" sounds like an element of creativity. It also counters the notion that "AI is only copying existing material".
What the current systems are missing is some way of checking the validity of the hallucinations in a second step, somewhat like Kahneman's system 1 and system 2 (fast and slow thinking). Currently, AI has the intuition part, but the methodical reasoning is not there yet. Nevertheless, it is interesting to see some similarities with the human thought process.
@@pentacleman1000 Yes, I'm using the word imagination in a literal sense because that's the only way AI is being projected to us. You seem to be trying very hard to interpret the 'bug' as a 'feature' here. The fact is that generative AI is scraping everything it puts out from human artists with actual imagination and creativity. AI prompters are simply looking for a cheap and easy way to benefit off of the skill of others en masse, and it shows in the outcome 99% of the time.
@@mattg6106 No, I wasn't trying to interpret the "bug" as a "feature." I was in fact describing it as a "bug" that others did not anticipate and were surprised to encounter. And I still see the unpredictable outcomes of AI art or AI text as a "bug." But having clearly shown yourself to have a bias of hostility toward AI art as theft from human artists (a popular view), I can understand your hostile reaction to my casual use of the word "imagination" when describing AI going wild in ways humans would not predict, or... imagine.
Automation destroying jobs is the biggest danger, not just writing.
The near term problem with AI is that it's not smart enough.
The long term concern about AI is that it could well get too smart.
The immediate problem with regulating AI research is that there are international entities (nation states) who will not follow regulations if they think they can get an advantage that way.
I've no solutions.
"We can't regulate this field because what if another country doesn't, and wins!" has been the argument against every regulation ever, though. Environmental regulation, data use regulation, workplace safety regulations, financial regulation etc. But it's also a question of what sort of society we want to build for ourselves.
$100,000,000 on self driving cars THAT NO ONE WANTS OR NEEDS - while people are starving and being forced into homelessness....
We assign the meaning to models' outputs. They assign no meaning to ANYTHING. We translate meaningful inputs into numbers, run them through the model (which is just addition and multiplication on an unfathomably large scale), and then WE translate the meaningless numeric outputs into meaningful things (like turning numbers into tokens into bits of language: words, letters, or phrases).
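A minimal sketch of that numbers-in, numbers-out pipeline. The vocabulary, weights, and function names are all invented for illustration; the point is that the meaning lives entirely in our encode and decode steps, not in the arithmetic between them:

```python
import numpy as np

# Toy vocabulary: the mapping between tokens and numbers is OURS, not the model's.
vocab = ["yes", "no", "maybe"]
token_to_id = {t: i for i, t in enumerate(vocab)}

def encode(token):
    """We assign meaning: turn a word into a one-hot vector of numbers."""
    vec = np.zeros(len(vocab))
    vec[token_to_id[token]] = 1.0
    return vec

# The "model": just multiplication and addition on made-up weights.
rng = np.random.default_rng(0)
W = rng.normal(size=(len(vocab), len(vocab)))

def model(vec):
    return W @ vec  # meaningless numbers in, meaningless numbers out

def decode(scores):
    """We assign meaning again: pick the highest-scoring number's token."""
    return vocab[int(np.argmax(scores))]

print(decode(model(encode("yes"))))  # a word only because WE translated it back
```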
Where can I find the full episodes of Adam Ruins Everything?
Finally someone to talk to that’s not totally agreeing everything said.
"It's not the new technology that's scary. It's the way the technology is owned and managed that's scary."
Yes! This is exactly what Luddism is all about!