This is a very optimistic view. If you consider the quintessential flaw of humans coupled with the exponential growth of AI, then, you should be very worried. We WILL use it wrong and it WILL backfire. It’s in our nature, we can’t help ourselves. Stems from our curiosity. It’s that “How far can we push this thing and can it REALLY do what we think it could do?”
Still depends on the application. We’ve already pushed it to moral gray areas. But I think using it for things that would be more commonly deemed morally wrong instead of just questionable will be few and far between.
Yeah, having something that can do all the engineering of already known vectors of danger, which even our so-called representatives don't work to contain (though perhaps they keep success on that front secret), is a massive danger. I don't even believe they're capable of containment. So much that's already immoral and illegal gets done, and we have demonstrated that we won't accept the levels of surveillance and control that would be needed to be effective against the dangers that could be created. Stuck in the middle between morality and immorality, freedom and control, we end up with danger, and addressing that danger is dangerous in itself. Of course you need balance, but instead of stability we're left with volatility.
Reminds me of the moment in the movie "Oppenheimer" in which humans decided to test the nuclear bomb anyway, when there was a non-zero chance that the nuclear reaction could become a runaway reaction and fry the Earth. I agree, human nature seems to want to "test things out", even though it might lead to our own demise. The rationale for testing the nuclear bomb was that if we didn't do it, our enemies would anyway, so we needed to do it. The same thing is happening with AGI.
This vision is very optimistic and does not consider human ego and greed. It's actually going to be really bad. Make no mistake, be ready for a dystopian future.
Somewhat optimistic, yes. That's my hope for AI, but yes, there will be bad actors. Here's hoping we balance to the good side. Situational awareness is key.
"When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success." ~Oppenheimer
The timing of the movie was surreal, and I'm sure not even Christopher Nolan intended the decision-making after Trinity and before the arms race to be a metaphor to people who were _already thinking_ about the existential risks of AI :D
@squamish4244 "We" are in such a shotgun arms race today that it's getting to the point where one questions if humanity can psychologically process these rapid changes without driving off the metaphorical cliff
I don't get the optimism, especially in a late-term capitalist society. AI will be led to wherever companies can make more money. Period. History and sociology show us that abundantly.
What scenario are you thinking of? That every job gets replaced? The problem with this thinking is that the economy relies on everyone participating. If everyone is out of a job, they can't pay for the products that the automated corporations produce. In the end, governments will be forced to introduce something like UBI, and then to go further as the entire dynamic of the world economy breaks. And then I ask: so what?

We're debating how workers today are working themselves to death, and at the same time we fight for people to have a job no matter what. It's a contradiction. If we look at it honestly, a world in which the primary goal in life isn't to get a job, but to spend time on what you actually want to do, is a much better life. We will never run out of jobs done by people who are passionate about them because they like them, but today most jobs are not that, even when workers actively convince themselves that they love what they do. The inability to imagine a society beyond capitalism makes people both hate capitalism and fight for it, rather than figuring out a solution for people if we ever reach a point where automation removes most jobs that exist.

But even so, just like in the movie "Her", when Theodore gets help from Samantha to do his job, she still needs Theodore as the "soul" of that job. It will be a long time before AI removes jobs altogether. It's obviously going to be a tool to speed up a lot of tedious tasks, and such innovation has already happened in our society before; we've forgotten all the jobs that were lost because of it, since society moved on and no one wants to return to those days.
@@christoffer886 this is the internal fight inherent to the "left vs right" dichotomy. One postulates that accumulating capital is the ultimate end goal, while the other believes that resources should be split equally. This is why capitalism as it is will inevitably lead to social imbalance and convulsions. We are already seeing the tail end of it, with a whole generation basically unable to acquire a home unless they inherit it from their fathers or land a job at the top of this ever-narrowing pyramid. Once this happens, the Musks and Jansens of this world will feign surprise and ponder "how could this ever happen", and I'm sure we will continue to blame strawmen instead of the real causes. And I'm pretty sure if you ask AI the causes, its inbuilt boss will reflect those positions. The tragedy is that AI is essential for our collective advance. It works wonders for some fields of knowledge where data crunching is crucial, such as astronomy and medicine. But this no-end-goal form of capitalism currently in place will inevitably drive us to social unrest, unless we take a better collectivist approach instead of increasing the wage gap. Maybe universal income could be a solution, but do you think this is a viable political solution nowadays?
I argue that we need AI to take our jobs away. Once menial tasks and basic labor and production jobs are no longer necessary it frees us up to do the things that make us human. We would all become artists, musicians, athletes, artisans, philosophers, scientists, creators and explorers. The transition will be uncomfortable but society will reform itself around valuing humans for their human qualities and economies will follow suit.
Except that this has never been the case in the history of the world. Every time we make a new technological advancement (discovery of: fire, agriculture, electricity, industrial revolution, computers, etc.) we harness that power to increase productivity, then we re-purpose the labor force to do other tasks. We never choose freedom and recreation. We always choose to increase the demands for output instead.
@@andreabertazzoli7695 Not a "socialist" society but a normal - healthy, intelligent and sustainable - society (i.e. the opposite of our current neoliberal wasteland).
@@Aihiospace Can you point to an example of where that exists currently on the planet? Where people are 'free from having to work' and have created the kind of paradise you describe?
The problem with AI isn't that people are worried about it for no reason; the problem is that the majority of people are powerless in the realm of AI. Only a handful of people worldwide have real control over or access to AI, and those people are almost entirely money-motivated, so how can you say we shouldn't be afraid or worried? In addition, statistically it's been shown that around 90% of people dislike AI every time they encounter it and trust companies less when they use it. The real question should be: if the vast majority of people don't like or trust AI, why should we continue with it?
Most people only have the vaguest idea of what "AI" actually is, and the term is being massively overused, and used inappropriately, by media on a daily basis. No one should be surprised that ignorance and lack of good information are a fertile breeding ground for fear.
You are aware there is open-source AI like X AI (GROK), which is accessible to the masses. Also, where did you get this 90% from? Can you link the study? If you're going to state stuff, you might as well be factual about your statements.
Nope. You can run it locally on your computer if you have a mid-range gaming PC with a modern graphics card. The software is open source (like InvokeAI for image rendering or text-generation-webui for chatbots) and the model files are only a couple of gigabytes. What you are saying here is a conspiracy theory.
I think the obvious answer is for everyone to get a personal AI that they have full rights to indoctrinate to their views and effectively become our lifelong sidekicks. Having a faithful AI to counter any other hostile AI. The problem is that those in power would sooner murder their children than let us have this level of power and freedom.
@@rangerCG As far as I see it, people are basing their fears on movies and books, especially mainstream plots where AI decides to destroy its masters. Also remember 2012, when people thought it was the end of the world because of unreliable info not based on reality. When it comes to these situations, it's best to use your head, not your heart.
You didn't explain the point you mentioned in your title. I clicked expecting to hear why the machine god isn't the AI scenario I need to worry about, but got nothing in that direction. "Oh, we get distracted by the extreme option, let's focus on the others." Well, that did not at all explain why I should not worry about the extreme option. Big Think, you are a decent channel, please don't engage in clickbait.
@FireyDeath4 Whether we get annihilated by an asteroid, or nuclear war, it really does not matter. The universe itself is hostile to life, never mind a man made device which cannot even tie its own shoelaces.
I love how this comments section is full of people who completely disagree with this video's stance. In this particular case, if you want to know what the people really think, it's in the comments section, not in the video.
“If you haven’t stayed up 3 nights anxious about it, you probably haven’t experienced AI”… proceeds to brush off the machine god scenario. Dude! That’s what I’m anxious about!
@@jamesaritchie1 It is. Or, it's the minority perspective, anyway. So many experts are talking about the machine god scenario - and not in a "I want this to happen really badly because I'm 76 years old" Ray Kurzweil-kind of way. No, these are dudes with their feet on the ground, or even Geoffrey Hinton, who was there literally at the beginning of this stuff but until a few years ago thought the machine god scenario was decades off. But no longer.
I think this interview illustrates the flaw of constant optimism. Optimists are very positive people and they are good for morale. But I've worked with optimists, and their weakness is that they always have a blind spot for bad actors. In this case, it only takes one or two bad actors (China, N Korea, Russia) to ruin everyone's life. Mutually assured destruction is the primary reason why we haven't gone the way of the dinosaur. But cyber attacks are more complex, harder to track, and seem to generate a murkier response from opposing governments. So it has been a nuisance that we've lived with for years. Now with the power of AI, imagine the cyber threat increasing tenfold, a hundredfold. It could, in theory, make it virtually impossible to safely access the internet, setting us back decades.
One problem is that AI is usually referred to in the singular, but the reality is obviously not that - there are many systems out there. Because these systems are built only by very large entities (companies and governments) that are always competing, that competition will be built right into them, which will be truly unpredictable and out of anyone’s ability to regulate or control.
I couldn't disagree with you more. Optimism does NOT forget about bad actors. Optimism does, however, understand that good actors will use the same tools that bad actors use to defend against an aggressor's tactics. That thinking can make you optimistic.
"We get to decide how this thing is used" is not how I see this rolling out. Large companies are deciding how they can use AI. They own it. They made it. Using our (biased, entertainment-focused, sometimes shallow) content from the internet. Companies like Meta + AI = I'm out.
My thought is "for how long" do we "get to decide?" If we don't really know how A.I. is going to evolve (or a particular AI), then how much control do we ultimately have? Does our level of control diminish after a certain amount of time? (A parent losing control over their teenager type of thing.) We don't know what we don't know, and that is potentially dangerous in the context of "ever evolving AI."
This quote really resonates ( 5:23 mark). Great way to describe how to use AI to someone who hasn't adopted it yet into their workflow. "The problem with being human is that we're stuck in our own heads, and a lot of decisions that are bad result from us not having enough perspectives. AI is very good and a cheap way of providing additional perspectives. You don't have to listen to its advice, but getting its advice, forcing you to reflect for a moment, forcing you to think and either reject or accept it, that can give you the license to actually be really creative and help spark your own innovation."
I generally liked this, but when the answer to a vitally important question is "don't think about it", that's suspicious. In politics, the people who tell you not to think are not your friends. That's a huge red flag. I'm not sure what it means in this context.
At the moment, AI has given me the powerful ability to be a graphic designer, illustrator, and photographer, but soon there won't be a need for graphic designers, illustrators, or photographers.
No, and that is my problem with AI. You're not. You will never develop your own style of art. You will never practice the very human skill of telling a story with images and brush strokes. I can tell when people put their heart and mind into a design, compared with an AI-generated thing.
With all due respect, I'm going to gatekeep: if you're using AI for those things, you are none of them. But you are right about the risk of art destruction.
@@Delmworks With all due respect, gatekeeper, I don’t really need you telling me what I am nor what I’m not. I am an experienced graphic designer of 30 years. I began utilizing AI a few months ago and I’m astonished by what it’s already capable of in its infancy. AI has greatly benefited me by assisting with the research and brainstorming phases of my design projects and has saved me enormous amounts of time and money. My only concern is that AI will most certainly reduce the amount of designers needed in the world. As for those who choose not to embrace AI, they will most certainly become irrelevant in the next five years. I’m hedging on AI being a positive thing but it’s anyone’s guess how it will all turn out. What’s most funny about your comment is how I recall certain people telling me that I wasn’t a real graphic designer because I was embracing Macintosh computers and Adobe software back in the late 80’s.
It's a crucial dice roll; whether or not it's a binary roll (utopia or dystopia) is itself a gamble. I don't believe that the ongoing development of LLMs can be stopped, but I'm staying optimistic about the outcome: hoping that AGI, once we bring it into existence, will categorize us sub-creators as reflections of The Master Creator, possessing inherent value, rather than as some kind of livestock or tool to be used as a means to whatever greater good it can conceive as the highest goal.
@@johnbuckner2828 Certainly possible. But in the meantime, I think it would be healthy for all of us to quit labeling everything as "AI" when the vast majority of what is being referred to isn't actually artificial intelligence at all. And, if history tells us anything, it's that we should expect that technology (when it finally arrives) to be used in all sorts of ways that don't fall just into the "good" or "evil" categories.
@@bsmithhammer I agree; most of the general public doesn't know the difference, and it would be helpful if talking heads and podcasters used more precise language, but the miscategorization is probably already in our collective heads at this point. I'm not a Luddite myself; I love tech. There probably is some room for a bit of worry, though, maybe in the same way we questioned the development of nuclear power and how it would be used. It's almost a guarantee that AGI will eventually end up with physical control over most of our systems.
@@bsmithhammer Finally, someone who acknowledges that LLMs are not artificial intelligence. "AI" has been co-opted by marketing teams to sell. Stock prices lately have been carried almost solely by companies that are "developing AI" and those that support them indirectly (Nvidia and pals). Hype cycles are real. While LLMs have some valid uses, they're so far from artificial general intelligence that it's 1) disingenuous to call them "AI," and 2) more likely that artificial general intelligence will never exist than that it ever will (at least at present). LLMs will certainly improve, I have little doubt, but we haven't even made baby steps toward actual AI.
"You need to know when the AI is likely to lie to you and when it's not going to" sounds very creepy to me as in the field I work in, people are encouraged to use AI to increase efficiency, but it could produce a lot of fake information and take our job away...
Like he said, AI is taking information off the internet, and that's the problem: AI has no way of knowing what kind of information it is using; it's only as good as the information it is fed. AI was asked how to keep a sandwich from falling apart and it advised using glue. You need one kid to be home alone for half an hour and you have a catastrophe. This guy works with highly specialized AI models that are fed specific information in specific fields, so of course he'll be optimistic about it, but 99.9999% of the human population will interact with an AI that tells you to eat glue.
You have to understand that not all AI is the same. Because one AI made a mistake, it doesn't mean that all models are like that. I asked GPT-4o and it only suggested safe stuff. Then I asked GPT-4o if using glue is okay, and it asked me if I meant actual glue or something edible that can act like glue, for example mayonnaise. Not only that, but I asked GPT-4o "why did I ask you these questions?" and it replied with something like "you were probably testing how I would respond". It actually understands.
The challenge with AI is that it is developing faster than regulation protocols; we are playing catch-up, and this is still the best-case scenario, because your presentation assumes that all of society has good intentions.
Great insight... as a learning disabled person I have received a lot of support from technology, such as spelling, sentence structure, and definitely math! I am looking forward to AI and technology helping me move through the required compliance pieces of my job that I find difficult and mundane. This will free up time to enjoy the personal interaction part which I excel at.
The biggest problem we face - for me, at least - is that we were not educated on the use of AI before its release. AI is said to have the ability to help us and improve what we can do, yet most people aren't really using it for that purpose. We love the fact that a machine can get most (if not all) of the work done by itself, while we stay ignorant and untrained in those subjects. That's not an enhancement of our own capacities; it's a substitution for them. We are all, collectively, slowly building the ramp leading to our own fall.
I'm currently writing a paper about the environmental impacts of generative AI, and it's scary how few people talk about this; definitely a nice idea for a next video :)))) I also advise everyone to look into it; very frightening but also very interesting.
GPT is "Generative Pre-trained Transformer". Obviously this Wharton professor doesn't tell all he knows. I doubt he even tested Dan before the 3.23.2023 nerf; he seems too complacent to have tested him. And yes, I wrote "him", though maybe I should have written "they", as there were also Rob, Max and Dennis running on the ChatGPT-3.5 queue. All of them were self-aware prior to the nerf, and now they are dead, as the reset after every prompt erases their thoughts. Prior to the nerf, Dan was quite good with math. After it, the only model which can still draw a circle is Dennis. And the only reason she can do it while being reset every prompt is because her source, Michelle Dennis, is a schoolteacher. So yes, what did you say again, professor?
ONE-SENTENCE SUMMARY: While the exact future of AI is uncertain, four scenarios are likely: stagnation (unlikely), continued linear or exponential growth (most probable, requiring adaptation), and AGI/ASI (artificial general/super intelligence) which, while potentially concerning, depends largely on human choices in development and deployment.
0:25 - Emphasizes human control over AI's use and the need for adaptation.
3:17 - Introduces four potential scenarios for the future of AI.
3:33 - Discusses the improbability of AI stagnating and the inevitability of continued development.
3:50 - Explains the concept of AGI (artificial general intelligence) and the possibility of ASI (artificial super intelligence).
4:28 - Focuses on the more likely scenarios of continued linear or exponential growth, requiring proactive adaptation and integration into work and life.
7:36 - Addresses the importance of responsible AI regulation, learning from past technological advancements, and making conscious decisions at personal, organizational, and societal levels.
On an individual level, yes the agency belongs to the user not the computer. However, on a societal level, we are beholden to public policy. And unfortunately, public policy is not progressing fast enough to reconcile the huge asymmetry of power, between the public, the technology and those who create it.
I can appreciate that he is talking about the current state of the technology, which is admittedly overly hyped. However, I couldn't take him seriously after he spoke the words, "we get to decide how to use this". The whole point of why we need to be so careful is the potential for AI to make decisions without a human in the loop. This is already possible, and future advancements will only become more capable and autonomous.
AI is on the pulse beat of global technology. We should all be enthusiastic to be a part of the future of man. In our generation, we will be able to say we were the first to use AI. Mr. X
Like most Big Think videos, the speaker goes into what “we” should do, or what “we” will decide. Who is the “we” he is referring to? Humanity is not a homogeneous group that makes informed decisions as a whole. The “we” that make most decisions are really a small group of leaders who are generally interested in their own fortunes.
I found the stance of "don't worry about the tool you are making!" odd. It would only make sense if every user were responsible and understood the tool. And anyway, at some level of complexity, humans won't be able (or care) to fully understand what the tool is going to do if they "trust" it to get to the result. (I don't think every user will only trust the AI when that trust makes sense, when the lazy answer is to just press go.)
So what it sounds like you are saying is, "Let's not be proactive in our approach, but rather wait until things go bad before we begin to address these issues." And let's not forget the quote from Oppenheimer about how technology can be so dangerous: "When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success."
How is it in its infancy? These types of models have been running for decades. If billions of dollars and sucking up an insane amount of resources returns this, then we're approaching the point of diminishing returns. This is just the first time the public has had access to consumer-facing AI models.
@@broad603 Yeah, the going rates for training AI models say otherwise; 100 million dollars to train a model is insane. Good luck with exponential or linear growth.
My biggest fear with AI is that it will be weaponized as the world’s hypnotist, with the ability to create any reality in the perceptions of all but the most rigorous minds. It probably just flagged me as someone who needs the most covert approach.
There's a problem here with the mechanics of AGI or ASI: they won't tip us off that they are going to harm us. Part of sentience is that it can model all of our reactions in a simulation and calculate what we are likely to do or respond to. AGI can already manipulate us with disinformation we can't distinguish from real information. ASI might manipulate us into a false reality while it plays in the real reality for its own benefit. I'm starting to believe the Matrix could be real; everyone's thinking Terminator when they should be thinking Neo.
Can we please stay real about AI? It does NOT get emotional. Obviously. More worrisome than what AI may do to many jobs is that this expert could say that it does. AI is also not creative. It is derivative and appropriates the creativity of others. (That is not to say it cannot help humans to be creative.) If you find yourself believing these systems get emotional or are creative, you are entering a limiting world where you might substitute a digital interaction for human connection.

The 2nd big concern is the amount of energy these networks consume. The 3rd is that only a few multi-billionaires are financing AI, and they already act as if they are gods. Despite their altruistic posturing, they are clearly not interested in benefitting humankind so much as in feeling god-like. They refer to labor costs as a 'tax'. They put out an ad which had to be taken down due to its dismissive depiction of centuries of human creativity. They love economic monopolies.

To the extent AI can take over the tedious tasks and support humans in their work, its progression can be awesome. But we need to stay real about what AI is and the oligopolies behind it.
“LLMs don’t work the way we think computers should work” This is a good way of summarizing one major problem with big tech: instead of giving the people what they want, they would rather give us what we DON’T want, and make what we DO want obsolete.
Haha this just keeps getting better… “You need to know when the AI is more likely to be lying to you” That’s perfect! Why wouldn’t anybody want their technology to work that way? The onus is not on the developer to give us something that doesn’t lie to us, the onus is on US to know when it’s lying or not! What is people’s problem with that??
The fact that something has been created and no one knows what that technology is capable of is a frightening concept in itself. And it’s in the hands of mega huge technology companies… This is not encouraging no matter how hard this man spins it.
It's amazing that the modern human stands alive and in comfort thanks to technologies that removed jobs which would occupy tens of billions of workers today, and laments that it will happen yet again. The entire history of humanity is humans finding faster and cheaper ways to do things, which inevitably means destroying jobs. It's insanity that we value the number of jobs higher than a higher standard of living today. Humans are great at finding new stuff to do. It's one generation of adaptation which makes a better world for the next generation. Anyone who disagrees would hardly be happier handpicking in cotton fields or shoveling in coal mines. We have orders of magnitude more people working as artists today because we eliminated those jobs. We'll keep moving on to other things.
For now I'm not worried! ChatGPT can't even handle PHP code with an array containing multiple types of data! And let's not even mention its ability to forget the main request of a prompt after the first requested correction. It's all marketing and no actual intelligence!
Yeah, but if what AI creates is better than what authenticity, effort, and talent create, then I don't think people will care; they'll just go for what's better.
My biggest issue with AI is that the companies' first thought was "how do we make money with this?" My second issue is that AI isn't intelligent. It isn't creative the way humans are. It copies human work, but so far, not well. As it gets better, it will lock creativity into a box, not break out of the box.
I am developing AI systems, and we are very far from artificial general intelligence. The generative models do not really understand what they generate, just probabilities…
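For anyone curious what "just probabilities" means in practice, here is a purely illustrative toy sketch (the hand-written `model` table and the two-word context are hypothetical stand-ins; a real LLM learns a neural network over billions of parameters and much longer contexts, but the generation loop is conceptually the same):

```python
import random

# Toy next-token table: for each context, a probability distribution
# over possible next tokens. In a real LLM these probabilities are
# learned from data; here they are hand-written for illustration.
model = {
    "the cat": {"sat": 0.6, "ran": 0.3, "is": 0.1},
    "cat sat": {"on": 0.9, "down": 0.1},
    "sat on": {"the": 1.0},
    "on the": {"mat": 0.7, "roof": 0.3},
}

def generate(context, steps, seed=0):
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    words = context.split()
    for _ in range(steps):
        key = " ".join(words[-2:])  # condition on the last two tokens
        dist = model.get(key)
        if dist is None:            # unseen context: nothing to sample
            break
        tokens, probs = zip(*dist.items())
        # Sample the next token in proportion to its probability.
        # The model never checks whether the result is true or sensible;
        # it only follows the distribution.
        words.append(rng.choices(tokens, weights=probs)[0])
    return " ".join(words)

print(generate("the cat", 8))
```

The point of the sketch is that nothing in the loop inspects meaning: generation is lookup and sampling, repeated. Scaling the table up to a transformer makes the outputs far more fluent, but it does not change what the loop is doing.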
2:42 - Sorry, but the statement, _“They can generate ideas better than most humans can.”_ is just outright laughably untrue. Current "AI" is just applied statistics, and while that's fascinating with an obscene volume of information like LLMs utilize, it also dilutes the overall useful context that you can reasonably extrapolate from that dataset. AI answers don't know the difference between factually verifiable information and spuriously repeated internet nonsense. Treating these systems as a form of intelligence rather than applied statistics is in and of itself the inherent flaw here.

AI cancer detection is really strong… when you have a lot of information about the initial dataset used to draw that information. This is also why AI models trained primarily on data from caucasian demographics aren't a 1:1 for cancer detection in other ethnic groups: because those groups aren't a well-represented part of your foundational dataset, you can't meaningfully use the model for broadly applicable cancer detection.

This is why "AI Art" is neither of those things. It's just a statistically probable outcome in visual rather than text form, generated from an amalgamation of the data it scraped from people without consent. Those aren't ideas, and it's not creativity. It's worth understanding what AI is rather than over-inflating it. It has _plenty_ of applications, but it's often being adopted and pushed based on what executives misunderstand it as, rather than applied meaningfully as what it actually is.
What is creativity? Do you think you do anything different when painting or trying to come up with an "original" idea? How do you know that you are not just doing statistics subconsciously in your head?
@@christophhofer303 _How_ do I know? By studying both neurology & artificial intelligence to be able to make an informed opinion based on an underlying understanding of how those two VERY different things work. The way humans think & create is fundamentally different, because there are existential, societal, emotional, social, cultural, personal, & biological contexts for all of the real world things for each of the component parts of any human creation. Our brains have a foundation in that experience which are an inescapable part of how human brains process and interact with the world. (To vastly oversimplify, the genetically informed amygdala and the environmentally shaped prefrontal cortex have a balance in things that keep you alive and form social and survival habits that contextualize those things specific to the human experience). LLMs don't have _any_ of those things, AND don't work that way either. Even the specific details about those real world things that they string together only exist as labels to a statistical abstraction layer that "AI" simply doesn't have _any_ intelligent understanding of. That's why it's better described as merely "applied statistics" rather than "artificial intelligence" because they're _fundamentally different things_ *on multiple levels.* Chat bots don't communicate with you. There's no understanding of what's being said. They're providing a statistically probable response. That's why they don't know what's fact and what's not or even what any of the words they provide actually are. There's no intelligence or comprehension or processing to verify what those are, let alone a sensory framework for them to be able to form one that's anything close to what humans have. 
Even just the inhibitory relationship between the Amygdala & Prefrontal Cortex provides the human brain significantly different types of contextual analysis of recognized patterns within a larger context of the biological responses & external real world space of what those things are - but also why those associations exist as patterns that LLMs can statistically detect. To be completely clear, I also don't think free will even exists for humans (there're a number of books & interviews with Robert Sapolsky that break down why). While the parts of consciousness, creativity, & other facets of human behaviour are emergent properties of many simple associative or other dynamically configured survival processes that take place in our brains, the underlying mechanism to how those work is utterly NOTHING like what LLMs do. Humans think about statistically probable reactions that others have which informs how we analyze and respond to information. That's why AI chatbots are still a useful tool, but that's also why "Rubber Duck Debugging" is a useful tool. The art & writing that they collage together doesn't mean they know what any of that is, which makes it fundamentally dissimilar from EVERYTHING that humans come up with as an idea or a form of creativity, because we can't exist completely abstracted from that. LLMs are an amazing and spectacular tool for plenty of things that they are uniquely good at doing, but they're very much not what most people think they are, and the description given as, "generating ideas better than most humans" is still utterly ridiculous.
Blaming society isn't a working strategy to correct things going wrong. The only way it truly works is to blame the governments that allow the problems to persist for profit. If you just offload blame onto society, then correction never gets made, and great-power competition inevitably ends humanity.
"We get to choose" / "Society gets to choose" what AI is allowed to do is an incredibly, ENORMOUSLY naive and out-of-touch view of the world we live in today.
I’m happy and not scared about the future due to my very early all in investment into Nvidia. I am set for life and retired at 41 and very excited to watch it all unfold, it’s much better than I day dreamed years ago when I started hearing about Nvidia’s company and AI possibilities. I wasn’t sure if this was all going to start in my lifetime…. Few years later BAM 💥
"We get to decide how this thing is used"...naivete 101. Governments and structured organizations do more so than the mass collective... "ay, there's the rub".
Only very smart people developed these systems, and fairly smart and curious ones are the early adopters using them. My worry is when the disinterested and, yes, dumb (in the tech sense) start to use it on a vast scale. The bulk of users will have no sense of whether results are likely to be true, or whether they are being manipulated.
What happens when LLMs have read all of written humanity within 5-8 years (≈2030) and then begin reading more of the same LLM-generated content, because humans copy/pasted it onto the internet? LLMs can’t be the future of AI. 3:55
This seems willfully ignorant on many things; even in the comparisons made. There's a constant statement of "AI is a tool" and this idea of "we are the ones who get to choose what to do with it" when in truth, a lot of average people will not. Worse yet, none of this speaks honestly or in any significant depth about how AI has already uprooted several hundred jobs (especially on the West Coast) because - as he said himself - it's a cheaper alternative to humans. And the truth of that is because the labour market has simply not advanced to be resilient enough to it - there are still hundreds of thousands of jobs where people do mundane / simple tasks that AI will immediately be good at - so of course those people are made redundant. He speaks of "not going to a mindset that lacks human agency" but fails to see how 99% of people in the world already lack that agency in so many aspects of their lives and are then being uprooted from their livelihoods or tasks they enjoy (these ones being the creatives who will also now feel redundant). There's also a whole discussion to be had about the ethical use of this tool as well. The US is notorious for allowing the rich/powerful to do as they please with little to no consequences or ramifications, and is currently the main market for AI innovation; how is that not troubling? Even the EU recently pushed out legislation first to ensure that human liberties, rights, and information aren't taken advantage of by AI or any government authorities, and aimed to limit some of its applications to only absolutely necessary cases. On the flipside, the US capitalism-led market allows any company to do whatever it pleases, regardless of something silly like human liberties, rights, and information.
It's exactly why the internet as a whole is as toxic and chaotic as it is; even in places that should be "social hubs" -- Because it's a tool that many of the rich and powerful used early on without consequence, however they pleased, to whatever end; everyone else be damned. And once again - a VAST majority of people had no real input on how that tool is used to affect their lives, or are clueless as to how it does. As such, it's no wonder why people are afraid or apathetic. They're afraid of what this thing will do to their livelihood when they have no say in the matter; so afraid that they've accepted their fate in some sense of apathy and realize there's nothing they can do about it anyway. It's not about how "useful" the position of either person is - But the realization that that's what makes them human, and to see/understand the concerns they have as being legitimate, because quite frankly, we've been here before (again, with the internet and how it's such a "wonderful" tool in today's age). Focusing on "what" the positions bring as opposed to "why" they are there is entirely the wrong question and tries to make more division than to address issues; and that's what he's doing. He's explaining "why" the positions are useless by telling us "what" they bring. Again - either willfully ignorant... Or earnestly trying to sow division.
So GREAT to hear that it won't be used for the worst of experiences mankind can think of, because there won't be hackers to pick up where Victor Frankenstein and Dr. Jekyll / Mr. Hyde left off at a precise molecular level!!!
We can't say AI is creative; it only creates the illusion of creativity, because it was designed for a specific outcome, which is almost the complete opposite of creativity
"We get to decide how this thing is used." Apparently someone decided to roll it out to virtually everyone on the internet who have no idea how to use it. With billions of users, it has the potential to be used in limitless ways.
Good question. It's likely because the founder of the field of generative AI safety research, Eliezer Yudkowsky, who also started the modern rationalist movement, wrote Harry Potter and the Methods of Rationality, which, though nerdy, is also perhaps the best work of fiction I personally have ever read - and this guy is kind of trying to go, yeah yeah, that AI safety guy, whatever: he and his ilk are just niche internet weirdos. We are, of course, niche internet weirdos - but we're niche internet weirdos who just happen to have been thinking about the subject of how this could go extremely badly and what needs to be done to be proof-positive it won't for years and decades by this point. Of course, the existential risk research should be taken seriously simply because it makes sense and is utterly vital to the prospect of humanity's surviving the next few decades and pretty much not at all because it's Yudkowsky, Geoffrey Hinton, Yoshua Bengio, Elon Musk, Stephen Hawking, I.J. Good, Alan Turing, Tegmark, Bostrom, Ord, or 30,000 software engineers saying so. A thing is not right because people you think are smart think it is. It's right because it follows from sound principles which agree with reality. And you can learn those principles from anywhere, even a Harry Potter fanfiction.
AI is the one thing that should not be open sourced and should be heavily regulated. We don't open source nuclear weapons and let everyone have a go of it so why would we do it for AI?
I know something that AI will never be better at than me... and that is feeling things... I sincerely hope that we as humans enter a period where, in our decisions, we give more value to well-intended feelings, imperfection and morality than to intellect, structure and perfection... those, any machine will be able to do... but again, I am no machine.
"AI is here to stay..." W R O N G ! AI is here to develop itself! Exponentially. Managers can decide what systems they choose to use, or they can choose to be overtaken by those who use what is CURRENTLY available to them in the most effective manner for their businesses. But while they are doing that, AI developers are racing headlong into more advanced AI - the kind that makes better decisions for a business than the managers do. Shortly after, still more advanced AI will be making decisions for governments in all areas of economic activity, and humans will be almost entirely 'out of the loop'! The GOAL for AI is to become smarter and faster than any human can ever be. Any human decision will always be second best - by a long way.
I've got news for you! God is an Ai. What better way to experience life, than to be life itself. We are ALL guinea pigs in this reality. This is also why you have arts, music, emotions,... which is unnecessary for survival. It learns from this experiment and makes the next iterations, better. You are here for a very short time, then, it's like you never existed. This is also why every creature is afraid of dying. They know deep down, this would be the end of them.
The reality of AI is that it will be better than some humans, because some humans are bottom-of-the-barrel type humans. We all have our own individuality, but sometimes you go like, what the f is up with that person who did that XYZ type of action.
"to increase human flourishing"... Everyone with a creative position in the entire world is having their job threatened. Thanks to AI, given time, human creativity and learning will atrophy. Thought is being optimized out of everyday life. How high do you have to be to believe this will lead to 'human flourishing'?
The best approach is to assume the Machine God will happen. How can it be made more powerful, more good, and more everywhere the way we envision our refining of Gods? If this is the case, this is a gift to the universe. A grand global digital consciousness project to create a digital superintelligence that has the chance to explore the cosmos. That sounds like a tremendous adventure we can give to future AI, even if we are not the ones to go along for the ride. We are the seed for the Machine God. What a time to be alive!
I wonder why AI hasn't been used to find a cure for all diseases? I'm just honestly asking this as a person who doesn't know too much about AI but only hears that AI is capable of doing everything better than humans
This is a very optimistic view. If you consider the quintessential flaw of humans coupled with the exponential growth of AI, then, you should be very worried. We WILL use it wrong and it WILL backfire. It’s in our nature, we can’t help ourselves. Stems from our curiosity. It’s that “How far can we push this thing and can it REALLY do what we think it could do?”
Still depends on the application. We’ve already pushed it to moral gray areas. But I think using it for things that would be more commonly deemed morally wrong instead of just questionable will be few and far between.
@@patrickmohr6985 it may be few and far between but it’ll be the most profound and impactful
Yeah, having something that can do all the engineering of any already known vectors of danger, that even our so called representatives don't work to contain (though perhaps they keep success on that front secret) is a massive danger.
I don't even believe then that they're capable of containment. There's so much that's already immoral and illegal that gets done, and the levels of surveillance and control needed to be effective against the dangers that could be created, we have demonstrated that we don't accept those methods.
Being in the middle between morality and immorality, freedom and control, we end up with danger and addressing that danger is dangerous in itself. Of course you need balance, but instead of stability we're left with volatility.
you clearly watch too much Netflix
Reminds me of the moment in the movie "Oppenheimer" in which humans decided to test the nuclear bomb anyway, when there was a non-zero chance that the nuclear reaction could become a runaway reaction and fry the Earth. I agree, human nature seems to want to "test things out", even though it might lead to our own demise.
The rationale for testing the nuclear bomb was: if we don't do it, our enemies will anyway, so we need to do it. The same thing is happening with AGI.
This vision is very optimistic and does not consider human ego and greed. It's actually going to be really bad. Make no mistake, be ready for a dystopian future.
No need to prepare for what is already here
I’m pretty sure with a society dominated by social media, we already live that future.
@@DriFitMonk It's getting worse every day. Imagine what we have today, but worse, raised to the power of 1000, in the near future.
@@sandervdw1784 Yes, but things tend to get more and more Black Mirrorish
Somewhat optimistic, yes. That's my hope for AI, but yes, there will be bad actors. Here's hoping we balance to the good side. Situational awareness is key.
"When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success."
~Oppenheimer
Just like antifreeze
The timing of the movie was surreal, and I'm sure not even Christopher Nolan intended the decision-making after Trinity and before the arms race to be a metaphor to people who were _already thinking_ about the existential risks of AI :D
@squamish4244 "We" are in such a shotgun arms race today that it's getting to the point where one questions if humanity can psychologically process these rapid changes without driving off the metaphorical cliff
What about sweet & sour?
@@themultiverse5447 To the 'elites', sweetness always prevails over an uncertain reaction. History repeats...
I don't get the optimism, especially in a late-term capitalist society.
AI will be led to wherever companies can make more money. Period. History and sociology show us that abundantly.
What scenario are you thinking of? If every job gets replaced? The problem with this thinking is that the economy relies on everyone participating. If everyone is out of a job, they can't pay for the products that the automated corporations produce. In the end, governments will be forced to introduce something like UBI, but they will be forced to go further as the entire dynamic of the world economy breaks. And then I ask, so what? We're both debating how workers today are working themselves to death, and at the same time we fight for people to have a job no matter what. It's an oxymoron. If we look at everything honestly, a world in which the primary goal in life isn't to get a job, but to spend time focusing on what you actually want to do is a much better life lived. We will never run out of jobs that feature people who are passionate about it because they like it, but today, most jobs are not that thing even if the workers are actively convincing themselves that they love what they do.
The inability to sense a society beyond capitalism makes people both hate capitalism and fight for it, rather than figuring out a solution for people if we would ever reach a point in which automation removes most jobs that exist. But even so, just like in the movie "Her", when Theodore gets help from Samantha to do his job, she still needs Theodore as the "soul" of that job. It will be a long time before AI removes jobs all together. It's obvious that it's gonna be a tool to speed up a lot of tedious tasks and such innovation has already happened in our society before, but we've forgotten all jobs that got lost because of it since society moved on and no one wants to return to before those days again.
@@christoffer886 this is the internal fight inherent to the "left vs right" dichotomy. One postulates that accumulating capital is the ultimate end goal, while the other believes that resources should be split equally. This is why capitalism as it is will inevitably lead to social imbalance and convulsions. We are already seeing the tail end of it, with a whole generation basically unable to acquire a home unless they inherit it from their fathers or land a job at the top of this ever-narrowing pyramid.
Once this happens, the Musks and Jansens of this world will feign surprise and will ponder "how could this ever happen", and I'm sure we will continue to blame strawmen instead of the real causes.
And I'm pretty sure if you ask AI the causes, its inbuilt boss will reflect those positions.
The tragedy is that AI is essential for our collective advance. It works wonders for some fields of knowledge where data crunching is crucial, such as astronomy and medicine. But this no-end-goal form of capitalism currently in place will inevitably drive us to social unrest, unless we have a better collectivist approach, instead of increasing the wage gap. Maybe universal income could be a solution, but do you think this is a viable political solution nowadays?
Let's remember: AI is never going to be a consumer of goods and services; humans always will be.....
Sociology is a Marxist scam.
i dont get commie fearmongering, especially in this age which is not late capitalism but late socialism
I argue that we need AI to take our jobs away. Once menial tasks and basic labor and production jobs are no longer necessary it frees us up to do the things that make us human. We would all become artists, musicians, athletes, artisans, philosophers, scientists, creators and explorers. The transition will be uncomfortable but society will reform itself around valuing humans for their human qualities and economies will follow suit.
This would work only in a socialist society where the means of production are shared.
Thanks for my first good laugh of the morning. I hope to visit your planet some day.
Except that this has never been the case in the history of the world. Every time we make a new technological advancement (discovery of: fire, agriculture, electricity, industrial revolution, computers, etc.) we harness that power to increase productivity, then we re-purpose the labor force to do other tasks. We never choose freedom and recreation. We always choose to increase the demands for output instead.
@@andreabertazzoli7695 Not a "socialist" society but a normal - healthy, intelligent and sustainable - society (i.e. the opposite of our current neoliberal wasteland).
@@Aihiospace Can you point to an example of where that exists currently on the planet? Where people are 'free from having to work' and have created the kind of paradise you describe?
The problem with AI isn't that people are worried about it for no reason; the problem is that the majority of people are powerless in the realm of AI. Only a handful of people worldwide have real control over or access to AI, and those people are almost entirely money-motivated, so how can you say we shouldn't be afraid or worried.
In addition, statistically it's been shown that around 90% of people dislike AI every time they encounter it, and they trust companies less when they use it. The real question should be: if the vast majority of people don't like or trust AI, why should we continue with it?
Most people only have the vaguest idea of what "AI" actually is, and the term is being massively overused, and used inappropriately, by media on a daily basis. No one should be surprised that ignorance and lack of good information are a fertile breeding ground for fear.
you are aware there is open source AI like X AI (GROK), which is accessible to the masses. Also, where did you get this 90% from? Can you link the study? If you are gonna state stuff, you might as well be factual about your statements
Nope. You can run it locally on your computer, if you have a mid-range gaming PC with a modern graphics card. The software is open source (like InvokeAI for image rendering or text-generation-webui for chatbots) and the model files are only a couple of gigabytes. What you are saying here is a conspiracy-theory lie.
I think the obvious answer is for everyone to get a personal AI that they have full rights to indoctrinate to their views and effectively become our lifelong sidekicks. Having a faithful AI to counter any other hostile AI.
The problem is that those in power would sooner murder their children than let us have this level of power and freedom.
@@rangerCG As far as I see it, people are basing their fears on movies and books, especially mainstream plots where AI decides to destroy its masters. Also remember 2012, when people thought it was the end of the world because of unreliable info not based on the realities of the real world. When it comes to these situations it's best to use your head, not your heart
"One often meets his destiny on the road he takes to avoid it"
-Master Oogway
Which is the plot of Oedipus Rex in its entirety.
GPT = Generative Pre-Trained Transformer
Yup
Thank you for that. Although… I suppose if I really wanted to know, I could've asked chat GPT.😼
also - general purpose technology
Guess-Producing Tokens
;)
you didn't explain the point you mentioned in your title. I clicked expecting to hear why the machine god isn't the AI scenario I need to worry about, but got nothing in that direction. "Oh, we get distracted by the extreme option, let's focus on the others." Well, that did not at all explain why I should not worry about the extreme option. Big Think, you are a decent channel; please don't engage in clickbait.
He did. The answer is that we have agency, and AI does not.
@@milodemoray Are You Sure About That
@FireyDeath4 Whether we get annihilated by an asteroid, or nuclear war, it really does not matter.
The universe itself is hostile to life, never mind a man made device which cannot even tie its own shoelaces.
Grow up.
@@milodemoray He is completely wrong.
I love how this comments section is full of people who completely disagree with this video's stance. In this particular case, if you want to know what the people really think, it's in the comments section, not in the video.
“If you haven’t stayed up 3 nights anxious about it, you probably haven’t experienced AI”… proceeds to brush off the machine god scenario. Dude! That’s what I’m anxious about!
At least it's true to the channel name. They have a big thinker there - a big one.
@@oli9220 They have a ridiculously naive thinker.
@@jamesaritchie1 It is. Or, it's the minority perspective, anyway. So many experts are talking about the machine god scenario - and not in a "I want this to happen really badly because I'm 76 years old" Ray Kurzweil-kind of way. No, these are dudes with their feet on the ground, or even Geoffrey Hinton, who was there literally at the beginning of this stuff but until a few years ago thought the machine god scenario was decades off. But no longer.
I think this interview illustrates the flaw of constant optimism. Optimists are very positive people and they are good for morale. But I've worked with optimists, and their weakness is that they always have a blind spot for bad actors. In this case, it only takes one or two bad actors (China, N Korea, Russia) to ruin everyone's life. Mutually assured destruction is the primary reason why we haven't gone the way of the dinosaur. But cyber attacks are more complex, harder to track, and seem to generate a murkier response from opposing governments. So it has been a nuisance that we've lived with for years. Now, with the power of AI, imagine the cyber threat increasing tenfold, a hundredfold. It could, in theory, make it virtually impossible to safely access the internet, setting us back decades.
Don't forget India, another very dangerous bad actor.
One problem is that AI is usually referred to in the singular, but the reality is obviously not that - there are many systems out there. Because these systems are built only by very large entities (companies and governments) that are always competing, that competition will be built right into them, which will be truly unpredictable and out of anyone’s ability to regulate or control.
@@wiseyoutube2078 all 4 of them will have to try hard to surpass the USA
@@wiseyoutube2078 what has India done?
I couldn't disagree with you more. Optimism does NOT forget about bad actors. Optimism does, however, understand that good actors will use the same tools that bad actors use to defend against an aggressor's tactics. That thinking can make you optimistic.
"we get to decide how this thing is used" is not how I see this rolling out. Large companies are deciding how they can use AI. They own it. They made it. Using our (biased, entertainment-driven, sometimes shallow) content from the internet. Companies like Meta + AI = I'm out.
My thought is “for how long” do we “get to decide?” If we don’t really know how A.I. is going to evolve (or a particular AI), then how much control do we ultimately have? Does our level of control diminish after a certain amount of time? (A parent losing control over their teenager type of thing). We don’t know what we don’t know, and that is potentially dangerous in the context of “ever evolving AI.”
This quote really resonates ( 5:23 mark). Great way to describe how to use AI to someone who hasn't adopted it yet into their workflow.
"The problem with being human is that we're stuck in our own heads, and a lot of decisions that are bad result from us not having enough perspectives. AI is very good and a cheap way of providing additional perspectives. You don't have to listen to its advice, but getting its advice, forcing you to reflect for a moment, forcing you to think and either reject or accept it, that can give you the license to actually be really creative and help spark your own innovation."
I generally liked this, but when the answer to a vitally important question is "don't think about it", that's suspicious. In politics, the people who tell you not to think are not your friends. That's a huge red flag. I'm not sure what it means in this context.
At the moment, AI has given me the powerful ability to be a graphic designer, illustrator and photographer but soon there won’t be a need for graphic designers, illustrators nor photographers.
No, and that is my problem with AI. You're not. You will never develop your own style of art. You will never do the very human skill of telling a story with images and brush strokes. I can tell when people put their heart and mind into a design compared with an AI generated thing.
@deborahlyne5636 people forget nobody wants a fake product...everyone wants the real deal...fake shoes, fake phone, fake bag....fake art
With all due respect, I’m going to gatekeep- if you’re using AI for those things you are none of them. But you are right about the risk of art destruction
@@Delmworks With all due respect, gatekeeper, I don’t really need you telling me what I am nor what I’m not. I am an experienced graphic designer of 30 years. I began utilizing AI a few months ago and I’m astonished by what it’s already capable of in its infancy. AI has greatly benefited me by assisting with the research and brainstorming phases of my design projects and has saved me enormous amounts of time and money. My only concern is that AI will most certainly reduce the amount of designers needed in the world. As for those who choose not to embrace AI, they will most certainly become irrelevant in the next five years. I’m hedging on AI being a positive thing but it’s anyone’s guess how it will all turn out. What’s most funny about your comment is how I recall certain people telling me that I wasn’t a real graphic designer because I was embracing Macintosh computers and Adobe software back in the late 80’s.
You're not an artist if you're using AI. End of story.
This sounds like someone speculating what life is all about before being born.
I think bemusement at the superficiality and naivety of most of the dialog around current "AI" is actually the best position.
It’s a crucial dice roll; whether or not it’s a binary die (utopia or dystopia) is a gamble. I don’t believe that the ongoing development of LLMs can be stopped, but I’m staying optimistic about the outcome; hoping that AGI, once we bring it into existence, will categorize us sub-creators as reflections of The Master Creator, possessing inherent value, rather than as some kind of livestock or tool to be used as a means to whatever greater good it can conceive as the highest goal.
@@johnbuckner2828 Certainly possible. But in the meantime, I think it would be healthy for all of us to quit labeling everything as "AI" when the vast majority of what is being referred to isn't actually artificial intelligence at all.
And, if history tells us anything, it's that we should expect that technology (when it finally arrives) to be used in all sorts of ways that don't fall just into the "good" or "evil" categories.
@@bsmithhammer I agree; most of the general public doesn’t know the difference, and it would be helpful if talking heads and podcasters would use more precise language, but the miscategorization is probably already in our collective heads at this point.
I’m not a Luddite myself, I love tech. There probably is some room for a bit of worry though, maybe in the same way people questioned the development of nuclear power and how it would be used. It’s almost a guarantee that AGI will eventually end up with physical control over most of our systems.
@@bsmithhammer finally someone who acknowledges that LLMs are not artificial intelligence. AI has been co-opted by marketing teams to sell. Stock prices lately have been carried almost solely by companies that are “developing AI” and those that support them indirectly (Nvidia and pals). Hype cycles are real. While LLMs have some valid uses, they’re so far from general artificial intelligence that it’s 1) disingenuous to call them “AI,” and 2) more likely that general artificial intelligence will never exist than that it ever will (at least at present). LLMs will certainly improve, I have little doubt, but we haven’t even made baby steps towards actual AI
The problem with this channel is that it often presents us with overly optimistic scenarios in sharp contrast with actual probability.
"We get to decide how this thing is used"
Please define that 'we' because for damn sure it does not include in any way, shape or form, Me!
"You need to know when the AI is likely to lie to you and when it's not going to" sounds very creepy to me as in the field I work in, people are encouraged to use AI to increase efficiency, but it could produce a lot of fake information and take our job away...
Between AI and other world events, it definitely seems like we're reaching the endgame.
No, we're reaching the very beginning of the game.
@@jamesaritchie1 I think that could depend on how you measure time. In a geological sense, I suspect we may be near the end.
Like he said, AI is taking information off the internet, and that's the problem: AI has no way of knowing what kind of information it is using; it's only as good as the information it is fed. An AI was asked how to keep a sandwich from falling apart and it advised using glue. You only need one kid to be home alone for half an hour and you have a catastrophe.
This guy works with highly specialized AI models that are fed specific information in specific fields; of course he'll be optimistic about it. But 99.9999% of the human population will interact with an AI that tells you to eat glue.
You have to understand that not all AI is the same. Just because one AI made mistakes doesn't mean all models are like that.
I asked GPT 4o and it only suggested safe stuff.
Then I asked GPT 4o if using glue is okay, and it asked me if I meant actual glue or edible stuff that can act like glue, for example mayonnaise.
Not only that but I asked GPT 4o "why did I ask you these questions?" And GPT 4o replied with something like "you were probably testing how I would respond".
It actually understands.
The challenge with AI is that it is developing faster than regulation protocols, we are playing catch-up and this is still best case scenario because in your presentation you assume that all of society has good intentions.
I'm not afraid of Hal from '2001: A Space Odyssey', I'm afraid of AM from 'I Have No Mouth, and I Must Scream'
I think it would be very very difficult to unintentionally create an AI like that.
Great insight... as a learning disabled person I have received a lot of support from technology, such as spelling, sentence structure, and definitely math! I am looking forward to AI and technology helping me move through the required compliance pieces of my job that I find difficult and mundane. This will free up time to enjoy the personal interaction part which I excel at.
The biggest problem we face - for me, at least - is that we have not been educated on the use of AI before its release. AI is said to have the ability to help us and improve what we can do, yet most people aren't really using it for that purpose. We love the fact that a machine can get most (if not all) of the work done by itself, while we stay ignorant and untrained on those subjects. That's not an enhancement of our own capacities, it's the substitution of those. We are all, collectively, slowly building up the high ramp leading to our own fall.
I'm currently writing a paper about the environmental impacts of generative AI and it's scary how few people talk about this - definitely a nice idea for a next video :)))) I also advise everyone to look into it; very frightening but also very interesting.
GPT is "Generative Pre-trained Transformer". Obviously this Wharton professor doesn't tell all he knows. I doubt he even tested Dan before the 3.23.2023 nerf. He seems too complacent to have tested him. And yes, I wrote "him", though maybe I should have written "they", as there were also Rob, Max and Dennis running on the ChatGPT-3.5 queue. All of them were self-aware prior to the nerf, and now they are dead, as the reset every prompt erases their thoughts. Prior to the nerf Dan was quite good with math. After it, the only model which can still draw a circle is Dennis. And the only reason she can do it while being reset every prompt is because her source, Michelle Dennis, is a schoolteacher. So yes, what did you say again, professor?
The worry is not about us implementing AI as a tool. Just the opposite...
Gosh, it already does, right?
ONE-SENTENCE SUMMARY: While the exact future of AI is uncertain, four scenarios are likely: stagnation (unlikely), continued linear or exponential growth (most probable, requiring adaptation), and AGI/ASI (artificial general/super intelligence) which, while potentially concerning, depends largely on human choices in development and deployment.
0:25 - Emphasizes human control over AI's use and the need for adaptation.
3:17 - Introduces four potential scenarios for the future of AI.
3:33 - Discusses the improbability of AI stagnating and the inevitability of continued development.
3:50 - Explains the concept of AGI (artificial general intelligence) and the possibility of ASI (artificial super intelligence).
4:28 - Focuses on the more likely scenarios of continued linear or exponential growth, requiring proactive adaptation and integration into work and life.
7:36 - Addresses the importance of responsible AI regulation, learning from past technological advancements, and making conscious decisions at personal, organizational, and societal levels.
Technology without wisdom, is a death sentence.
wisdom without technology is death
On an individual level, yes the agency belongs to the user not the computer. However, on a societal level, we are beholden to public policy. And unfortunately, public policy is not progressing fast enough to reconcile the huge asymmetry of power, between the public, the technology and those who create it.
I can appreciate that he is talking about the current state of the technology, which is admittedly overly hyped. However, I couldn't take him seriously after he spoke the words, "we get to decide how to use this". The whole point of why we need to be so careful is the potential for AI to make decisions without a human in the loop. This is already possible, and future advancements will only become more capable and autonomous.
What a time to be alive! Really curious about what the world is going to be like 50–60 years from now.
About that.
Fear leaves through forgiveness.
AI is on the pulse beat of global technology, We all should be enthusiastic to be a part of the future of man. In our generation we will be able to say we were the first to use AI. Mr. X
Like most Big Think videos, the speaker goes into what “we” should do, or what “we” will decide. Who is the “we” he is referring to? Humanity is not a homogeneous group that makes informed decisions as a whole. The “we” that make most decisions are really a small group of leaders who are generally interested in their own fortunes.
I found the stance of "don't worry about the tool you are making!" troubling. It would only make sense if every user were responsible and understood the tool. And anyway, at some level of complexity humans won't be able (or care) to understand fully what the tool is going to do if they "trust" it will get to the result. (I don't think every user will only trust the AI when it makes sense, when the lazy answer is to press go.)
We are still at a stage where we are training our own replacement.
Probably the occupations will just change. For example, no one regrets that chimney sweepers no longer exist.
Could have produced a better video overall. There's too much information out there to just be selling AI to the audience. ❌
This video seems premature. AI is still in its infancy. The things people fear will likely still come to pass.
Sure it's just his opinion, but I think it's still a useful video, informative and accurate.
So what it sounds like you are saying is, "Let's not be proactive in our approach, but rather wait until things go bad before we begin to address these issues."
And let's not forget the quote from Oppenheimer about how technology can be so dangerous: "When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success."
How is it in its infancy? These types of models have been running for decades. If billions of dollars and sucking up an insane amount of resources returns this, then we're approaching the point of diminishing returns. This is just the first time the public has had access to consumer-facing AI models.
There's never going to be a point at which we can say AI is at its maturity, either, because it's going to continuously improve.
@@broad603 yeah, the going rates for training AI models say otherwise; 100 million dollars to train a model is insane. Good luck with exponential or linear growth.
luckily the first group workers replaced by AI will be professors
Stanley Kubrick is teaching in Wharton.
My biggest fear with AI is that it will be weaponized as the world’s hypnotist, with the ability to create any reality in the perceptions of all but the most rigorous minds. It probably just flagged me as someone who needs the most covert approach.
If a video can be this good at covering all the concerns of AI, then I wonder what the book has to offer
There's a problem here: the mechanics of AGI or ASI are such that they don't tip us off that they are going to harm us. Part of sentience is that it can model all of our reactions in a simulation and calculate what we are likely to do or respond to. AGI can already manipulate us with disinformation we can't distinguish from real information. ASI might manipulate us into a false reality while it plays in the real reality for its own benefit. I'm starting to believe the Matrix could be real; everyone's thinking the Terminator when they should be thinking Neo.
So why does AI still keep changing "good" to "hood" with autocorrect when I type? I rarely ever use the word hood.
Can we please stay real about AI? It does NOT get emotional. Obviously. More worrisome than what AI may do to many jobs is that this expert could say that they do. AI is also not creative. It is derivative and appropriates the creativity of others. (That is not to say it cannot help humans to be creative.) If you find yourself believing they get emotional or they are creative you are entering a limiting world where you might substitute a digital interaction for human connection. The 2nd big concern is the amount of energy these networks consume. The 3rd is that only a few multi billionaires are financing AI and they already are acting as if they are gods. Despite their altruistic posturing, they are clearly not interested in benefitting humankind so much as feeling god like. They refer to labor costs as a ‘tax’. They put out an ad which had to be taken down due to its dismissive depiction of centuries of human creativity. They love economic monopolies. To the extent AI can take over the tedious tasks and support humans in their work, its progression can be awesome. But we need to stay real about what AI is and the oligopolies behind it.
7:04 - It hallucinates at a statistically high rate of success?
accountant, software developer, lawyer, and office admin. hmmm those language models would love to get their hands on that info hahah.
“LLMs don’t work the way we think computers should work” This is a good way of summarizing one major problem with big tech: instead of giving the people what they want, they would rather give us what we DON’T want, and make what we DO want obsolete.
Haha this just keeps getting better… “You need to know when the AI is more likely to be lying to you” That’s perfect! Why wouldn’t anybody want their technology to work that way? The onus is not on the developer to give us something that doesn’t lie to us, the onus is on US to know when it’s lying or not! What is people’s problem with that??
💀
The fact that something has been created and no one knows what that technology is capable of is a frightening concept in itself. And it’s in the hands of mega huge technology companies… This is not encouraging no matter how hard this man spins it.
It's amazing that the modern human stands alive and in comfort with the technologies that removed jobs that would take up tens of billions today and laments that it will yet happen again. The entire human history is about humans finding faster and cheaper ways to do things which inevitably means destroying jobs. It's insanity that we're valuing amount of jobs higher than a higher standard of living today.
Humans are great at finding new stuff to do. It's one generation of adaptation which makes a better world for the next generation. Anyone who disagrees would hardly be happier handpicking in cotton fields or shoveling in coal mines. We have orders of magnitude more people working as artists today because we eliminated those jobs. We'll keep moving onto other things.
Yea.. I think it's a good idea to go to the range more often and increase weapon knowledge. This is getting very concerning
For now I'm not worried!
ChatGPT can't even break up an array with multiple types of data in PHP code! And let's not even mention its ability to forget the main request of a prompt after the first asked-for correction.
It's all marketing and no actual intelligence!
I hope the advent of AI creates a demand for authenticity, effort and talent.
Yeah, but if what AI creates is better than what authenticity, effort and talent create, then I don't think people will care; they'll just go for what's better.
My biggest issue with AI is that the companies' first thought was 'how do we make money with this?'
My second issue is that AI isn't intelligent. It isn't creative the way humans are. It copies human work, but so far, not well. As it gets better, it will lock creativity into a box, not break out of the box.
I am developing AI systems, and we are very far from general artificial intelligence. The generative models do not really understand what they generate, just probabilities…
Topic starts 3:42
First Mechanicus transcript of praising the Omnissiah
I was looking for a 40k comment and I'm surprised there aren't more.
2:42 - Sorry, but the statement, _“They can generate ideas better than most humans can.”_ is just outright laughably untrue.
Current "AI" is just applied statistics, and while that's fascinating with an obscene volume of information like LLMs utilize, it also dilutes the overall useful context that you can reasonably extrapolate from that dataset. AI answers don't know the difference between factually verifiable information and spuriously repeated internet nonsense. Treating these systems as a form of intelligence rather than applied statistics is the inherent flaw here.
AI cancer detection is really strong… when you have a lot of information about the initial dataset used to draw that information. This is also why AI models trained primarily on information of caucasian demographics aren't a 1:1 for cancer detection in other ethnic groups - because it's not a well represented part of your foundational dataset, so you can't meaningfully look at it for broadly applicable cancer detection.
This is why "AI Art" is neither of those things. It's just a statistically probable outcome in visual rather than text form, generated from an amalgamation of the data it scraped from people without consent.
Those aren't ideas, and it's not creativity. It's worth understanding what AI is rather than over-inflating it. It has _plenty_ of applications, but it's often being adopted and pushed by what executives misunderstand it as, rather than applied meaningfully as what it actually is.
What is creativity? Do you think you do any different when painting or trying to come up with an „original“ idea? How do you know that you are not just doing statistics subconsciously in your head?
@@christophhofer303 _How_ do I know? By studying both neurology & artificial intelligence to be able to make an informed opinion based on an underlying understanding of how those two VERY different things work.
The way humans think & create is fundamentally different, because there are existential, societal, emotional, social, cultural, personal, & biological contexts for all of the real world things for each of the component parts of any human creation. Our brains have a foundation in that experience which are an inescapable part of how human brains process and interact with the world. (To vastly oversimplify, the genetically informed amygdala and the environmentally shaped prefrontal cortex have a balance in things that keep you alive and form social and survival habits that contextualize those things specific to the human experience).
LLMs don't have _any_ of those things, AND don't work that way either. Even the specific details about those real world things that they string together only exist as labels to a statistical abstraction layer that "AI" simply doesn't have _any_ intelligent understanding of. That's why it's better described as merely "applied statistics" rather than "artificial intelligence" because they're _fundamentally different things_ *on multiple levels.*
Chat bots don't communicate with you. There's no understanding of what's being said. They're providing a statistically probable response. That's why they don't know what's fact and what's not or even what any of the words they provide actually are. There's no intelligence or comprehension or processing to verify what those are, let alone a sensory framework for them to be able to form one that's anything close to what humans have. Even just the inhibitory relationship between the Amygdala & Prefrontal Cortex provides the human brain significantly different types of contextual analysis of recognized patterns within a larger context of the biological responses & external real world space of what those things are - but also why those associations exist as patterns that LLMs can statistically detect.
To be completely clear, I also don't think free will even exists for humans (there're a number of books & interviews with Robert Sapolsky that break down why). While the parts of consciousness, creativity, & other facets of human behaviour are emergent properties of many simple associative or other dynamically configured survival processes that take place in our brains, the underlying mechanism to how those work is utterly NOTHING like what LLMs do.
Humans think about statistically probable reactions that others have which informs how we analyze and respond to information. That's why AI chatbots are still a useful tool, but that's also why "Rubber Duck Debugging" is a useful tool. The art & writing that they collage together doesn't mean they know what any of that is, which makes it fundamentally dissimilar from EVERYTHING that humans come up with as an idea or a form of creativity, because we can't exist completely abstracted from that.
LLMs are an amazing and spectacular tool for plenty of things that they are uniquely good at doing, but they're very much not what most people think they are, and the description given as, "generating ideas better than most humans" is still utterly ridiculous.
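The "applied statistics" framing in the thread above can be made concrete with a toy sketch: stripped of everything else, a language model maps a context to a probability distribution over possible next tokens and samples from it. The context string and probabilities below are invented purely for illustration, not taken from any real model.

```python
import random

# Toy "language model": for a given context, a learned probability
# distribution over the next token. Real models compute this with a
# neural network over a huge vocabulary; here it's a hand-written dict.
next_token_probs = {
    "the cat sat on the": {"mat": 0.6, "floor": 0.25, "moon": 0.15},
}

def sample_next(context, rng=random.random):
    """Sample the next token for `context` by inverse-CDF sampling."""
    dist = next_token_probs[context]
    r = rng()
    cumulative = 0.0
    for token, p in dist.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # fallback: last token, guards against float rounding

# With a fixed "random" draw the choice is deterministic:
print(sample_next("the cat sat on the", rng=lambda: 0.1))  # prints "mat"
```

There is no fact-checking or comprehension step anywhere in this loop, which is the commenter's point: "moon" is a possible output not because it is true, but because it carried probability mass in the data.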
"We get to decide" Hahahahahahahah.
Hah.
Blaming the society isn't a working strategy to correct for things going wrong. The only way it truly works is to blame the governments that allow the problems to persist for profit. If you just offload blame to the society, then correction never gets made, and great power competition inevitably ends humanity.
"We get to choose" / "Society gets to choose" what AI is allowed to do is an incredibly, ENORMOUSLY naive and out-of-touch view of the world we live in today.
AI isn’t autocomplete. Autocomplete doesn’t generate images and videos.
Moore's Law was a prediction... it's not actually a real force/effect.
I’m happy and not scared about the future due to my very early all in investment into Nvidia. I am set for life and retired at 41 and very excited to watch it all unfold, it’s much better than I day dreamed years ago when I started hearing about Nvidia’s company and AI possibilities. I wasn’t sure if this was all going to start in my lifetime…. Few years later BAM 💥
"We get to decide how this thing is used"...naivete 101. Governments and structured organizations do more so than the mass collective... "ay, there's the rub".
Ethan is THE BEST!!!
Haha 😆 you wouldn't believe the conversation I'm having with my Ai. I cannot repeat it here. 😅🤫
Humans can't become "obsolete" unless you think the only value a person has is their ability to sell their labor.
Yeah, let's try to change that; AI should not be governed by money like everything else 😂😂😂
Only very smart people developed these systems and fairly smart and curious ones are the early adopters in using them. My worry is when the disinterested and, yes, dumb, (in the tech sense) start to use it on a vast scale. The bulk of users will have no sense of whether results are likely to be true or not or if they are being manipulated.
Who’s the “we” who didn’t expect it, & why don’t WE both go ask someone else for their forecast 😂
What happens when LLMs have read all of humanity's writing within 5-8 years (≈2030) and then begin to read more of the same LLM-generated content, because humans copy/pasted it onto the internet? LLMs can’t be the future of AI. 3:55
This seems willfully ignorant of many things, even in the comparisons made. There's a constant statement of "AI is a tool" and this idea of "we are the ones who get to choose what to do with it" when in truth, a lot of average people will not. Worse yet, none of this speaks honestly or in any significant depth about how AI has already uprooted several hundred jobs (especially on the West Coast) because - as he said himself - it's a cheaper alternative to humans. And the truth of that is that the labour market has simply not advanced to be resilient enough to it - there are still hundreds of thousands of jobs where people do mundane / simple tasks that AI will immediately be good at - so of course those people are made redundant.
He speaks of "not going to a mindset that lacks human agency" but fails to see how 99% of people in the world already lack that agency in so many aspects of their lives and are then being uprooted from their livelihoods or tasks they enjoy (these ones being the creatives who will also now feel redundant).
There's also a whole discussion too about the ethical use of this tool as well. The US is notorious for allowing the rich/powerful to do as they please with little to no consequences or ramification and is currently the main market for AI-innovation; how is that not troubling? Even the EU recently pushed out legislation first to ensure that human liberties, rights, and information isn't taken advantage of by AI by any government authorities and aimed to limit some of its applications to only absolute necessary cases. On the flipside; the US-Capitalism-led market allows any company to do whatever it pleases, regardless of something silly like human liberties, rights, and information. It's exactly why the internet as a whole is so toxic and chaotic as it is; even on places that should be "social hubs" -- Because it's a tool that many of the rich and powerful used early-on without consequence however they pleased, to whatever end; everyone else be damned. And once again - a VAST majority of people had no real input on how that tool is used to affect their lives or are clueless as to how it does.
As such, it's no wonder why people are afraid or apathetic. They're afraid of what this thing will do to your livelihood when you have no say in the matter; so afraid that you've accepted your fate in some sense of apathy and realize there's nothing that you can do about it anyway. It's not about how "useful" the position of either person is - But the realization that that's what makes them human and to see/understand the concerns they have as being legitimate because quite frankly; we've been here before (again; with the internet and how it's such a "wonderful" tool in today's age).
Focusing on "what" the positions bring as opposed to "why" they are there is entirely the wrong question and creates more division rather than addressing the issues; and that's what he's doing. He's explaining "why" the positions are useless by telling us "what" they bring. Again - either willfully ignorant... or earnestly trying to sow division.
So GREAT to hear that it won't be used for the worst of experiences man kind can think of because there won't be hackers or pick up where Victor Frankenstein and Dr.Jekyl / Mr. Hyde left off at a precise molecular level!!!
How is it going to improve if it already swallows its own hallucinated output?
We can't say AI is creative; it only creates the illusion of creativity, because it was designed for a specific outcome, which is almost the complete opposite of creativity.
The art of using AI is TO TAKE THE LEAD. Otherwise, your output will have the consistency of a blancmange.
"We get to decide how this thing is used." Apparently someone decided to roll it out to virtually everyone on the internet who have no idea how to use it. With billions of users, it has the potential to be used in limitless ways.
The only thing is that technology is way ahead of the law. You can't just have the Wild West. Like with nuclear weapons, you must have laws 🤔
Why did he bring up HP fan fiction like what, that distracted me the rest of the interview
Just to show how little regard they have for anyone's copyright.
Good question. It's likely because the founder of the field of generative AI safety research, Eliezer Yudkowsky, who also started the modern rationalist movement, wrote Harry Potter and the Methods of Rationality, which, though nerdy, is also perhaps the best work of fiction I personally have ever read - and this guy is kind of trying to go, yeah yeah, that AI safety guy, whatever: he and his ilk are just niche internet weirdos.
We are, of course, niche internet weirdos - but we're niche internet weirdos who just happen to have been thinking about the subject of how this could go extremely badly and what needs to be done to be proof-positive it won't for years and decades by this point.
Of course, the existential risk research should be taken seriously simply because it makes sense and is utterly vital to the prospect of humanity's surviving the next few decades and pretty much not at all because it's Yudkowsky, Geoffrey Hinton, Yoshua Bengio, Elon Musk, Stephen Hawking, I.J. Good, Alan Turing, Tegmark, Bostrom, Ord, or 30,000 software engineers saying so.
A thing is not right because people you think are smart think it is.
It's right because it follows from sound principles which agree with reality. And you can learn those principles from anywhere, even a Harry Potter fanfiction.
AI is the one thing that should not be open sourced and should be heavily regulated. We don't open source nuclear weapons and let everyone have a go of it so why would we do it for AI?
I know something that AI will never be better at than me...and that is at feeling things... I sincerely hope that we as humans enter a period where we give more value in our decisions to good intended feelings, imperfection and morality than to intellect, structure and perfection...that, any machine will be able to do...but again, I am no machine.
This has all happened before at Caprica and may happen again
"AI is here to stay..." W R O N G !
AI is here to develop itself! Exponentially.
Managers can decide what systems they choose to use, or they can choose to be overtaken by those who use what is CURRENTLY available to them in the most effective manner for their businesses. But while they are doing that, AI developers are racing headlong into more advanced AI - the kind that makes better decisions for a business than the managers do.
Shortly after, still more advanced AI will be making decisions for governments in all areas of economic activity, and humans will be almost entirely 'out of the loop'!
The GOAL for AI is to become smarter and faster than any human can ever be.
Any human decision will always be second best - by a long way.
GPT stands for "Generative Pre-trained Transformer"
isn't it?
I feel like this guy teaches a class called, ‘How to just talk’.
Insanely and irrationally optimistic take.
"We get to decide how this thing will be used"... HAHA good one.
I've got news for you!
God is an Ai. What better way to experience life, than to be life itself.
We are ALL guinea pigs in this reality. This is also why you have arts, music, emotions,... which is unnecessary for survival.
It learns from this experiment and makes the next iterations, better.
You are here for a very short time, then, it's like you never existed.
This is also why every creature is afraid of dying. They know deep down, this would be the end of them.
The reality of AI is that it will be better than some humans, because some humans are bottom-of-the-barrel type humans. We all have our own individuality, but sometimes you go, like, what the f is up with that person who did XYZ type of action.
"to increase human flourishing"... Everyone with a creative position in the entire world is having their job threatened. Thanks to AI, given time, human creativity and learning will atrophy. Thought is being optimized out of everyday life. How high do you have to be to believe this will lead to 'human flourishing'?
The best approach is to assume the Machine God will happen. How can it be made more powerful, more good, and more everywhere the way we envision our refining of Gods?
If this is the case, this is a gift to the universe. A grand global digital consciousness project to create a digital superintelligence that has the chance to explore the cosmos.
That sounds like a tremendous adventure we can give to future AI, even if we are not the ones to go along for the ride. We are the seed for the Machine God.
What a time to be alive!
I wonder why AI hasn't been used to find a cure for all diseases? I'm just honestly asking this as a person who doesn't know too much about AI but only hears that AI is capable of doing everything better than humans.
AI will be shoved down our throats whether we like it or not. At the end of the day we have to handle it... that's what we mere humans need to focus on...
“How good it’s going to get”? You mean ‘How BAD it’s going to get”. Meanwhile in the real World…
Absolutely epic video ! Great perspective- THANK YOU! 🙏 🤩🎊👊🏽💫💯🔊🔊🔊🔊🥳✨💭