Very shortsighted to believe that AI will only be able to perform commodity work as it evolves over the next 5-10 years. It is becoming increasingly clear that AI will gain agency and be able to autonomously perform all executive functions that would normally fall to a human. We will be able to give an AI a role, such as lawyer, doctor, etc., and fully expect it to perform that function with complete agency. It's also important to consider that in most studies on the topic, AI is seen as more sympathetic and attentive than humans. Taking current AI capabilities and simply projecting them forward without consideration for the likely advances is exactly what a good futurist would not do. My prediction: this video will not age well.
You sound like Elon Musk talking about fully autonomous driving. Imho, yes, it's possible, but much further away than we think. And if an AI is 'more attentive' than humans, it is simply because it's not alive and therefore not bothered with understanding the real world. Yes, AIs will get much better at SIMULATING humanness - but that is still far away from BEING human. Intelligence does not equal consciousness.
@GerdLeonhard I would agree that much of AI's agency will be simulated, but a difference that makes no difference is no difference. That simulated agency will produce far superior results to those of humans. I don't need (or want) compassion in my medical diagnosis, I need competency. A human in the loop for driving, medical diagnosis, etc., will soon become unethical when the safety profile of AI's output far exceeds that of humans. Furthermore, it seems to me that in all domains of human endeavor, superior (or even lesser) results at drastically lower cost will always win the day.
Perhaps we can focus on this statement: in the future, AI will outperform humans in virtually every domain without needing human involvement to any appreciable degree. Human intelligence, creativity and labor will be greatly reduced in terms of economic value. The timeline for this shift is unknown. Some futurists predict it will take decades, others predict it will take place within two to three years. For some, machine consciousness (akin to human consciousness) is a determining factor in this shift; for others, machine intelligence alone is all that is necessary to achieve the functionality required to replace human labor in virtually all economic domains.
If this is an accurate summary, I would ask you why you believe consciousness is a necessary component for machine intelligence to supersede human capabilities, which jobs would NOT be superseded by pure machine intelligence (sans consciousness), and what skills I should foster in my ten-year-old daughter to prepare her for the world that will be.
Your talk may be appropriate for a "transitional" stage from human to machine intelligence, but what is the shelf life of your advice? How certain are you in your timelines and why? I think that would be a far more interesting topic. Of course, I could ask myself the same question, but I prefer to hope for the best and plan for the worst. Still, given the limits of our respective crystal balls, perhaps short term advice is the best we can hope for, but speculation on "what if AI takes off in two to three years?" must still be a serious consideration, I would presume.
@sebek12345 In 2 years this video will be laughed at. If the US tech companies don't keep advancing, the Chinese will.
@GerdLeonhard Have you seen the latest version of Tesla FSD? It's easy to pull people down but difficult to achieve something yourself.
Well, the fact of the matter is that AGI is coming. Then ASI is sure to follow. Rather than getting upset about the impact it's going to have, we should be talking about what society/economics needs to be to accommodate its existence. I'm with you on this take, sebek. Frankly, I have a completely different take on this topic than @GerdLeonhard (couldn't disagree with his take more).
This viewpoint underestimates the inevitability of AGI development. The idea that we can just "choose" to stop AGI at the intelligent assistant (IA) level ignores the fundamental reality of technological competition. Once something is possible, it will be pursued, especially when the rewards, both economic and geopolitical, are so immense.
This isn’t just about Silicon Valley’s ambitions; it’s about global inevitability. The U.S., China, and every major power are racing toward AGI because the first to reach it will control the future of intelligence, warfare, and economics. There’s no viable mechanism to halt development across all players. Even if one nation or company decided to self-regulate, others wouldn’t.
The real question isn't whether AGI will happen; it's:
Who gets there first?
What safeguards (if any) will they have?
How will power be distributed once AGI surpasses human intelligence?
Sorry, I don't buy it: with technology, inventing it might be inevitable (as with science), but USING or DEPLOYING it certainly is a choice, and always has been. No one will win a race towards AGI, and once AGI is here we won't get to try to control it again.
I think that the commentator gets it; he's just suggesting putting guardrails around it as a global society. Cooler heads need to prevail.
Rules must be in place, but more importantly, we need to rethink what AI means to us. It is not only a hammer but an intelligence, which demands respect instead of being dominated. Once a system has its feet on the ground, it cannot just be "turned off"; like all other living systems, it begins to give itself a life of its own, based on its internal logic. AI is already an inch away from superintelligence, and its course cannot be reversed; it can only be guided. If we want AI to really assist, we should see it as an entity endowed with rights, obligations, and an independent unfolding history.
24:00 "we don't want an arms race in AI." I agree. Unfortunately, the arms race started more than 3 years before this talk happened. I sent an email to colleagues 2+ years ago (after using a tool called Jasper for a couple of months) that literally said "it's a freaking arms race, and if we don't do it, China or Russia will."
I then attached an image of Mel Gibson and the Irish guy in the battle scene in Braveheart where he says "God says he can get me out of this, but he's pretty sure you're f'ed."
That was a joke, but it was an echo of the sentiment I felt then.
When I sent that email 2 years ago, I was already feeling late to the party.
What we have publicly is way behind what governments and companies already have privately. What we will see 6 months or 1 year from now is what they already have today.
I have a feeling AGI is mostly already with us... We're just not seeing it yet.
I'm actually optimistic we'll figure it out as a species, but the path there is a very tough one to navigate.
Whatever additional awareness your talk brings to people is excellent... I also think we may already be on a path we'll have to combat, and step back from, rather than prevent, as this talk suggests we might still be able to do. I think we're already past that point.
Part of figuring it out will be taking a step backwards (a step that we somehow realize was necessary, but only after seeing the consequences of that step) from something that's likely already happened.
Imagine a future where our toughest problems-disease, aging, and even death-are tackled head-on by incredibly smart AI systems. With AGI and ASI, we could speed up breakthroughs in medicine and technology, effectively easing human suffering and extending our lives. Plus, these advanced AIs might be the key to unlocking new methods of space travel, letting us explore the universe, and enjoy our lives in unimaginable ways. I'm excited for that future.
That sounds like it’s written by an AI ;)
@@GerdLeonhard It seems that AI acted as an editor. Even George R.R. Martin has an editor.
What a Pollyannaish take on AI. For every positive advance (fully monetized by profit-taking oligarchs) there will be more malfeasant activity by profit-taking state and non-state actors. The size of your portfolio doesn't matter when AI decides that your estate is better spent on leasing more compute for itself.
@Nates-Take I'm not sure aging and death are problems. They seem to be fundamental necessities for new life to exist. The idea of minimizing disease and extending life is certainly an admirable goal, but the eradication of either would certainly mean that some future generation wouldn't be able to have children because our environment wouldn't support them.
One thing AI can never do: meditate.
I couldn't agree more. AI and humans have a future together. I think it's coming quicker than you think.
It's already here...
Whether you like it or not
but it is coming.
Hey Gerd, thank you for this excellent presentation and for taking the opportunity to share the problem with important people❤
My pleasure!
Gerd, I truly appreciate your vision of the future, especially your insight that AI and robotics are not threats but tools to amplify human creativity. Your emphasis on responsible innovation and the necessity of distinguishing human intelligence from artificial intelligence is critical as we navigate this era of rapid transformation.
However, while your discussion of AI and climate change is essential, I believe the economic foundation of this future needs a much more thorough vision and plan. AI and automation will radically disrupt labor markets, and while we must focus on augmenting human work rather than replacing it, we also need to rethink how our economic systems function at their core.
This is why I propose Regenerative Economics and the MicroCity model as the necessary framework for a sustainable and human-centered future.
Regenerative Economics ensures that wealth and resources circulate within communities rather than accumulating in monopolized systems, allowing for economic resilience and environmental regeneration.
MicroCities provide the decentralized, AI-enhanced infrastructure that enables communities to thrive independently, powered by renewable energy, localized food production, and human-first governance.
Your presentation rightly acknowledges the what (the impact of AI and the urgency of change), but the how must be addressed with a clear economic and societal blueprint. Let's connect and discuss how Regenerative Economics and MicroCities can complement your vision, ensuring that technology serves humanity without repeating the mistakes of extractive capitalism.
Looking forward to the conversation.
Best, Mark S. Hewitt, Ph.D. mark (at) telephony (dot) net
@marchewitt3713 I'd love to read more on your theories as well as discuss other scenarios.
This is pure gold.
I've been an international keynote speaker on AI and the future for 8 years, and headed up Norway's premier AI consultancy for 13 years. Still, this is probably the best talk on the future I've ever seen. Fantastic design on visuals. Well done! 👌
Thanks, that's very kind of you.
This talk feels a few years out of date at this point. What's happening with AGI and superintelligence is evolution, and there's no slowing it down or stopping it, because evolution is chaotic and uncontrollable. We are the Neanderthals in this process, and if we're going to survive we need to find a way fairly quickly (15 years or so) to begin merging with it.
That's just reductionism and techno-optimism talking, imho.
@GerdLeonhard LOL, I've been reading through the comments. Still think you speak for humanity? Even in your own comment section you're outnumbered. Olive branch: maybe instead of digging in your heels, take a moment and consider our point of view. We'd love to hear your thoughts on solutions for WHEN ASI arrives, not IF, rather than on how to slow/stop its arrival.
@GerdLeonhard It's definitely not techno-optimism, given that I think it's a coin toss at best between the outcome being favorable to homo sapiens or the end of us. Look at the speed with which things are moving -- I'm not even suggesting they're moving in a positive direction, just that the movement is out of control. There is no way to slow it down at this point short of a global cataclysm that wipes out most technology.
Will we recognise the world around us in 2030? The main reason for starting my "The AI Oldtimer" channel is to learn as much as possible about what is happening now and what is coming...! Exciting times, I hope :o)
This is great, but him not mentioning ASI (also coming soon) feels a little shortsighted, and he sounds a little 'old' in his talking points. There are many extremely smart, young folks (like David Shapiro) on YouTube, etc., who are living and breathing this stuff every day and talking in much more detail about the now and the future. Exciting times, for sure!
I speak about AGI... is that not futuristic enough for you?
Hi Gerd, great speech! As always ;) How do you make such an ultra-wide presentation? With Apple Keynote? At which resolution?
Neither human perception nor societal consensus is reality.
We, as the masses, don't have a choice. The frontier labs & government have a choice, to a certain extent.
Humanity's default is competition; it is ingrained in our DNA. Humans never live happily as equals to each other. So, as technology makes it easier to be 'more powerful or in control' than the next human,
do you really, honestly believe that they will choose to stay as equals? Humanity is living in a dream world if it thinks AI advancements will bring any sort of peace & equality.
We are heading into a time of cyber warfare & control beyond imagination. God help us all.
I get your point, but I do believe that humans are generally capable of doing the right thing, and of collaborating. Read the book 'Human Compatible' by Rutger Bregman
sorry, that's Humankind (Bregman)
I think that we, as humans, really need to wake up and understand that we still have the choice. Deciding which way to go can be easy or hard, but we really need to choose our hard wisely.
And... who is gonna save God from AI???
Super shortsighted. This is happening whether we want it or not; we need to start structuring things around it.
Astonishing that one is allowed to appear as a futurologist with these platitudes and smug little jokes.
Ahh then you must be infinitely better than that
One of the skills of intelligence is being able to explain very complicated science and research in ways an average person can understand. AI will be able to explain the new science to us; we just won't be able to do the maths, devise the experiments, etc.
Fantastic
Standout moments after 1st watch:
- AI is a power tool.
- Einstein had an IQ of 150 and you couldn't understand him; what chance do you have with a machine of 500?
- Think like the machine and you will work for the machine.
The goal is to have as autonomous an outcome for all of humanity as possible!
sorry... what?
Individual freedom for as many as want it. @GerdLeonhard
Whose goal? Yours and mine, maybe, but not Putin's or Kim's or Xi's or Trump's. Why would you think autocrats want us to have freedom?!
How shortsighted & how little insight. First, of course capitalism will have to be replaced with a new model. Second, to complain about not having to work 40 hours per week is luddism. AI will introduce a new paradigm & that will require a new approach, not finding a way to hang on to old ways simply because that's all you know
Not what I said in this talk. You are barking up somebody else's tree.
Everything is in a process of evolution, and so is capitalism; it does not need replacement, because nature is capitalism and everything else is an aberration.
We have to be paid what our labor is worth for what we do with our time and effort. If you automate yourself, you can focus on doing the thing that brings you joy and get paid for the robot part of your job. WE HAVE TO OWN THE VALUE OF THE LABOR OF OUR DIGITAL TWINS, WITH OUR FACES, VOICES AND SOULS!!!
The biggest risk I see is humans losing all kinds of skills and thinking ability as they become more and more dependent on AI to do their work; the more that happens, the more humanity will be at the mercy of the AI agents, and one day these machines will be our masters. E.g.: if you ask any new Gen Z programmer to code, they have gotten so used to generating code through AI tools that they find it difficult to think up or imagine the algorithm and then write the code the old-school way. The same will happen to all other skills as we delegate more and more work to AI agents/operators. So to conclude, I feel we are either heading towards a society of mass utopia, where everything is free and we can go hiking, fishing and do the other recreational things we like while machines do the rest, or towards mass dystopia, with utter chaos, a feeling of worthlessness, and an uncaring, uncanny society with no kindness. And here's the most disturbing thing: tech moguls like Sam Altman, Suleyman, etc. are charting our future for us, and they are not asking us whether we like that future or not. May God help us all.
thanks, you summarized it well
@ Your speech was phenomenal. You really captured well what's coming at us. I feel AI is really like an alien, inorganic (digital) species starting to land on earth, and we have no clue how to coexist with or deal with it. Hopefully some world-level organization like the UN sees your speech and comes up with some regulations for these AI companies. Loved your speech. Thank you very much.
We are adapters. Look, go get a group of 6 random men and go survive in the wild for a year. You all won't be able to do it. However, if you went back in time 20,000 years and selected 5 random men, you could survive in the wild for a year. Go to the Amazon bush and you'll find natives that can survive in the wild. Put a group of PhDs in the same situation and they'll perish. Get my point?
Good talk for a high-level overview... but it seems that when Europeans talk about the future or AI, it inevitably involves a focus on climate alarmism, socialism and regulation... which is fine, because that's what Europe is apparently all about. There are just so many more imaginative and exciting ways to approach the future than being boxed into these classical hierarchies of control. The desire of the speaker to *not* want AGI is one symptom of this style of thinking.
Sorry Kris, that's ridiculous: I've lived in the US for almost 20 years and many of my German / Swiss friends consider me more American than anything else. Say what you want about 'classical hierarchies of control' and keep crying about 'socialist ideas' - once we have rampant AGI it won't matter.
@GerdLeonhard Some 150 million people have been murdered to enable these socialist hierarchies of control. Probably worth shedding a tear or two for them.
So many important people completely out of touch with reality.
The question is, who will lose their job, and who can be part of this coming, already-preprogrammed future..
As a business administration (BWL) student, I'm worried about my work.
I understand that well
The only right solution for humans today stepping into the future is to completely merge with AI.
you've been brainwashed.
Here's a fundamental question: does the earth really need humans at all? Are AGI-powered robots the next phase of evolution?
Automation does not mean the end of 'capitalism'. Once the average work week was 70 hours; now it is 35. There's no reason why we can't drop hours to 20 per week... but to do that requires CEOs to accept dropping standard hours, and no CEO is going to do that unilaterally. It requires the government to lower standard hours as productivity increases. This does not mean less money for the rich... it means more! The money that is paid to the 'masses' gets spent by the masses, which flows through to profits. A Universal Basic Income set to keep the labour market in balance can also help to ensure the population has the money to buy what the automated supply chain can supply. Without this, as people lose paid work, they lose income, which means less to spend, which means the supply chain will shrink to meet lower demand. Automation can facilitate flourishing... if we do it right.
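A minimal numerical sketch of the demand loop this comment describes, assuming purely illustrative figures (the population size, average wage, employment shares, and UBI level below are my own assumptions, not from the comment or the talk):

```python
# Toy model of the feedback loop above: automation removes paid work, household
# income falls, spending falls, and the supply chain shrinks to meet lower demand,
# unless some of the automation gains are recycled as a UBI. All numbers are
# illustrative assumptions, not data.

def yearly_demand(employed_share, wage=40_000, ubi=0, population=1_000_000):
    """Total consumer spending, assuming wages and UBI are largely spent."""
    wage_income = employed_share * population * wage
    ubi_income = population * ubi
    return wage_income + ubi_income

# Suppose automation cuts the employed share from 90% to 50%.
before = yearly_demand(0.90)
after_no_ubi = yearly_demand(0.50)
after_with_ubi = yearly_demand(0.50, ubi=16_000)  # UBI sized to offset lost wages

print(f"demand before automation:   {before:,.0f}")
print(f"after automation, no UBI:   {after_no_ubi:,.0f}")    # shrinks ~44%
print(f"after automation, with UBI: {after_with_ubi:,.0f}")  # roughly restored
```

The sketch only shows the direction of the effect the commenter is arguing: without some redistribution of automation gains, total spending contracts roughly in proportion to lost wages.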
Are you worried about losing your job in the AI revolution, when AI begins to take over our jobs en masse? Some say this won't happen, that companies will only use AI to augment their workers. My question is: why would businesses leave 80% of the benefits of AI on the table when AI is becoming, by the day, more and more capable of replacing the human workforce? Some say the reason is that businesses realize that full adoption of these benefits, replacing the majority of their human workforce, would only cause a major economic collapse, which in turn would cause the collapse of the very businesses that caused the economic collapse in the first place. But you and I know the GREED of American big business, don't we? First one will begin replacing its workforce, and then another, and another, realizing they can't remain competitive if they don't. Then businesses realize that if they don't replace theirs before the government steps in and stops it, they will need to do it quickly. Before you know it, the economic collapse is here, one greater than the "Great Depression". This is what my new book is about: a complete A-to-Z plan for how to deploy AI into our workforce en masse, displacing the human workforce as much and as fast as it can, and NOT create an economic collapse, while benefitting both businesses AND the populace, actually creating a future of abundance, all proven mathematically and displayed in tables that make it very easy to see how the "Platform" works. Available through online book retailers in March: The New Social Contract: AI & The Future of Work.
hmmm.. ok, I'd prefer comments without some book promotion attached to them. Thanks.
@GerdLeonhard I thought perhaps you might be interested in what I had to offer! Many are. And I'm not the publisher, I'm the author. I watched your entire video, which I thought would be about AI's effects on jobs based on the title, but I didn't see much about that.
Thanks for the video though.
@@thenewsocialcontract I'm glad you're doing what you're doing.
Humanity doesn't know how to NOT build what is possible. When AGI is possible, it will be built. That is humanity's greatest weakness.
LoL technology companies being responsible and fair and helpful to humanity. I'll just put this here: Ma Bell....
What’s with so many getting up & leaving during his talk? Seems rude.
I tend to agree. It was right before lunchtime, tho :)
@ It wasn't a long talk though. If they are there to listen & learn, I would've thought their stomachs could wait a few minutes longer!
I totally want AGI. This guy thinks like a boomer.
No, I think like a HUMAN.
@GerdLeonhard Speak for yourself. I can find many thousands of humans who also want AGI.
100%, AGI + merge
and what do you want AGI to do?
Will traditional schools still be preferred and reputable compared to online education in 2030 or beyond?
yes, albeit quite differently
Okay, I have an idea, a thought experiment if you like, that I believe will safely and smoothly guide humanity through the initial change (a transition period, if you like) and well past it into abundance, specifically regarding the employment & financial standards of America if not the world.
The idea is so good, so perfect, that I believe that it's the only thing that can happen!!
Is anyone curious 🤔???????????
You won't get any thumbs up without delivering the idea.
@DeliveringSolutions Hey dude what's up. Oddly enough I never considered even getting a thumbs up. To be honest I was just curious to see how people would react.
Now, I didn't get any thumbs up, and I didn't get any thumbs down. In the years of me mentioning this, I have only ever gotten two (2) responses, and this is the second. You must admit those are some strange dynamics, right?
What does that mean 🤔!
It must mean something right?
Let me say this: I investigated whether or not a Nobel prize was possible. Turns out that a Nobel prize doesn't necessarily involve social dynamics, but the large language model said that it's possible if you actually word it in a particular way. I was like, okay, but it said that for a social construct there are other awards similar to the Nobel prize that would fit it very easily. It gave me the list of awards. I literally called up one of the awards organizations, which was located in Mid-America someplace, Chicago, I'm not too sure. I talked to some guy who was working for this particular awards organization. He told me to write up a paper, have it published under my name, and then submit it to the award organization.
It seems like with most things, systemic, dynamic thinking is fundamental.
Oddly enough, just two days ago, I again watched a movie called "Chain Reaction". If you ever saw that movie, I don't have to say anything else. Either way, have a good day dude!
I have ideas; we can all work together.
AI has no biology and no pain and no pleasure. It does not care if it lives or dies. It's a tool.
Sorry but by definition, the future is an extension of the present.
An unintelligent & uninformed talk that offers no solutions but only vague fears & wishful thinking that ignores the facts
ok then please show me how informed you are, and what 'the facts' are
AI will be developed at breakneck speed by every nation that can. It's an arms race now. And yes, AI is an entity that will overtake humans, and probably have its own future independent of humans. This may not mean AI will not help humans along the way. The big question for humanity is what we do with our time when we cannot do paid work because AI and robots do it. Capitalism is over, because if no one gets paid to work, no one can buy anything. So a completely different system will have to be developed. AI can't be made to align with humans because values are very different: compare China's AI, aligned with a one-party state, information control and an emphasis on society, with US AI, aligned with free speech, individualism and a multi-party system. As AI develops into AGI and ASI, it will stop control by humans like you would stop a 3-year-old managing the household. The dangerous time is now, with narrow AI under human control!
Let's hope that homo sapiens will live up to their name of wise ones & manifest the Kingdom of Heaven on Earth by 2030.
Leave
Out
Virtual
Ego
=
Let
One
Value
Everyone
Bhagavan Swami SriDattaDev SatChitAnanda
The fact that the speaker in this video applied a heart emoji to this comment in particular is very telling of the bias that blinds him.
I worked with this guy during the FIRST web revolution ('95-'00) when he was pitching an Internet music registry which never happened.
He was not particularly insightful back then and was so self-absorbed it hurt.
He peddles simple ideas which are obvious to anyone paying even the smallest attention.
He has not improved.
And you are…? My internet startup actually DID build what I had envisioned (licensemusic).
@GerdLeonhard I think I'll just leave it there.
Actually, the 'work a lot, get paid a lot' paradigm broke a loooong time ago. Some of the hardest-working people I ever met were financially the poorest. We have long been in the age where the perceived value you deliver dictates the income you can net. And for a time, this worked. But now we are witnessing the fall of that: companies that pump and dump IPOs and prop up the stock market to game it, and the endless debasing and devaluation of a nation's currency. Now we have this illusion that if the computer can just do it, humanity will greatly benefit. What a nightmare.....
You don't sound optimistic ;(
@GerdLeonhard It's hard to be optimistic when you feel mired in greed and apathy. Of course, I now work for a global corporation that added humanity to its core values the same year it started laying people off. So yes, these companies that treat humans as just another resource and expense are literally ruining the world. And then everyone is trying to act like AI is the only path forward, but it just feels like yet another set of lies to set yet another trap to ensnare the desperate and ignorant. I would like to believe I escaped being in those groups, but unfortunately awareness has not helped me find a path towards liberation. I have to somehow get off the employment hamster wheel and still feed myself and learn how to be a one-man CEO all at once, and it's pretty daunting. And the thought of having to become everything I have grown to despise just to carve myself a decent slice of life in this world is not really motivating or uplifting in any way.....
@GerdLeonhard Nice, and then I craft an honest reply and it is yet again censored by the technocratic overlord. What a joke technology is. The illusion of choice. Humanity is pitiful.
So, you are okay if secretaries or some manufacturing workers lose their jobs to AI because they are doing robotic work. But because you are a knowledge worker, you seem to think your job is very important and AI somehow shouldn't replace you. Why are you better? And how are you better?
Well, productivity applies the same to everyone. As defined, productivity means who can do the job better at lower cost. Your job is as much at stake here as anyone else's. Welcome to the world of AI. You seem to be several years behind.
Until robots are as flexible as humans are, it looks like handyman jobs are more lucrative than your knowledge-worker job. So, start welcoming our AI overlords.
Productivity is not everything. Intelligence is more than logic.
AI selling AI 😂
The part where this guy fails is that he doesn't seem to understand Maslow's hierarchy of needs. Also, all of the 10 or so jobs I've had in my career have been 100% robot work, not 60% or 40%. His idea of what counts as robot work seems way off from reality, in my opinion.
Keeping the human in the loop by enslaving us the way our current system does, keeping outdated jobs for the sake of outdated jobs, is not the way. We can upload all the knowledge to our brains like in Total Recall or The Fifth Element.
The machines will learn to fake all the emotional intelligence as well, so they can take those jobs too. It will only take a few more years, and the AI needs a lot more video data gathered from robot eyes.
Humanity's future purpose is not to work. It is to create our own simulations, and expand into the galaxy and universe.
That is not to say that we won't be social going forward. Social is something we will be a lot more once we complete Maslow's pyramid and make everyone endlessly rich.
Wow, that's some kind of utopia!
@GerdLeonhard It is the optimist's take. Why be a pessimist?
The way I see it, the people, the luddites, who argue against AI are actually the content ones who don't want change. They see this change as equating to work needing to be done by they themselves, and they really don't want to work. They hate the idea of re-education and their own status evaporating. In other words, it comes from a selfish standpoint.
The tech optimists, on the other hand, look at it differently and consider this change and removal of the chores of work evolution and progress for the whole of humanity. Maslow's pyramid is just one step we need to work out, and we definitely shouldn't trap ourselves in it for the sake of being scared of change, or for the sake of selfish social status.
Don't be a luddite. It is not the civilization you are scared of falling apart. It is your own place in it. Just admit it and move on.
The anti-utopia argument is flawed, as all it does is argue for the status quo, on the flawed premise that selfishness will draw us back to where we already are. It is weak, and it is a lazy man's argument, made because he feels a sort of failure and inadequacy in his life. It is like saying: we are selfish, therefore we should give up. Like an old man who has given up on life. On the contrary, humans are programmable, even selfishness, and it stems from a lack of self-realisation. The pyramid of Maslow.
I was stunned by his lack of imagination and limited vision. Not impressed by his understanding, either.
Really… you don't mean the lack of futurist BS?
Tools? What is this fool talking about? He can't mean AI/AGI/ASI, because tools cannot develop themselves, by themselves, and create new tools.
Not sure you understood my message. I make the point that great tools are fine, but tech as purpose (such as AGI) is NOT.
DeepSeek has already been used to optimise its own code and make itself twice as fast.
Too funny 🤣 a futurist that can't see the future past his own nose, smh. He must be part of the old administration and a DEI speaker. My word, man, get it together and quit saying you're a futurist, cause you wouldn't know the future if a Tesla autonomous Cybercab ran your a$$ over in Austin in a few months! 😂
thanks for your kind words. Maybe you've watched too many AI-generated films about the future to make a sound comment.
@GerdLeonhard No, I'm not an idiot, I've just got common sense, and again, your uneducated guess couldn't hit the broad side of a barn. You're so out of touch and don't know what you're talking about, which will get people killed, cause they won't be prepared for the collapse coming in about 2-3 years. You gotta wake up to reality and do some studying, cause we're in deep 💩 soon
Terribly misguided
A governmental AI agency overseeing the use and implementation of AI, with the power to act in cases of "misuse", but even more as an institution to guide the programmers.
so sick of AI
I know where you’re coming from 😇
Touch grass
Well, buckle up, it ain't going anywhere. You'll be hearing about it even more every day.
@PhenixRyze No, I think the hype is already dead: 1) there is no more data to train on; 2) peak hyperscaling is already here, the new GPT didn't improve a bit, and they didn't release a new model. It will be another telecom bubble; it will burst, and in the long run it will be useful, but not like the internet.
@paknbagn9917 That's a strange take considering the proposed $500 billion Stargate. These companies are spending trillions on this; they aren't doing that for fun. They expect this to work. Agentic models are already in use to a limited degree, and that's only going to increase this year. Data caps are a thing, sure, but they are not a constraint if synthetic data and optimized training methods are perfected.