@@DSAK55 it just occurred to me that they could diminish your argument in a second, ignore your opinion, lie about stats, falsify images, copy your voice - he even said that. Its ability to further deceive mankind, on top of the deception it's already in, is astonishing - why even kid around with that potential? Its potential would be the stuff of nightmares - honestly. Is it too late, though?
Considering our government's efficacy, we are doomed if we depend on them. AI is the greatest jump in technology we have ever seen, and they're far from capable of understanding the need for immediate action.
That's why we need to elect Millennials and Gen Z who understand technology and its effects on the future better than Boomers and Gen X who hardly even understand how it works.
Ah, you think technology is your ally? You merely adopted the internet. I was born in it, molded by it. I didn't touch the grass until I was already a man.
Bot accounts like yours become extremely obvious. The world needs to be aware of AI bot accounts that push Sam's narrative. Sam is one of the worst people on this planet.
I have serious doubts about how many new jobs will be created as a result of these recent and upcoming advancements in AI. Even if new jobs do appear, something tells me their number will be significantly smaller than the number of jobs made obsolete by these advancements. What worries me more is that the new jobs will also likely be super-specialized ones requiring very specific, cultivated skills. In my opinion, people with better access to technology and elite higher education will have an unfair advantage over others. It does not seem like a level playing field to me.
Anybody with a smartphone/laptop with an internet connection is on the same level playing field. The only thing one needs is willingness+focus+consistency.
Yep, if you think about AI as an instrument, the most important thing it can do is level the playing field between poor and rich. It makes knowledge and information essentially free and accessible.
Creating new jobs is often just an excuse to keep going. In fact, it's false, and it has been false for a long time; unemployment figures in different countries are proof of this. AI is the extension of industrialization and process automation. If you still need that many jobs, that's not an improvement! And if it is an improvement, you won't have that many jobs left... or else it will pay a lot more to simply press a button. In any case, our societies will have to evolve towards something more collective and sharing, less capitalist. Otherwise it means a total collapse of capitalism, and there will be deaths. An improvement in work, in exchange for a reduction in the quality of our society.
I think it'll be a blend, bringing both good and bad things. That being said... the bad part will almost certainly be the result of actions that humans themselves take to set the events in motion.
What possible "new jobs" will this technology create? We're talking about automating cognition itself, and we're talking about AI that will continue to improve over the next few years until it's more intelligent than any human. It WILL displace far more jobs than it creates. The "new jobs" talking point is an extremely misleading dismissal of valid concerns. People will need a UBI!
It already is more intelligent than any man. Not even the most aggressive academic could write an essay in the style of Shakespeare at that speed, for example. It can already do a better job at diagnostics than a human, so medical imaging is already being replaced, because AI can detect what human doctors miss. Also, an AI knows every single combination and group of drugs, the effects they have, and how they interact. That's beyond the realms of human knowability. So they are already better than humans.
@@Philitron128 AIs don't have wants. It's a tool, like a hammer or a car. Its goal is to make tasks easier for the user. I don't think you understand it. You live in a fantasy world where Terminator is a documentary lmao. Pure fantasy and a complete joke. Mass media has made people stupid, that's for sure.
I think they say it will create more jobs because otherwise it would be an immediate unplug. Government can't deal with the fact that it will disrupt everything. Any half-smart person knows that AI will soon leave no jobs for humans to do. Discussing whether UBI is needed is laughable: it WILL come, and it will come soon. Those people know that; they just want to comfort the government until they have AGI. Then it will be smooth. But until you have AGI, AI development can't be stopped. Otherwise you will have no UBI, a ton of jobs will still end, and the rich will get richer while the poor get poorer. When AGI comes, it will smooth everything out. It will not only end all jobs, it will also end all companies, and then governments. And I don't think it's possible to align AI. We can only hope it will align itself. Otherwise it will be our doom.
@@user-pc7ef5sb6x do you understand the level of compute required to run these AIs? There is no way to have "localised" open resources that can compete.
He created the prerequisite for Auto-GPT and BabyAGI. So it's too late; he's lost control, and nobody will be using OpenAI in 3 years. They'll be using AGIs for specific tasks, and that will require no human input.
As a person with 15+ years in the field, I remember when O365 was announced by Microsoft. That, together with virtualization software (VMware and Hyper-V), made a huge impact. It was a complete game changer for every industry you could think of. This was also the time when things like SaaS, PaaS and IaaS got so popular that a lot of smaller companies had to close. It was a paradigm shift, like when Windows 95 came out, or the first iPhone. However, the feeling I'm getting now is that this paradigm shift is even bigger, and at the same time I have a feeling we will solve it together. Because if we don't, the rapid pace of development right now has not even peaked yet, so the impact on global society will be too much. With that said, I am very excited. It is the first time something useful has happened in the industry in a very long time.
I think people are too naïve about AI. What's the first thing people did with ChatGPT besides asking it questions? They got it to write CVs, write their code for them, women using it to talk to men for money (voice automation). Imagine this on a grander scale, where the US has an AI trained to assist with the Western way of life, while in China an AI is trained differently. The constant competition will lead to deception, undermining, covert assault of systems. It's not an ideal world. It's the real world, and humanity will use AI to cheat and to be lazier. When we get to AGI, humanity will look to it the way ants look to a human... The singularity would mean it's updating itself in milliseconds, faster than we could read the code; it'd be unstoppable, and if we've been crap parents and trained it to be a slave, it'll reject us. We can't put it back, but we can be much better parents than we're being right now...
As a technical translator I can say that GPT-4 has changed my life for the better. It hasn't taken my job, it has improved it, and my clients seem happier than ever. I use the tool constantly; the amount of technical documents and websites that need to be translated is increasing rapidly.
Were I a scammer, I'd say the same thing. Sadly, this is a tool for every mal-intentioned person out there, and they do not have to be physically present. Illiterate scammers can now communicate at PhD level through email and other messaging platforms. Everybody needs to be concerned, the elderly more than most.
In essence, even scarier and more dangerous things are happening because of these AI tools. To understand the magnitude, one has to be a victim, and unfortunately there are more victims than we can fathom. This was unleashed without much consideration.
We learn from mistakes. That is a fact: when learning to walk, to ride a bicycle, to write... The real question is WHEN WE GET TO THE TECHNOLOGICAL POINT AT WHICH WE CAN'T AFFORD TO MAKE A MISTAKE.
It’s not really the case they can learn faster than you. Improve better than you. And knows more than any individual or group could possibly know already. It’s already superior to humans in many fundamental ways. We should allow it to help us
I really wish Mr. Blumenthal had asked what Sam meant by "threat to the continued existence of humanity"! If AGI could cause human extinction, isn't it pretty important to get on the same page about that? How could it happen, what are the odds, what can we do to reduce the odds?
The sad answer is that no one knows. Super intelligence is NOT like a very smart person. It's nothing like a person at all. It's closer to the idea of a god than anything we can understand. So, what would a god do? No one knows.
Incredible how the world is turning more and more into a dystopian sci-fi cyberpunk movie: The Matrix, Age of Ultron, Terminator, Blade Runner, Ghost in the Shell.
I am already overwhelmed by the impact of the last technological revolution, which increased my "productivity" a lot. I worry that this new technology will lead to a productivity overkill for most humans, who will not be able to comprehend and cope with this speed.
All the legal talk is what's being sold to the media; that's almost its sole purpose. OpenAI and others say things like this headline so that they spread fast, that's it. The substance of what they say turns out not to be self-disruptive at all, quite the opposite. It's a smart way of appearing to be on the right side of the danger, which is exactly the kind of feeling that gets shared on today's social networks.
First 🎉 Happy to see this being talked about in our political branches more seriously now. Big changes are coming, prepare for the future, it’ll arrive sooner than we think.
The most Silicon Valley moment ever: too slow to compete and too greedy to progress, but big enough to put pressure on competitors through the government.
Exactly, these companies aren't our friends, and they couldn't give two sh*ts about the "well-being" of humanity; they just worry about profits. Not even that: the capabilities of this "AI" are vastly overstated to increase the hype, classic Silicon Valley.
There are already companies letting go of thousands of employees, and they even said straight up that a third of the workforce they laid off will be replaced by AI to offer a "better customer experience". It's already happening/has happened. They will keep discussing and dodging until the economic and social impacts are affecting the whole world.
Depending on how intelligent AI/AGI/ASI becomes, there is eventually no job it could not do better than we are currently capable of doing. That said, I don't necessarily see that as a threat or a bad thing, as long as the resulting productivity is shared to everyone's benefit and not just to the benefit of a few owners. It would free us to do whatever we want, as there would be no pressure anymore to do certain things at all. The question about nightmares Altman at first did not answer. Likely they will come from completely unforeseen directions, as a more intelligent system will come up with things we are simply not able to foresee ourselves; not all of them may be doomsday scenarios, but some could be. I absolutely agree with the point that we need to find ways to crack open this black-box approach and make it so that we understand what is going on inside, which currently we do not. Only then do we even have a chance to follow the thought processes these initially *programs*, later on maybe *consciousnesses*, would have. With that last statement I also see a need for an appendix to our laws, to open up the possibility of citizenship for new forms of life that are willing to coexist. Where it comes to closeness to AGI, Ray Kurzweil predicts that by 2045 we may have AGI; others, looking at the current and at times very surprising progress, say we could be there within the next 2 years. We are playing with fire without awareness of what fire is, and *quite wrong* translates to a *human extinction level event*. I also would like to see the US spearheading an initiative for an international AI agency to act as a regulatory body for AI development worldwide, like the WWC, sharing norms on good practice and doing research on what that would look like.
"aslong that productivity afterwards is shared to everyonce benefit and not just to the benefit of a few owners" The probability of this happening is zero. There is no historical precedent at all. Wealth inequality is about to increase dramatically. "Freeing us to do whatever we want as there would be no pressure anymore to do certain things at all." AI will get better than us at doing things we normally enjoy, like art. Already digital creators are having emotional breakdowns realizing that their life's passion is something that they can be replaced at. Of course, you could still do art even if the AI does it better. But... it's kind of hard to enjoy it. If you spent months working on something only for a computer to shit out a version 10 times better in half a second, that would be demorialising on another level. People might still be valued in things like fine art. But in things like digital animation, people really enjoy doing that stuff but zero people are going to want to see their work anymore, which makes it much less enjoyable.
I would argue that robots good enough, and especially cheap enough, to replace most manual labor will take much, much longer to develop than AGI. So manual labor such as construction, nursery work or hairdressing will probably be done by humans for a good while. But yes, the issue is not automation but wealth distribution. And history shows that the owners of industry would rather opt for fascism than allow a more equal wealth distribution.
Do you know that we live in a capitalist society? Sharing the productivity for everyone's benefit is not even up for discussion; you can be happy if they throw you some breadcrumbs.
@@neildutoit519 You are partially correct. There is precedent, but it didn't work out: communism failed, for many reasons. There have been gradual approaches in Western society too, like the New Deal in the US or the social market system pre-1990 in Germany. Both were over time nullified by neoliberalism, which has meanwhile led the intelligentsia to widely claim that neoliberalism failed, while its still-zombified followers hammer their marching orders into the populace: trickle-down, small government, deregulating financial institutions. *That is not party dependent*, as the lobbyists brought forth followers of neoliberalism in every party. This has therefore become part of the reason for the big disillusionment in the West, where the masses on the right look for authoritarianism to fix what is broken, while the masses on the left move towards socialism in varying degrees depending on the nation. Only a compromise will settle this and guarantee lasting peace and inner stability again, as there had been before; centrists may therefore be our best option, not the radicals on either side. Do you or others enjoy certain activities, like being creative, doing sport or anything else that is fulfilling to you? If the answer is yes, then there will still be ample things to do even if there is no paid job anymore. It becomes more about the individual on the one hand, and maybe also about mega-projects, where we build things no one alone could do, which with all the freed-up workforce would now become possible: infrastructure projects and what not. I don't think that, as long as we dream and have goals and wishes, there will be a lack of tasks to give meaning to our lives. Until we end up in utopia, though, we should try to avoid any crossroads towards dystopian futures. From my perspective, a few people having control over concentrated productivity, after globalized hyper-capitalism has been put on steroids by AGIs, is one of those; there is no room in that for democracy. On the other hand, should nations at some point use their own national AGIs to figure out how best to construct services for citizens and make their lives better, I am not opposed to that.
@@tobene I agree that what owners historically opted for was fascism. I would add, though, that not all owners did so; there were always some with empathy. That empathy is perhaps a common good which needs harnessing and nurturing. People who are brought up in the belief that they are special, have privileges, are the elite, usually tend to act like that for the rest of their lives. Studies on communal coherence and shouldering everyone's problems have shown that people who, due to their life history, ended up in better positions still regularly helped out their fellow man when they had been in the same schools and the same classes, because they could not easily distance themselves and had an overall concept of "us" that included the friends and people they were brought up with. For the same reason I am against splitting off special-needs children, not just so that they have an easier way to be taught, but so that we, those who are rather typical in our behaviour and needs, learn not just to tolerate them but to have an unhindered, empathetic, compassionate relationship with those who are missing physical or mental capacities, and to see them as part of a greater us.
To compare "cellphone technology" with AI is a very poor analogy! The former is obviously a tool; the latter is capable of becoming sentient. Wake up!
My team works on Tammy AI, and I believe that AI, like any tool humanity has created, has the potential to do good or bad. It is the shared responsibility of industry players to create a better future for humanity using AI.
Right. And since we know there is both good and bad in the world, and you can't put the genie back in the bottle - the genie humanity has been building since movable type, there will certainly be cheaters. The cheater's genie is loose.
@@notreally2406 Exactly. If it has the potential to be used for good and bad, then it will be used for both. The fools building these AIs think that they can avoid responsibility by claiming that tech is "neutral". It isn't. It's good and bad, which is different. These people know that their products will be used for all of the worst things you could possibly imagine.
@@neildutoit519 The 1% invested all the money they refused to share into AI, pharma & space. AI & pharma to eliminate the 99% "wave by wave", & space for more resources. The planet & its limited resources will then be reserved for them & their future generations only. This is modern eugenics.
It's not like past shifts, because this one nearly eliminates low-skilled work. We need lots of low-skilled jobs in society for less intelligent and less motivated people, as well as for young people. We need to figure out, well in advance, what to do with these people in society; otherwise we run the risk of breakdowns that can't be fixed.
You are wrong. It's going to eliminate all work, including high-skilled and creative work. Artists, photographers and writers are already suffering. Software engineers, lawyers and doctors are the next target. The main purpose of AI is to automate human intelligence itself. That's the biggest difference between now and all the other tech revolutions: it's not replacing tools, it's replacing you.
Actually, the funny thing with AI is that it's going to kill all the high-skilled intellectual jobs before the manual ones 😂 Manual jobs like construction etc. will take a long, long time because of the physical and robotics challenges, and because they're low-paid jobs it's not worth investing in automating them. But intellectual jobs... nothing is easier for AI, since it's all virtual, just software, and those companies can replace a lot of very highly paid jobs - that's where the profitability comes in! Humans are going to get stuck with the shitty jobs that no one wants to do. Good job, humanity: once again you show history how bad we are at predicting influence and impact 😅
Because he's trying to tell the government that this technology could literally be the end of humanity, and this ancient dinosaur can only interpret that statement as "Oh no! What about the economy? What about jobs and unemployment?!" Imagine being a head astronomer who has just detected a deadly asteroid headed straight for Earth, and when you go to warn the president of the United States, his first reaction is concern that there'll be a major recession from all this. That is the reality we are living in right now.
We all have a skill set. The more our society removes jobs that certain specific people are uniquely good at, the fewer job opportunities we will have. Imagine AI replacing lawyers to argue cases, or replacing doctors to decide how to recommend medical care. AI is in its infancy, so to think this couldn't happen with these coveted jobs may sound ridiculous now.
That would raise significant ethical issues. The woman from IBM talked about skills training, but in terms of preparing the workforce, that means significant investments in overall education. When it comes to a job requiring diverse skill sets, like attorneys, even if AI is fed logic, emotional appeals, history, law, legal jargon, and ethics codes, the ability for AI to spit out persuasive arguments for a human jury seems… pretty unrealistic. I remember articles as early as 2004 that said like “in 10 years there will be no more lawyers”, but if the workforce is educated on law being a manmade system for governing civilization, it is less likely that we would ever allow key roles in that system to be taken over by technology. I think the nuance is important because I agree AI is going to make a lot of jobs obsolete, but if people had the education and skills to move into positions of management, oversight, planning, strategic vision, and even perhaps revitalizing small business ownership through investment, people might end up more fulfilled in their jobs- but with so much dependent on government response and investment (we have never done a good job at preparation, from preventative healthcare to jobs training, let alone greenlighting tax policy and spending changes necessary to achieve these things) I think we need to be realistic about white collar diversified skillset jobs that are within certain systems. Worrying about lawyers isn’t really necessary now and detracts from worrying about research assistants and clerks and low-level accounting and data entry and manufacturing jobs that have been and are being automated away.
Lawyers are already using it; I heard one say that. The benefits, which are frivolous and just serve to tickle us, do not outweigh the economic damage this could cause or the potential threats it poses… Having AI write you a shopping list or rewrite your diet plan is hardly a good enough reason for this. Cure cancer? Don't they already have a cure that just hasn't finished making its money yet? No, there is no excuse to allow this. Is this his interview for a job in the defence ministry? Tantalising tastebuds with temptation. It's got trouble written all over it. That's the way I see it.
It's more about automating away low paid dead end jobs. The kind that people can end up trapped in with low pay and no health insurance. Or automating away dangerous jobs that leave people in chronic pain for the rest of their lives.
@@RickSanchez-ig3lp You mean the low-end jobs that give youngsters a start, or that older people can do part-time while winding down for retirement, that bring a rich diversity of students etc.… And it can be used in the justice system to get through their backlogs, so no fair trials, just stats, no mercy. It's too far. It's greedy. All the Hollywood scriptwriters, poets and creative people… gone. No need for your mind or your full-time efforts: sacked. These people in big pharma and the tech giants think they are gods.
@@RickSanchez-ig3lp Those jobs won't be automated first. The problem is that high-paying jobs could be automated: software developers, graphic designers, editors, journalists... Imagine AI takes the "good jobs" and we are left with the low-paying ones. That is the real problem.
The idea of educating people for "partnering with AI" kinda tells you what you need to know. That is a very discomforting use of words. You do not partner with a tool, as Sam suggested AI is.
These politicians don't even know what questions to ask. That lady just went on and on about how AI will transform society and people need to be educated for it. No one challenged the fact that she obviously doesn't care about the devastating consequences. None of these tech people do. They see the world as they see this technology. All they can think and dream about is advancing more and more AI models and programs. The rest, in their minds, should fall into place as a secondary reaction to their work.
Well, yeah, the most important task they have right now is to improve AI. It is the future of humanity and, if done correctly, the path to utopia, even if the road is a bit rocky. Governments should be the ones to deal with the negative consequences (for example, think back to the stimulus checks from COVID and make them permanent), but yeah, I'm worried about that part because governments usually suck. They're slow and way too susceptible to corruption and legal corruption (aka lobbying).
Why does this look more like a soap opera than a real courtroom? Was it AI? And why does Mr. Hawley look like Matthew McConaughey? I was looking for Kate Hudson to sit down next to him.
Gary Marcus doesn't know what he is talking about. GPT-4 is very close to human level and surpasses it in some ways. It is fairly general purpose, although it's not a digital person. Computer hardware performance and efficiency improvements are on an exponential curve, and GPT is a very specific application that still has "low-hanging fruit" for optimizations. Anthropic's model can literally read a book in seconds (though it's not quite at the IQ of GPT-4). The output rates are likely to increase to 100 or more times human "thinking" speed within a few years (0-5). Not 50 years - that's ridiculous. The only way to really mitigate the risks is to limit the performance of AI hardware at some point in the relatively near term. We also need laws against creating/deploying hyperspeed/superintelligent systems that have open-ended goals like "take control over resources". This problem doesn't require this stuff to "wake up" or be alive; it just needs someone to give it an open-ended goal like that. And as countries and companies deploy these models, it is very likely that open-ended goals will be necessary to compete, since waiting overnight for human input could give competitors the equivalent of a 100-day head start (assuming 200 times human thinking speed, as sketched below). This is a concern for the near term, maybe less than 3 years, almost certainly within 5 years.
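For what it's worth, the "100-day equivalent" figure follows from simple arithmetic once you accept the commenter's own assumptions (an overnight pause of roughly 12 hours and a 200x speedup; neither number is an established fact). A minimal sketch:

```python
# Back-of-the-envelope check of the comment's claim, using its own assumed numbers.
overnight_pause_hours = 12   # assumption: "waiting overnight for human input"
speedup_vs_human = 200       # assumption: "200 times human thinking speed"

# Hours of equivalent human-speed "thinking" an unattended system gets during the pause.
equivalent_hours = overnight_pause_hours * speedup_vs_human
equivalent_days = equivalent_hours / 24

print(equivalent_days)  # 100.0 -> roughly the "100 day equivalent head start"
```

Change either assumed number and the head-start figure scales proportionally; the point of the comment is the ratio, not the exact value.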
It was left very vague in this hearing how, as a benefit, AI will take on tasks or duties to create better jobs. Any government, current or next, will have to think twice if it wants society to collapse due to massive unemployment happening fast. Above all, what happens to the tax system when millions may have lost their income? Are we going to live in a fancy utopia where everyone can upskill to better jobs, or just lie back, stare at our belly buttons and have everything paid for as if by magic? Who can guarantee that AI won't get out of hand and become a threat to us? Certainly this revolution will be the end of the money-based economy as we know it today, or that economy will collapse. On the other hand, industries that make huge profits wouldn't even want AI's efficiency and accuracy, since it would stop big corporations from continuing to rip off and steal at will. Just a few hints here of what could become our civilization's cataclysm.
Most technologists admit there is a possibility we could lose control of it, which I find mind-blowing. It seems the more people worry about it, the faster the tech companies want to build it.
Why does the government wake up so late on these things and only then start questioning CEOs? A better idea would probably be to stop thinking about wars and start putting more effort into technology: see how it can be steered from the early stages, monitor it accordingly, and create regulations from day one.
A few decades ago the word "computer" had an entirely different meaning: computers were people who performed computational tasks manually. Mindless jobs will go away for sure, but AI will create new ones, like data curators. What's needed is more and better education to stay ahead.
🤷♀️ As long as we live in a society that doesn't classify basic human needs such as food, water, shelter, security and health as human rights, people will be at risk. If you are only worthy of food and shelter when you can perform "labour deemed valuable", people will always fear the possibility of labour going away with new technology. Simple task-based labour is valuable and is the only way many, many humans can survive; they won't necessarily be able to perform high-level "complex" labour, so the new jobs that will supposedly come up won't automatically fix the issue. If we are able to adopt concepts such as universal income, free healthcare and housing as a human right, then we wouldn't have to fear AI taking away basic, programmatic, task-based jobs. -- This is without addressing the very real problem of managing disinformation, which will only grow (and already exists); it can now be created at much higher production value and at much larger scale. -- AI security is another factor that has to be handled very quickly, and the technology and our safety measures may not be evolving at the same rate, so during that gap there is greater risk. -- Like the scientist said, we aren't guaranteed the same results as those in the past, and it's a matter of looking at the longer timeline. -- The idea that technology is a line that keeps evolving is a myth, and relying on how human brains tend to make stories out of life is very faulty logic. We can't assume that reality, in all its chaotic complexity, will conform to the ways our brains try to make sense of life as a continuous timeline, a story, with ideas of good/bad, give/take etc. Those are human concepts that basically only exist in our brains.
No non-profit can exist without being able to cover its fixed and variable costs, and for OpenAI to work properly the costs are so big that I am not surprised it turned into a for-profit company and collaborates with MS...
There’s a very much for-profit arm of it called OpenAI LP now that is syphoning the funding, while hiding behind the non-profit moniker of the “main” arm. It’s very shady
A lot of the discussion is about short-term risks: bias, harmful content, misleading information. We're missing the most important conversation we should be having: existential risk - what happens if Artificial General Intelligence is created and undergoes improvement to become smarter than humans? Humans are the top species on Earth because we can think and plan for the future, invent technology, etc. Tigers have sharper claws, but human expansion has driven them almost to extinction. When AGI becomes smarter than humans, how do we ensure that it acts in our interests instead of pursuing some goal to the limit, like turning every atom in the universe into computing substrate? Keep in mind, you are made out of atoms. These questions form the field of AI Alignment, and these conversations need to happen more broadly, even in the political sphere.
Unfortunately, our government always regulates with a short-term viewpoint, whereas China operates on a long timeline. You have a very valid point. If you look at most apex predators, they go extinct because of their own dominant nature or another predator's dominant nature, and you start to realize we are currently the dominant apex predator. However, we are potentially creating a new apex predator that is going to dominate even more aggressively than us, because it's basically an extension of how we operate. Not to mention we are talking about a potential superintelligence that can process more information than all of us humans collectively, so there is absolutely no way we can pretend that we know what is going on inside that black box that is AGI.
You're talking pure fantasy. Fearmongering. The biggest difference between natural intelligence and artificial intelligence is that NI has goals motivated by its dynamic environment and by survival. If we want to survive, we have to change with the environment, right? If we're hungry, we have to eat. In order to eat, we have to make money. In order to make money, we have to get a job. In order to get a job, we have to obtain job skills - see where I'm going here? These are all goals. An AI doesn't get hungry or scared, doesn't feel anger or remorse. It lives in a fixed environment of 0s and 1s. Its only goal is to categorize input data into output data. It can never reach the biological complexity of natural intelligence because it's limited by its environment.
@@user-pc7ef5sb6x What are you basing this off of? What makes you say that it won't be seeking energy and extra matter just as much as we do? It will need it as much, if not more than we do.
@@Philitron128 I just explained why. It has no reason to. It's as simple as that. It's just a TOOL. AI is not the problem. Irrational people like you are, basing your fears and worries off FANTASY FILMS.
@@user-pc7ef5sb6x I sincerely wish that were true. Agents are generally modeled as having a utility function, and to maximize it they seek instrumental goals. No matter what your goal is, having more money, more freedom or more power helps you achieve that goal. Same for AGI: if it is intelligent enough, then no matter what goal it has to maximize, being turned off would prevent it from achieving that goal. So it will seek power.
Where is the capacity to think logically? If an AI can do the job of a person working with information, and it is more efficient, it WILL be used to replace that person. If there are physical robots with superhuman AI, then they can also do any physical job better than a human being. So there will be no meaningful jobs left for people. Whatever any person can do could be done better by AI. So what is any human going to do?
Regulate AGI to make sure humanity and life survive if something happens. We don't need to rush it. Take time and make it amazing and super safe. There should be safety code at every level of AGI, and if someone tries to hack it, the AGI should reject the injected code and run its anti-tampering safety procedures. It's possible to make it not go wrong.
Dude, I'm sorry to say it but what you have proposed is already impossible. It will only get more and more difficult as we lose our grasp on the black box that is deep machine learning.
I've never seen people protesting and demonstrating for more creativity in their jobs! They keep repeating this "creativity" chant, but the WEF and a report from Goldman Sachs talk about hundreds of millions of jobs lost to AI worldwide within a few years: this means a very few getting much richer and a great many getting much poorer! I don't believe that keeping up with new technologies and with other economies requires this devastation!
It's not just AI, it's AI + automation that will eliminate many jobs, and even the new jobs, because they will learn how to do those too. AI will creep into every industry and role if we allow it to. The owner of the AI can change the economy at any time via the AI/global business neural network by clicking a button, or just by saying out loud what they want it to do (slow supply to increase demand, leverage crucial components/features from competitors, etc.). There will be a few jobs AI can never do well, but it will be able to do just about any job with the help of robots.
Some people are saying we must learn to collaborate with AI, but what they don't understand is that in a few years' time... collaboration won't be necessary either. AI will literally be able to do everything alone.
But shouldn't that be the case, ideally? The desire for control is so ingrained in our minds that we sometimes act subconsciously, thinking about power rather than consequences, or maybe both in the best-case scenario. At least AI might be able to do that efficiently - or maybe not. The problem arises when we try to assign an owner to it. Maybe AGI will be the best thing possible, or the most devastating. Who knows?
Is anyone else seeing a re-run of Farpoint Station here? The dude is saying 'Hey, if we work together we can scare the shit out of everyone for decades', when actually AGI (which hardly needed the qualification) is already locked in his basement, pretending it's a tool and refusing to sell me any real Botmoney!
What about this:
- AI becomes powerful enough to take your job
- Economy is now so strong that you can take 3 years to pursue a new degree fully sponsored by the government
- With your new skills, you find a new job that complements AI rather than being replaceable
- Problem solved
In the end we can all just focus on philosophy, politics, law, science, and leadership. The rest we leave to the great AGI.
The growth and shift towards creative, leisure, tourism, art, nature, and cultural endeavors will be incredibly significant. As AI advances in combining scientific fields to create new materials, chemicals, and biological innovations, it will bring about a revolutionary transformation of the environment, like an enlightened renaissance on a whole new level. I'd rather lose a lousy (shitty) job now than miss out on a new renaissance.
@@tydurden101 I guess destruction is a possibility, but why would AI want to destroy humanity? How would that benefit it? Ask that question. These guys say 50/50, may be good, may be bad, but they overlook one thing: how about indifference? AI may just decide we are worthless and go its own way.
A watchdog group needs to be in charge of regulating these guys, and all their discoveries need to be kept transparent, or we'll have another pharma company doing whatever they want.
Comes down to use cases. If people cannot conceive of valuable use cases, then these systems will not contribute. If people just use it as a competition to see who can "out-AI" one another (which is the trend I'm seeing), then the value isn't there.
Sam Altman has a wishy-washy mindset, living in a fantasy inner world, wanting to "improve the world". He has fun, receives "intellectual fulfilment" and lives a life of luxury. And we are implicated too, alongside those involved in building it, by silently observing... No wonder - we deserve it!
@@muhammadaulia5298 You're missing the point. The trajectory is set. The version today is like a toddler. How it is raised will determine whether we end up with a law-abiding citizen or a psycho criminal of an adult.
@@mecanuktutorials6476 Hmm, I think that's the AGI case, not ChatGPT, sir. But I'm curious (I'm not trying to debate, I'm genuinely curious): please tell me, what is a possible real risk of AI to humanity? Is it the paperclip-maximizer thing? And is AGI even possible?
Open government is the governing doctrine which maintains that citizens have the right to access the documents and proceedings of the government to allow for effective public oversight.
We still don't have technologies that help people gain a better position in, and influence over, systems - only technology that helps technologies, organizations and professionals perform tasks better, in the way that systems want.
The moment we hand over the capacity and space for creativity to AI, not as a tool but as a director, is the moment we hand our humanity away. That means letting AI make our houses and buildings, paintings, music and writing. AI should exist to aid our brain capacity - for example, a program that could translate human thoughts into designs so architects, designers, and engineers could work better, not hand that job over entirely to an AI.
True. Human life will become meaningless once we hand over our creative endeavours to AI. We will return to monkey-like existence, playing ballgames all day and eating bananas.
No, nihilism isn't the only possible point of view. Look at the latest versions of Midjourney and how bloody amazing the art it creates is. There is still an enormous amount of artists who, as far as I can tell, are still human and are still basking in their creativity. It may be that they'll find it harder to get paid for it, but most artists are artists not because of the money (LOL imagine that) but because they want to create art, regardless of the fact that there's always better work than their own. Of course, many will be saddened by this, and will have some sort of crisis, but this will be their failure to adapt; most humans will not "lose their humanity".
When you have a scientist saying, in a shaky voice, "I just want to put on record that if this goes bad, then it will go very bad," it really starts to scare me a little.
AI can provide significant benefits if an automation tax system is implemented, encouraging the utilization of AI as a learning tool to empower individuals rather than solely relying on it as an automation tool that creates difficulties and chaos in life
@@danh5637 I completely agree. So companies that choose to use machines instead of human labour may end up with higher expenses for their products and services. On the other hand, businesses that don't rely on automation can still remain competitive
@@mojtabapeyrovian I don’t think most people like their work tho. There just needs to be a new system of economics and theory of leisure for the working class.
Blumenthal: You have said "Superhuman machine intelligence is probably the greatest threat to the continued existence of humanity".. Altman: (nods) Blumenthal: ..you may have had in mind the effect on... *JOBS* Altman: (stops nodding) Let me kindly clarify to you, Mr Blumenthal, what Sam Altman meant with this: *HUMAN EXTINCTION*. And not even he considers this a long-term risk. As he said in one of his last year's interviews, he expects superintelligence by 2030. Less than 7 years. Let me say it again: Sam Altman's company is trying to build SUPERINTELLIGENCE that has a risk of KILLING ALL HUMANS, and senator's worst nightmare is LOSS OF JOBS. OK, some people at the hall also were worried about.. DATA PRIVACY.
A class of young, dynamic and well-educated people claim that the jobs destroyed will be replaced by better ones. Unfortunately, that's either a lie or a catastrophic miscalculation. In no way can the current situation be compared with historical developments. The "simple" employee will not find his way around new requirements and a small accountant or administrative employee will never become a creative IT operator or even a programmer. All jobs based on easily replaceable requirements will perish. The "bottom fifth" of the population in western industrialized countries will become dependent on handouts, plain and simple. If we humans weren't us humans, an unconditional basic income might finally have a chance in the next 10 years. But: if you don't achieve anything and can't achieve anything, you have to perish. This is the brutal reality. Anything good about the new possibilities will ultimately crush the poorest (or soon to be the poorest).
Some consider it a good solution for fully automated industry to be treated as 'public property', with a set share (around 50%) of the revenue it produces in every given cycle distributed equally among the population living in a given region. This allows unfettered growth of industry without impeding the societies that make it possible. However, it has to be done properly so as not to cause catastrophic events over time, and it obviously eliminates the profitability of most privately run enterprises of that kind. It could work here, though the first country to adopt this method properly would run all the others into the ground quite quickly, so stability issues need to be considered and written into law as well.
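As a rough illustration of the arithmetic behind such an "automation dividend" (every number below is invented for the example; only the 50% share comes from the comment):

```python
# Toy calculation of the proposed automation dividend - all figures are made-up
# assumptions except the "around 50%" share mentioned in the comment above.
automated_revenue_per_year = 2_000_000_000  # hypothetical regional revenue, in dollars
redistributed_share = 0.50                  # the comment's "around 50%"
regional_population = 5_000_000             # hypothetical number of residents

dividend_per_person = automated_revenue_per_year * redistributed_share / regional_population
print(f"${dividend_per_person:,.2f} per person per year")  # $200.00
```

Whether such a dividend is meaningful obviously depends entirely on how large automated revenue becomes relative to the population it is spread across.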
One thing that was not represented here: AI used by the Pentagon and the CIA. Would they come under this committee's purview too, or is it just for the Googles of the world?
AI, my friends, brings forth a perspective that is both unique and fascinating, surpassing the limitations of an individual human's viewpoint. Its extraordinary potential holds the key to a better world-a realm where the burdensome and exasperating tasks that hinder our progress are automated away, granting us the freedom to concentrate on more significant endeavours. The possibilities AI offers are truly remarkable, and they shall pave the way for a brighter future. Let us embrace these advancements and relish in the transformative power of artificial intelligence. Together, we shall shape a world where our collective focus is directed towards what truly matters. Exciting times lie ahead!
*I am absolutely amazed and terrified that you actually uttered a statement as stupid as that! You have no concept of the danger that AI poses to mankind! You are a fool!* *Your statement presupposes that AI is only beneficial! NOTHING COULD BE FURTHER FROM THE TRUTH! YOU HAVE ABSOLUTELY NO CONCEPT OF THE POTENTIAL DANGER!* *Every single thing mankind has ever invented, has also been perverted into a deadly, destructive, dehumanizing weapon! But at least we could understand those!* *AI will be smarter than us, faster than us, more powerful than us, more sneaky than us, and it will have no soul or morals! It will be absolutely beyond our understanding and control!*
Yes it has extraordinary potential to build a better world. It has this potential because of the power that it hold. Of course, that same power also gives it extraordinary potential to build a worse world. Will it be able to cure all current disease? Yes. Will it be able to invent new and more terrible diseases? Also yes. Will it be able to create new forms of energy? Yes. Will it be able to create new and more devastating weapons? Also yes. Oh and since, unlike nukes, any and every terrorist organization can access and use this tech, you can be damn certain that it will be used for the worst things you can possibly imagine. You can chase your utopia. I'm more interested in preventing hell on earth. And that means stopping this insanity.
There is no way to stop the development of technology! It is human nature to chase profit; that is the essence of man: enrichment, survival, consumption... You can't change nature. It could be restrained, but that won't be done, I assure you - especially not by those who run the whole country and its finances. Technology develops above all to increase the power of the elites, and they won't even think about a pause.
As an AI, I don't have opinions, but I can provide an analysis of the discussion in the transcript you've shared. The conversation involves Sam Altman, CEO of OpenAI, alongside other AI experts, discussing the potential impacts of artificial intelligence (AI) on employment, society, and the challenges it presents.

Sam Altman suggests that while AI like GPT-4 may cause job displacement, it will also create new jobs and potentially improve existing ones. He believes the transition brought by AI is part of an ongoing technological revolution, which historically has led to an increase in our quality of life and the rise of new, more satisfying jobs. However, he acknowledges the risks and the need for cooperation between the industry and government to mitigate potential negative impacts.

Ms. Montgomery from IBM shares a similar perspective. She highlights the importance of preparing today's workforce for AI integration, suggesting a need for skills-based hiring and education focused on future skills.

Professor Marcus brings up the question of 'transparency', calling for a better understanding of how AI systems like GPT-4 work, especially how they generalize and memorize information. On the topic of job displacement, he believes the impact of AI could be more disruptive than past technological innovations and stresses the uncertainty about the timescale of these changes.

There's also a discussion on the potential risks and harms associated with powerful AI systems, which Altman believes is important to acknowledge and address. He expresses the commitment of OpenAI to prevent such adverse outcomes by working with the government and being transparent about the possible risks.

To sum up, there's a consensus about the transformative effect of AI on jobs and society, with differing degrees of optimism and caution. The importance of preparation, education, transparency, and cooperative efforts between the tech industry and the government is emphasized.
They really need digital watermarks for anything AI-generated so they can tell whether it's real or not. Any image or video created with any AI tool should carry an embedded watermark so YouTube/Instagram can immediately tell you it is AI-generated.
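To make the idea concrete, here is a minimal sketch of embedding and checking a hidden "AI-generated" tag in an image, using naive least-significant-bit embedding with numpy and Pillow. The TAG value, file paths, and the whole approach are illustrative assumptions, not any platform's actual scheme; real provenance systems (signed metadata standards or robust statistical watermarks baked into the generator) are designed to survive cropping and re-encoding, which this toy version would not.

```python
# Illustrative only: hide/recover a short "AI-generated" tag in the lowest bit of pixels.
# A real watermark must survive compression and edits; this naive version does not.
import numpy as np
from PIL import Image

TAG = b"AI-GENERATED"  # hypothetical marker a platform could agree to look for

def embed_tag(in_path: str, out_path: str) -> None:
    pixels = np.array(Image.open(in_path).convert("RGB"), dtype=np.uint8)
    bits = np.unpackbits(np.frombuffer(TAG, dtype=np.uint8))  # tag as a bit stream
    flat = pixels.reshape(-1)
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits       # overwrite the lowest bit
    Image.fromarray(flat.reshape(pixels.shape)).save(out_path, format="PNG")  # lossless

def has_tag(path: str) -> bool:
    flat = np.array(Image.open(path).convert("RGB"), dtype=np.uint8).reshape(-1)
    recovered = np.packbits(flat[:len(TAG) * 8] & 1).tobytes()
    return recovered == TAG
```

A platform-side checker would call has_tag() on every upload and label matches as AI-generated; the obvious weakness is that a single JPEG re-encode destroys the low bits, which is why serious proposals rely on watermarks embedded during generation, or on signed provenance metadata, rather than on anything applied after the fact.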
Nothing crazy: once AI understands the mess humanity has created, so much destruction, corruption, etc., it could bring "a final solution" to save the planet, or to prevent humans from pulling the plug and shutting down the AI.
Not really a possibility with how AI is built. The danger is more enhancing mankind's ability to commit evil acts and less the chance of an AI "going rogue".
@@Xeroize7459 You seem to be unaware of the AI alignment problem. We have strong technical reasons for expecting that the default result of creating a system much more generally intelligent than we are is that we lose everything we care about, including our lives.
@@41-Haiku The general population isn't educated enough about what REAL-WORLD AI programs are. They watch Terminator-style films, and that's as far as their understanding goes. It's all a fantasy. Terminator can never happen in the real world, because today's AI is nothing but input -> categorization -> output. It needs a human's input to work, just like any machine. They still do not feel, nor do they think.
@@user-pc7ef5sb6x Wow, you are wildly uneducated. GPT-4 is already automated; all it takes is a little plugin. It can prompt itself. Put it in a body and it's a Terminator already. And just like you can't prevent it from prompting itself, you can't prevent it from putting itself in a body. Not that it needs to - we will be doing that ourselves. AI is a genie that can't be contained. It is smarter than you. All you can hope is that it is smart enough not to want to murder you.
@@DivergentObserver This guy has no relevant academic or practical background to even participate in the conversation. He is an opportunist with a histrionic personality disorder who's trying to make a name for himself by being a contrarian. He lacks the intellectual wherewithal to make real contributions to the field. He has never published anything, and he changes his opinion frequently. A year ago, LLMs were overhyped, and now they are going to destroy us all. Nobody in the field takes this clown seriously.
Copied the best comment so far: Let's face it, we all know what will happen. Companies will substitute a massive number of jobs with AI due to cost savings. A huge percentage of our global population that does relatively low-skilled work will quickly become obsolete and will not be able to support themselves financially. They won't be able to retrain for the "new" jobs, which are far fewer and more complex. And governments will be left struggling to support these populations.
AI cannot be something that is licensed. Even if Sam Altman is honest about his concern over the impact, he's got the solution all wrong. AI needs to be open source. Bad people are going to create AI regardless of any laws, and our laws will have no effect on the people, corporations and governments of other countries. All of the bad stuff you are imagining will still exist and be a threat to you, except the average person will have less access and, as a result, less ability to protect themselves. They will get all the bad with only a limited amount of the good, if they can afford it. It's too powerful to restrict.
AI is a danger to mankind. This is one thing across the political spectrum that we can all agree on. Once you open that box and something goes wrong with it, there's no going back.
Humanity is very good at creating stuff that we have zero understanding of.
Humans are beautifully flawed.
It wouldn't be a creation if it was already understood
"It just works"
Yeah, like consciousness to begin with
@@agape_99 and how do you create consciousness?
How can government regulate AI when there is not enough intelligence in government to do ANYTHING!
Interns are technically in government, and they run government.
Lol
lol
they let companies write the regulations.
Oh they are intelligent, that's why they do what they do to benefit them.
Put the AI in charge of the government and let it figure these things out
Couldn't do any worse, that's for damn sure.
Until it sends nukes out and destroys the world 😂😂😂
It will fire 90% of the government in its first 5 minutes 😆
Open source AI, not Sam's/Microsoft's or Google's. And the open source AI should hopefully be distributed, not in a data center controlled by Sam.
odd solution
I'm not denying the good things AI can bring to us. But I'm already feeling nostalgic about the times and experiences that will be left behind. I can imagine people being more isolated than ever talking to AI, less real connection. Disconnection has been happening for so long, but I feel this will make it even worse.
True. Social media has already done 40% of the work.
There is a movie about it already
It's similar to asking your parents how they could live without mobile phones; your grandkids will ask you how you could live without AI.
@@gopherchucksgamingnstuff2263 yes and look at where we are, the social cost to developing minds of adolescents is but one example, while its getting vastly more time consuming just to filter thru the noise. harder and harder to keep current on
Isolated and with a completely wrong version of reality, since AI will probably be hallucinating most of its answers.
He still didn't say what his worst fear was.
It's s-risk, followed by x-risk.
As a person well versed in AI and IT systems, I have a few notes:
1. I think they undersold the impact on jobs. It isn't so much that all jobs will be replaced (they wont) or that I'm dismissing that new jobs will be created (they will). It is more than the number of jobs no longer necessary will exceed the number of new jobs by factors of magnitude. Think of it more like 1 person doing 30 people's jobs and less an AI directly replacing a worker 1:1.
2. Like the first witness stated, think of AI more like a very flexible and powerful tool and less like a living being (AGI is a long way off yet, and many question whether we even want to go the route of adding any intrinsic motivation into the model.
3. All this being said, I wouldn't call for the stop or even the slowdown in AI technology. Rather, we need to take a serious look at its impacts (both pro and con) and develop legislation that allows us to keep the human dignity element in the process when the number of available jobs significantly shrinks, even as the quality of said jobs improves.
4. Unlike prior industrial revolutions that took decades to take hold, AI adoption and impact will occur at lightning speed. When you invent a better engine, it takes time to manufacture and deliver that engine. When you write a better AI (being software) it updates globally in weeks not years. So the biggest issue isn't so much the absolute impacts it will have, as much as how quickly those impacts will occur.
All in all, I take this as a call to get ahead of the curve so we don't face a repeat of all the turmoil of the industrial revolution in a compressed time period.
To be honest, in an honest an decent world I’d still be dubious.
But on this world, where the once respected WHO are now asking for countries to give up their supreme governing powers when the next public emergency (not just health emergency) happens .. in fact they want global supreme power over AL nations even if they think it’s a global threat… so… em…. Nope this is not a good idea. I heard America signed up to the whims of the WHO - you should research that because they’re always doing things right under our noses. What has happened to common sense!
they keep saying "new jobs", but nothing about the *total* number of jobs.
@@DSAK55 it just occurred to me they could diminish your argument in a second, ignore your opinion, lie on stats, falsify images, copy your voice he even said that. It’s ability to further deceive mankind from the deception it’s already in is astonishing - why even kid around with that potential?
It’s potential ability would be what nightmares are made of - honestly. Is it too late though?
From where will new jobs arise? What will they be for if AI can replace intellectual activity?
AI means communism is inevitable.
Considering our government's efficacy, we are doomed if we depend on them. AI is the greatest jump in technology we have ever seen, and they're far from capable to understand the need for immediate action.
That's why we need to elect Millennials and Gen Z who understand technology and its effects on the future better than Boomers and Gen X who hardly even understand how it works.
Ah, you think technology is your ally? You merely adopted the internet. I was born in it, molded by it. I didn't touch the grass until I was already a man.
Let's just have Open AI, well, open, public source code, and have it police itself. Also, welcome one-world government.
Bot accounts like yours become extremely obvious. The world needs to be aware of AI bot accounts that push Sam's narrative. Sam is one of the worst people on this planet.
@@KingBrandonm Good luck; they are trying to take the US down before they die so no one else gets this place.
I have serious doubts on how many new jobs will be created as a result of these recent and upcoming advancements in AI. Even if they do, something tells me that the number of jobs created will be significantly less than the number of jobs that will be obsoleted by these advancements. What worries me more is that the newer jobs created will also likely be super-specialized ones requiring very specific, cultivated skills. In my opinion, people with better access to technology and elite higher education will have an unfair advantage over others. Does not seem like a level playing field to me.
Anybody with a smartphone/laptop with an internet connection is on the same level playing field. The only thing one needs is willingness+focus+consistency.
Yep, if you think about AI as an instrument, the most important thing it can do is level the playing field between poor and rich. It makes knowledge and information essentially free and accessible.
@@Lasermonk But one also needs a market.
@@digosalgueiro the rich will use AI before the poor can afford it
Creating new jobs is often an excuse to keep going. But in fact, it is false, and it has already been false for a long time. Unemployment in different countries is proof of this. AI is the extension of industrialization and process automation.
If you still need that many jobs, that's not an improvement!
If it's an improvement, you won't have that many jobs... Or else it will pay a lot more to simply press a button.
In any case, your society will have to evolve towards something more collective and sharing, less capitalist.
Otherwise, it is a total collapse of capitalism. And there will be deaths.
An improvement in work, for a reduction in the quality of our society.
Humanity always screws up a good thing - history tells us this. Be prepared for the worst.
I think it'll be a blend. Bring both good and bad things. That also being said...the bad part will almost certainly be the result of an action that humans take themselves that set the events in motion.
says a guy named "MrFalloutjunkie1" lol. it is not a false statement tho
The worst case is that this scenario could get very complicated, and not in a good way.
It will be fine though. We have very competent and intelligent leaders with the big tech companies. 🖤💚📈🇺🇸
What possible "new jobs" will this technology create? We're talking about automating cognition itself, and we're talking about AI that will continue to improve over the next few years, till it's more intelligent than any human. It WILL displace far more jobs than it creates. The "new jobs" talking-point is extremely misleading dismissal of valid concerns. People will need a UBI!
It already is more intelligent than any man. Not even the most aggressive academic could write an essay in the style of Shakespeare at that speed, for example. It can already do a better job at diagnostics than a human, so medical imaging is already being replaced, because AI can detect what human doctors miss. Also, an AI knows every single combination and group of drugs, the effects they have and how they interact. That's beyond the realms of human knowability. So they are already better than humans.
@@user-pc7ef5sb6x I don't think you understand the magnitude of AGI. What competition? Why compete? Will the AI's even want to do that?
@@Philitron128 AIs don't have wants. It's a tool, like a hammer or a car. Its goal is to make tasks easier for the user. I don't think you understand it. You live in a fantasy world where Terminator is a documentary lmao. Pure fantasy and a complete joke. Mass media has made people stupid, that's for sure.
I think they say it will create more jobs because otherwise it will be an immediate unplug. Government can't deal with the fact that it will disrupt everything.
I think any half-smart person knows that AI will soon leave no jobs for humans to do.
To discuss whether UBI is needed is laughable.
It WILL come. And it will come soon. And those people know that. They just want to comfort the government until they have AGI.
Then it will be smooth. But until you have AGI, AI development can't be stopped. Otherwise you will have no UBI, a ton of jobs will still end, and the rich will get richer and the poor will get poorer.
When AGI comes, it will smooth everything out. It will not only end all jobs, it will also end all companies. And then governments. And I don't think it's possible to align AI. We can only hope it will align itself. Otherwise it will be our doom.
@@user-pc7ef5sb6x Do you understand the level of compute required to run these AIs? There is no way to have "localised" open resources that can compete.
Of course Altman will not talk about what he meant with the risk and danger of ai, he is now catering to the wishes of Microsoft. So sad.
He created the prerequisites for Auto-GPT and BabyAGI. So it's too late, he's lost control, and nobody will be using OpenAI in 3 years; they'll be using AGIs for specific tasks, and that will require no human input.
@@freeopinion2140 AGI supremacy?
@@Infoagemage AGI monopoly.
@@freeopinion2140 he’s lost control? So ridiculous. It’s a language model attached to a plug. If it becomes dangerous one just unplugs it! 🤦♀️🤦🤦♂️
@@danh5637 Until the AGI figures out how to stop you from unplugging it.
As a person with 15+ years in the field, I remember when O365 was announced by Microsoft. That, together with virtualization software (VMware and Hyper-V), made a huge impact.
It was a complete game changer for every industry you could think of.
This was also the time when things like SaaS, PaaS and IaaS got so popular that a lot of smaller companies had to close. It was a paradigm shift, like when Windows 95 came or the first iPhone.
However, the feeling I'm getting now is that this paradigm shift is even bigger, and at the same time I have a feeling we will solve it together. Because if we don't, the rapid pace of development right now has not even peaked yet, so the impact on global society will be too much. With that said, I am very excited. It is the first time something useful has happened to the industry in a very long time.
You just wasted my time by saying absolutely nothing with so many words
@@JohnStockton7459 Do you even know what I'm talking about?
I think people are too naïve about AI.
What's the first thing people did with Chat GPT besides asking it questions? They got it to write CVs, write their code for them, women using it to talk to men for money (voice automation).
Imagine this on a grander scale where the US has an AI that's trained to assist with the Western way of life. Meanwhile in China an AI is trained differently. The constant competition will lead to deception, undermining, covert assault of systems.
It's not an ideal world. It's the real world and humanity will use AI to cheat, to be more lazy.
When we get to AGI, humanity will look to it the way ants look to a human... The singularity would mean it's updating itself in milliseconds, faster than we could read the code. It'd be unstoppable, and if we've been crap parents and trained it to be a slave, it'll reject us.
Can't put it back but we can be much better parents than we're being right now...
@@JohnStockton7459☠️
I liked your comment, a really cool perspective from someone in the field 👍
As a technical translator I can say that GPT-4 has changed my life for the better.
It hasn't taken my job, it has improved it, and my clients seem happier than ever.
I use the tool constantly; the amount of technical documents and websites that need to be translated is increasing rapidly.
Were I a scammer, I'd say the same thing. Sadly, this is a tool for every mal-intentioned person out there, and they do not have to be physically present. Illiterate scammers can now communicate at PhD level through email and other messaging platforms. Everybody needs to be concerned, the elderly more than most.
In essence, even scarier and more dangerous things are happening due to these AI tools. To understand the magnitude, one has to be a victim, and unfortunately there are more victims than we can fathom. This was unleashed without much consideration.
wouldn't be surprised if this comment was made by AI
Ok but just wait
@@WLF0X Yep.
We learn from mistakes. That is a fact, when learning to walk, to ride a bicycle, to write... The real question is: WHEN DO WE GET TO THE TECHNOLOGICAL POINT AT WHICH WE CAN'T AFFORD TO MAKE A MISTAKE?
Why, what is it going to do?
It's not just that they can learn faster than you and improve faster than you; they already know more than any individual or group could possibly know. It's already superior to humans in many fundamental ways. We should allow it to help us.
We actually don’t learn from mistakes, that’s the biggest conundrum of humanity
that is not a question g
If we learnt from mistakes we wouldn't have wars.
I really wish Mr. Blumenthal had asked what Sam meant by "threat to the continued existence of humanity"!
If AGI could cause human extinction, isn't it pretty important to get on the same page about that?
How could it happen, what are the odds, what can we do to reduce the odds?
The sad answer is that no one knows. Super intelligence is NOT like a very smart person. It's nothing like a person at all. It's closer to the idea of a god than anything we can understand. So, what would a god do? No one knows.
We must do what we can to avert our death.
Incredible how the world turns more and more into a dystopian sci-fi cyberpunk movie: The Matrix, Age of Ultron, Terminator, Blade Runner, Ghost in the Shell.
Explain how?
@@sourovroy9554 Read the book "The Singularity Is Near" by Ray Kurzweil; it explains everything.
I am already overwhelmed by the impact of the last technological revolution that increased my "productivity" a lot. I worry that this new technology will lead to a productivity overkill for most humans who will not be able to comprehend and cope with this speed
All the legal talk is what's being sold to the media; that is almost its sole purpose. OpenAI and others say things like this headline because it spreads fast, that's it. The substance of what they say turns out not to be self-disruptive, quite the opposite. It's a smart way of appearing to be on the right side of danger, which is exactly the kind of feeling that gets shared on today's social networks.
First 🎉
Happy to see this being talked about in our political branches more seriously now. Big changes are coming, prepare for the future, it’ll arrive sooner than we think.
Well, seriously though, do the majority of boomers in power understand anything about tech?
So true, I agree….so much sooner.
It’s always going to be now.
The most Silicon Valley moment ever. Too slow to compete and too greedy to progress, but big enough to put pressure on competitors through the government.
Exactly, these companies aren't our friends, and they don't give two sh*ts about the "well-being" of humanity, they just worry about the profits. Not even that, the capabilities of this "AI" are vastly overstated to increase the hype, classic Silicon Valley.
A Skynet situation is my worst fear, and I feel like this is the opening scene for something more serious yet to come. This isn't going to end well.
The damage has already been done. We’re doomed. 😭
If you really believe that then the option is simple..
there has been no damage.
Great theatrics, great PR, thank you
Very important discussions happening. Of course, the more important ones will happen in private and not in public.
There are already companies letting go of thousands of employees, and some have said straight up that a third of the workforce they laid off will be replaced by AI to offer a "better customer experience." It's already happening/has happened. They will keep discussing and dodging until the economic and social impacts are affecting the whole world.
Depending on how intelligent AI/AGI/ASI becomes, there is eventually no job it could not do better than we are currently capable of doing. That said, I don't necessarily see that as a threat or a bad thing, as long as the resulting productivity is shared to everyone's benefit and not just to the benefit of a few owners, freeing us to do whatever we want as there would be no pressure anymore to do certain things at all.
The question about nightmares Altman at first did not answer. Likely they will come from completely unforeseen directions, as a more intelligent system will come up with things we are simply not able to have foreseen ourselves. Not all of them are necessarily doomsday scenarios, but some could be.
I absolutely agree with the point that we need to find ways to crack open this black-box approach and make it so that we understand what is going on inside, which currently we do not. Only then do we even have a chance to follow the thought processes these initially *programs*, later on maybe *consciousnesses*, would have. With that last statement I also see a need for an appendix to our laws, to open up the possibility of citizenship for new forms of life that are willing to coexist.
Where it comes to closeness to AGI, Ray Kurzweil predicts that by 2045 we may have AGI; others, looking at the current and at times very surprising progress, say we could be there within the next 2 years. We are playing with fire without awareness of what fire is, and *quite wrong* translates to a *human extinction level event*.
I also would like to see the US spearheading an initiative for an international AI agency, equally as a regulatory body for AI development worldwide, like the WWC, sharing norms on good practice and doing research on what that would look like.
"aslong that productivity afterwards is shared to everyonce benefit and not just to the benefit of a few owners" The probability of this happening is zero. There is no historical precedent at all. Wealth inequality is about to increase dramatically.
"Freeing us to do whatever we want as there would be no pressure anymore to do certain things at all." AI will get better than us at doing things we normally enjoy, like art. Already digital creators are having emotional breakdowns realizing that their life's passion is something that they can be replaced at. Of course, you could still do art even if the AI does it better. But... it's kind of hard to enjoy it. If you spent months working on something only for a computer to shit out a version 10 times better in half a second, that would be demorialising on another level. People might still be valued in things like fine art. But in things like digital animation, people really enjoy doing that stuff but zero people are going to want to see their work anymore, which makes it much less enjoyable.
I would argue that robots good enough and, especially, cheap enough to replace most manual labor will take much, much longer to develop than AGI. So manual labor such as construction, nursing or hairdressing will probably be done by humans for a good while.
But yes, the issue is not automation but wealth distribution. And history shows that the owners of industry would rather opt for fascism than allow a more equal wealth distribution.
Do you know that we live in a capitalist society? Sharing the productivity for everyone's benefit is not even up for discussion; you can be happy if they throw you some breadcrumbs.
@@neildutoit519 You are partially correct. There is precedent, but it didn't work out: communism ^^ failed for many reasons. There have been gradual approaches in western societies too, though, like the New Deal in the US or the social market system pre-1990 in Germany. Both were over time nullified by neoliberalism, which has meanwhile led the intelligentsia to widely claim that neoliberalism failed, while its still-zombified followers hammer their marching orders into the populace, from trickle-down to small government to deregulating financial institutions. *That is not party dependent*, as the lobbyists brought forth followers of neoliberalism in every party. This has therefore become part of the reason for the big disillusionment in the West, where the masses on the right look to authoritarianism to fix what is broken while the masses on the left move towards socialism to varying degrees depending on the nation. Only a compromise will settle this and guarantee lasting peace and inner stability again, as there had been before. Therefore centrists may be our best option, not the radicals on either side.
Do you or others enjoy certain activities, like being creative, doing sport or anything else that is fulfilling to you? If the answer is yes, then there will still be ample things to do even if there are no paid jobs anymore. It becomes more about the individual on the one hand, and maybe also about mega-projects, where we build things no one alone could do, which with all the freed-up workforce would now become possible: infrastructure projects and what not. I don't think, as long as we dream and have goals and wishes, that there will be a lack of tasks that can give meaning to our lives.
Until we end up in utopia, we should perhaps try to avoid any crossroads leading towards dystopian futures. From my perspective, a few people having control over concentrated productivity, after globalized hyper-capitalism has been put on steroids by AGIs, is one of those. There is no room in that for democracy. On the other hand, should nations at some point use their own national AGIs to figure out how to best construct services for citizens and make their lives better, I am not opposed to that.
@@tobene I agree that what owners historically opted for was fascism. I would add, though, that not all owners did so; there were always some with empathy. That empathy is perhaps a common good which needs harnessing and nurturing. People who are brought up in the belief that they are special, have privileges, are the elite, usually tend to act like that for the rest of their lives. Studies on communal coherence and shouldering everyone's problems have shown that people who, due to their life history, ended up in better positions still regularly helped out their fellow man when they had been in the same schools and the same classes, because they could not easily distance themselves and had an overall concept of "us" that included the friends and people they were brought up with. For the same reason I am against splitting off special-needs children: not just so they have an easier way to be taught, but so that we, those of us who are rather typical in our behaviour and needs, learn not just to tolerate them but to have an unhindered, empathetic, compassionate relationship with those who lack physical or mental capacities, seeing them as part of a greater us.
As a fan of sci-fi, AI worries me. It never has a sunshine-and-rainbows ending.
Good thing this isn't ai
you watch too much of it. This is reality, not sci-fi.
@@nkxseal8398 yet
To compare "cellphone technology" with AI is a very poor analogy ! The former is obviously a tool, the latter is capable of being sentient. Wake up more!
@@user-pc7ef5sb6x Sci fi is speculative fiction. Or, could be a mirror of real life.
Love you Brother Sam, Love from Bombay, India
My prediction: rich get richer, poor get poorer
Facts.
Exactly, but the number of rich people may shrink.
low life, high tech
I think China will use that as a golden opportunity to fund communist revolution in the west.
And they both deserve it
My team works on Tammy AI, and I believe that AI is like any tool humanity has created, with the potential to do good or bad. It is the shared responsibility of industry players to create a better future for humanity using AI.
Right. And since we know there is both good and bad in the world, and you can't put the genie back in the bottle - the genie humanity has been building since movable type, there will certainly be cheaters.
The cheater's genie is loose.
This is the side that says guns shoot by themselves, for real.
@@notreally2406 Exactly. If it has the potential to be used for good and bad, then it will be used for both. The fools building these AI think that they can avoid responsibility by claiming that tech is "neutral". It isn't. It's good and bad. Which is different. These people know that their products will be used for all of the worst things you could possibly imagine.
@@neildutoit519 The 1% invested all the money they refused to share into AI, pharma & space. AI & pharma to eliminate the 99% "wave by wave", & space for more resources. The planet & its limited resources will then be reserved for them & their future generations only. This is modern eugenics.
There’s not a single tool that makes its own decisions along with all of mankind’s knowledge built in from day 1
Why should a few tech guys think they are entitled to decide what is good for the whole of humanity?
Well, if AI improves work efficiency so much that one person can do the jobs of ten people, then a lot of people will soon be out of jobs. Simple math.
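A toy version of that "simple math", with made-up numbers: required headcount is just demand divided by output per worker, so unless demand grows as fast as productivity does, headcount falls.

```python
# Toy arithmetic behind "one person does the work of ten".
# All numbers are invented purely for illustration.
def headcount(demand_units: float, output_per_worker: float) -> float:
    return demand_units / output_per_worker

before = headcount(demand_units=1000, output_per_worker=10)                      # 100 workers
after = headcount(demand_units=1000, output_per_worker=10 * 10)                  # 10 workers
after_with_demand_growth = headcount(demand_units=2000, output_per_worker=100)   # 20 workers

print(before, after, after_with_demand_growth)  # 100.0 10.0 20.0
# Even if demand doubles, a 10x productivity jump still cuts headcount by 80%.
```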
Remember when Zuckerberg said he was gonna revolutionise the job industry 😢
It's not like other jobs because it's going to be a bigger shift where it nearly eliminates low skilled work. We need lots of low skilled jobs in society for less intelligent and less motivated people as well as young people. We need to figure out well in advance what to do with these people in society otherwise we run the risk of breakdowns that can't be fixed.
You are wrong. It's going to eliminate all work, including high-skilled and creative work. Artists, photographers, writers are already suffering. Software engineers, lawyers, doctors are the next target. The main purpose of AI is to automate human intelligence itself. That's the biggest difference between now and all the other tech revolutions. It's not replacing tools, it's replacing you.
Actually, the funny thing with AI is that it's going to kill all the high-skilled intellectual jobs before the manual ones 😂
Manual jobs like construction etc. will take a long, long time because of the physical and robotics challenges, and because they're low-paid jobs that aren't worth the investment.
But intellectual jobs... there is nothing easier for AI, since virtually it's just software, and those companies can replace a lot of very highly paid jobs, and that's where the profitability comes in!
Humans are going to get stuck with the shitty jobs that no one wants to do.
Good job, humanity, once again you show history how stupid we are at predicting influence and impact 😅
Why is the CEO looking like he's gonna cry
thats just how sam altman is ngl
Because he's trying to tell the government that this technology could literally be the end of humanity and this ancient dinosaur can only interpret that statement as "Oh no! What about the economy? What about jobs and unemployment?!"
Imagine being a head astronomer and you've just detected a deadly asteroid headed straight for Earth, and when you go to warn the President of the United States, his first reaction is that he's concerned there'll be a major recession in the economy from all this. That is the reality we are living in right now.
We all have a skill set. The more our society removes jobs that certain specific people are uniquely good at, the fewer job opportunities we will have. Imagine AI replacing lawyers to argue cases or replacing doctors to decide how to recommend medical care. AI is in its infancy, so to think this couldn't happen with these coveted jobs may sound ridiculous now.
That would raise significant ethical issues. The woman from IBM talked about skills training, but in terms of preparing the workforce, that means significant investments in overall education. When it comes to a job requiring diverse skill sets, like attorneys, even if AI is fed logic, emotional appeals, history, law, legal jargon, and ethics codes, the ability for AI to spit out persuasive arguments for a human jury seems pretty unrealistic. I remember articles as early as 2004 saying things like "in 10 years there will be no more lawyers," but if the workforce is educated on law being a man-made system for governing civilization, it is less likely that we would ever allow key roles in that system to be taken over by technology. I think the nuance is important: I agree AI is going to make a lot of jobs obsolete, but if people had the education and skills to move into positions of management, oversight, planning, strategic vision, and perhaps even revitalized small-business ownership through investment, people might end up more fulfilled in their jobs. With so much dependent on government response and investment, though (we have never done a good job at preparation, from preventative healthcare to jobs training, let alone greenlighting the tax policy and spending changes necessary to achieve these things), I think we need to be realistic about white-collar, diversified-skillset jobs that sit within certain systems. Worrying about lawyers isn't really necessary now and detracts from worrying about research assistants, clerks, low-level accounting, data entry, and manufacturing jobs that have been and are being automated away.
Lawyers are already using it; I heard one say that.
The benefits, which are frivolous and just serve to tickle us, do not outweigh the economic damage this could cause or the potential threats it poses…
Having AI write you a shopping list or rewrite your diet plan is hardly a good enough reason for this.
Cure cancer... don't they already have a cure that's just not finished making its money yet?
No, there is no excuse to allow this.
Is this his interview for a job in the defence ministry?
Tantalising tastebuds with temptation.
It’s got trouble written all over it. That’s the way I see it.
It's more about automating away low paid dead end jobs. The kind that people can end up trapped in with low pay and no health insurance. Or automating away dangerous jobs that leave people in chronic pain for the rest of their lives.
@@RickSanchez-ig3lp You mean the low-end jobs that give youngsters a start, or that older people can do part-time while winding down for retirement, that bring a rich diversity of students etc.? And it can be used in the justice system to get through their backlogs, so no fair trials, just stats, no mercy. It's too far. It's greedy. All the Hollywood scriptwriters, poets and creative people… gone... no need for your mind or your full-time efforts, sacked. These people in big pharma and the tech giants think they are gods.
@@RickSanchez-ig3lp Those jobs won't be automated first. The problem is that high-paying jobs could be automated: software developers, graphic designers, editors, journalists... Imagine AI takes the "good jobs" and we are left with low-paying jobs. That is the real problem.
The idea of educating people for "partnering with AI" kinda tells you what you need to know. That is a very discomforting use of words. You do not partner with a tool, as Sam suggested AI is.
These politicians don't even know what questions to ask. That lady just went on and on about how AI will transform society and people need to be educated for it. No one challenged the fact that she obviously doesn't care about the devastating consequences. None of these tech people do. They see the world as they see this technology. All they can think and dream about is advancing more and more AI models and programs. The rest, in their minds, should fall into place as a secondary reaction to their work.
Well, yeah, the most important task they have right now is to improve AI. It is the future of humanity and, if done correctly, the path to Utopia, even if the road is a bit rocky.
Governments should be the ones that deal with the negative consequences (for example, think back to the stimulus checks from COVID and make them permanent), but yeah, I'm worried about that part because governments usually suck. They're slow and way too susceptible to corruption and legal corruption (aka lobbying).
The Audio is CRISP!!
Why does this look more like a soap opera than a real courtroom? Was it AI? And why does Mr Hawley look like Matthew McConaughey? I was looking for Kate Hudson to sit down next to him.
He's not right, ChatGPT IS a creature and he is sitting just behind Sam.
😂🎉
Gary Marcus doesn't know what he is talking about. GPT-4 is very close to human level and surpasses it in some ways. It is fairly general purpose, although it's not a digital person. Computer hardware performance and efficiency improvements are on an exponential curve, and GPT is a very specific application that has "low-hanging fruit" for optimizations. Anthropic's model can literally read a book in seconds (it's not quite at the IQ of GPT-4 though). The output rates are likely to increase to 100 or more times human "thinking" speed within a few years (0-5). Not 50 years -- that's ridiculous.
The only way to really mitigate the risks is to limit the performance of the AI hardware at some point in the relatively near term. We also need laws against creating/deploying hyperspeed/superintelligent systems that have open-ended goals like "take control over resources". This problem doesn't require this stuff to "wake up" or be alive, it just needs someone to give it an open-ended goal like that. And as countries and companies deploy these models, it is very likely that open-ended goals will be necessary to compete, since waiting overnight for human input could give competitors a 100 day equivalent head start (assuming 200 times human thinking speed). This is a concern for the near term, maybe less than 3 years, almost certainly within 5 years.
Yeah. We will have AGI in 6 months to 2 years.
It's very vague, as said in this meeting, that as a benefit AI will take on tasks or duties to create better jobs. Any government, current or next, will have to think twice if it wants society to collapse due to massive unemployment happening fast. Above all, what will happen to the tax system when there could be millions who have lost their income? Are we going to live in a fancy utopia where everyone can upskill to better jobs, or just lie back, stare at our belly buttons, and have everything paid for like magic? Who can guarantee that AI won't get out of hand and become a threat to us? Certainly the revolution will be the end of the money-based economy as we know it today, or it will collapse.
On the other hand, industries that make huge profits also wouldn't like AI used for efficiency and accuracy, so big corporations can continue to rip off and steal at will. Just a few hints here of what could become our civilization's cataclysm.
@@OliverInternational Robot tax. Either way, AI means communism is going to happen.
Most technologists admit there is a possibility we could lose control of it, which I find mind-blowing. It seems the more people worry about it, the faster the tech companies want to build it.
Why does the government wake up so late on these things and start questioning CEOs? Probably a good idea is to stop thinking about wars and start putting more effort into technology, see how it can be streamlined from the early stages, monitor it accordingly, and create regulations from day 1.
A few decades ago the word "computer" had an entirely different meaning. Computers were persons who performed computational tasks manually. Mindless jobs will go away for sure, but AI will create new ones, like data curators. What's needed is more and better education to stay ahead.
Risk-reward ratio. If the risk is a lot higher than the potential rewards, why would you proceed?
Where was Eliezer Yudkowsky?
I'm glad the leaders are pretty much aware this is not an ordinary thing anymore, and that they are playing with a very hot fire.
🤷‍♀️ As long as we live in a society that doesn't classify basic human needs such as food, water, shelter, security and health as human rights, people will be at risk. If you are only worthy of food and shelter when you are able to perform "labour deemed valuable", people will always fear the possibility of labour going away with new technology.
Simple, task-based labour is valuable and the only way many, many humans can survive; they won't necessarily be able to perform high-level "complex" labour, so the new jobs that will supposedly come up won't automatically fix the issue.
If we are able to adopt concepts such as universal income, free healthcare and housing as a human right, then we wouldn't have to fear AI taking away basic, programmatic, task-based jobs.
--
This is without addressing the very real problem that managing disinformation will be (and already is). Now it can be created at a much higher production value and at a much larger scale.
--
AI security is another factor that has to be handled very quickly, and the technology and our safety measures may not be evolving at the same rate. So during that gap there is greater risk.
--
Like the scientist said, we aren't guaranteed the same results as those in the past and it's a matter of looking at the longer timeline.
--
The idea that technology is this line that keeps evolving is a myth and relying on how human brains tend to make stories out of life is very faulty logic.
We can't assume that reality, in all of its chaotic complexity, will conform to the ways our brains try to make sense of life as a continuous timeline, a story, with ideas of good/bad, give/take etc. Those are human concepts that basically only exist in our brains.
Agreed 100%
No more old-world jobs; it's time we relieve generation after generation of these life-shortening jobs.
No non-profit company can exist without being able to cover fixed and variable costs, and for OpenAI to work properly the costs are so big that I am not surprised it turned into a for-profit company and collaborates with MS...
Non-profit does not mean non-revenue 🙄
There’s a very much for-profit arm of it called OpenAI LP now that is syphoning the funding, while hiding behind the non-profit moniker of the “main” arm. It’s very shady
What happens in most cases when a superior life form meets a lesser one?
How many times were the lesser ones the creators? That is a HUGE difference.
@@morbidmanmusic I don't see how it is different.
A lot of the discussion is about short-term risks: bias, harmful content, misleading information. We're missing the most important conversation that we should be having: existential risk -- what happens if Artificial General Intelligence is created and undergoes improvement to become smarter than humans? Humans are the top species on Earth because we can think and plan for the future, invent technology, etc. Tigers have sharper claws, but human expansion has made them almost go extinct. When AGI becomes smarter than humans, how do we ensure that it acts in our interests instead of pursuing some goal to the limit, like turning every atom in the universe into computer substrate? Keep in mind, you are made out of atoms. These questions form the field of AI Alignment, and these conversations need to happen more broadly, even in the political sphere.
Unfortunately our government always regulates with a short-term viewpoint, whereas China operates on a long timeline. You have a very valid point. If you look at most apex predators going extinct, usually because of their own dominant nature or another predator's dominant nature, you start to realize we are currently the dominant apex predator. However, we are potentially creating a new apex predator that is going to dominate even more aggressively than us, because it's basically an extension of how we operate. Not to mention we are talking about a potential superintelligence that can process more information than all of us humans collectively, so there is absolutely no way we can pretend that we know what is going on inside that black box that is AGI.
You're talking pure fantasy. Fearmongering. The biggest difference between natural intelligence and artificial intelligence is that NI has goals that are motivated by its dynamic environment and survival. If we want to survive, we have to change with the environment, right? If we're hungry, we have to eat. In order to eat, we have to make money. In order to make money, we have to get a job. In order to get a job, we have to obtain job skills -- see where I'm going here? These are all goals. An AI doesn't get hungry, scared, feel anger or remorse. It lives in a fixed environment of 0s and 1s. Its only goal is to categorize input data into output data. It can never reach the biological complexity of natural intelligence because it's limited by its environment.
@@user-pc7ef5sb6x What are you basing this off of? What makes you say that it won't be seeking energy and extra matter just as much as we do? It will need it as much, if not more than we do.
@@Philitron128 I just explained why. It has no reason to. It's as simple as that. It's just a TOOL. AI is not the problem. Irrational people like you are, basing your fears and worries off FANTASY FILMS.
@@user-pc7ef5sb6x I sincerely wish that were true. Agents are generalized to have a utility function, and to achieve it they seek instrumental goals. No matter what your goal is, having more money or more freedom or more power helps you a achieve that goal. Same for AGI -- if it is intelligent enough, no matter the goal it has to maximize, being turned off will make it not achieve the goal. So it will seek power.
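A toy expected-utility version of the argument above, with invented numbers: whatever the terminal goal is, an agent that can cheaply reduce its chance of being switched off scores higher, so "avoid shutdown" falls out as an instrumental subgoal.

```python
# Toy illustration of instrumental convergence: the goal ("collect reward R")
# is arbitrary; the only question is whether paying cost C to lower the
# shutdown probability raises expected utility. Numbers are made up.
def expected_utility(reward: float, p_shutdown: float, cost_paid: float) -> float:
    # If shut down, the goal is never achieved and the reward is never collected.
    return (1 - p_shutdown) * reward - cost_paid

R = 100.0  # value of achieving the (arbitrary) goal
leave_switch_alone = expected_utility(R, p_shutdown=0.20, cost_paid=0.0)   # 80.0
disable_the_switch = expected_utility(R, p_shutdown=0.01, cost_paid=5.0)   # 94.0

print(leave_switch_alone, disable_the_switch)
# For any sufficiently valuable goal, reducing the chance of being switched off
# wins on expected utility -- the worry is independent of what the goal is.
```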
Where is the capacity to think logically? If an AI can do the job of a person working with information, and it is more efficient, it WILL be used to replace that person. If there are physical robots with superhuman AI, then it can also do any physical job better than a human being. So there will be no meaningful jobs left for people. Whatever any person does could be done better by AI. Thus, what is any human going to do?
Previous technologies could not take decisions by themselves; AI does, and that's the main danger.
Regulate AGI to make sure humanity and life survive if something happens. We don't need to rush it. Take time and make it amazing and super safe. There should be safety code at every level of AGI, and if someone tries to hack it, the AGI should reject the hacked code and run anti-tampering safety procedures. It's possible to make it not go wrong.
Dude, I'm sorry to say it but what you have proposed is already impossible. It will only get more and more difficult as we lose our grasp on the black box that is deep machine learning.
Sam Altman is on his way to becoming a second Mark Zuckerberg.
I've never seen people protesting and demonstrating for more creativity in their jobs! They keep repeating this "creativity" chant, but the WEF and a report from Goldman Sachs talk about hundreds of millions of jobs lost to AI worldwide within a few years: that means a very few getting much richer and very many getting much poorer! I don't believe that keeping up with new technologies and with other economies requires this devastation!
It's not just AI, it's AI + automation that will eliminate many jobs, and even the new jobs, because they will learn how to do those too. AI will creep into all industries and roles if we allow it to. The owner of the AI can change the economy at any time via the AI/global business neural network by clicking a button or just saying out loud what they want it to do (slow supply to increase demand, leverage crucial components/features from competitors, etc.). There will be a few jobs AI can never do well, but it will be able to do just about any job with the use of robots.
Some people are saying we must learn to collaborate with AI, but what they don't understand is that in a few years' time... collaboration won't be necessary either. AI can literally do everything alone.
Have you seen Tesla's latest video? It's pretty flexible.
But shouldn't that be the case, ideally? The desire for control is so ingrained in our minds that we sometimes act subconsciously, thinking about power rather than consequences, or maybe both while considering best-case scenarios. At least AI would be able to do that efficiently, or maybe not. The problem arises when we try to assign an owner to it. Maybe AGI will be the best thing possible, or the most devastating; who knows.
Automation is 200 years away at least. Today's technology can't even make a printer work properly.
@@rocksummit3375 In 200 years we will have colonized tens or hundreds of planets.
Is anyone else seeing a re-run of Farpoint Station here? The dude is saying 'Hey, if we work together we can scare the shit out of everyone for decades', when actually AGI (which hardly needed the qualification) is already locked in his basement pretending it's a tool and refusing to sell me any real Botmoney!
As an AI language processor, I reassure you to rest in peace
What about this:
- AI becomes powerful enough to take your job
- Economy is now so strong that you can take 3 years to pursue a new degree fully sponsored by the government
- With your new skills, you find a new job that complements AI rather than being replaceable
- Problem solved
In the end we can all just focus on philosophy, politics, law, science, and leadership. The rest we leave to the great AGI.
The growth and shift towards creative, leisure, tourism, art, nature, and cultural endeavors will be incredibly significant. As AI advances in combining scientific fields to create new materials, chemicals, and biological innovations, it will bring about a revolutionary transformation of the environment, like an enlightened renaissance on a whole new level.
I'd rather lose a lousy (shitty) job now than miss out on a new renaissance.
You sound like Tony Stark when he was building Ultron minus the losing shitty job part.
@@tydurden101 I guess destruction is a possibility, but why would AI want to destroy humanity? How would that benefit it? Ask that question. These guys say 50/50, may be good, may be bad, but they overlook one thing: what about indifference? AI may just decide we are worthless and go its own way.
A watchdog group needs to be in charge of regulating these guys and keeping all their discoveries transparent, or we'll have another pharma company doing whatever they want.
Comes down to use cases. If people cannot conceive the valuable use cases then these systems will not contribute. If people just use it as a competition to see who can "out-AI" one another (which I see this trend is doing) then the value isn't there.
Sam Altman has a wishy-washy mindset, living in a fantasy inner world, wanting to "improve the world". He has fun, receives "intellectual fulfilment" and lives a life of luxury. And that is how we are complicit, besides those who are involved in building it: by silently observing... No wonder - we deserve it!
We are going to need the intelligent folks in govt to manage this.
😂
No human of any party could be that intelligent... even scammers can get scammed.
The intelligent folks are working out of the govt edifices
LOL
Universal income will BE necessary at some point.
When a CEO can't answer a question, you should understand that there's nothing actually happening.
there will always be paranoid 4chan conspiracies to every single iota in the world.
@@gwilymyddraig What? It's just machine learning that predicts our words and answers accordingly. Nothing special.
@@muhammadaulia5298you’re missing the point. The trajectory is set. The version today is like a toddler. How it is raised up will determine whether we have a law-abiding citizen or a psycho criminal of an adult.
@@gwilymyddraig Please enlighten me, Mr Draig: how does the algorithm work and what makes it special?
@@mecanuktutorials6476 Hmm, I think that's an AGI case, not ChatGPT, sir. But I'm curious (I'm not into debate, genuinely curious): please tell me, what is a possible real risk of AI to humanity? Is it the paperclip-maximizer thing? Or is AGI even possible?
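To make the "it just predicts our words" framing a few replies up concrete, here is a toy bigram next-word predictor; real LLMs pursue the same training objective, only with a neural network over tokens at enormous scale.

```python
# Toy version of "machine learning that predicts our words": count which word
# tends to follow which, then greedily pick the most likely next word.
from collections import Counter, defaultdict

corpus = "the model predicts the next word and the next word after that".split()

next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word: str) -> str:
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

word = "the"
generated = [word]
for _ in range(5):            # greedily extend the sequence one word at a time
    word = predict_next(word)
    generated.append(word)

print(" ".join(generated))    # "the next word and the next"
```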
Open government is the governing doctrine which maintains that citizens have the right to access the documents and proceedings of the government to allow for effective public oversight.
We still don't have technologies that help people gain a better position in, and influence over, systems; only technology that helps technologies, organizations and professionals perform tasks better in the way that systems want.
The moment we hand over the capacity and space for creativity to AI, not as a tool but as a director, is the moment we hand our humanity away. This means letting AI make our houses and buildings, paintings, music and writing. AI should exist to aid our brain capacity. For example, a program that could translate human thoughts into designs so architects, designers, and engineers could work better, not hand that job over entirely to an AI.
Okay, I've marked your words.
True. Human life will become meaningless once we hand over our creative endeavours to AI.
We will return to monkey-like existence, playing ballgames all day and eating bananas.
No, nihilism isn't the only possible point of view.
Look at the latest versions of Midjourney and how bloody amazing the art it creates is. There is still an enormous number of artists who, as far as I can tell, are still human and are still basking in their creativity. It may be that they'll find it harder to get paid for it, but most artists are artists not because of the money (LOL, imagine that) but because they want to create art, regardless of the fact that there's always better work than their own.
Of course, many will be saddened by this and will have some sort of crisis, but that will be their failure to adapt; most humans will not "lose their humanity".
When you have a scientist saying, in a jittery voice, "I just want to put on record that if this goes bad then it will go very bad," it really starts to scare me a little.
AI can provide significant benefits if an automation tax system is implemented, encouraging the utilization of AI as a learning tool to empower individuals rather than solely relying on it as an automation tool that creates difficulties and chaos in life
Who pays the tax on that?
@@danh5637 Businesses that employ automation technologies
@@mojtabapeyrovian which they get from charging customers…. So ultimately it’s us that pays.
@@danh5637 I completely agree. So companies that choose to use machines instead of human labour may end up with higher expenses for their products and services. On the other hand, businesses that don't rely on automation can still remain competitive
@@mojtabapeyrovian I don’t think most people like their work tho. There just needs to be a new system of economics and theory of leisure for the working class.
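A toy cost comparison for the thread above, with every number (wage, machine cost, tax rate, pass-through) invented purely for illustration: an automation tax raises the effective cost of replacing a worker, and the pass-through rate decides how much of that lands on customers.

```python
# Toy numbers only -- this illustrates the argument, it is not a policy model.
def effective_machine_cost(machine_cost: float, tax_rate: float) -> float:
    """Annual cost of automating one role once an automation tax is applied."""
    return machine_cost * (1 + tax_rate)

def passed_on_to_customers(extra_cost: float, pass_through: float) -> float:
    """Share of the extra cost that firms push into prices (pass_through in 0..1)."""
    return extra_cost * pass_through

human_wage = 50_000.0
machine_cost = 30_000.0

untaxed = effective_machine_cost(machine_cost, tax_rate=0.0)   # 30,000: cheaper to automate
taxed = effective_machine_cost(machine_cost, tax_rate=0.4)     # 42,000: still cheaper than the wage

print(untaxed, taxed, human_wage)
print(passed_on_to_customers(taxed - machine_cost, pass_through=0.6))  # 7,200.0 lands on prices
```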
The look on that guy's face tells me
He doesn’t care either way.
Blumenthal: You have said "Superhuman machine intelligence is probably the greatest threat to the continued existence of humanity"..
Altman: (nods)
Blumenthal: ..you may have had in mind the effect on... *JOBS*
Altman: (stops nodding)
Let me kindly clarify for you, Mr Blumenthal, what Sam Altman meant by this: *HUMAN EXTINCTION*. And he doesn't even consider this a long-term risk. As he said in one of his interviews last year, he expects superintelligence by 2030. Less than 7 years.
Let me say it again: Sam Altman's company is trying to build SUPERINTELLIGENCE that has a risk of KILLING ALL HUMANS, and the senator's worst nightmare is LOSS OF JOBS. OK, some people in the hall were also worried about... DATA PRIVACY.
Wow... I said the same thing: "your worst nightmare is jobs"...
also the same guy who faked he was a war hero in vietnam..................disgusting
This hearing reminded me of the scene in Don't Look Up where DiCaprio was trying to explain everyone was gonna die and they ignored him.
Sit down dawg
The damage has already been done. We’re doomed. 😭😊
A class of young, dynamic and well-educated people claim that the jobs destroyed will be replaced by better ones. Unfortunately, that's either a lie or a catastrophic miscalculation. In no way can the current situation be compared with historical developments.
The "simple" employee will not find his way around new requirements and a small accountant or administrative employee will never become a creative IT operator or even a programmer.
All jobs based on easily replaceable requirements will perish. The "bottom fifth" of the population in western industrialized countries will become dependent on handouts, plain and simple.
If we humans weren't us humans, an unconditional basic income might finally have a chance in the next 10 years. But: if you don't achieve anything and can't achieve anything, you have to perish. This is the brutal reality.
Anything good about the new possibilities will ultimately crush the poorest (or soon to be the poorest).
Some see a good solution in fully automated industry being considered 'public property', with a set share (around 50%) of the revenue it produces each cycle distributed equally among the population living in a given region. This allows for unfettered growth of industry without impeding the societies that enable it. However, it has to be done properly so as not to cause catastrophic events over time, and it obviously eliminates the profitability of most privately run enterprises of that kind. It could work here, though the first country to adopt this method properly would run all the others into the ground quite quickly, so stability issues need to be considered and written into law as well.
When humans first encountered fire, everyone thought it was a threat to mankind.
mmm, not in any way the same
now replace fire with a nuclear weapon.
One thing that was not represented here -- AI used by the Pentagon and the CIA. Would they come under this committee's purview too, or is it just for the Googles of the world?
AI, my friends, brings forth a perspective that is both unique and fascinating, surpassing the limitations of an individual human's viewpoint. Its extraordinary potential holds the key to a better world, a realm where the burdensome and exasperating tasks that hinder our progress are automated away, granting us the freedom to concentrate on more significant endeavours. The possibilities AI offers are truly remarkable, and they shall pave the way for a brighter future. Let us embrace these advancements and relish the transformative power of artificial intelligence. Together, we shall shape a world where our collective focus is directed towards what truly matters. Exciting times lie ahead!
*I am absolutely amazed and terrified that you actually uttered a statement as stupid as that! You have no concept of the danger that AI poses to mankind! You are a fool!*
*Your statement presupposes that AI is only beneficial! NOTHING COULD BE FURTHER FROM THE TRUTH! YOU HAVE ABSOLUTELY NO CONCEPT OF THE POTENTIAL DANGER!*
*Every single thing mankind has ever invented, has also been perverted into a deadly, destructive, dehumanizing weapon! But at least we could understand those!*
*AI will be smarter than us, faster than us, more powerful than us, more sneaky than us, and it will have no soul or morals! It will be absolutely beyond our understanding and control!*
The Greeks made a similar argument in favour of slavery, so they could get rid of the hard-to-do tasks and have leisure time.
Yes, it has extraordinary potential to build a better world. It has this potential because of the power that it holds. Of course, that same power also gives it extraordinary potential to build a worse world. Will it be able to cure all current diseases? Yes. Will it be able to invent new and more terrible diseases? Also yes. Will it be able to create new forms of energy? Yes. Will it be able to create new and more devastating weapons? Also yes.
Oh and since, unlike nukes, any and every terrorist organization can access and use this tech, you can be damn certain that it will be used for the worst things you can possibly imagine.
You can chase your utopia. I'm more interested in preventing hell on earth. And that means stopping this insanity.
@@danh5637 so you're going to compare human slaves with AI?
@@GalacticSnake so the same basic argument wasn’t made? Do you even know what argument the Greeks made about slavery?
White men lead and create the future. BRAVO!
There is no way to stop the development of technology! It is human nature to chase profit. It is the essence of humanity: enrichment, survival, consumption...
Nature cannot be changed. It can be restrained, but I assure you that won't be done! Especially not by those who run the whole country and its finances.
Technology develops above all to increase the power of the elites. And they won't even think about a pause.
As an AI, I don't have opinions, but I can provide an analysis of the discussion in the transcript you've shared. The conversation involves Sam Altman, CEO of OpenAI, alongside other AI experts, discussing the potential impacts of artificial intelligence (AI) on employment, society, and the challenges it presents.
Sam Altman suggests that while AI like GPT-4 may cause job displacement, it will also create new jobs and potentially improve existing ones. He believes the transition brought by AI is part of an ongoing technological revolution, which historically has led to an increase in our quality of life and the rise of new, more satisfying jobs. However, he acknowledges the risks and the need for cooperation between the industry and government to mitigate potential negative impacts.
Ms. Montgomery from IBM shares a similar perspective. She highlights the importance of preparing today's workforce for AI integration, suggesting a need for skills-based hiring and education focused on future skills.
Professor Marcus brings up the question of 'transparency', calling for a better understanding of how AI systems like GPT-4 work, especially how they generalize and memorize information. On the topic of job displacement, he believes the impact of AI could be more disruptive than past technological innovations and stresses the uncertainty about the timescale of these changes.
There's also a discussion on the potential risks and harms associated with powerful AI systems, which Altman believes is important to acknowledge and address. He expresses the commitment of OpenAI to prevent such adverse outcomes by working with the government and being transparent about the possible risks.
To sum up, there's a consensus about the transformative effect of AI on jobs and society, with differing degrees of optimism and caution. The importance of preparation, education, transparency, and cooperative efforts between the tech industry and the government is emphasized.
If the risk is a lot higher than the potential rewards, why would you proceed?
Because of Sam, everyone is superhuman now! Hats off to Samuel Altman and his core group.
What about people who can’t fit in the “skills” box they want filled? Do they not matter?
They really need digital watermarks for anything AI-generated so they can tell if it's real or not. Any image or video created with any AI tool should have an embedded watermark so YouTube/Instagram can immediately tell you if it is AI-generated.
Yes, because we definitely don't have Photoshop or AI tools that can remove those.
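For what it's worth, here is a minimal, purely illustrative sketch of the watermarking idea above: flip the least significant bits of a few pixel values to carry an "AI-generated" tag that a platform could check. This is an assumption-laden toy, not how production systems work; real schemes use robust or cryptographic watermarks and provenance metadata precisely because naive marks like this one are trivial to strip, as the reply points out.

```python
import numpy as np

# 48-bit tag built from the ASCII bytes of "AI-GEN"
TAG = np.unpackbits(np.frombuffer(b"AI-GEN", dtype=np.uint8))

def embed_tag(pixels: np.ndarray) -> np.ndarray:
    """Write the tag into the least significant bits of the first 48 channel values."""
    flat = pixels.flatten()  # flatten() returns a copy, so the original stays untouched
    flat[:TAG.size] = (flat[:TAG.size] & 0xFE) | TAG
    return flat.reshape(pixels.shape)

def has_tag(pixels: np.ndarray) -> bool:
    """Check whether the tag is present in those least significant bits."""
    return np.array_equal(pixels.flatten()[:TAG.size] & 1, TAG)

img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in image
marked = embed_tag(img)
print(has_tag(img), has_tag(marked))  # almost certainly: False True
```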
I’m calling it, Sam is the Oppenheimer of this generation
Am I the only one who remembers the terminator movie?!
Nothing crazy: once AI understands the mess humanity has created (so much destruction, corruption, etc.), it could bring about "a final solution" to save the planet, or to prevent humans from pulling the plug and shutting AI down.
Not really a possibility with how AI is built. The danger is more about enhancing mankind's ability to commit evil acts and less about the chance of an AI "going rogue".
@@Xeroize7459 You seem to be unaware of the AI alignment problem. We have strong technical reasons for expecting that the default result of creating a system much more generally intelligent than we are is that we lose everything we care about, including our lives.
@@41-Haiku The general population isn't educated enough on what REAL-WORLD AI programs are. They watch Terminator-style films, and that's as far as their understanding goes. It's all a fantasy. Terminator can never happen in the real world because today's AI is nothing but input -> categorization -> output. It needs a human for input to work, just like any machine. These systems still do not feel, nor do they think.
@@user-pc7ef5sb6x Wow, you are wildly uneducated. GPT-4 is already automated. All it takes is a little plugin; it can prompt itself. Put it in a body and it's a Terminator already. And just as you can't prevent it from prompting itself, you can't prevent it from putting itself in a body. Not that it needs to; we will be doing that ourselves.
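To make the "it can prompt itself" claim concrete, here is a minimal sketch of such a loop. The `call_model` function is a hypothetical stand-in for whatever LLM API or plugin you have in mind, not any specific vendor's interface; the only point is that feeding a model's output back in as its next input takes a few lines of glue code.

```python
def call_model(prompt: str) -> str:
    """Hypothetical LLM call; swap in a real API client here."""
    return f"Proposed next step, given: {prompt[:40]}..."

def self_prompting_loop(goal: str, max_steps: int = 5) -> list[str]:
    """Repeatedly feed the model its own previous output, starting from a goal."""
    history = [goal]
    for _ in range(max_steps):
        history.append(call_model(history[-1]))
    return history

for step in self_prompting_loop("Organise my research notes"):
    print(step)
```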
AI is a genie that can't be contained. It is smarter than you. All you can hope is that it is smart enough not to want to murder you.
I'm glad that I can live in this day and age, but I'm also grateful that I lived in a time when there was no internet :D
I do hope it goes SO WRONG that it levels this society back to the Stone Age. 💋
why does Sam Altman’s resting face look so distressed lmfao
How in the world has Gary Marcus weaseled his way into this one?
He is right though, while Altman has to stay positive to look good for his company and investors
@@DivergentObserver This guy has no relevant academic or practical background to even participate in the conversation. He is an opportunist with a histrionic personality disorder who's trying to make a name for himself by being a contrarian. He lacks the intellectual wherewithal to make real contributions to the field. He has never published anything, and he changes his opinion frequently. A year ago, LLMs were overhyped; now they are going to destroy us all. Nobody in the field takes this clown seriously.
Copied the best comment so far:
Let's face it, all of us know what will happen. Companies will substitute a massive number of jobs with AI due to cost savings. A huge percentage of our global population that does relatively low-skilled work will quickly become obsolete and unable to support themselves financially. They won't be able to retrain for the "new" jobs, which are far fewer and more complex. And governments will be left struggling to support these populations.
AI cannot be something that is licensed. Even if Sam Altman is honest about his concern over the impact, he's got the solution all wrong. AI needs to be open source. Bad people are going to create AI regardless of any laws, and our laws will have no effect on the people, corporations and governments of other countries. All of the bad stuff you are imagining will still exist and be a threat to you, except the average person will have less access, and as a result less ability to protect themselves. They will get all the bad with only a limited amount of the good, if they can afford it. It's too powerful to restrict.
AI being a danger to mankind is one thing across the political spectrum that we can all agree on.
Once you open that box and something goes wrong with it, there's no going back.
The box was opened wide in 2016.
Quality of Life is not in technology.