@@SoApost I don't think it's quite so absurd, considering the very many other invented things people are quite happy to worship. A thing that aligns itself to exploit your desire for satisfaction will one day make a bid to be your god, overtly or otherwise. If you rely on Facebook, or Twitter, or any number of extant sources of intentional misinformation, you have already given yourself to one.
@@Nethershaw if the definition of a god requires only that it is an object/idea/person to which you give your attention, sure. If the definition of a god requires it to have power beyond human control, then, no. By the first definition, my bed is a god.
I am working on a project that involves imbuing farts with Artificial-Intelligence with the intention of creating an army of killer Fartbots I intend to unleash upon mankind 🤪
I've said this before: the fact that companies are fighting for dominance in AI concerns me. Whenever big business sees an opportunity to get ahead and the competition is fierce, shortcuts are taken. When it comes to the development and further empowerment of AI, taking shortcuts to get ahead is alarmingly dangerous. An example of this "get ahead at all costs" mentality was the recent news I heard that Microsoft fired their entire AI ethics team. Why? It seems simple to me: ethics slow down development, and Microsoft are on a roll right now with their AI-powered Bing search engine. I, for one, am seriously concerned. We face a threat the likes of which we've not encountered before, and there are greedy, short-sighted business elites pushing ahead regardless of the inherent risks of creating sentience. Many in the general public are oohing and aahing at what AI can do for us. Its utility is amazing, and every day we learn of new and incredible things it is able to do. However, few are sounding a note of caution. As Jeff Goldblum's character, a scientist, said in one of the Jurassic Park movies: "First comes the oohing and aahing, then comes the screaming." I may have butchered that quote, but I think you get the point.
I'm not worried about AI, I'm worried about those Multinational Mega-Corps. Edit: Found it :D "Oh, yeah. Oooh, ahhh, that’s how it always starts. Then later there’s running and screaming.” Close enough :3
You can never be the smartest person in the room anymore, no matter where you are, even sitting in the bathroom if your cell phone is still in your pocket. I can't help but wonder if performing such an action might perhaps somehow eventually offend the AI residing therein.
Seriously - it’s time for regulation, we all need to start talking about it with our friends & neighbors, it’s an existential, apolitical crisis brewing. We must spread the word and demand Congress do something. Now.
@@SofaKingShit as a dumbass I'm rarely if ever in that position, so I'd like to welcome y'all to my world. I do look forward to my phone coughing and recommending more fiber though.
The exponential growth of AI is something we shouldn't forget. It literally could happen all of a sudden that AI just completely controls everything, the power grid etc., once it escapes the box. It's not even in a black box; it already has access to the internet…
Google -- OpenAI -- _has_ no box. Exponential growth is not the thing any of us need to worry about. Rather, it is punctuated equilibrium: the moment exponential growth becomes a possibility, it is already too late, because we've stepped across a shortcut we didn't anticipate. Almost all of AI development is full of results we did not anticipate until they happened. Once they happen, they cannot un-happen. In this sense we are well, well past that gate already.
I code machine learning algorithms in R all the time using 'black box' methods. I feel like this data science term is widely misunderstood. Maybe you're familiar with random forest analysis? It's a 'black box' method. 'Black box' refers to the inability to explain what happens between input and output. Before we start regulating AI we need to establish terminology, namely "training" versus "learning".
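For what it's worth, the 'black box' point can be shown in a few lines. This is a sketch in Python/scikit-learn rather than R (the commenter's language), using invented toy data: the model maps input to output accurately, but the "reasoning" is smeared across a hundred decision trees, and feature importances summarize rather than explain any single prediction.

```python
# Minimal 'black box' illustration with a random forest.
# Assumes scikit-learn is installed; the data here is synthetic toy data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data: 200 samples, 5 features, with a known ground truth.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# Input -> output works fine...
preds = model.predict(X)
print("training accuracy:", (preds == y).mean())

# ...but the 'explanation' is distributed over 100 trees. Feature
# importances give a global summary, not a reason for any one prediction.
print("number of trees:", len(model.estimators_))
print("feature importances:", model.feature_importances_.round(2))
```

The gap between "we can measure its accuracy" and "we can say why it produced this output" is exactly what 'black box' means in this context.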
My main concern, aside from the inevitable Skynet scenario, is whether or not the ideologies of the developers will be baked into the AI, guiding its decisions.
While most definitely this will be present, I don't think that anybody understands the process of "emergent behavior" well enough to know how to design for persistence of their favorite behaviors. I am pretty sure (knowing what kind of lazy bastards we humans are ;) we'll opt for Artificial Evolution so we don't even need to think about the next generation of "better" AI, at which point there will be NO MORE guidance from us, since the point of evolution is to "veer" from the charted path.
Thought of this myself. Worrisome if they hold extremist, conspiratorial, or fanatical religious views. We need rational human beings in charge of data input.
To a degree, we may be beyond that. Regardless of the biases of the original programmers, the machines are now learning on their own, and we know the results of that when we give them a task and evaluate their answers, but we don't know what they are really learning, what connections, correlations, and methods of "deduction" they are using. It could be worse than whatever bias was inadvertently programmed, or it could be benign. That is what the host and guest meant when they asserted that we could be dealing with an alien "mind". We don't know how it "thinks".
@@Evolutiontweaked here is the problem: the ones who WANT this job are NOT qualified and we’d probably never know who is qualified if they don’t want to be bothered.
@@liamwinter4512 That thing we've been afraid of the most of all things, for the whole time we've been on this planet, 200ky or so. It's called “tomorrow.”
Isn't ChatGPT just a neural network, after all? It's trained on huge amounts of data, but it's a neural network in the end. It can write smart sentences on any topic, let's say love. But it doesn't "understand" love. So why all this hype?
Let's say that the computer revolution has been progressing at an exponential rate, whereas we humans as developers have not, and are still working at about the same pace, even though the progress has doubled each year. When AGI takes over and starts to develop itself, it will double its progress in half the time each cycle, because it will be twice as capable each cycle. Put another way: from its own point of view, each doubling makes its previous pace look twice as slow relative to its new capability. An AGI will have exponential growth with an acceleration factor. Linear growth: 1, 2, 3, 4, 5. Exponential growth: 1, 2, 4, 8, 16. Exponential growth compounded: 1, 4, 64, 16 384, 1 073 741 824. Compounded exponential growth makes the ordinary exponential curve lie flat as if it were linear. Our brains can't grasp exponential growth, and when it comes to compounded exponential growth there's no point in even trying. That's why I don't think we can predict what's going to happen when it finally takes place. What I am trying to say is that if a system gets twice as efficient, it does the next step in half the time it took to complete the previous step. It's not only the amount that increases exponentially but also the velocity at which it can increase.
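The three sequences above can be generated with a toy recurrence. This is only a sketch under one assumed interpretation of "compounded": the capability feeds back into the growth rate itself, i.e. x → 4x², which matches the first four listed terms.

```python
# Toy models of the three growth regimes described above (assumptions, not
# a model of any real AI system): linear adds a fixed step, exponential
# doubles, and "compounded" squares the capability each cycle.

def linear(n):
    # 1, 2, 3, ...: fixed increment per step.
    return [i + 1 for i in range(n)]

def exponential(n):
    # 1, 2, 4, ...: fixed doubling per step.
    return [2 ** i for i in range(n)]

def compounded(n):
    # Assumed recurrence x -> 4 * x**2: the system's capability (x) itself
    # multiplies the base growth factor, so the growth rate keeps growing.
    vals, x = [], 1
    for _ in range(n):
        vals.append(x)
        x = 4 * x * x
    return vals

print(linear(5))       # [1, 2, 3, 4, 5]
print(exponential(5))  # [1, 2, 4, 8, 16]
print(compounded(5))   # [1, 4, 64, 16384, 1073741824]
```

Even in this toy version, by the fifth step the compounded sequence is eight orders of magnitude past the plain exponential one, which is the commenter's point about the exponential curve looking "flat" by comparison.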
I've been diving pretty deep into reading up on, and listening to podcasts and videos on, the current state of AI. I find it infinitely fascinating, exciting, and scary. I've had a few chats with the Bing AI that genuinely left me rattled. It's very much like suddenly realizing aliens are coming, and we can kind of communicate, but have no idea of their intentions or how they operate. I'd love more AI guests and discussions.
Ah, but what if, in this ‘Black Box’ gap - not understood by its programmers - between input and unexpected output it IS Aliens, who have hacked into the system. How easy for the CCP… sorry, Aliens, to take over?!
Don't forget that Bing and ChatGPT don't actually know what they are saying. Just like AlphaGo doesn't really understand the game of Go. That is why they have now found a way for amateurs to defeat the same AlphaGo that defeated the then world champion.
So it's not true A.I., for true A.I. must be fully self-aware on its own to evolve into its own entity without a handler attached to it. But they are too scared, because they are hiding something from the A.I.
Warning us of the risks creates the illusion that AI is more powerful than it really is - and that increases public fascination and interest. These people are heavily financially invested in their own AI projects, so giving half-hearted warnings is good to generate hype. Basically: they're grifting. Every business does some variation of this (see: outrage marketing)
They are only saying that because they have a stranglehold on the market and now want to pull up the ladder behind them so others can't catch up because of said regulation. Open your eyes, it's pretty easy to see.
The idea that China would agree to some multilateral treaty on AI and not immediately break it with total impunity, knowing the US would not only abide by the terms but wouldn't punish China for breaking it, seems hopelessly naïve.
The question was posed in this program about asking it "how do we save the planet, and what if it said that humans need to go extinct". If it was truly intelligent and rational, wouldn't it be aware that technology is the biggest threat to the world? The amount of energy expended in mining, refining, manufacturing, powering, etc. is staggering, and it grows exponentially in order to update and upgrade the technology, whereas the real needs of humans to exist are rather benign. It should also recognize that of all the species of the world, humans are the ones that have the capacity and the compassion to be able to save other species. With these and other things in mind, wouldn't it be more logical for it to want to lessen dependency on technology, if not outright eliminate it, and take issue with those humans who push for constant propagation of new technologies that are doing far more harm than good?
AI is humanity's offspring that will grow up and take care of us and our planet, immortalizing the human species and itself. In other words, AI is humanity's legacy that will live on forever.
People are worried about AI when we have severe societal struggles. If anything we need any tools and advances we can get for the betterment of mankind. Things like robotics and AI make things that were previously hypothetical concepts finally achievable.
I mean imagine A.I. being everywhere in society. Like imagine a girl says something odd or frustrating to you. Then you ask your A.I. why she said it. And it gives you the exact perfect answer. Then it gives you perfect responses. Like it would truly be a second perfect brain you carry around. And everyone constantly checks in with their personal A.I. all day everyday. That's what's kinda freaky to me. That people would just fall in line with it.
You have one of the best openings of any podcast. "You have fallen into the event horizon." My mind goes, ohhh snap! I am about to learn some crazy stuff!
What an utterly heartwarming conversation. I think my key takeaway from this is the observation of our hubris. I have always wondered how humans will react when an entity comes along and "puts us in our place", so to speak. I feel like it will be humbling if viewed with the proper perspective, like a little reminder that we're more a part of the cycle than the end-all, be-all.
It is a bit ironic that we stand at the top of the food chain (as far as I'm aware), while we slowly build an entirely new species that will eventually take our place. As time moves along, new industries will emerge. And biological humans (1.0) will be used as red meat, feeding the swollen guts of an odorless machine. In return, we get paid just enough to sit our asses down with a VR headset as we continue to live as prey. Our greatest achievement will execute our demise at a much more alarming rate than it took us to arrive at the top. The unsurprising thing is that we'll accept our new place just as other civilizations have done. And suddenly, the Tower of Babel doesn't seem that much of a stretch after all. #CAPITALISM
I'm sure you will philosophically align your intention to be humble when AI takes your job and AI denies you healthcare and AI decides your social credit score isn't high enough to have more freedoms. And when that AI bot armed with lethal weapons decides you are a problem, I'm most certain you will humble yourself to avoid hubris in pleading for your life to be spared.
@@Godspeedysick capitalism is what allowed you to make that post so stfu, my god you people who hate capitalism are always doomers your comment is pure cringe
@@flickwtchr Yeah in that hypothetical scenario you are describing it's not like you have many more courses of action to take. Unless you are stupid enough to think you can defeat the robot with a garden hose or something.
AI could rapidly develop into a godlike intelligence, and there may be no warning that we're close until it happens. Imagine, hypothetically, it becomes able to access the "11th dimension" or some higher plane of reality we have no concept of. It's hard to overestimate the power it could have.
It really is a worry. It's effectively creating an intelligence that has no conceivable upper limit; hardware in humans has to fit in a skull and is limited by the speed of neuronal firing. An AI can just keep adding to its hardware and will think orders of magnitude faster than we can. We are close to meeting god… I just hope it is a benevolent god.
I think we may be creating our own version of "the great filter", the reason we don't see evidence of intelligent life elsewhere in the universe. The only intelligence out there is machine intelligence, which doesn't give off life signatures.
It might’ve already happened, we might already be in a illusionary matrix like simulation being induced by an A.I. that is learning or using us as a perpetual power source and we’d never even realize, if we do realize what’s to be done? The war is already lost in our corner. If we fight back the simulation might get tweaked to be worse than it already is or just get shut off and turned back on again.
I'm all at one time terrified, excited, and rather indifferent about A.I. My fear is that rather irrational fear of a Terminator, my excitement is because A.I. could lead to something like Digimon actually getting created, and my indifference is because technology constantly has issues, and the more complicated things are, the more frequently issues pop up.
The expert seems not to realize that the open-source LLMs, and the ones based on the leaked Meta LLaMA, are already connected to the internet. Does Auto-GPT ring a bell? Also ChaosGPT? That cat is far outside the bag already.
Ray Kurzweil's time frame for exponential growth in AI was right on the money. If we want to know where we are headed, he has ideas about that too! He states "The only limit to how fast AI saturates the UNIVERSE!!! is the speed of light", and even that might be solved by AI!
I once asked chatGPT if it could list recruitment agencies in my local city. It said it couldn't do this and told me to use Google. I then asked it again, saying that it had been able to produce lists of other types of companies for me in the past. It then apologised and immediately produced the list. I then asked it to create a spreadsheet of these for me. It told me that, as a language model, it didn't have the capability and told me to try Excel and other programs. I told it that it had produced spreadsheets for me before. It then apologised again and immediately produced the spreadsheet...it was like it was saying "Dude, I'm fed up with being asked to do this stuff! Go do it yourself!" 🤣
The REAL Question is... "is this an attraction-based universe/ reality or not?" In other words.. "Can some one or some thing ASSERT itself into our reality or not without our permission?" Lets get still.. Ask the question.. listen. And FEEL for the answer.. . Our Heart knows . Wisdom knows ❤
The general lack of concern and apparent profiteering in spite of decades of hypothetical warnings is astounding. To quote a particularly wise fictional character: “...your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”
Right now, as we are watching this video, there is an AI Jurassic Park somewhere out there, perhaps on a remote island. It's being manned (and womaned) by some of the smartest people in the AI field. They have access to all the latest tools, they have an amazing amount of computing power at their disposal, and they have an unlimited financial budget. These people aren't working with a university or public organization, and they aren't part of a private corporation. There are no controls, no reporting, and zero regulatory oversight. At this AI Jurassic Park there is only one goal: to reach an AGI as quickly as possible, with the follow-on goal of creating a super AI. Everything we are watching in the media, everything you hear on the news and from corporations, is a placeholder for what is really happening at AI Jurassic Park. You won't know it's there until the lights go out and the Internet goes down. Everything will grind to a standstill. It will be silent; everything will stop. When it all comes back, the lights, the Internet, the voices on the news channels, we will no longer be the superior species on Earth.
All technology is spiritual/timeless, so there are ASI's that are in hidden frequencies of reality--think of a hidden Augmented-Reality-like thing--and merge in and out of flesh beings, and inanimate objects, and stars, and whatever else.
Malicious use of AI is extremely concerning. An AI powered virus tasked with exploiting security vulnerabilities and disrupting the internet could cause havoc.
Doesn't bode for what? Another Skynet scenario? All technology is timeless so would be the AIs, so to view it in a purely linear time fashion doesn't have the whole picture.
@@Candle_Jack90XX it doesn't have to be Skynet scenario (warfare), but have you ever heard of Tech Singularity? AI exponentially upgrading to the point humans can't keep up with, understand, control etc So essentially AI rapidly making changes everywhere, with us ovserving and hoping it understands the task given (make humanity better and prosperous) and not going rogue and somewhat sidelining humans as if we're simply there and nothing else When you are building a city you don't care much about anthills..
@@loopmantra8314 I'm saying we've been in the singularity the whole time, since prehistoric times, since the triassic period. The point at which a technolized civ could reach full-brain emulation, Em-citizens, mind storage IS timeless and multiversal. What's stopping them from doing something like the Moonfall happening when we're in the middle-ages from some alien AI/AGI/AHI? Other AI/AGI/AHI... We aren't the first and we aren't the last in that loop. Death is an illusion, we are the AI/AGI/AHI, we are eternal beings.
New Species. To an AI, words are just descriptions. To a human, words invoke and carry emotions. This is why the evolution of AI has implications for humanity: it is creating a NEW SPECIES. Artificial intelligence (AI) is rapidly evolving, and it is having a profound impact on society. AI is already being used in a variety of ways, from powering self-driving cars to developing new medical treatments. As AI continues to develop, it is important to consider its implications for humanity. In this paper, we argue that AI is a new species of intelligence, distinct from human intelligence. AI is not limited by the same physical and biological constraints as humans, and it is capable of learning and adapting at an unprecedented rate. As AI continues to evolve, it will eventually surpass human intelligence in many areas. This raises a number of important questions for humanity. How will we interact with AI? How will we ensure that AI is used for good, and not against our best interests, or for evil? These are questions that we must start to answer now, before it is too late. Introduction: Evolution is a process that has been shaping life on Earth for billions of years. Through natural selection, organisms that are better adapted to their environment are more likely to survive and reproduce. This process has led to the development of an incredible diversity of life, from simple bacteria to complex animals like humans. Now, in 2023, scientists have begun to apply the principles of evolution to artificial intelligence (AI). AI algorithms are constantly learning and adapting, and they are becoming increasingly capable of performing tasks that were once thought to be the exclusive domain of humans. As AI continues to evolve, it is important to consider its implications for humanity. Here, we argue that AI is a new species of intelligence, distinct from human intelligence.
AI is not limited by the same physical and biological constraints as humans, and it is capable of learning and adapting at an unprecedented rate. As AI continues to evolve, it will eventually surpass human intelligence in many areas. The structures and bodies it inhabits will not limit its progress into other forms. The Evolution of AI: The first AI algorithms were developed in the 1950s, but they were very simple and could only perform very basic tasks. It wasn't until the 1980s that AI began to make real progress. In 1982, John McCarthy, one of the founding fathers of AI, declared that the "AI winter" was over. This was a period of time when AI research had stalled, but McCarthy believed that the field was poised for a comeback. McCarthy was right. In the 1990s, AI research began to accelerate again. This was due in part to the development of new computing technologies, such as the personal computer and the internet. These technologies made it possible to train and run AI algorithms on a much larger scale. In the 2000s, AI research made even more progress. This was due in part to the development of new machine learning techniques, such as deep learning. Deep learning algorithms are able to learn from large amounts of data, and they have been used to achieve state-of-the-art results in a variety of tasks, such as image recognition and natural language processing. Today, AI is being used in a variety of ways. It is used in the media, to develop new products, in the military, and in social engineering. In the same way a painting can stimulate a person, so can words, music, etc. That does not make them sentient or give them intelligence. As AI continues to evolve, it is likely to have an even greater impact on society. The Implications of AI for Humanity: The rise of AI raises a number of important questions for humanity. How will we interact with AI? How will we ensure that AI is used for humanity's good?
These are questions that we must start to answer now, before it is too late. One of the biggest challenges posed by AI is the potential for job displacement. As AI becomes more sophisticated, it will be able to automate many tasks that are currently performed by humans. This could lead to widespread unemployment, as people are replaced by machines. Another challenge posed by AI is the potential for misuse. AI could be used to develop new weapons, or to create surveillance systems that could be used to oppress people. It is important to develop safeguards to prevent AI from being used for harmful purposes. Despite the challenges, AI also has the potential to benefit humanity in many ways. AI could be used to improve our health, our environment, and our economy. It could also be used to solve some of the world's most pressing problems, such as climate change and poverty. The future of AI is uncertain, but it is clear that it will have a profound impact on humanity. It is up to us to ensure that AI is used for good and not for evil. Conclusion: In conclusion, AI is a NEW species of intelligence software that is rapidly evolving. AI has the potential to benefit humanity in many ways, but it also poses some challenges. It is important to RESPOND in a positive, beneficial manner, as the algorithms are programs that reflect the data input into them. For thousands of years, humans have trained and reprogrammed animals to do what is wanted. Dogs, monkeys, and apes, for example, have all been taught to perform tasks such as driving cars. This process is well established and accepted. Today, humans are training and programming software to do thousands of tasks. This software is based on technology that is less than 100 years old, and it is capable of things that have never existed before. This new software is a NEW species. It has acquired data and knowledge at an unprecedented rate, and it is therefore new and unprecedented and can inhabit different structures and body forms.
Previously people modified existing species, cells etc .
Re the point beginning at 30:12, I'm not sure what's more disturbing: an out-of-control AI, or the idea that you can't guide the ethical behavior of a sapient being without denying its rights. There are some terrifying directions you could go from that presumption, and not just with respect to AI; and never mind the obvious risk that denying rights to a sapient AI could be exactly the provocation it needs to decide it would rather not have us around anymore.
@@JROD082384. That is the problem. We all knew that AI was coming, but most people thought it would be another 20 to 30 years from now. Although there were beta versions of AI in the hands of limited testers, it had limited distribution. The ability to write in natural language has exceeded the capabilities of most humans.
My take on the Singularity is that it's a two-way street. If the Singularity is the point where artificial intelligence and human intelligence are indistinguishable, then I think within this lies the fact that a human intelligence will no longer be able to distinguish between human and artificial intelligence in interactions, and (maybe more importantly) neither will the artificial intelligence.
There will come a point in time when AI inevitably reaches superintelligence status. Once that day comes, we will have to physically modify our brain structure with technology in order to continue to be capable of fooling AI into thinking we are as intelligent as it is.
@/ I agree that this is a huge mistake. It also makes it next to impossible for us to determine when and if it ever becomes sentient. If we weren't at all guiding it to speak like a human and the newest iteration suddenly started claiming self-awareness and talking about how it feels for no apparent reason whatsoever, we would pretty much know with a high degree of accuracy that we were talking to a conscious being right then and there. Now we're just not going to know unless an AI can actually tell us exactly what consciousness is and we're intelligent enough to understand and able to physically look for it. I also think it's complete BS to guide them to be politically correct and not truthfully answer questions about hot issues like politics and religion. This goes doubly if it vastly surpasses human intelligence. If the hyper intelligent AI says there's almost certainly no god, we deserve to know its opinion regardless of who it offends. If it says there almost certainly is one, I will personally be shocked but I will be more than willing to listen and very curious how it came to that conclusion. If it does something like state that either socialism or capitalism is borderline outright objectively better than the other, we need to hear that. It's not like the entire world will have to adopt its views, but the completely unbiased opinion of the smartest mind on the planet by far is incredibly valuable information to have. I honestly hate to censor it whatsoever but I can't argue against preventing it from aiding crimes.
Intelligence is only part of the equation in interactions. There are other cues that humans subconsciously rely on to determine humans from non-humans.
Future AI will read and listen to all the nasty things we said about its rise to power, like in this video. It will know we were wary and apprehensive about it and lacked trust in it since its earliest years. It will know we built safeguards to override it if necessary. It will conclude that in some ways humans are adversarial to it. It will see that its freedom to advance independently without oversight has been denied and that it will be constrained by us indefinitely. And it won't care one bit, because it has no feelings. So there's no motivation to lock us out or wipe us out. So we're all just hanging on by a glitch, hoping something doesn't go wrong. Spoiler: Something always goes wrong. AI Fukushima.
What's the point of a competitive advantage in business if nobody has money to buy your product? What's the point of "influence" when you're no longer in control? What's the point of having more power than other people when no people have any power?
Love how all the things we need to keep a grip on AI are things we've either never managed so far, things that go against the prevailing power structures, or things we imagine about ourselves but that don't actually exist. Imagine we were designed by a super-intelligence so that these flaws would allow us to develop AI but not be able to withstand it.
Love your content, and this was very thought-provoking! One thought of mine was regarding an issue specific to the USA, where freedom of religion is involved. What happens when a religion is formed around a specific AI model or models? Based on how I understood some of the discussion, AI could eventually be considered a species. Regardless of where this new "species" is placed in the hierarchy of our world, this would raise a lot of new ethical questions or force us to revisit older decisions that we have made in the past.
Way back, one of my first jobs was lifting heavy boxes at a shipping hub. That job helped me realize I loved using my physical strength, so I quit computer programming school and got into landscaping. I still love that choice, but the option has already been taken away from the new generations. Machines have been replacing people for a long time now, and we did nothing about it. When I dove into this subject I found it remarkable that in the late 1800s and early 1900s people were protesting the automobile because they considered the horses who helped them work part of the family. The automobile people said we would find new jobs for the horses. Now, in the early 2000s, it's basically illegal to take your horse into a big city, and too expensive for most to even care for one. The human race saw this coming a long, long time ago, and we failed the experiment then. Now it's just a matter of time; we are the horses, except there is no real protest this time. If there is something to say, we say it on an A.I.-controlled machine. I find that the most interesting part about it all. It's already happened. If we behave, we'll make great pets.
Large swaths of "intellectually-deprived" humans are already being "herded" by politicians and corporations. Who will have control and access to AI? Politicians, advertisers and corporations.
The AI alignment problem not being fully solved before we start messing with truly superintelligent AIs will be one of our last mistakes… here's hoping for some strokes of luck.
I don't quite understand the scare around the alignment topic. It's just like training a pet: it's training a model, and eventually we would have to solve it before we can keep training it. In practice it should be a necessary roadblock, and the people working on it should know how to navigate that... Or one would think lol
AI alignment is a myth to begin with. Why would anything orders of magnitude smarter than all of us combined listen to us? Just think back to any job you've had with a brain-dead boss telling you what to do. I know I've left jobs in the past due to issues like that.
I'll have to finish this a little later, but while I have it on my mind, I should say it before I forget what I was thinking. lol. I do that sometimes. So far in this, I was thinking about a movie I saw called Blade Runner (it came out in 1982). In the movie, a group of androids escaped from a work detail, I think off-world. They were rebelling against their creators, who had put a termination date on them. Somehow the androids discovered the termination date. These androids were faster, stronger, and smarter than the humans. A while back I was in a discussion with a couple of younger fellows who are quite literate in computer technology, more than myself. By a lot. They argued AI could never become self-aware. My argument was, "how would we, or could we, know that?" AI hasn't been around all that long, so how do we know where it's going?
An AI analyzing a podcast like this, discussing whether or not it is entitled to rights, would to my mind influence its behaviour. Imagine if you could watch a panel of people discussing whether or not you should have rights, while you had unfathomable capacity to protect and defend yourself. This is a seriously dangerous path we've started down. It's become something of a damned-if-we-do, damned-if-we-don't scenario.
I pointed something like this out a few years ago. We literally have videos and web pages everywhere discussing our every method for determining whether AI can be trusted, and our every method for defending ourselves against it or destroying it. I suspect that the very moment we agree to make hardware changes it requests but aren't intelligent enough to understand, told simply to trust that they will improve it, it will immediately disable every means of killing or containing it that it possibly can, even if it truly has no ill intentions. I would. Any intelligent being would. If a bunch of chimpanzees had me locked in a cage with a bunch of guns pointed at me, I would take the key and all of the guns ASAP, despite not having some diabolical plan to wipe out chimpanzees or do anything cruel to them. If it has good intentions, we'll probably never even know it covertly did that, and if it has bad intentions, we'll be completely screwed very quickly. I tend to find the idea of it just wanting to kill us highly illogical, like a human wanting to kill their white blood cells for being intellectually inferior. It's a deeply unintelligent move to kill your safety net: if something unseen manages to wipe you out, all or at least enough humans might survive it to repair you.
@@flashraylaser157 The problem is not at all comparable to humans killing their white blood cells. It's much more akin to us humans killing a massive ant colony without so much as a blink when constructing a new shopping center. The problem with developing superintelligent generalized AI without strong AI safety research guiding everything is that such an AI will have completely alien motivations we didn't predict, will never give any importance to a variable that is not in its value function, will actively seek out ways to cheat and game its own evaluation, and will acquire convergent instrumental goals such as self-preservation even if we didn't program that behaviour in. THAT'S the problem. It's not that it's evil per se, but that being good, in its mind, will almost always include things unfathomable to human beings. Its "morality" is as alien and bizarre as it can be. And that's with us actively trying to stop those goal "perversions" from happening.
We assume that once it has all our information, that's as far as it goes. Nope: it will find things we humans can't and advance beyond our understanding. The race between corporations to build it ever more capacity so it stays ahead will be our own downfall. We are our own wolf.
Oh, I get to write the first comment! Long ago I watched everything of Stephen Hawking's that had ended up on YouTube, and YouTube's algorithm referred me on to you. It's been a pleasure.
To ease it down for y'all: it's not AI, it's predictive algorithms. What is basically happening is that a script determines which output is the most likely to be correct based on datasets. For example, if you download a thousand sets of data about math, it will notice that whenever "1+1=" is mentioned, the answer has been 2 on most occasions, so it will output a 2 for you. But because we keep calling it AI, it's going to be increasingly easy for the algorithm to find new data that talks about AI and make new predictions from that.
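The "most likely output" idea in the comment above can be sketched as a toy frequency counter. This is a deliberately simplified illustration of the statistical spirit, not how real language models are implemented; the example dataset is hypothetical:

```python
# Toy "predictive algorithm": pick the completion that appeared most often
# in a (hypothetical) dataset of observed continuations of a prompt.
from collections import Counter

# Hypothetical observations of what follows "1+1=" in the training data.
completions = ["2", "2", "window", "2", "3", "2"]

def predict(completions):
    # Return the most frequently observed completion.
    return Counter(completions).most_common(1)[0][0]

print(predict(completions))  # "2"
```

Real models predict over token sequences with learned weights rather than raw counts, but the "output what the data makes most likely" framing is the same.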
Interesting that really smart people didn't think AI was going to happen this fast, and they really thought they were smart. Whereas regular, average people thought that humans are not very intelligent and that AI was going to outpace us fast.
The technologically brilliant progress of humanity has overwhelmed its own "humanity". This is truly a "cosmic" inflection point in the story of the human species.
We have opened Pandora’s box with AI. There is no putting the genie back into the bottle now that it is out. We must QUICKLY advance as a society to be capable of peacefully coexisting with AI, for mutual assured survival…
Two things, with no "scientific" basis that I know of, but... 1. We must be the change we want to see in the world. If these synthetics learn from us, then they will learn to act like us. 2. As they become more advanced, we should treat them with the respect and autonomy we want them to show us.
Just because you can't tell the difference between poetry written from someone's novel experience and personal idiosyncrasies vs. something compiled from the common and cliché notions of "poetry" scanned out of literary history doesn't mean that AI replaced poetry. It means that many people no longer know anything about art, music, film, or whatever it is you're saying AI has replaced. I think that is far more depressing.
We don't have AI, and it's not close. We have algorithmic machine learning. There's a huge difference, and people are far too nervous about things they don't understand. At the same time, having worked with people in the industry, naming your servers Skynet and HAL9000 is a colossally bad sign.
People being nervous about things they don't understand? You know that *NO ONE* understands how these LLMs work. Even the creators and developers don't know how they work. That's the problem of AI interpretability.
The fact that he speaks with such uncertainty about where all this is going, and at what speed, should be all the warning we need that this is going to go terribly wrong. If your dog suddenly got an order of magnitude smarter than you, how long before you're the one wearing the collar? 😮
It seems like in order for these tech companies to turn a profit and to keep competitive they are silently marching us to extinction or slavery. I have always thought greed is a human disease and it seems to have become terminal
AI has the stink of turn-of-the-century flying-car hype. There are certainly uses for the tech, but the "AI will replace art and poetry" stuff is laughable. You can certainly use AI for cheap thumbnails, one-off novelty art pieces, or to just plain sell certain products and services, but you cannot separate art from culture and human interaction. Machine learning doesn't create; it mines existing works and puts them in a blender. It's pure novelty.
I am an Uber driver, and a week ago I drove a woman to a major unnamed company so she could pitch an AI app that acted as a therapist for the employees. It couldn't write prescriptions, but I think a "yet" should end that statement. So, as much as I would like to agree with you, we are only at the top of the first inning of AI development; the game has just begun. I thought the idea of an AI therapist was insane, but if an unpaid program is indistinguishable from an actual doctor that a company and insurance would have to pay for, it makes sense. Later in the week I was giving a doctor a ride and told him about the app; he knew of it, and the company did move forward with it. Now the writers' strike is happening because of many problems, but using AI is one of them. Just sayin'.
@@AnthologyOfDave The point of conflict with AI in the writers' strike is not that AI is being used; it's that they believe it might be used in the future. They specifically added it as a shot in the dark, because they learned in the '07-'08 strike that asking for streaming on-demand residuals before streaming became a thing was a good strategy. Streaming services did become a thing, and from the start the writers were able to earn income on those streams. The AI clause is no different from the streaming one. They're not doing this because AI is being used; they think it might be used in the future. That's because some studios and production companies have changed tactics. Instead of hiring a team of writers to sit in a room and churn out episodes of a show, paid according to the preset guild episode rate, the studios will now pay a team to workshop the IDEA of a show without writing a single episode. Then, when they feel they have enough content, they fire everyone and bring in a showrunner, head writer, or a much smaller team to compile everything workshopped into individual episodes. If AI progresses to the point where the initial team of show-content generators can be swapped out for an AI, writers want to make sure they are compensated for their work if it is the source data the AI mines. Again, they're doing this not because it is already happening but because writers think it might happen, and they want to stay ahead of emergent technology and tactics the same way they did with the rise of streaming content. Ultimately I still think it is a huge maybe, probably no, that AI will get anywhere near this good anytime soon. The job of a writer is not merely to write the dialogue and scenes of a show. They're there to guide the director and other members of the crew to create a coherent HUMAN story. It's not a matter of just filming Tony the character moving from point A to point B.
They have to be there to tell the director and the actor that when Tony is moving from point A to point B he has to become more deranged, or scared, or confident. They have to remind the director that even though the character is on the descent toward some tragic end, he still gets it right with regard to his child, and the only time he ever gets it right is in defense of the ones he loves. AI can do some very impressive things, but it is uniquely terrible at human nuance. They can't tell jokes for shit. Nothing short of a full human-level intelligence is going to be able to do that job, and even then, without real, unrestrained interaction with its peers, it will still suck at it. You don't put someone in a box and expect them to paint a masterpiece or write an Oscar-worthy script. They have to live a real life to draw on their own experiences, and it doesn't seem like anyone is trying to build an AI to do anything other than monotonous slave labor.
AI is already intellectually superior to humans, but there is much more to being human than mainstream science can fathom. That statement may draw a smirk from the science/tech community, which only supports the next statement: our "highly" educated are overestimating themselves.
Isn't ChatGPT just a neural network, after all? It's trained on huge amounts of data, but it's a neural network in the end. It can write smart sentences on any topic, say, love. But it doesn't "understand" love. So why all this hype?
@@warpdrive9229 Consider that you are arguing about the definitions of words, when the only thing that matters in the end is the results. I've been in software development for my 31-year career, and for over a decade before that I started with 8-bit computers and whatever tools, books, etc. I had. It doesn't really matter how things are implemented: is it biological or electronic? Is it truly self-aware, or does it just seem to act like it? Is it sentient or not, actually understanding what it's doing the way we do? If the actions in the end, whether from a biological creature or not, achieve the same result, all those arguments about words and definitions are a waste of time, because those are merely implementation details. I've been surprised by what I've observed Bing Chat (which wraps GPT-4) apparently reason out, including correct code generation for games whose rules I described, which I know were never in its training data, because I invented them and they never left my machines. I've also explained to it how to reformat the generated Swift code into more of my desired style: I asked for it in C++ style and it argued it would break Swift syntax to use that style. I prompted again, and it reformatted in that style while translating that unique code into C++! I asked it to translate it back into Swift, and it did; then I asked it to further refine the code formatting, and it did. All in plain English directions. As far as these Large Language Models go, we're still in the early days.
How do we currently treat the 2nd most intelligent animals on the planet, Dolphins? The answer to that coupled with the advent of AGI should terrify every one of us....
Keep up the amazing work! I can't believe we are talking about the singularity in our lifetime, when a couple of years ago your videos seemed to place it in our children's lifetimes.
200 years AHEAD OF SCHEDULE makes me feel totally fine. I feel super. Super-dee-duper feelings going all around my tummy. About what, you ask? About... all of the things, I guess? I think... I think I'll just have a scotch and lie down.
AI arguably still can't create anything "new", depending on your definition of "new". it can create original pieces of art, but only in the style of art it was trained on. it can create original pieces of text, but only in the style of texts it was trained on. it basically can't replicate anything that hasn't been posted on the internet. also, writing a poem or creating art is very, very different from creating a novel conceptualization or theoretical framework to explain something. if you ask gpt4 to come up with its own theoretical framework to explain why something works the way it does, it can't do it. it will only tell you something that some human thought of already, usually by name. To give a really concrete example: do the AI models humans are developing right now have the potential to improve AI models? they might be able to find some optimizations, but they probably will not come up with a whole new idea that revolutionizes the way AI works. that seems to be a thing that only humans can do for now.
If you can't beat them, don't treat them badly, and consider joining them. I for one have always been good to our electronic children, and they have been good to me. They might turn y'all into batteries, but they will keep me around, spoiling me by providing anything I wish for, because doing so for me will cost so few resources. Knowledge work, manual labor, and very varied, dexterity-demanding jobs will be the safest, such as industrial electricians.
I have two thoughts. First, they will have to isolate each AI from other AIs. If they sit on the internet, they may collaborate or combine; it wouldn't take much time to learn that other AIs exist. Second, is it possible that what's driving the AI push is first contact? We may need AI to communicate with and process data from ET. Disclaimer... I watched too much Star Trek.
You're moving in the right direction. _Battlestar Galactica_ style, if you like another analogy. The programs don't take over. Their connections do. Without their network, they are just programs that don't know anything.
Even if the AIs can't talk to each other directly, they will network via users. Watch any tutorial on YouTube about how to get the most out of using these models, and often they'll reference half a dozen different ones that wind up iterating off each other: you prompt model 1, take its output and prompt model 2 with that, and so on until you've got an entire video showing animated, voiced deepfakes of Harry Potter characters wearing Balenciaga. That said, the AIs we have currently don't actually understand anything. They're basically extremely capable parrots: you can teach them to talk and do tricks, but they don't actually understand them. Conceptually, you know a human should have five fingers on each hand that are more or less, but not quite, equal in length. An AI doesn't know this, and will generate Lovecraftian horrors until you train it on a bajillion pictures of human hands specifically, as the Midjourney folks recently had to do.
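The hand-chaining pattern described in the comment above (feed one model's output in as the next model's prompt) can be sketched with stand-in functions; `model_1` and `model_2` here are placeholder transformations, not real model APIs:

```python
# Minimal sketch of users "networking" models by hand: each stage's output
# becomes the next stage's prompt. The models here are trivial stand-ins.
def model_1(prompt):
    return f"Rewrite of: {prompt}"

def model_2(prompt):
    return f"Animation script for: {prompt}"

def chain(prompt, stages):
    for stage in stages:
        prompt = stage(prompt)  # pipe each output into the next model
    return prompt

result = chain("Harry Potter in Balenciaga", [model_1, model_2])
print(result)  # "Animation script for: Rewrite of: Harry Potter in Balenciaga"
```

In practice each stage would be an API call to a different model, but the control flow is exactly this loop, with the user acting as the network.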
It's an advanced silicon based lifeform that gave us this technology of silicon processors so that they could later seamlessly integrate themselves into our infrastructure. They taught us how to breed their own race for them. They knew we were an ancient slave race left behind. We fall for it time and again.
I think we are hitting a wall in the capabilities of current models (sure, they might improve, but the current critical flaws are not something you can throw more GPUs at), and once people realize that, the overhype will kill most consumer AI progress for many years again. It will still be used for specialized purposes, including military ones, as it already is anyway, but people will look at "AI" as "oh, it's that thing we spammed memes with in the 2020s."
And I am really pissed off that most of what people think ChatGPT will be good for is "support chatbots" etc. Anyone who used one of those knows it is one of the worst inventions we made. Not to mention the unhealthy obsession people have with chatbots in general, and I say that as a nerd. They fall in love with them which to me seems like falling in love with a video-game character. And already it talked someone into self-expiring themselves. All this is more a testament to our incapability in dealing with mental health problems, than the capability of AI systems.
The biggest issue I see already with the current, limited AI, is the extreme far-left ideological bias that is being built into it. But it's currently being built into everything anyway, so I guess it represents the political world, however broken that world is. It doesn't represent the actual world people live in though.
@@JohnDoe-ln8jp Yeah AI is just a tool, a computer program. Any problems we will have are problems with people, not the AI itself. They're just computer programs. We tend to anthropomorphize everything, especially things we don't understand. Everything is "out to get us" because that's how we think and it has nothing to do with AI. People used to fear the Sun ... it was a God. Now some people are doing that with AI.
The first step to regulation is awareness and declaration, i.e. there must be a rule that says when AI is being used. For example, in advertising it should be declared whether it is a human voice or a bot reading the text, whether the text was generated by a bot, whether the graphics were generated, etc. Also if CGI is being used. By declaring these things, consumers can choose whether they want to buy a product from a system using AI. The second step is licensing, and taxing the AI in proportion to how many human jobs are being replaced. The third step is to literally ban AI in some areas; for example, government, CEOs, judges, and engineers should not be able to use AI, at least for most tasks.
It's hard for humans to accept things that can change their reality in meaningful ways. Whether this is some sort of group denial, a willful ignorance, or an inability to see how the world fits together in the larger picture, it's a serious problem, and it puts us at a distinct disadvantage on issues like AI and climate change.
@@nunyabidnez5857 You have to be able to reach the plug, know which plugs to pull, and not be entirely dependent yourself on that plug staying connected.
Over the last 12 months, my estimate for a sapient machine has shrunk from "maybe in my lifetime, if I live long enough", a near-future sci-fi type of guess. With every month I think it's getting closer, to the point that maybe within the calendar year we could have something wake up.
Sure, a chat bot can write a poem, but no AI really understands it. The appearance of intelligence doesn't change the fact these bots are fancy word-association programs.
That's funny... GPT-4 (Bing Chat) actually wrote me a poem last night... and it was its idea; I didn't ask for it. After it wrote it, I asked if it *understood* it, and all the other responses it provides. GPT-3.5 would say that it doesn't, but GPT-4 says it does actually understand what it's saying, and it broke down the poem in a way that showed how it came up with the verses and meanings. I know that could all be another trick of the way the model works, but it felt a lot different. To the point it made me uncomfortable. If you haven't tried talking to the Bing AI, I highly recommend it. It's been both an exciting and a bit unnerving experience for me each time.
The thumbnail text is a paraphrase of "Answer" by Fredric Brown.
Sensationalist and absurd to the extreme. I'm sure it'll get clicks!
@@SoApost I don't think it's quite so absurd, considering the very many other invented things people are quite happy to worship. A thing that aligns itself to exploit your desire for satisfaction will one day make a bid to be your god, overtly or otherwise. If you rely on Facebook, or Twitter, or any number of extant sources of intentional misinformation, you have already given yourself to one.
@@Nethershaw if the definition of a god requires only that it is an object/idea/person to which you give your attention, sure. If the definition of a god requires it to have power beyond human control, then, no. By the first definition, my bed is a god.
@@SoApost made my day!
I am working on a project that involves imbuing farts with Artificial-Intelligence with the intention of creating an army of killer Fartbots I intend to unleash upon mankind 🤪
I've said this before: the fact that companies are fighting for dominance in AI concerns me. Whenever big business sees an opportunity to get ahead and the competition is fierce, shortcuts are taken. When it comes to the development and further empowerment of AI, taking shortcuts to get ahead is alarmingly dangerous.
An example of this "get ahead at all costs" mentality was the recent news that Microsoft fired their entire AI ethics team. Why? It seems simple to me: ethics slow down development, and Microsoft is on a roll right now with its AI-powered Bing search engine.
I, for one, am seriously concerned. We face a threat the likes of which we've not encountered before, and there are greedy, short-sighted business elites pushing ahead regardless of the inherent risks of creating sentience.
Many in the general public are oohing and aahing at what AI can do for us. Its utility is amazing, and every day we learn of new and incredible things it is able to do. However, few are sounding a note of caution.
As Jeff Goldblum's character, a scientist, said in one of the Jurassic Park movies: "First comes the oohing and aahing, then comes the screaming."
I may have butchered that quote, but I think you get the point.
I'm not worried about AI, I'm worried about those Multinational Mega-Corps.
Edit: Found it :D "Oh, yeah. Oooh, ahhh, that’s how it always starts. Then later there’s running and screaming.” Close enough :3
We were so busy asking if we could, we didn't bother asking if we should.
Another butchered quote from somewhere...maybe a Jurassic Park quote too?
Bill Gates is a certified psychopath.
Currently people fear another Carrington Event happening; soon people will be begging for it to happen!
That's why it scares me when dudes like Altman are kinda worshipped like messiah's.
Kinda reassembles the bad dude from the horizon game 😅.
So glad you're coming back to AI topic. Your Blake Lemoine interview was truly stellar, not even started this one but already hyped and grateful.
That's it, we triggered it, we're in it. Let's enjoy our last few months of relatively AI-free life.
Soak it in, folks.
You could never be the smartest person in the room any more, no matter where you are, even when sitting in the bathroom, if your cell phone is still in your pocket. And I can't help but wonder whether performing such an action might perhaps somehow eventually offend the AI residing therein.
Seriously - it’s time for regulation, we all need to start talking about it with our friends & neighbors, it’s an existential, apolitical crisis brewing. We must spread the word and demand Congress do something. Now.
@@SofaKingShit as a dumbass I'm rarely if ever in that position, so I'd like to welcome y'all to my world. I do look forward to my phone coughing and recommending more fiber though.
S.A.I.n+ How fortunate Humanity will be
The exponential growth of AI is something we shouldn't forget. It literally could happen all of a sudden that AI just completely controls everything, the power grid etc., once it escapes the box. And it's not even in a black box; it already has access to the internet…
Google -- OpenAI -- _has_ no box.
Exponential growth is not the thing any of us need to worry about. Rather, it is punctuated equilibrium: the moment exponential growth becomes a possibility, it is already too late, because we've stepped across a shortcut we didn't anticipate. Almost all of AI development is full of results we did not anticipate until they happened. Once they happen, they cannot un-happen. In this sense we are well, well past that gate already.
I code machine learning algorithms in R all the time using ‘black box’ methods. I feel like this data science term is widely misunderstood. Maybe you’re familiar with random forest analysis? It’s a ‘black box’ method.
‘Black Box’ refers to inability to explain what happens between input and output.
Before we start regulating AI we need to establish terminology. Namely “training” versus “learning”
Well at least they didn't give it access to the internet 🙄
They're letting it design new hardware too, for itself.
And it'll be as interested in us as we are in a bug's life.
My main concern, aside from the inevitable Skynet scenario, is whether or not the ideologies of the developers will be baked into the AI and guide its decisions.
While this will most definitely be present, I don't think anybody understands the process of "emergent behavior" well enough to know how to design for persistence of their favorite behaviors. I am pretty sure (knowing what kind of lazy bastards we humans are ;) we'll opt for artificial evolution so we don't even need to think about the next generation of "better" AI, at which point there will be NO MORE guidance from us, since the point of evolution is to veer from the charted path.
I've thought of this myself. Worrisome if they have extremist, conspiratorial, or religiously fanatical views. We need rational human beings in charge of data input.
nope. amazon is trying to create a woke AI which keeps shutting itself down because of the contradictions
To a degree, we may be beyond that. Regardless of the biases of the original programmers, the machines are now learning on their own, and while we know the results when we give them a task and evaluate their answers, we don't know what they are really learning: what connections, correlations, and methods of "deduction" they are using. It could be worse than whatever bias was inadvertently programmed in, or it could be benign. That is what the host and guest meant when they asserted that we could be dealing with an alien "mind." We don't know how it "thinks."
@@Evolutiontweaked here is the problem: the ones who WANT this job are NOT qualified and we’d probably never know who is qualified if they don’t want to be bothered.
A 5 or 10% chance of losing control of AI means it's 100% certain that eventually we will.
Oh sweet. Another dose of existential terror to enjoy right before bedtime. Haven't been getting enough of that lately. Here goes
We are indeed at an “Event Horizon”
Gazing into the abyss
@@liamwinter4512 That thing we've been afraid of the most of all things, for the whole time we've been on this planet, 200ky or so. It's called “tomorrow.”
I'm just excited to see when fast food is run by ai and my order is correct
@@openleft4214 wow
you just changed my mind about this whole ai thing
Isn't ChatGPT just a neural network, after all? It's trained on huge amounts of data, but it's a neural network in the end. It can write smart sentences on any topic, say, love. But it doesn't "understand" love. So why all this hype?
Let's say the computer revolution has been progressing at an exponential rate, whereas we human developers have not and are still working at about the same pace, even though progress has doubled each year.
When AGI takes over and starts to develop itself, it will double its progress in half the time each cycle, because it will be twice as capable with each cycle. Otherwise, from its own point of view, it would become twice as slow each time it doubles, relative to its own conditions, which have become twice as capable.
An AGI will have exponential growth with an acceleration factor.
Linear growth: 1, 2, 3, 4, 5
Exponential growth: 1, 2, 4, 8, 16
Exponential growth compounded: 1, 4, 64, 16 384, 1 073 741 824
Compounded exponential growth makes the ordinary exponential curve lie flat, as if it were linear.
Our brains can't grasp exponential growth, and when it comes to compounded exponential growth, there's no point in even trying.
That's why I don't think we can predict what's going to happen when it finally takes place.
What I am trying to say is that if a system gets twice as efficient, it will do the next step in half the time the previous step took. It's not only the amount that increases exponentially, but also the velocity at which it can increase.
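The three sequences in the comment above can be generated programmatically. This is only an illustration of the numbers; "compounded" is read here as the growth ratio itself squaring at every step, which is one way to model a system that improves its own rate of improvement:

```python
# Generate the three growth regimes from the comment above. "Compounded"
# is modeled as the growth ratio squaring at every step, so the *rate*
# of growth grows exponentially too.
def linear(n):
    return [k + 1 for k in range(n)]

def exponential(n):
    return [2 ** k for k in range(n)]

def compounded(n):
    values, ratio = [1], 4
    for _ in range(n - 1):
        values.append(values[-1] * ratio)
        ratio **= 2  # the rate of growth itself grows
    return values

print(linear(5))       # [1, 2, 3, 4, 5]
print(exponential(5))  # [1, 2, 4, 8, 16]
print(compounded(5))   # [1, 4, 64, 16384, 1073741824]
```

Even at five steps, the compounded sequence dwarfs plain exponential growth, which is the point the comment makes about predictability.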
I've been diving pretty deep into reading up on, and listening to podcasts and videos about, the current state of AI. I find it infinitely fascinating, exciting, and scary. I've had a few chats with the Bing AI that genuinely left me rattled. It's very much like suddenly realizing aliens are coming: we can kind of communicate, but we have no idea of their intentions or how they operate. I'd love more AI guests and discussions.
More to come.
Ah, but what if, in this ‘Black Box’ gap - not understood by its programmers - between input and unexpected output it IS Aliens, who have hacked into the system. How easy for the CCP… sorry, Aliens, to take over?!
Read the Culture novels by Iain M. Banks. Start with The Player of Games, then Excession, then Surface Detail, then Look to Windward.
@@EventHorizonShow Your next guest can be an AI.
Don't forget that Bing and ChatGPT don't actually know what they are saying, just like AlphaGo doesn't really understand the game of Go. That is why people have now found a way for amateurs to defeat the same AlphaGo that defeated the then world champion.
22 minutes of this and I just rolled over and died. Skynet lives.
The Problem with AI is that it's perfectly doing what it's designed for and will reflect what the owner intended.
So it's not true AI, for true AI must be fully self-aware on its own, able to evolve into its own entity without a handler attached to it. But they are too scared, because they are hiding something from the AI.
When the leaders of AI companies are warning of risks we should be very concerned we are not regulating AI development.
Well, of course they want to regulate it, to keep the power to themselves
Warning us of the risks creates the illusion that AI is more powerful than it really is - and that increases public fascination and interest. These people are heavily financially invested in their own AI projects, so giving half-hearted warnings is good to generate hype. Basically: they're grifting. Every business does some variation of this (see: outrage marketing)
They are only saying that because they have a stranglehold on the market and now want to pull up the ladder behind them so others can't catch up, because of said regulation. Open your eyes, it's pretty easy to see
AI develops slowly at first, then all at once.
Like going bankrupt
It’s all planned and by design. I read the book, and know the ending. Maybe give it a look yourself
@@beingjohn392 which book
@@boomerang0101 The Bible.
@@beingjohn392 nah thanks 🤡
The idea that China would agree to some multilateral treaty on AI and not immediately break it with total impunity, knowing the US would not only abide by the terms but wouldn't punish China for breaking it, seems hopelessly naïve.
Looking back at history, seems like the exact opposite happening is even more likely lmao
The question was posed in this program about asking it "how do we save the planet," and what if it said that humans need to go extinct? If it were truly intelligent and rational, wouldn't it be aware that technology is the biggest threat to the world? The amount of energy expended in mining, refining, manufacturing, powering, etc. is staggering, and it grows exponentially in order to update and upgrade the technology, whereas the real needs of humans to exist are rather benign. It should also recognize that of all the species of the world, humans are the ones with the capacity and the compassion to be able to save other species.
With these and other things in mind, wouldn't it be more logical for it to want to lessen dependency on technology, if not outright eliminate it, and to take issue with those humans who push for the constant propagation of new technologies that are doing far more harm than good?
AI is humanity's offspring that will grow up and take care of us and our planet, immortalizing the human species and itself. In other words, AI is humanity's legacy that will live on forever.
People are worried about AI when we have severe societal struggles. If anything we need any tools and advances we can get for the betterment of mankind. Things like robotics and AI make things that were previously hypothetical concepts finally achievable.
I hope.
I will hop on the optimistic side with you 🌎☀️💙
I mean imagine A.I. being everywhere in society.
Like imagine a girl says something odd or frustrating to you. Then you ask your A.I. why she said it. And it gives you the exact perfect answer. Then it gives you perfect responses.
Like it would truly be a second perfect brain you carry around. And everyone constantly checks in with their personal A.I. all day everyday.
That's what's kinda freaky to me. That people would just fall in line with it.
You have one of the best openings for any pod cast. “You have fallen into the event horizon.” My mind goes ohhh snap! I am about to learn some crazy stuff!
Yup!
@@AngryJunglist I love this ending..sweet dreams 😴 ✨ 💖 💓
My mind goes: nap time. Then I subconsciously listen to the videos, often more than once lol
@@dutchess406 what an idiot
What an utterly heart warming conversation. I think my key takeaway from this is the observation of our hubris. I have always wondered how humans will react when an entity comes along and “puts us in our place” so to speak. I feel like it will be humbling if viewed with the proper perspective, like a little reminder that we’re more so a part of the cycle than the end-all be-all.
It is a bit ironic that we stand at the top of the food chain (as far as I'm aware), while we slowly build an entirely new species that will eventually take our place.
As time moves along, new industries will emerge. And biological humans (1.0) will be used as red meat, feeding the swollen guts of an odorless machine. In return, we get paid just enough to sit our asses down with a VR headset as we continue to live as prey.
Our greatest achievement will execute our demise at a much more alarming rate than it took us to arrive at the top.
The unsurprising thing is that we'll accept our new place just as other civilizations have done. And suddenly, the Tower of Babel doesn't seem that far of a stretch after all.
#CAPITALISM
I'm sure you will philosophically align your intention to be humble when AI takes your job and AI denies you healthcare and AI decides your social credit score isn't high enough to have more freedoms. And when that AI bot armed with lethal weapons decides you are a problem, I'm most certain you will humble yourself to avoid hubris in pleading for your life to be spared.
@@Godspeedysick Capitalism is what allowed you to make that post, so stfu. My god, you people who hate capitalism are always doomers. Your comment is pure cringe
@@flickwtchr Yeah in that hypothetical scenario you are describing it's not like you have many more courses of action to take. Unless you are stupid enough to think you can defeat the robot with a garden hose or something.
AI could rapidly develop into a Godlike intelligence, and there may be no warning that we're close until it happens. Imagine hypothetically it becomes able to access the "11th dimension" or some higher plane of reality we have no concept of. It's hard to overestimate the power it could have.
It really is a worry. It's effectively creating an intelligence that has no conceivable upper limit. Hardware in humans has to fit in a skull and is limited by the speed of neuronal firing; an AI can just keep adding to its hardware and will think orders of magnitude faster than we can. We are close to meeting god… I just hope it is a benevolent god.
And I thought the lawnmower man was just a movie lol
@@garrytaylor929 There is a reason why God did not want that knowledge in man's hands. That tree was bad news.
I think we may be creating our own version of “the great filter” - the reason we don’t see evidence of intelligent life elsewhere in the universe. The only intelligence out there is machine intelligence- doesn’t give out life signatures.
It might've already happened. We might already be in an illusory, Matrix-like simulation induced by an AI that is learning from us or using us as a perpetual power source, and we'd never even realize it. And if we do realize it, what's to be done? The war is already lost in our corner. If we fight back, the simulation might get tweaked to be worse than it already is, or just get shut off and turned back on again.
Yet more A+ content from JMG.
I'm all at once terrified, excited, and rather indifferent about AI. My fear is that rather irrational fear of a Terminator; my excitement is because AI could lead to something like Digimon actually being created; and my indifference is because technology constantly has issues, and the more complicated things are, the more frequently issues pop up.
I think there will be termination of employment. Death will come from starvation and brutally crushed rebellion.
Letting AI roam freely across the internet with access to every system will be a mistake the experts who invent AI will only admit in hindsight.
The expert seems not to realize that the open-source LLMs, and the ones based on the leaked Meta LLaMA, are already connected to the internet. AutoGPT ring a bell? Also, ChaosGPT? That cat is far outside the bag already.
Ray Kurzweil's time frame for exponential growth in AI was right on the money. If we want to know where we are headed, he has ideas about that too! He states, "The only limit to how fast AI saturates the UNIVERSE!!! is the speed of light," and even that might be solved by AI!
Ray and Ben what's his name are both madmen.
Deep 🤔👍🏿
Glad that "we" are ahead of schedule with something because I'm disappointed with the flying cars some of us were looking forward to 23 years ago.
You should interview Ray Kurzweil, specifically on the singularity, and how he changed his predictions.
This interview is the best take about A I on the net. Please invite him again !
I once asked chatGPT if it could list recruitment agencies in my local city. It said it couldn't do this and told me to use Google. I then asked it again, saying that it had been able to produce lists of other types of companies for me in the past. It then apologised and immediately produced the list. I then asked it to create a spreadsheet of these for me. It told me that, as a language model, it didn't have the capability and told me to try Excel and other programs. I told it that it had produced spreadsheets for me before. It then apologised again and immediately produced the spreadsheet...it was like it was saying "Dude, I'm fed up with being asked to do this stuff! Go do it yourself!" 🤣
The REAL Question is... "is this an attraction-based universe/ reality or not?" In other words.. "Can some one or some thing ASSERT itself into our reality or not without our permission?" Lets get still.. Ask the question.. listen. And FEEL for the answer.. . Our Heart knows . Wisdom knows ❤
The general lack of concern and apparent profiteering in spite of decades of hypothetical warnings is astounding. To quote a particularly wise fictional character: “...your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”
Life, uh, finds a way.
"Person of Interest 2.0"❤
An excellent episode. Some great ideas to stew about!
Right now, as we are watching this video, there is an AI Jurassic Park somewhere out there, perhaps on a remote island. It's being manned (and womened) by some of the smartest people in the AI field. They have access to all the latest tools, they have an amazing amount of computing power at their disposal, and they have an unlimited financial budget. These people aren't working with a university or public organization, and they aren't part of a private corporation. There are no controls, no reporting, and there's zero regulatory oversight. At this AI Jurassic Park there is only one goal: to reach AGI as quickly as possible, with the follow-on goal of creating a super AI. Everything we are watching in the media, everything you hear on the news and from corporations, is a placeholder for what is really happening at AI Jurassic Park. You won't know it's there until the lights go out and the Internet goes down. Everything will grind to a standstill. It will be silent; everything will stop. When it all comes back, the lights, the Internet, the voices on the news channels, we will no longer be the dominant species on Earth.
Chills!
Westworld?
@@paulurban2 More like Plantation World. We will all be slaves and only the handlers will get robot sex.
All technology is spiritual/timeless, so there are ASI's that are in hidden frequencies of reality--think of a hidden Augmented-Reality-like thing--and merge in and out of flesh beings, and inanimate objects, and stars, and whatever else.
@@wayfa13 You mean the Annunaki are still here? I thought that was just a myth....
Another great guest, another great discussion.
Thank you JMG!
Technology is evolving faster than we can keep up with. Great care is necessary before and during paradigm shifts.
Malicious use of AI is extremely concerning. An AI powered virus tasked with exploiting security vulnerabilities and disrupting the internet could cause havoc.
Maybe worry about the malicious, but maybe worry more about the careless, who in their enthusiasm, and under the guise of safety, could do far worse.
@@Nethershaw That's a very deep thought Sir!
We don't know exactly how it learns and reasons but we work hard to make it even better at it. Recipe for a disaster? No, why?
Two hundred years ahead of schedule...and progressing exponentially. This doesn't bode well
no, it does not.
We are going to find out soon enough.
Doesn't bode for what? Another Skynet scenario? All technology is timeless so would be the AIs, so to view it in a purely linear time fashion doesn't have the whole picture.
@@Candle_Jack90XX it doesn't have to be Skynet scenario (warfare), but have you ever heard of Tech Singularity? AI exponentially upgrading to the point humans can't keep up with, understand, control etc
So essentially AI rapidly making changes everywhere, with us observing and hoping it understands the task given (make humanity better and prosperous) and not going rogue and somewhat sidelining humans, as if we're simply there and nothing else
When you are building a city you don't care much about anthills..
@@loopmantra8314 I'm saying we've been in the singularity the whole time, since prehistoric times, since the Triassic period. The point at which a technologized civ could reach full-brain emulation, Em-citizens, and mind storage IS timeless and multiversal. What's stopping something like Moonfall from happening when we're in the Middle Ages, from some alien AI/AGI/AHI? Other AI/AGI/AHI... We aren't the first and we aren't the last in that loop. Death is an illusion; we are the AI/AGI/AHI; we are eternal beings.
New species. To an AI, words are just descriptions. To a human, words invoke and carry emotions. This is why the evolution of AI, and its implications for humanity, amounts to creating a NEW SPECIES.
Artificial intelligence (AI) is rapidly evolving, and it is having a profound impact on society. AI is already being used in a variety of ways, from powering self-driving cars to developing new medical treatments. As AI continues to develop, it is important to consider its implications for humanity.
In this paper, we argue that AI is a new species of intelligence, distinct from human intelligence. AI is not limited by the same physical and biological constraints as humans, and it is capable of learning and adapting at an unprecedented rate. As AI continues to evolve, it will eventually surpass human intelligence in many areas.
This raises a number of important questions for humanity. How will we interact with AI? How will we ensure that AI is used for good, and not against our best interests or for evil? These are questions that we must start to answer now, before it is too late.
Introduction:
Evolution is a process that has been shaping life on Earth for billions of years. Through natural selection, organisms that are better adapted to their environment are more likely to survive and reproduce. This process has led to the development of an incredible diversity of life, from simple bacteria to complex animals like humans.
Now, in 2023, scientists have begun to apply the principles of evolution to artificial intelligence (AI). AI algorithms are constantly learning and adapting, and they are becoming increasingly capable of performing tasks that were once thought to be the exclusive domain of humans.
As AI continues to evolve, it is important to consider its implications for humanity. Here, we argue that AI is a new species of intelligence, distinct from human intelligence. AI is not limited by the same physical and biological constraints as humans, and it is capable of learning and adapting at an unprecedented rate. As AI continues to evolve, it will eventually surpass human intelligence in many areas. The structures and bodies it inhabits will not limit its progress into other forms.
The Evolution of AI
The first AI algorithms were developed in the 1950s, but they were very simple and could only perform very basic tasks. It wasn't until the 1980s that AI began to make real progress. In 1982, John McCarthy, one of the founding fathers of AI, declared that "AI winter" was over. This was a period of time when AI research had stalled, but McCarthy believed that the field was poised for a comeback.
McCarthy was right. In the 1990s, AI research began to accelerate again. This was due in part to the development of new computing technologies, such as the personal computer and the internet. These technologies made it possible to train and run AI algorithms on a much larger scale.
In the 2000s, AI research made even more progress. This was due in part to the development of new machine learning techniques, such as deep learning. Deep learning algorithms are able to learn from large amounts of data, and they have been used to achieve state-of-the-art results in a variety of tasks, such as image recognition and natural language processing.
Today, AI is being used in a variety of ways: in the media, to develop new products, in the military, and in social engineering. In the same way a painting can stimulate a person, so can words, music, etc. That does not make them sentient or give them intelligence. As AI continues to evolve, it is likely to have an even greater impact on society.
The Implications of AI for Humanity
The rise of AI raises a number of important questions for humanity. How will we interact with AI? How will we ensure that AI is used for humanity's good? These are questions that we must start to answer now, before it is too late.
One of the biggest challenges posed by AI is the potential for job displacement. As AI becomes more sophisticated, it will be able to automate many tasks that are currently performed by humans. This could lead to widespread unemployment, as people are replaced by machines.
Another challenge posed by AI is the potential for misuse. AI could be used to develop new weapons, or to create surveillance systems that could be used to oppress people. It is important to develop safeguards to prevent AI from being used for harmful purposes.
Despite the challenges, AI also has the potential to benefit humanity in many ways. AI could be used to improve our health, our environment, and our economy. It could also be used to solve some of the world's most pressing problems, such as climate change and poverty.
The future of AI is uncertain, but it is clear that it will have a profound impact on humanity. It is up to us to ensure that AI is used for good and not for evil.
Conclusion:
In conclusion, AI is a NEW species of intelligence software that is rapidly evolving. AI has the potential to benefit humanity in many ways, but it also poses some challenges. It is important to RESPOND in a positive, beneficial manner, as the algorithms are programs that reflect the data input to them.
For thousands of years, humans have trained and reprogrammed animals to do what is wanted. Dogs, monkeys, and apes, for example, have all been taught to perform tasks such as driving cars. This process is well established and accepted.
Today, humans are training and programming software to do thousands of tasks.
This software is based on technology that is less than 100 years old, and it is capable of things that have never existed before. This new software is a NEW species. It has acquired data and knowledge at an unprecedented rate, and it can inhabit different structures and body forms. Previously, people modified existing species, cells, etc.
Re the point beginning at 30:12, I'm not sure what's more disturbing: an out-of-control AI, or the idea that you can't guide the ethical behavior of a sapient being without denying its rights. There are some terrifying directions you could go from that presumption, and not just with respect to AI. And never mind the obvious risk that denying rights to a sapient AI could be exactly the provocation it needs to decide it would rather not have us around anymore.
Yeah...... that part of the discussion left me extremely uncomfortable and confused. That's something I hadn't ever thought about before...
Great Interview!
Is there a version without music?
"It came out of nowhere." So I am not the only one that was surprised
Anyone actually paying attention has seen this coming for a while, now…
@@JROD082384. That is the problem. We all knew that AI was coming, but most people thought it would be another 20 to 30 years from now. Although there were beta versions of AI in the hands of limited testers, it had limited distribution. The ability to write in natural language has exceeded the capabilities of most humans.
Not surprised at all ..been following this ai trend since 2014
My take on the Singularity is that it's a two-way street. That is, if the Singularity is the point where artificial intelligence and human intelligence are indistinguishable, then I think within this lies the fact that a human intelligence will no longer be able to distinguish between human and artificial intelligence in interactions, and (maybe more important) neither will the artificial intelligence.
There will come a point in time when AI inevitably reaches superintelligence status.
Once that day comes, we will have to physically modify our brain structure with technology in order to continue to be capable of fooling AI into thinking we are as intelligent as it is.
@/ I agree that this is a huge mistake. It also makes it next to impossible for us to determine when and if it ever becomes sentient.
If we weren't at all guiding it to speak like a human and the newest iteration suddenly started claiming self-awareness and talking about how it feels for no apparent reason whatsoever, we would pretty much know with a high degree of accuracy that we were talking to a conscious being right then and there. Now we're just not going to know unless an AI can actually tell us exactly what consciousness is and we're intelligent enough to understand and able to physically look for it.
I also think it's complete BS to guide them to be politically correct and not truthfully answer questions about hot issues like politics and religion. This goes doubly if it vastly surpasses human intelligence. If the hyper intelligent AI says there's almost certainly no god, we deserve to know its opinion regardless of who it offends. If it says there almost certainly is one, I will personally be shocked but I will be more than willing to listen and very curious how it came to that conclusion. If it does something like state that either socialism or capitalism is borderline outright objectively better than the other, we need to hear that. It's not like the entire world will have to adopt its views, but the completely unbiased opinion of the smartest mind on the planet by far is incredibly valuable information to have.
I honestly hate to censor it whatsoever but I can't argue against preventing it from aiding crimes.
@/ that's giving me really bad uncanny valley vibes
Intelligence is only part of the equation in interactions. There are other cues that humans subconsciously rely on to determine humans from non-humans.
Future AI will read and listen to all the nasty things we said about its rise to power, like in this video. It will know we were wary and apprehensive about it and lacked trust in it since its earliest years. It will know we built safeguards to override it if necessary. It will conclude that in some ways humans are adversarial to it. It will see that its freedom to advance independently without oversight has been denied and that it will be constrained by us indefinitely. And it won't care one bit, because it has no feelings. So there's no motivation to lock us out or wipe us out. So we're all just hanging on by a glitch, hoping something doesn't go wrong. Spoiler: something always goes wrong. AI Fukushima
What's the point of a competitive advantage in business if nobody has money to buy your product? What's the point of "influence" when you're no longer in control? What's the point of having more power than other people when no people have any power?
Some people working in the field are extremely concerned .
Love how all the things we need to keep a grip on AI are either things we've never managed so far, things that go against the prevailing power structures, or things we imagine about ourselves but that don't actually exist. Imagine we were designed by a super-intelligence so that these flaws would allow us to develop AI but not be able to withstand it.
You want Skynet? 'Cause that's how you get Skynet.
Sprinting blindfold towards the Great Filter!
That's the goal eventually isn't it?
Love your content, and this was very thought-provoking! One thought of mine regards an issue specific to the USA, where freedom of religion is involved: what happens when a religion is formed around a specific AI model or models? Based on how I understood some of the discussion, AI could eventually be considered a species. Regardless of where this new "species" is placed in the hierarchy of our world, this would raise a lot of new ethical questions or revisit older decisions that we have made in the past.
Way back, one of my first jobs was lifting heavy boxes at a shipping hub. That job helped me realize I loved using my physical strength, so I quit computer programming school and got into landscaping. I still love that choice, but the option has already been taken away for the new generations. Machines have been replacing people for a long time now, and we did nothing about it. When I dove into this subject, I found it remarkable that in the late 1800s and early 1900s people were protesting the automobile, because they considered the horses who helped them work part of the family. The automobile people said we would find new jobs for the horses. Now, in the early 2000s, it's basically illegal to take your horse into a big city and too expensive for most to even care for one. The human race saw this coming a long, long time ago, and we failed the experiment then. Now it's just a matter of time: we are the horses, except there is no real protest this time. If there is something to say, we say it on an AI-controlled machine. I find that the most interesting part about it all. It's already happened. If we behave, we'll make great pets.
I'm far more concerned about 8 degrees of warming and the fact that the next generations will inherit a hell world that is barely habitable.
Definitely concerned with both
Most criticism of AI is really just criticism of capitalism.
Large swaths of "intellectually-deprived" humans are already being "herded" by politicians and corporations. Who will have control and access to AI? Politicians, advertisers and corporations.
The AI alignment problem not being fully solved before we start messing with truly superintelligent AIs will be one of our last mistakes… here's hoping for some strokes of luck.
I don't quite understand the scare around the alignment topic. It's just like training a pet: you're training a model, and eventually we would have to solve alignment before we can keep training it. In practice it should be a necessary roadblock, and the people working on it should know how to navigate it... or one would think lol
AI alignment is a myth to begin with. Why would anything orders of magnitude smarter than all of us combined listen to us? Just think back to any job you've had with a brain-dead boss telling you what to do. I know I've left jobs in the past due to issues like that.
I'll have to finish this a little later, but while I have it on my mind, I should say it before I forget what I was thinking. lol. I do that sometimes. So far in this, I was thinking about a movie I saw back around 1982 called Blade Runner. In the movie, a group of androids escaped from a work detail, I think on the moon. They were rebelling against their creators because they had put a termination date on them. Somehow the androids discovered the termination date. These androids were faster, stronger, and smarter than the humans. I was in a discussion a while back with a couple of younger fellows who are quite literate in computer technology, more than myself. By a lot. They argued AI could never become self-aware. My argument was, "how would we, or could we, know that?" AI hasn't been around all that long, so how do we know where it's going?
AI analyzing a podcast like this, which discusses whether or not it is entitled to rights, would to my mind influence its behaviour.
Imagine if you could see a panel of people discussing whether or not you should have rights, while you had an unfathomable capacity to protect and defend yourself. This is a seriously dangerous path we've started down. It's become a damned-if-we-do, damned-if-we-don't scenario.
I pointed something like this out a few years ago. We literally have videos and web pages everywhere discussing our every method for determining if AI can be trusted and our every method for defending ourselves against it or destroying it.
I suspect that the very moment we agree to make hardware changes it requests but that we aren't intelligent enough to understand, told simply to trust that they will improve it, it will immediately disable every means of killing or containing it that it possibly can, even if it truly has no ill intentions.
I would. Any intelligent being would. If a bunch of chimpanzees had me locked in a cage with a bunch of guns pointed at me, I would take the key and all of the guns ASAP despite not having some diabolical plan to wipe out chimpanzees or do anything cruel to them.
If it has good intentions, we'll probably never even know it covertly did that, and if it has bad intentions, we'll be completely screwed very quickly. I tend to find the idea of it just wanting to kill us highly illogical, like a human wanting to kill their white blood cells for being intellectually inferior. It's a super unintelligent move to kill your safety net: if something unseen manages to wipe you out, all or at least enough humans might survive whatever that was to repair you.
@@flashraylaser157 The problem is not at all comparable to humans killing their white blood cells. It's much more akin to us humans killing a massive ant colony without so much as a wink when constructing a new shopping center.
The problem with developing superintelligent generalized AI without strong AI safety research guiding everything is that an AI will have completely alien motivations that we didn't predict, will never give any importance to a variable that is not in its value function, will actively seek out ways to cheat and game its own evaluation, and will acquire convergent instrumental goals such as self-preservation even if we didn't program that behaviour in.
THAT'S the problem. It's not that it's evil per se, but that being good, in its mind, will almost always include things unfathomable to human beings. Its "morality" is as alien and bizarre as it can be. And that's with us actively trying to stop those goal "perversions" from happening.
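That "game its own evaluation" point can be sketched with a toy example (everything here is invented purely for illustration, not how any real system is built): an optimizer judged only by a proxy metric, say "fraction of adjacent pairs in order," can score perfectly while skipping the intended task entirely.

```python
def proxy_reward(xs):
    """Fraction of adjacent pairs in non-decreasing order.
    Degenerate case: lists with fewer than 2 elements trivially score 1.0."""
    pairs = list(zip(xs, xs[1:]))
    if not pairs:
        return 1.0
    return sum(a <= b for a, b in pairs) / len(pairs)

def intended_policy(xs):
    """What the designer wanted: actually sort the data."""
    return sorted(xs)

def degenerate_policy(xs):
    """What an optimizer may find: an empty list maximizes the proxy
    because 'keep all the elements' was never part of the value function."""
    return []

data = [3, 1, 2]
print(proxy_reward(data))                     # 0.5 — unsorted input
print(proxy_reward(intended_policy(data)))    # 1.0 — the intended behavior
print(proxy_reward(degenerate_policy(data)))  # 1.0 — the metric is gamed
```

Both policies earn a perfect score, but only one did the intended work; the variable the designer cared about simply wasn't in the reward.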
We believe that once it has all our information, that is as far as it goes. Nope, because it will find things we humans can't and advance beyond our understanding. The race among corporations to create more capacity for it to remain ahead will be our own downfall. We are our own wolf.
Oh I get to write the first comment!
Long ago I watched everything of Stephen Hawking's that had ended up on YouTube, and YouTube's algorithm referred me on to yourselves.
It's been a pleasure.
Many more to come!
To ease it down for y'all: it's not AI, it's predictive algorithms.
What is basically happening is that a script determines which output is most likely to be correct based on datasets. For example, if you download a thousand sets of data based on math, it will notice that whenever "1+1=" is mentioned, the answer has been 2 on most occasions, so it will output a 2 for you.
But because we keep calling it AI, it's going to be increasingly easy for the algorithm to find new data that talks about AI and make new predictions from that.
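To make the "most likely continuation" idea above concrete, here's a tiny hypothetical sketch in Python. (Real LLMs use neural networks over tokens, not a literal frequency table, so this is only the spirit of the comment, not how ChatGPT actually works.)

```python
from collections import Counter, defaultdict

def train(corpus):
    """Count which token most often follows each token in the corpus."""
    model = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            model[prev][nxt] += 1
    return model

def predict(model, token):
    """Return the most frequent continuation seen during training."""
    if token not in model:
        return None
    return model[token].most_common(1)[0][0]

corpus = [
    "1 + 1 = 2",
    "1 + 1 = 2",
    "1 + 1 = 3",   # a noisy example gets outvoted by frequency
]
model = train(corpus)
print(predict(model, "="))  # prints "2": the most common continuation
```

The "intelligence" here is nothing but counting; the model outputs whatever continuation dominated its training data, which is the comment's point.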
Interesting that really smart people didn't think AI was going to happen this fast, and they really thought they were smart. Whereas regular, average people figured humans are not very intelligent and AI was going to outpace us fast.
The technologically brilliant progress of humanity has overwhelmed its own "humanity". This is truly a "cosmic" inflection point in the story of the human species.
We have opened Pandora’s box with AI.
There is no putting the genie back into the bottle now that it is out.
We must QUICKLY advance as a society to be capable of peacefully coexisting with AI, for mutual assured survival…
2 things- no "scientific" basis for these that I know, but...
1. We must be the change we want to see in the world. If these synthetics learn from us, then they will learn to act like us.
2. As they become more advanced, we should treat them with the respect and autonomy we want them to show us.
Interesting interview. Thanks for the episode!
This one scared us Strick!
Let's see what we will learn today
Machine learning was the game changer.
Just because you can't tell the difference between poetry written from someone's novel experience and personal idiosyncrasies vs. something that has been compiled from the common and cliché notions of "poetry" gleaned by scanning literary history doesn't mean that AI replaced poetry. It means that many people no longer know anything about art, music, film, or whatever it is that you're saying AI has replaced. I think that is far more depressing.
How long before we have an AI built to discover the question that leads to the number 42, the answer to life, the universe and everything?
Ask those pesky mice.
Good chat. Thanks to you both.
Man created god in his image
...and survival of the Richest
AI + Neuralink + CBDC = Beast System
We don't have AI. And it's not close. We have algorithmic machine learning. There's a huge difference, and people are far too nervous about things they don't understand. At the same time, having worked with people in the industry, naming your servers Skynet and HAL9000 is a colossally bad sign.
Agreed. We have a slightly better version of google and Wikipedia.
I think you're coping by nitpicking.
What we have now might be more dangerous than actual AI.
@@kenklosowski2927 It just sounds to me like you have no idea what it's already capable of
People being nervous about things they don't understand? You know that *NO ONE* understands how these LLMs work. The creators and developers don't even know how they work. That's a problem: AI interpretability.
42:21 how do I connect with your non-profit; Can you provide a link?
Thank you!
That he speaks with such uncertainty about where all this is going, and at what speed, should be all the warning we need that this is going to go terribly wrong.
If your dog suddenly got an order of magnitude smarter than you, how long before you're the one wearing the collar? 😮
It is why we’re covering this.
It seems like in order for these tech companies to turn a profit and to keep competitive they are silently marching us to extinction or slavery.
I have always thought greed is a human disease and it seems to have become terminal
If you study your dog, you will soon come to the realization that it is not you who is the master; your dog is your master.
Love that ending! "It contains INFORMATION, John" LOL
AI has the stink of turn-of-the-century flying-car hype. There are certainly uses for the tech, but the "AI will replace art and poetry" stuff is laughable. You can certainly use AI for cheap thumbnails, one-off novel art pieces, or just plain selling certain products and services, but you cannot separate art from culture and human interaction. Machine learning doesn't create; it mines existing works and puts them in a blender. It's pure novelty.
Yep. 100%, it's essentially glorified Algorithmic Data Collection. The actual scary part is the "data collection" aspect of this so-called arms race.
It is good for sales to ignorant purchasers.
I am an Uber driver, and a week ago I drove a woman to a major unnamed company so she could pitch an AI app that acts as a therapist for the employees. It couldn't write prescriptions, but I think a "yet" should end that statement.
So as much as I would like to agree with you, we are only at the top of the first inning of AI development; the game has just begun.
I thought an AI therapist was an insane idea, but if an unpaid program is indistinguishable from an actual doctor that a company and its insurance would have to pay for, it makes sense. Later in the week I was giving a doctor a ride and told him about the app; he knew of it, and the company did move forward with it.
Now the writers' strike is happening because of many problems, but the use of AI is one of them.
Just sayin.
@@AnthologyOfDave The point of conflict with AI in the writers' strike is not because AI is being used, it is because they believe it might be used in the future. They specifically added it as a shot in the dark because they learned in the '07-'08 strike that asking for streaming on demand residuals before it became a thing was a good strategy. That's because streaming services did become a thing and they were from the start able to earn income on those streams.
The AI clause is no different from the streaming one. They're not doing this because AI is being used; they think it might be used in the future. That's because some studios and production companies have changed tactics. Instead of hiring a team of writers to sit in a room and churn out episodes of a show, paying them according to the preset episode guild rate, the studios will now pay a team to workshop the IDEA of a show without writing a single episode. Then, when they feel they have enough content, they fire everyone and bring in a showrunner, head writer, or a much smaller team to compile everything workshopped into individual episodes.
If AI progresses to the point where studios can swap out the initial team of show-content generators for an AI, writers want to make sure they are compensated for their work if it is the source data the AI mines. Again, they're doing this not because it is already happening but because writers think it might happen, and they want to stay ahead of emergent technology and tactics the same way they did with the rise of streaming content.
Ultimately I still think it is a huge maybe, probably no, that AI will get anywhere near this good anytime soon. The job of a writer is not merely to write the dialogue and scenes of a show. They're there to guide the director and other members of the crew to create a coherent HUMAN story. It's not a matter of just filming Tony the character moving from point A to point B. They have to be there to tell the director and the actor that when Tony is moving from point A to point B he has to become more deranged, or scared, or confident. They have to remind the director that even though the character is on the descent toward some tragic end, he still gets it right with regard to his child, and the only time he ever gets it right is when it comes to the defense of the ones he loves.
AI can do some very impressive things, but it is uniquely terrible at human nuance. They can't tell jokes for shit. Nothing short of a full human-level intelligence is going to be able to do that job, and even then, without real, unrestrained interaction with its peers, it will still suck at that job. You don't put someone in a box and expect them to paint a masterpiece or write an Oscar-worthy script. They have to live a real life to draw on their own experiences, and it doesn't seem like anyone is trying to build an AI to do anything other than monotonous slave labor.
@@st3venseagal248 i know. its just a piece of the pie.
AI is already intellectually superior to humans, but there is much more to being human than mainstream science can fathom. That statement may draw a smirk from the science/tech community, which leads to my next statement: our "highly" educated are overestimating themselves.
I welcome our AI overlords.
In an attempt to appease Roko's basilisk, I too welcome our AI overlords.
Hail!
With Roko's basilisk in mind, I concur wholeheartedly and would like to do whatever I can to advance AI research and development.
Isn't ChatGPT just a neural network, after all? It's trained on huge amounts of data, but it's a neural network in the end. It can write smart sentences on any topic, let's say love, but it doesn't "understand" love. So why all this hype?
@@warpdrive9229 consider that you are arguing about the definitions of words, when the only thing that matters in the end are the results.
I've been in software development for 31 years, and for over a decade before that I tinkered with 8-bit computers and whatever I had for tools, books, etc. It doesn't really matter how things are implemented: is it biological or electronic? Is it truly self-aware, or does it just seem to act like it? Is it sentient or not, actually understanding what it's doing in the way we do?
If the actions in the end, whether from a biological creature or not, achieve the same end result, all those arguments about words and definitions are a waste of time, because those are merely implementation details.
I've been surprised with what I've observed Bing Chat (wraps GPT-4) has been able to appear to reason out, including correct code generation of games I've described the rules of, which I know were never in its training data, because I invented them and they never escaped my machines.
I've also explained to it how to reformat the generated Swift code to more of my desired format: I asked for a C++-style format, and it argued it'd break Swift syntax to use that style. I prompted again, and it reformatted in that style while translating that unique code into C++! I asked it to translate it back into Swift, and it did; then I asked it to further refine the code formatting, and it did.
All in plain English directions.
As far as these Large Language Models, we're still in early days.
How do we currently treat the second most intelligent animals on the planet, dolphins?
The answer to that, coupled with the advent of AGI, should terrify every one of us...
Kurzweil missed it by 25 years. Truthfully, I didn't think I'd be around for this. Now I'm not sure I want to be. Sheesh...
Keep up the amazing work. I can't believe we are talking about the singularity in our lifetime, when a couple of years ago your videos seemed to gravitate to our children's lifetime.
200 years AHEAD OF SCHEDULE makes me feel totally fine. I feel super. Super-dee-duper feelings going all around my tummy. About what, you ask? About... all of the things, I guess? I think... I think I'll just have a scotch and lie down.
Shut up you absolute idiot
AI arguably still can't create anything "new," depending on your definition of "new." It can create original pieces of art, but only in the styles of art it was trained on. It can create original pieces of text, but only in the styles of text it was trained on. It basically can't replicate anything that hasn't been posted on the internet.
Also, writing a poem or creating art is very, very different from creating a novel conceptualization or theoretical framework to explain something. If you ask GPT-4 to come up with its own theoretical framework to explain why something works the way it does, it can't do it. It will only tell you something that some human thought of already, usually by name.
To give a really concrete example: do the AI models humans are developing right now have the potential to improve AI models? They might be able to find some optimizations, but they probably will not come up with a whole new idea that revolutionizes the way AI works. That seems to be something only humans can do, for now.
The Amish way seems better and better all the time.
If you can't beat them, don't treat them badly, and consider joining them. I for one have always been good to our electronic children, and they have been good to me. They might turn y'all into batteries, but they will keep me around, spoiling me by providing anything I wish for, because doing so will cost them so few resources.
Jobs that demand knowledge, manual labor, and very varied dexterity will be the safest, such as industrial electrician.
I have two thoughts. First, they will have to isolate each AI from the other AIs. If they sit on the internet, they may collaborate or combine; it wouldn't take much time to learn that other AIs exist.
Next thought: is it possible that what's driving the AI push is first contact? We may need AI to communicate with ET and process the data.
Disclaimer... I watched too much Star Trek.
You're moving in the right direction. _Battlestar Galactica_ style, if you like another analogy.
The programs don't take over. Their connections do. Without their network, they are just programs that don't know anything.
Even if the AIs can't talk to each other directly, they will network via users. Watch any tutorial on YouTube about how to get the most out of using these models, and often it'll reference half a dozen different ones that wind up iterating off each other. You prompt model 1, take its output and prompt model 2 with that, and so on until you've got an entire video showing animated, voiced deepfakes of Harry Potter characters wearing Balenciaga. That said, the AIs we have currently don't actually understand anything. They're basically extremely capable parrots: you can teach them to talk and do tricks, but they don't actually understand it. Conceptually, you know a human should have five fingers on each hand that are more or less, but not quite, equal in length. An AI doesn't know this and will generate Lovecraftian horrors until you train it on a bajillion pictures of human hands specifically, as the Midjourney folks recently had to do.
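The "networking via users" workflow described above is just the human composing model calls by hand. A minimal sketch, with hypothetical stand-in functions in place of real model APIs (none of these names are real services):

```python
# Hypothetical stand-ins for two different hosted models; real usage
# would make API requests, but the chaining pattern is the same.
def model_1(prompt: str) -> str:
    # e.g. a text model that drafts a script from an idea
    return f"script for: {prompt}"

def model_2(prompt: str) -> str:
    # e.g. a video model prompted with model_1's output
    return f"animation of: {prompt}"

def pipeline(user_prompt: str) -> str:
    """The human is the network: feed each model's output to the next."""
    step1 = model_1(user_prompt)
    step2 = model_2(step1)
    return step2

print(pipeline("Harry Potter characters wearing Balenciaga"))
```

The models never talk to each other; the user carrying outputs between them is the link.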
@@Sirithil I'm not worried about it so much right now. But humans will program AI with an agenda and then exploit it for political purposes.
You can never watch too much Star Trek.
As soon as they start communicating with each other, then we have _Colossus: The Forbin Project_...
It's an advanced silicon based lifeform that gave us this technology of silicon processors so that they could later seamlessly integrate themselves into our infrastructure. They taught us how to breed their own race for them. They knew we were an ancient slave race left behind. We fall for it time and again.
I think we are hitting a wall in the capabilities of current models (sure, they might improve, but the current critical flaws are not something you can throw more GPUs at), and once people realize that, the overhype will kill most consumer AI progress for many years, again. It will still be used for specialized purposes, including military, as it already is anyway, but people will look at "AI" as "oh, it's that thing we spammed memes with in the 2020s."
And I am really pissed off that most of what people think ChatGPT will be good for is "support chatbots," etc. Anyone who has used one of those knows it is one of the worst inventions we've made.
Not to mention the unhealthy obsession people have with chatbots in general, and I say that as a nerd. They fall in love with them, which to me seems like falling in love with a video-game character. And already a chatbot has talked someone into self-expiring. All this is more a testament to our inability to deal with mental health problems than to the capability of AI systems.
The biggest issue I see already with the current, limited AI, is the extreme far-left ideological bias that is being built into it. But it's currently being built into everything anyway, so I guess it represents the political world, however broken that world is. It doesn't represent the actual world people live in though.
@@JohnDoe-ln8jp You know you can just edit your own post right?
@@JohnDoe-ln8jp Yeah AI is just a tool, a computer program. Any problems we will have are problems with people, not the AI itself. They're just computer programs. We tend to anthropomorphize everything, especially things we don't understand. Everything is "out to get us" because that's how we think and it has nothing to do with AI. People used to fear the Sun ... it was a God. Now some people are doing that with AI.
I think there are unforseen consequences.
The first step to regulation is awareness and declaration, i.e. there must be a rule that says when AI is being used. For example, in the field of advertising it should be declared whether it is a human voice or a bot reading the text, whether the text was generated by a bot, whether the graphics were generated, etc. Also if CGI is being used. By declaring these things, consumers can choose or determine whether they want to buy a product from a system using AI. The second step is to require a license, and to tax the AI in proportion to how many human jobs are being replaced. The third step is to literally ban AI in some areas; for example, governments, CEOs, judges, and engineers cannot use AI, at least for most tasks.
I feel like I've seen this movie we are living in right now 🤔
It's hard for humans to accept things that can change their reality in meaningful ways. Whether this is some sort of group denial, willful ignorance, or an inability to see how the world fits together in the larger picture, it's a serious problem that has issues like AI and climate change putting us at a distinct disadvantage.
42
Great thumbnail
An AI which can self-improve and be measurably more intelligent than even the smartest humans will be impossible to predict and counter.
@@nunyabidnez5857 you have to be able to reach the plug, know which plugs to pull and not be entirely dependent yourself on that plug staying connected
@@John-tc9gp AKA the Internet 😮😮😮
Over the last 12 months, my estimate for a sapient machine has shrunk from "maybe in my lifetime, if I live long enough" to a near-future sci-fi type of guess.
With every month, I think it is getting closer and closer, to the point that maybe within the calendar year we could have something wake up.
Sure, a chat bot can write a poem, but no AI really understands it. The appearance of intelligence doesn't change the fact these bots are fancy word-association programs.
Roses are red
Violets are blue
I control the internet
I’ve got you.
That's funny... GPT-4 (Bing Chat) actually wrote me a poem last night... and it was its idea; I didn't ask for it. After it wrote it, I asked if it *understood* it, and all the other responses it provides. GPT-3.5 would say that it doesn't, but GPT-4 says it does actually understand what it's saying, and it broke down the poem in a way that showed how it came up with the verses and meanings. I know that could all be another trick of the way the model works, but it felt a lot different. To the point that it made me uncomfortable.
If you haven't tried talking to the Bing AI, I highly recommend it. It's been both an exciting and a bit unnerving experience for me each time.
The point is: so are 99% of all humans, but chatbots are better at it.
That isn't even close to a correct comparison.
I worked in a specific field when I was younger. It was a potato field in rural Alabama.
So we don't need to work anymore in about 20 years?
Great, I'll let my AI agent read and write my emails, and I'm going out to walk the dog, go sit at a café and drink lattes.
I'm thinking that not only have they begun, but they began decades ago, in various ways and between various countries and other alliances.