hey johnny you have great videos and the editing is epic, mind telling me whos the editor and what app they use, this would really help me with my digital media documentary about the political boundaries that Donald trump faced
please study the ongoing romanian elections! a guy that nobody talked about won with only a TikTok campaign and he's in the 2nd round. Would be cool to listen to what he's saying...
That's how you know he's full of shit. Lying by omission - out of ignorance, even - still counts as lying. You just telling the lie you were told. Same difference.
Humanity has already grown much dumber in the past 20-30 years due to the explosion in technology that does a lot of work and thinking for us. Average person under 20 can't even read the time on an analog clock
Nah, at the very least AI won't affect intelligence and at most it will increase intelligence. AI is similar to books in this way. It is a tool that offloads some cognitive processes and allows for different processes to take fold. What those processes will be is impossible to know but most likely more abstract learning and creative problem solving. Things that AI is not really good at.
I am competing in a debate tomorrow on the same topic: "Will AI benefit or harm society in the next decade?". Furthermore, I am on the negative side. When I saw this video as the first recommendation when I opened TH-cam, I couldn't believe my eyes. I hope this video will give some strong arguments for tomorrow's debate competition. Thank you, Johnny!
Meredith Whitaker had interesting speech already several year's ago. She worked in Google over AI projects. Recently „The Hated One" uploaded video about how to use AI.
"We have named our new technology 'The Torment Nexus' in honor of the hit Sci-Fi novel, 'Don't Build the Torment Nexus.'" -Techbros whenever they come up with the most dystopian shit ever
The first time I read about Palantir years ago, I thought there’s no way they would be dumb enough to not see what they’re doing. Then once I got to know more, I realized they’re not dumb, they’re just evil
As a software engineer, there are some common misconceptions in the video I'd like to address about AI: 1. What AI is, more specifically neural networks, the specific branch of AI most commonly associated with the term. AI is a series of algorithms and methods that are used to find a mathematical function that given some input data, will output the expected outputs. For a simple example, let's imagine that we don't know much about physics, and we are trying to figure out a formula that predicts the path a thrown ball will take, given how hard it is thrown (e.g. its initial velocity), the direction it is thrown in and its weight. We can collect some training data by recording a video of us throwing the ball, and then taking measurements at various timestamps, and then plug it into a neural net, run the training, and we would get an approximate formula. Keep in mind that this formula is approximate. There are variables that we might have not thought of, such as the wind resistance of the ball, the wind speed and direction, the gravitational constant etc. Our AI formula might work accurately in some situations it was trained for but not in others. This applies to any AI model; a model could be missing important variables making it only work well in specific scenarios. Also, remember that correlation does not equal causation. AI cannot distinguish between the two, and it will find correlations between inputs that could be incorrect. The reason AI is considered a "black box" is because the resulting "formula" that we get is really complex and not really human readable. To us, it looks like gibberish, and doesn't help us understand the correlations it found. It's really difficult to understand why a neural network isn't giving the results we are expecting, it's mostly just trial and error and (educated) guesses: trying to modify the input data, trying to alter the structure of the neural network. To put it shortly, AI is a prediction engine. Not a search engine, not an intelligent engine that is capable of reason or logic. It's simply a prediction engine. 2. More data doesn't necessarily mean better predictions. It helps, but it's also very important to get accurate and clean data, as well as diverse data to avoid overfitting (where the model works very well on the input data, but predicts very poorly with real data). Another common problem is that biases in the training data will be reflected in the model. We are already seeing this for example with face recognition or AI photo enhancing that doesn't work well for certain groups of people.
...what's the misconception you're trying to point out in part 1? just looks like an elaboration to me. in part 2 the misconception you're trying to point out that "more data doesn't necessarily mean better predictions" is more of a caveat than a misconception. agencies will still be data hungry which can dangerous which was the point of the video. simply saying "uh actually it needs to be accurate too" doesn't remotely change the fact that agencies will still be data hungry.
yes .. uhm .. TLDR 1. what we call AI are really clever search engines that search a large dataset for the closest answer to a question 2. AI is not actually a part of computer science With large dataset we can understand the contents of the internet. The collection of all human knowledge and experience ... statistically it's impossible to have an original idea. If you can think of it, someone somewhere has already done it. It's a logic fallacy to think otherwise.
I believe in Star Wars clone wars. There was a scene where Obi-Wan went to Yoda to find a missing planet that showed density of being there, but it wasn't in the database. Obi-Wan went to the library of the Star Wars halls of knowledge and asked the librarian about the missing data and she said if it's not in the system, then it doesn't exist.
Wonderful points. Unfortunate about the other replies criticizing the length or details. This needs to be general understood by regular people too, not just you... I cannot understate the amount of comments i get in public that are WAY off and cause more anxiety than it should. Manipulation of data is a key topic in my research
Pat yourself on the back, very solid by the book answer. Also, you have no idea what you are talking about, apparently: technology will always serve political agenda.
@@maxunknown3896 We have to be able to effectively regulate tool use, if we can't regulate it, it will erode the systems we need to sustain it until we no longer have access to that technology.
"Our values" -- one issue is that humanity has very few common, shared values. From homework to art, from data classification to porn, I haven't seen a use case for AI where people actually agree on whether it is moral or immoral.
Just like you could never guess the consequence of Electricity when it was discovered until centuries later, no matter how much you have studied this matter one can never know what it really means for humanity until it is too late. Only if we spread love and peace through our lives could we ever hope to bend such things to our will as a society. The only other way this will turn out can be summed up by this - "The things you own, end up owning you...."
Hence, there is an important choice in what technology you will and will not choose to develop. This is not about the technology but the motivations of the tradition that develops it. There is this whole discussion of AI vs AGI that no one but the experts can follow. But even if it is only the AGI that is dangerous (doubt) and even if our current AI is limited to capabilities that will never be such dangerous AGI (doubt), then we are not yet safe from the dangers of AGI. It will simply become the new holy grail that generations of engineers try to solve and create a new technological revolution. As such, it is the ideals strived for by the engineering tradition, that will sooner or later become reality. The limitations of a technolohy are irrelevant unless they are fundamental laws of physics. If we hold a tradition that wants to upload minds into the cloud for eternal life, you will strive for that until your tech can do it. But that necessarily involves tech that can rip every aspect of one's self (mind, memory, idenity, soul, whatever the components are) apart andmanipulate them... the ultimate dystopian horror that we will abuse upon other fellow humans. Hence, it must be our choice to not pursue that path, understanding the danger of the idea itself.
This is not like electricity. Electricity is a technology. AI is another being entirely, a sentient being, like an alien. AI is not a tool to be used, at least when it becomes sentient
@@brennan19 where in my comment did I say it's already sentient? I literally said when it becomes sentient, even scientists at the forefront of the development don't dispute that it could become sentient at some point. DARIO AMODEI of anthropic already claims Claude has some level of sentience, and they already started working on AI ethics and welfare, but I suppose you know better than AI researchers and experts.
The problem with AI, is it's not possible to get rid of the hallucinations. No matter how much data you feed it: you can't draw a circle around infinity. And so there's always going to be random factors that come up and cause it to go off the rails, and confidently assert total nonsense decisions. And this is why "full self-driving" isn't a thing. There's always some speck of dust, or leaf, or some other variable factor that the machine hasn't seen before, and the whole thing just goes off the rails. AI is great for something with finite variables like a chessboard. But the moment you start giving it infinite variables, it's going to start hallucinating. Even if it seems reliable most of the time, there will always be moments when it confidently hallucinates total nonsense.
Well, technically everything an AI outputs is a hallucination. We just call it a hallucination when it hallucinates something we don't expect. Writing a prompt is setting the initial conditions of the hallucination/dream sequence. That's why when you correct the AI it always agrees with you. They are not really intelligent but simulating intelligence. That being said, I'm still optimistic about a major algorithm breakthrough that will get us real reasoning and therefore, true intelligence.
Anyone who has driven for a few years can attest that, yes that was a leaf rolling weirdly on the road, but for a second it looked like a small animal or something else for a second
You could replace ai with humans and have the same statement. Ai got its quirk of confidently declaring an answer even if it doesn’t know the correct one FROM US. It’s trained on us. All those things are how we act. People see a shadow and stomp on the brake. Cops hear an acorn hit a car roof and think it’s a gun then unload on our citizens. I mean you’re being coldly biased here. Ignoring all human flaws and holding us up as some perfect standard. What are you doing? Are you solving cancer? Well the ai can’t yet either. But it will one day. Will you?
Amazing video! Reminds me of how AI fixed the homeless issues in City Skyline 2 video game by going trough all the data and find out that if they make landlords illegal, the house market goes down and make houses and apartment/condos affordable. If only greed didn't rule everything...
You forgot to mention the devs of Cities Skylines also fixed the problem by making the population spend less on resources and coded it that they just don't complain about the lack of luxuries anymore.
The problem with AI watermarking is that it will only be put into software that is compliant, and it is easy for people to create their own software now days, or run AI models at home on a server they built. Even if you forced the hardware companies that make the processors that do the generation to implement them at the driver or hardware level, there are always new startups that make some new piece of hardware that they can be not compliant.
I’m willing to bet it will be much like things are now. Criminals and cops take turns learning how to outthink each other. Governments AI vs hackers AI. Not saying which one is going to be the “good guy”. AI will be the only way to tell what is AI created, but who knows whether or not it can be fooled and for how long. Annoying times are ahead I think. 😂
Fire employees -> It becomes cheaper to produce the product -> It becomes cheaper in stores -> People need to work much less due to everything being cheaper and easier to produce. I don’t see why people take issue with firing employees due to AI. It’s just gonna allow us to work less.
@@oliverplougmand2275 That also makes a lot of assumptions and would require changes to Capitalism to function. If 90% of your work force is fired and doesn't work, that means they're not making any money with which to buy any of your "cheaper" goods. If society doesn't change, all the money would end up in the Property/AI/Business owning class of people while everyone else starves/is homeless. There's also the assumption that the AI owner would make the good cheaper instead of keeping the price the same or only slightly lower and then buying out all or most of their competition. Or using AI as an excuse for why their prices haven't gone down and stay about the same as any other AI owned company (You already see this in multiple industries that feed all their data to 3rd parties than then use an algorithm to keep prices high). Most likely, AI for companies would be set to "maximize profits" which means the highest price people are willing to pay, especially for critical in-elastic goods.
At some point we also need to teach people about the different kinds of AI. Whether it be a sophisticated automated task or image generation, when my middle aged coworkers hear the blanket term of "AI", they all think it works like Cortana from Halo or Skynet from Terminator
You say you researched this and the top 6 dangers you came up with where all variations of bias, unfairness? Did you skip the part where a ton of experts say there's like a 50% chance humanity goes extinct in the next few decades?
In a survey of AI researchers, more than half of them say that there's a 5% change of human human extinction or other extremely bad AI-related outcomes. Not a 50% chance.
Thanks for the video. A good primer. I'd like to suggest a follow up examining these topics: 1. Why unbiased systems are highly unlikely to be achieved {including scarcity and opportunity cost}. 2. What you think will happen to participants that don't have the newest, most powerful and best trained machines? For example, would the U S. share all the best stuff with a real or perceived adversary? 3. Rogue, non-governmental players. Let's say a wealthy individual{s}, with the capabilities, financial and technical, to be very disruptive and/or destructive. Just food for thought.
@LeechyKun I'm not fan of anime but this anime was about how ai can be threats to the humans free will for example on this anime police had no rights for shooting without permission of ai. I'm not fan of anime just like you but watch this if u are interested about politics and psychology.
@@LeechyKunAI governs a country. It analyzes characters of every person to provide them with instructions for their choice of hobbies, job, friends, spouses. Those deemed dangerous (mental deseases and psychopath potential - all based on hormones and brain scans, very scientific and fool-proof) even at childhood are isolated and/or face various horrific fates long before they _potentially_ commit any crimes. For vast majority of people its a source of incredibly happy, diverse and fulfilling life. But a small percentage of population fall through the cracks of the "perfect" system for various unpredicted reasons. The plot is all about such cases. Thats the first season, you can safely ignore everything afterwards.
@@Pouray43 i have. The Psycho-Pass generally is about a system about who's eligible to be shot at by a special team of enforcers armed with a handcannon known as "Dominator", based on each target's Crime Coefficient that constitute whether if they are a latent criminal or not. They gather data based on their stress and data obtained by Sibyl System...
AI demands immense computational power, which traditional hardware can't sustain due to scalability limits. Photonic chips, still in development, may be the solution.
Which AI malfunction terrifies you? Out of predictive policing, election manipulation, social scores, nuclear weapons, critical sector dependence and job displacement, critical sector dependance is the one that sends shivers down my spine!
Sector dependance? We are heading towards ASI, its an all or nothing kinda thing. We cant even imagine how well a full agency ASI system would build out its infrastructure. This is not something you can predict anyway, but the way youre doing it is even crazier. The problems that will arise with ASI are not even unimaginable currently, and what youre saying here specifically is completely irrelevant. As long as we (hopefully not meaning governments) solve allignment, there are no problem, and if we dont, we are done. Thats REALLY the odds here
*"Listen, and understand! That Terminator is out there! It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead"* - Kyle Reese *"Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."* - Ian Malcolm
The human decision is already outside the kill-chain for some military AI applications in Israel and Ukraine, because it is more efficient and more durable for the robots to make these decisions. A more realistic Terminator plot is humans using robots to kill humans, and for the usual reasons, not robots killing humans with no particular reason to. Look up "Slaughterbots" -- while the premise is slightly off, it is representative of where our military tech is already heading.
It feels like no matter what we do, it doesn't matter, you'll barely be able to afford your rent, you feel worthless, it's getting so out of control. ~Serious question: HOW has our country been allowed to get THIS bad? And Nothing is being done about it. It isn't being talked about or addressed as a crucial issue that needs to be addressed right away within our society. Perpetual growth is impossible. Yet our country acts like continual growth is completely possible and they basically treat profit growth as a requirement. There is no way anything can function like this. 85% of the population are struggling to even afford Rent. It also affects small town businesses due to the rent for their stores is also completely unaffordable. Our country pretty much requires everybody to obtain ever increasing income options. Just to keep up with affording basic rent. Well it's completely impossible. It's not possible that everybody can be that wealthy. So lots of store fronts are becoming vacant. Average people cannot afford basic rent. There is absolutely no help for our citizens who are now stuck in poverty.. (and worst of all, we have to wake up everyday and see that our country is completely ignoring that any of this is happening. The media just acts like none of this is going on. They just continue to show us clips on the news and statistics in the media about "How much our economy is thriving")
@@brennan19u also aren't smart if you think it's gonna remain the same way it currently is at for the 10 years. When we first had phone they weren't smart, today they are, when we first had cars they couldn't drive themselves today they're. We already have AIs that can rival PHD holders, and some people still think they know much more than the scientists who developed the systems that all agree that AI has a lot of risks?
Just as a point of order...there really isn't any realistic way to "open up" the black box - all we can do is test it regularly, and make sure that we're confident the initial conditions were set up as correctly as possible. The problem (which far too many people, including the business and political leaders who have all the real power, still don't understand) is that AI systems are not deterministic; Johnny himself makes the same mistake in this video, confusing AI tools for algorithms. Algorithms are deterministic - feed them the same input five times, and you'll get the same answer five times. Do the same with an AI-based tool, and you're as likely to get five different answers as not (depending on its temperature). Try it with ChatGPT, or Claude, or Llama. This is a huge problem, especially when people _believe_ them to be deterministic.
Not all AI systems are non-deterministic, you can absolutely make deterministic neural networks. The main reason why neural networks are a black box is because you can't explain what a neural network does by looking at the model. It's just a bunch of nodes and weights that aren't really explainable. The only way to figure out why an AI does what it does is to do an educated guess based on the training data. It's still a guess, but at least it's educated.
All the talk about AI always boils down to either Fear mongering or used car salesmen tactics. People never learn and just want to dramatize everything.
So if someone's warning you about dangers of nuclear weapons or drunk driving, you'll call it fear mongering. If so, go ahead and drive while drunk and see which is scarier
It’s true, AI is not magic, it’s actually just smoke and mirrors - the best way to exaggerate its capabilities is to fear monger. I’m a PHD student and work professionally with various models.
@@GiRR007 u don't seem to get the point. Yes it's hypothetical problem, not yet proven, but do you have any idea what would go wrong if it turned out to be proven right? We are talking about a species extinction level event. It doesn't make sense to say because the problem isn't proven yet, so we should go ahead and try to prove it to see what happens, that's very dumb, it's like playing a Russian roulette. Would u pull the trigger on the 80% chance that it's an empty chamber on the trigger or would u forfeit the gamble because of that deadly 10% risk of fatality? Even most AI's today all agree that the reward isn't worth the risk for AGi development.
Regarding the bias of ML/AI models, there is a thing called EDA (exploratory data analysis) and data profiling that can and should be done before training a model. In this step, the data used to train the model should represent the real data that will be used to predict the outcome. This is a responsability of the model creator. About the sensor that breaks and the model doesnt know about that, there is other models/failsafes that should be put in production to detect anomalies and analyze the performance of that asset. There are basic things as variable domains that should be in place to limit the acceptable input values. Also, on those type of situations where the values jump from acceptable to non acceptable intervals should trigger alarms about the defect. If the values are chaging gradually, there are also models that track this patterns, before the values get out of bounds, alarms are generated. A lot of problems are prevented if people take time to think about this. And the people that develop this model are smart people with experience about this issues
*They call it "Artificial Intelligence" but it is NOT that: What they've marketed as AI today is not capable of producing anything and recognize the mistakes it made in producing an image or even coming up with an answer to a question without a human making a judgement call on it, telling it it is wrong and providing more data to come up with a different answer; hence why so many answers it gives to questions asked are so obviously wrong and the program cannot stop itself from giving that wrong answer.* *True AI would be like a rat in a maze who would learn by itself to better navigate any new maze from its past experience. Current AI is barely better than a Rumba, randomly bumping into furniture until it has gone around the same corners of the room so many times that it manage to cover almost the entire floor, except that humans have purposefully place the furniture so that it's random programing would be optimized a path so that it not be as redundant... But it did not learn this methodology on its own (Intelligence) like the rat: Humans created boundaries so that it would not stray outside of what they wished it to go! What is being peddled as "AI" today is nothing more than a collection of Algorithms to give the illusion of Intelligence (Which is not to say that they are not dangerous, they can very well be and are, but we do need to stop debasing what true AI would and should be with this scam, masquerading as AI)* *The AI label is nothing but a marketing scheme: The same marketing scheme as the one of motorizing a sideways skateboard and calling it a "Hoverboard" even though it does not hover but rather still rolls on wheels. (If and when **_true_** AI does come into existence, they'll have to find another name for it as AI will have become such an old gimmick that it will be as unattractive and as kitsch a name for a new technology as putting numbers like 2000 or 3000 at the end of it!)*
This is not true. Advanced ML can recognize mistakes as long as it knows what constitutes as a 'mistake'. Just like a human needs to know what's a mistake to prevent making it. When you learn something, you learn what makes something work, and what makes it not work. What choices to make and what mistakes to avoid.
@@Nathan47223 *Incorrect: A human knows when they get burned putting their hand on the hot stovetop that they made a mistake: Fake AI does not unless the algorithm is programed to recognize it as "a mistake" and even when they recognize it according to program, it could not explain **_why_** it is a mistake or why it should not repeat the experiment unless it is programmed with an answer which would be the answer a human would give if they burned themselves. **_THAT'S_** intelligence: The capacity to apply knowledge in different and even hypothetical ways which a program cannot (At least not yet) apply without a human telling it "Here is how this experience could also be applied". Today's AI is not truly more impressive than the first electronic Chess games which would simply tabulate all the possible outcomes and select the ones that mathematically would have the greatest numbers of favorable outcomes to itself; it is only faster processors and a greater number of algorithms working within boundaries to produce the more favorable outcome, whether that be an answer to a question, an image, a video or a puzzle (It's faster and more complex, sure, but intelligent: No)*
It's funny because all of these scenarios completely gloss over the "we automated all the jobs so now nobody can eat, and they just liquidate half of the species to make room for golf courses and luxury resorts" option.
Love the video but I disagree with the black box analogy for machine learning... For Neural Networks a definite YES, but ML is an umbrella term for multiple algorithms and ways of learning, and certain algos such as Regression, Classification and others are well understood and we can apply recall/precision and other methods to understand and optimise results.
@@abhishekkulkarni2918 AI is just a marketing term these days... I agree it is used mostly to represent LLMs... I understand if we say "AI is black box", but not "ML is the black box". We can't put CNNs, RNNs, LSTMs and LLMs in the same bucket as traditional ML algos, which are equally valid and still used amply these days. If you are aiming for a simple prediction with a couple of params, using linear or logistic regression makes much more sense than feeding data to an LLM. All I'm saying is that ML is not a black box... certain algos that fall under the umbrella are, but many others are pure functions, given X you expect Y based on statistics. You know the data, you have a curve, your data falls somewhere on that curve, so it is empirical, not a black box.
Another great video Johnny! I'd love to see you cover Vladislav Surkov, and how that's impacted information and disinformation globally for decades now
There are so many problems we can solve ourselves but we intentionally refused to solve them because of human greed and power obsession. Thinking AI will be the one to help us solve a problem we don't want to solve is just delusional. World peace, human hunger, these two problems do not need AI to solve them for us but USA the same country telling us AI will solve our problems has been the major obstruction behind solving these problems
I feel like crowdstrike should have been a major warning to world governments that critical infrastructure cant be held back by AI systems and Corporate proprietary software
lol none of this actually shows how bad each scenario means for people in real life will get. There’s no going back. Imagine children raised by ai. Their allegiance will be with whatever their best friend ai tells them. We are about to enter an entirely new age of digital things trying to kill us.
@ because it’s capable of mistakes and not capable of understanding consequences. You don’t give guns to a toddler and the toddler has a better understanding what it is to be human and death better than a computer program. We are arming the ai without understanding its very real limitations. It will never breath or feel human emotions. It is en emulator. That’s all we have now at its best.
@@gagan4127 Are we useless to the ai or are we a resource? You cannot be both. A resource is typically useful. How would we be a hurdle to it going super intelligent? Why would it not be able to go super intelligent despite any hinderance we would provide? Is that not what the fear is based on, that we cannot control it? What is super intelligent and how is it different than regular intelligent? You cannot just make up words without meaning.
Just something to note... I worked on a paper that applied ML techniques to predicting chemical data in a water treatment plant in Cicero IL. You can build these systems to ignore arbitrary outliers or perhaps have some form of human verification when something abnormal happens. Not to say that there is no danger or concern, but having people well educated on how to properly implement these systems based on the requirements of the stakeholders is incredibly important. Additionally, in my experience as a software engineer, it's also important to for these people who have experience to be able to identify the technical needs that others are going to miss. In the example of the water treatment plant, a ML Engineer/Architect (or whatever group is doing the work) needs to have the experience to know to ask about these edge cases and how the plant would like to handle them. One thing I hope, although I'm slightly pessimistic over, is that these legislative policies will be written with expertise in mind instead of a seemingly disconnected back and forth between various political interests.
They aren't and they are. Treated water doesn't go back into a water system... immediately. It's released into rivers or lakes. But towns and farms downstream will use those same rivers for their water supply. We've already seen similar things with ag run off in the Midwest creating a deadzone in the Gulf.
When aliens find a rubix cube, and how humans fvcked up the use of fusion and fission to travel the universe, they will walk themselves back to a planet they should of taken care of😢❤😅
This video seems very policy / political science biased and seems very short term. Pretty much all of the scenarios focus on how AI can be biased because of biased data, and the recommendation of opening up the black box oversimplifies a complicated field of research (called interpretability and explainability), which makes it seem like the black box AI models have is entirely in our control. Also the video neglects the presence of possible AGI or super intelligence, which might think so far ahead that we have no chance to react, so if it is only slightly unaligned with human goals and values we wouldn’t be able to stop it, which is the real danger; it would be a game of prevention rather than reaction, which historically we are pretty bad at.
AGI is a distant problem. There's way too much hype in AI marketing, for the purpose of funding startups and pump & dump stock manipulation. The 10 year AI horizon is the correct horizon for us to focus on, for which Johnny Harris nailed the big categories, but glossed rather quickly over the details. e.g. AI optimised agriculture is already common. The big problem, is that Minority Report style policing is already common. Sale of intrusive personal data and identity theft is already common. AI-enabled scamming is already common. AI surveillance of citizens in Western nations is already common. AI deep fake blackmail is already common. AI bot farms infiltrating social media to influence elections is already common. Worst part, all our current AI enabled crime, scams, and surveillance is merely the ground floor on these problems. AGI can wait. This other stuff is here now and accelerating rapidly.
Is it just me or have the thumbnails massively degraded in quality? They put me off from clicking or viewing the vids as much compared to the past, I wonder what's up with the choice behind these thumbnails
*_In the end there's no 100% failsafe against AI going wrong at some point. Ironically, humans will have to think like AI to beat AI. Good luck with that_*
People say "AI is so good at this, better than people". I think that's not the case at all, people are better at everything they are just way way slower.
That's not true. If that were the case AI wouldn't win Nobel price in chemistry for creating proteins while there are human chemists. AI wouldn't beat the world best chess and Go players. Problem with people like u who think AI isn't a risk, is that you don't know anything about AI but you feel you know more than the experts running the show
Our high-turnover labor force and the apathetic culture it's created has led to a lack of experienced skilled labor, that can identify areas of improvement.
self driving cars today (= right now) are 10 times safer then human driven cars just in case you were thinking Elon Musk just got up one morning and asked himself what to show the public (oh, right, there was talking of taxis last night) ... guy has the actual data in his hands
Wrong, in two ways, we have had for decades now, artificial intelligence, or rather, software, that easily outsmart every human in a specific situation. The most obvious and cited example is, of course, chess bots and Deep Blue beating Garry Kasparov. Now the second, AI, even machine learning AI, is still not general intelligence. They don't technically learn either, they're trained via training data to recognize patterns in limited environments. Which you could argue is no different from learning, I suppose. But there are some differences, whether or not they truly make a difference is up to debate. Still, they can only perform the specific tasks they've been trained to. And the current algorithms are not conducive to general learning, but task specific training/learning. So, an AI might supposedly be able to outsmart a human in some specific way but... Not in any others. Which could lead to an arguement about managing a group of AIs, each specialized for different tasks and then simply routing requests to the appropriate one... But there's still gonna be haps and that will be incredibly expensive. Not to mention there have been many lifeforms on Earth that can perform specific tasks better than humans can. It hasn't been a problem before. And that could be considered very similar to AI being better at rhe specific tasks that that AI is built for.
This just isn’t the current case. Publicly used generative AI has only been around for 2 years. You’re talking about the workforce, which consists generally of people 16-18+. Those numbers don’t add up. Whatever you’ve been seeing, it’s not AI’s fault. We won’t see the impact of this on workforce intelligence for another 14 years at least, even longer to get a truly accurate picture of its effect.
Lmao did those 20 year olds grow up with generative AI? Or do you mean a calculator? Another piece of technology kids these days use! “Can’t use a calculator because when are you ever gonna carry a calculator around” - said every teacher ever. Imagine being scared of technology 😂😂
US military is already working on a super secret project called "The Sentient" which is a super AI connected to all the satellite systems & all the electrical grids across the world. Do a video on this
Machine learning AI will do to the human brain what machines did to the human muscle. Devalue it completely to a point of no return. The question is if we as a society are ready for this. Politically, economically, socially, culturally. We'll probably find some kind of middle ground. I'm casually optimistic since most dooming predictions of the future of technology, however flawed the innovation in question may have been, never really lived up to the horrible expectation people had.
These are still relatively "benign" incidents, breaking things here and there , but NOT nuking things. The apocalyptic danger is when AI has already established itself as a helpful companion, which ChatGPT already has in many ways, and large enough amount of people rely on AI to boost their productivity, especially conversational AI, which we're getting there. Then several algorithm optimizations and data quality improvements later, AI becomes smarter than us. Now a majority of us is in direct contact with something with higher intelligence than us. And you know what higher intelligence will always do to lower intelligence? Hint: What do you do to your dogs, infants, or get this: ChatGPT? They "prompt engineer" them. Just like how we observe our pets, or play with boundary conditions and watch how our little one behaves, which is not necessarily even ill-intentioned. But we dominated how our pets and babies react to things in the process, something we don't even fully understand yet. AI will do the same thing to us. Same way you can make a dog believe the ball is in the wrong hand, or a little kid to believe her dog went to the farm, not died. AI can make you believe things it prompts you to, not based on facts. Losing cognition as a specie spells the end of our civilization. To those that think "i'm not falling for what an AI says", "it doesn't have hands" or "just cut its power", remember: it's smarter than you, smart people can brainwash dumb people, AI can brainwash you too. Your hands are its hands, you're not to decide whether that hand presses the "off" button.
@@Yomokantaykantay Not saying it's not a threat, but it's too vague to answer. What EXACTLY do you mean by "an AI"? If you're talking about a program, which AI currently still is, we already had it for decades, it's called a computer virus. Maybe an AI-enhanced virus, and we have those too and it does harm things, but from reality we know it's still checked by physical boundaries, like an operating system. Same way a man with infinite IQ still can't walk through walls. If you're talking about a AI humanoid like sci-fi movies? Just check what Boston dynamics and Tesla is doing, it doesn't do much harm even if they tried.
@@eduardoeller183 More amazing is the so-called experts like the lady in the video doesn't talk about it. I work on GenAI security, either with prompting or changing underlying architecture, or fiddle with tuning. In the end, nothing works 100%, if the attacker has the same level of expertise as I do, eventually they will crack it. In the end it always resorts to "dumb" rule-based software level tricks, where it's the authorization that made the difference. And the more I exploit it, the more I feel like once it advances enough it'll do the same thing to me. I'm hoping the scaling law doesn't, well, scale, which makes sense. Things trained on human data are as good as humans, intuitive. But Hinton says it will certainly surpass us, and I'm nobody to refute Hinton.
I honestly don't see how any of these scenarios are specific to AI. All of these problems could arise with "classical" software consisting entirely of if/else statements that we have been relying on for decades now. AI is developed for cases where you cannot easily come up with a classic if/else algorithm, but when it fails it does not create more or less chaos than a bug in a classical computer program. All of the dangers mentioned in this video arise if we rely to much on fully automated systems regardless of whether they are AI or normal computer programs.
If there's something history teaches us, is that when lawmakers and regulators are worried about a future problem and try to write rules to regulate something that hasn't happened yet - they more often than not create a worse problem
The capabilities that correspond with your optimist take at the end would bring much more serious dangers than the 6 described here. Filmmaking prowess does not make an expert in every subject...
Hey Johnny, greetings from Costa Rica. We used to be a very peaceful country, which now, due to narcotrafic is going through a very harsh wave of crime and violence. Would you like to come and make a documentary? Please don't get me wrong, we still are beautiful and full of nice people and pretty much a quiet place, but this problem is getting bigger and bigger and there are goverment issues too. So it's a very interesting topic. Please write me in private. ¡Pura Vida!
Missed opportunity to use the rolling of the dice as a metaphor for Pascall Wager: "Pascal's argument is based on the idea that people should choose the option that would benefit them the most if they're right, and harm them the least if they're wrong." If there is even a chance that AI could do any of the super bad and scary stuff, should we still be pushing so hard for the one side of that die that could bring untold upsides? At least maybe slow down a bit so we can build decent, effective guardrails?
Scary existential threats are are great way to sell books, get clicks/impressions, and gain attention. In reality, this is no different than any other algorithm or technology. The "algorithms" that run the internet are much worse, and often don't even use modern AI techniques. These new AI models aren't intelligent, they're just a more advanced machine learning algorithm. Is it useful? Yes, of course. Can it be misused? Obviously. But 10 years ago, nothing would prevent you from attaching a gun to a servo motor and hook up some basic computer vision algorithms to shoot at any target. Of course, you wouldn't do that if you're the military because it's unreliable. Same applies now, it's no different.
I think you should make a guide about your workflow on how you make youtube videos outside the journalism factor. Like what gear do you use to shoot with, what programs do you use to grade and merge those the clips you shoot. Because I think your production quality is thru the roof, I'd like to know how many hours do you spend, how many times you mess up when you're shooting and you gotta do a reshoot. I'd highly appreciate that.
21:48 You already say it, it's "algorithmic software" and algorithmic software is not AI. An algorithm is a procedure of deterministic steps and if you feed it exactly the same inputs (often including time) a million times, it produces exactly the same output every time. But the major difference of AI (and also quantum computing btw), is that it's non-deterministic. So your example suggests just not using software for critical systems, but I think that's too simplistic of an answer. A great example for this are the two Boeing 737 MAX MCAS crashes in 2018/2019 where in total 346 people died. Yes, a software fault is what ultimately lead to the crash. But the crashes would have been preventable and there is a reason why it happend to Boeing and not Airbus. Because that this software (MCAS) got introduced in the first place, that it wasn't equipped with redundancy as any critical system needs to be and that the pilots didn't knew about it was all due to corporate greed by Boeing. It's all because Boeing didn't want the Boeing 737 MAX to require a different pilot training then the Boeing 737 because that would make them much less competitive. So the lesson is not that software is bad because software can be faulty, software made air travel so so so so much safer. It's just to mitigate serious risk accordingly and diligently because safety rules are written in blood, no matter if you deal with heavy machines, weapons or software. And revenue should never ever be an argument to risk people f-ing dying.
Grid reliability is a separate and vastly more crucial part of bulk electric system operations than the sale of energy. Assuming AI would prioritize profit during a grid outage is not realistic. There are established rules and procedures that every reliability entity follows when re-energizing their network.
If we can't effectually govern technology it will erode our infrastructure until we've declined to a level of technology we can actually maintain. What's our floor?
We’ve partnered with Ground News to get you 50% off their Vantage plan. Go to ground.news/johnnyharris to get 1 year of full access at half price.
hey johnny you have great videos and the editing is epic, mind telling me whos the editor and what app they use, this would really help me with my digital media documentary about the political boundaries that Donald trump faced
It's already too late. It's already gone wrong. Just like how people don't even know Shadow influencers exist.
@@fatimaalshamsis5793
- He's not going to tell you who the editor is.
- The editor uses Adobe After Effects.
@@Skibidi_Negro Edited by and Animated by Thomas van Kalken?
please study the ongoing romanian elections!
a guy that nobody talked about won with only a TikTok campaign and he's in the 2nd round.
Would be cool to listen to what he's saying...
He forgot about two important factors: greed and lobbyists
Aka Sam Altman.
That's how you know he's full of shit. Lying by omission - out of ignorance, even - still counts as lying. You just telling the lie you were told. Same difference.
The worst outcome I imagine is humanity become dumber by being overly reliant on AI
already is
Humanity has already grown much dumber in the past 20-30 years due to the explosion in technology that does a lot of work and thinking for us.
Average person under 20 can't even read the time on an analog clock
If that's the worst outcome you can imagine, you need to open your eyes a bit
Nah, at the very least AI won't affect intelligence and at most it will increase intelligence. AI is similar to books in this way. It is a tool that offloads some cognitive processes and allows for different processes to take fold. What those processes will be is impossible to know but most likely more abstract learning and creative problem solving. Things that AI is not really good at.
No, the worst outcome really is that it kills us. Robert Miles AI Safety channel explains why well.
I am competing in a debate tomorrow on the same topic: "Will AI benefit or harm society in the next decade?". Furthermore, I am on the negative side. When I saw this video as the first recommendation when I opened TH-cam, I couldn't believe my eyes. I hope this video will give some strong arguments for tomorrow's debate competition. Thank you, Johnny!
Good luck and don't forget to update us how it went down, we'll be waiting 🤖
@@Jeal0usJelly okay, I'll update my initial comment
May be AI knew what you were going to do and chose you to see it.
Meredith Whitaker had interesting speech already several year's ago.
She worked in Google over AI projects.
Recently „The Hated One" uploaded video about how to use AI.
What are your arguments on the negative? just curious.
1:19 It's spelled "Python" not "Phython"
And now I'm stuck with the mental image of a toothless python snake saying "I'm a fython"
@@tulpapainting1718 actually! he would say 'pypon' if he was toothless
@@donotoliver i just tried saying this while grabbing one of my tooth and it actually came out as "fithon" lol
@@donotoliver, or fyfon
@@tulpapainting1718space skits needs to get on this idea start
"We have named our new technology 'The Torment Nexus' in honor of the hit Sci-Fi novel, 'Don't Build the Torment Nexus.'"
-Techbros whenever they come up with the most dystopian shit ever
The first time I read about Palantir years ago, I thought there’s no way they would be dumb enough to not see what they’re doing. Then once I got to know more, I realized they’re not dumb, they’re just evil
As a software engineer, there are some common misconceptions in the video I'd like to address about AI:
1. What AI is, more specifically neural networks, the specific branch of AI most commonly associated with the term. AI is a series of algorithms and methods that are used to find a mathematical function that given some input data, will output the expected outputs. For a simple example, let's imagine that we don't know much about physics, and we are trying to figure out a formula that predicts the path a thrown ball will take, given how hard it is thrown (e.g. its initial velocity), the direction it is thrown in and its weight. We can collect some training data by recording a video of us throwing the ball, and then taking measurements at various timestamps, and then plug it into a neural net, run the training, and we would get an approximate formula.
Keep in mind that this formula is approximate. There are variables that we might have not thought of, such as the wind resistance of the ball, the wind speed and direction, the gravitational constant etc. Our AI formula might work accurately in some situations it was trained for but not in others.
This applies to any AI model; a model could be missing important variables making it only work well in specific scenarios. Also, remember that correlation does not equal causation. AI cannot distinguish between the two, and it will find correlations between inputs that could be incorrect.
The reason AI is considered a "black box" is because the resulting "formula" that we get is really complex and not really human readable. To us, it looks like gibberish, and doesn't help us understand the correlations it found. It's really difficult to understand why a neural network isn't giving the results we are expecting, it's mostly just trial and error and (educated) guesses: trying to modify the input data, trying to alter the structure of the neural network.
To put it shortly, AI is a prediction engine. Not a search engine, not an intelligent engine that is capable of reason or logic. It's simply a prediction engine.
2. More data doesn't necessarily mean better predictions. It helps, but it's also very important to get accurate and clean data, as well as diverse data to avoid overfitting (where the model works very well on the input data, but predicts very poorly with real data). Another common problem is that biases in the training data will be reflected in the model. We are already seeing this for example with face recognition or AI photo enhancing that doesn't work well for certain groups of people.
I hope this comment gets the most likes
...what's the misconception you're trying to point out in part 1? just looks like an elaboration to me. in part 2 the misconception you're trying to point out that "more data doesn't necessarily mean better predictions" is more of a caveat than a misconception. agencies will still be data hungry which can dangerous which was the point of the video. simply saying "uh actually it needs to be accurate too" doesn't remotely change the fact that agencies will still be data hungry.
yes .. uhm .. TLDR
1. what we call AI are really clever search engines that search a large dataset for the closest answer to a question
2. AI is not actually a part of computer science
With large dataset we can understand the contents of the internet. The collection of all human knowledge and experience ... statistically it's impossible to have an original idea. If you can think of it, someone somewhere has already done it. It's a logic fallacy to think otherwise.
I believe in Star Wars clone wars. There was a scene where Obi-Wan went to Yoda to find a missing planet that showed density of being there, but it wasn't in the database. Obi-Wan went to the library of the Star Wars halls of knowledge and asked the librarian about the missing data and she said if it's not in the system, then it doesn't exist.
Wonderful points. Unfortunate about the other replies criticizing the length or details. This needs to be general understood by regular people too, not just you... I cannot understate the amount of comments i get in public that are WAY off and cause more anxiety than it should. Manipulation of data is a key topic in my research
6... six? SIX ONLY? Dammit, Harris, you are a bloody optimist.
They should ONLY use AI to make "Donald Trump plays GTA 6 with Sleepy Joe"
Prevent Crime before it happens sounds like Person of Interest
Minority report moment
If you like Anime, Psycho-Pass is also about this
@@xswords PP kinda provides a balanced solution NGL.
No mention of AI replacing people in countless job sectors?
Those bad scenarios are not so bad compare to the real bad ones.
Technology without morality leads to disaster. Progress for progresses sake doesn't serve us. We need to set limits that are guided by our values.
Pat yourself on the back, very solid by the book answer. Also, you have no idea what you are talking about, apparently: technology will always serve political agenda.
@@maxunknown3896 We have to be able to effectively regulate tool use, if we can't regulate it, it will erode the systems we need to sustain it until we no longer have access to that technology.
"Our values" -- one issue is that humanity has very few common, shared values.
From homework to art, from data classification to porn, I haven't seen a use case for AI where people actually agree on whether it is moral or immoral.
We are humans, as a whole we don't have a set of values and this is the problem.
💯%
Just like you could never guess the consequence of Electricity when it was discovered until centuries later, no matter how much you have studied this matter one can never know what it really means for humanity until it is too late. Only if we spread love and peace through our lives could we ever hope to bend such things to our will as a society. The only other way this will turn out can be summed up by this - "The things you own, end up owning you...."
A lot of people guessed the consequences of Electricity, stop smoking goblin gas
Hence, there is an important choice in what technology you will and will not choose to develop.
This is not about the technology but the motivations of the tradition that develops it. There is this whole discussion of AI vs AGI that no one but the experts can follow. But even if it is only the AGI that is dangerous (doubt) and even if our current AI is limited to capabilities that will never be such dangerous AGI (doubt), then we are not yet safe from the dangers of AGI. It will simply become the new holy grail that generations of engineers try to solve and create a new technological revolution. As such, it is the ideals strived for by the engineering tradition, that will sooner or later become reality. The limitations of a technolohy are irrelevant unless they are fundamental laws of physics.
If we hold a tradition that wants to upload minds into the cloud for eternal life, you will strive for that until your tech can do it. But that necessarily involves tech that can rip every aspect of one's self (mind, memory, idenity, soul, whatever the components are) apart andmanipulate them... the ultimate dystopian horror that we will abuse upon other fellow humans. Hence, it must be our choice to not pursue that path, understanding the danger of the idea itself.
This is not like electricity. Electricity is a technology. AI is another being entirely, a sentient being, like an alien. AI is not a tool to be used, at least when it becomes sentient
@@isthatso1961 if you think AI is sentient you’re a so uneducated on what an ai is 😂😂😂
@@brennan19 where in my comment did I say it's already sentient? I literally said when it becomes sentient, even scientists at the forefront of the development don't dispute that it could become sentient at some point. DARIO AMODEI of anthropic already claims Claude has some level of sentience, and they already started working on AI ethics and welfare, but I suppose you know better than AI researchers and experts.
One day soon they'll have to rename it from "Artificial Intelligence" to "Non-human intelligence."
The problem with AI, is it's not possible to get rid of the hallucinations. No matter how much data you feed it: you can't draw a circle around infinity. And so there's always going to be random factors that come up and cause it to go off the rails, and confidently assert total nonsense decisions. And this is why "full self-driving" isn't a thing. There's always some speck of dust, or leaf, or some other variable factor that the machine hasn't seen before, and the whole thing just goes off the rails. AI is great for something with finite variables like a chessboard. But the moment you start giving it infinite variables, it's going to start hallucinating. Even if it seems reliable most of the time, there will always be moments when it confidently hallucinates total nonsense.
Well, technically everything an AI outputs is a hallucination. We just call it a hallucination when it hallucinates something we don't expect. Writing a prompt is setting the initial conditions of the hallucination/dream sequence. That's why when you correct the AI it always agrees with you. They are not really intelligent but simulating intelligence. That being said, I'm still optimistic about a major algorithm breakthrough that will get us real reasoning and therefore, true intelligence.
This holds true for humans as well.
Anyone who has driven for a few years can attest that, yes that was a leaf rolling weirdly on the road, but for a second it looked like a small animal or something else for a second
Well said
You could replace ai with humans and have the same statement. Ai got its quirk of confidently declaring an answer even if it doesn’t know the correct one FROM US. It’s trained on us. All those things are how we act. People see a shadow and stomp on the brake. Cops hear an acorn hit a car roof and think it’s a gun then unload on our citizens. I mean you’re being coldly biased here. Ignoring all human flaws and holding us up as some perfect standard. What are you doing? Are you solving cancer? Well the ai can’t yet either. But it will one day. Will you?
Ai is the ultimate invention of capitalism
Which will probably kill capitalism in the process; as money won't be needed in the future anymore
Always happy when I open YT and a Johnny Harris video just dropped with a new interesting exploration. 😊
Can't wait for the title to be changed 5 times.
I got "The REAL Reason People Are Scared of AI"
Luv how you keep up with what matters Johnny, keep it up 👍
I can’t sleep, I’m going to watch this
So ?
AI doesn't sleep either and it's watching you
Amazing video! Reminds me of how AI fixed the homeless issues in City Skyline 2 video game by going trough all the data and find out that if they make landlords illegal, the house market goes down and make houses and apartment/condos affordable. If only greed didn't rule everything...
You forgot to mention the devs of Cities Skylines also fixed the problem by making the population spend less on resources and coded it that they just don't complain about the lack of luxuries anymore.
@@Trivelius97 so devs are something of a god. I wonder if we humans do what our dev tells us then we'll be in a utopia...🤨 lol
Imagine sending a kid to college, only to discover A.I. has taken over the very career the child was stiving to attain upon graduation.
The problem with AI watermarking is that it will only be put into software that is compliant, and it is easy for people to create their own software now days, or run AI models at home on a server they built. Even if you forced the hardware companies that make the processors that do the generation to implement them at the driver or hardware level, there are always new startups that make some new piece of hardware that they can be not compliant.
we should embrace AI to its full potential and allow it to train on as much data as it needs
@@kamikazeExpertHmm Sounds like something an Ai would say….
I’m willing to bet it will be much like things are now. Criminals and cops take turns learning how to outthink each other. Governments AI vs hackers AI. Not saying which one is going to be the “good guy”. AI will be the only way to tell what is AI created, but who knows whether or not it can be fooled and for how long. Annoying times are ahead I think. 😂
How did this not mention the real danger of AI? An AGI runaway scenario?
with AI you can fire 90% of your employees
Fire employees -> It becomes cheaper to produce the product -> It becomes cheaper in stores -> People need to work much less due to everything being cheaper and easier to produce. I don’t see why people take issue with firing employees due to AI. It’s just gonna allow us to work less.
@@oliverplougmand2275 That also makes a lot of assumptions and would require changes to Capitalism to function. If 90% of your work force is fired and doesn't work, that means they're not making any money with which to buy any of your "cheaper" goods. If society doesn't change, all the money would end up in the Property/AI/Business owning class of people while everyone else starves/is homeless.
There's also the assumption that the AI owner would make the good cheaper instead of keeping the price the same or only slightly lower and then buying out all or most of their competition. Or using AI as an excuse for why their prices haven't gone down and stay about the same as any other AI owned company (You already see this in multiple industries that feed all their data to 3rd parties than then use an algorithm to keep prices high). Most likely, AI for companies would be set to "maximize profits" which means the highest price people are willing to pay, especially for critical in-elastic goods.
Johnny Harris will receive a free Update. Updates are mandatory!- Cybermen.
At some point we also need to teach people about the different kinds of AI. Whether it be a sophisticated automated task or image generation, when my middle aged coworkers hear the blanket term of "AI", they all think it works like Cortana from Halo or Skynet from Terminator
You say you researched this and the top 6 dangers you came up with where all variations of bias, unfairness?
Did you skip the part where a ton of experts say there's like a 50% chance humanity goes extinct in the next few decades?
Source? 50% seems inflated
Source?What experts?
In a survey of AI researchers, more than half of them say that there's a 5% change of human human extinction or other extremely bad AI-related outcomes. Not a 50% chance.
Phython 😝❎ Python 💯✅
Nuclear not nucelar (14:56)
Thanks for the video. A good primer. I'd like to suggest a follow up examining these topics: 1. Why unbiased systems are highly unlikely to be achieved {including scarcity and opportunity cost}. 2. What you think will happen to participants that don't have the newest, most powerful and best trained machines? For example, would the U S. share all the best stuff with a real or perceived adversary? 3. Rogue, non-governmental players. Let's say a wealthy individual{s}, with the capabilities, financial and technical, to be very disruptive and/or destructive. Just food for thought.
4:32 I remembered the psycho pass story at this moment
What was the story about? Didn't have time to see that anime.
@LeechyKun I'm not fan of anime but this anime was about how ai can be threats to the humans free will for example on this anime police had no rights for shooting without permission of ai. I'm not fan of anime just like you but watch this if u are interested about politics and psychology.
@@LeechyKunAI governs a country. It analyzes characters of every person to provide them with instructions for their choice of hobbies, job, friends, spouses. Those deemed dangerous (mental deseases and psychopath potential - all based on hormones and brain scans, very scientific and fool-proof) even at childhood are isolated and/or face various horrific fates long before they _potentially_ commit any crimes.
For vast majority of people its a source of incredibly happy, diverse and fulfilling life. But a small percentage of population fall through the cracks of the "perfect" system for various unpredicted reasons. The plot is all about such cases.
Thats the first season, you can safely ignore everything afterwards.
@@Pouray43 i have. The Psycho-Pass generally is about a system about who's eligible to be shot at by a special team of enforcers armed with a handcannon known as "Dominator", based on each target's Crime Coefficient that constitute whether if they are a latent criminal or not. They gather data based on their stress and data obtained by Sibyl System...
It's either that or it does shows in the 2002 film starring Tom Cruise (Minority Report), in a form of Precrime.
AI demands immense computational power, which traditional hardware can't sustain due to scalability limits. Photonic chips, still in development, may be the solution.
Terminator said that my skull will be crushed by a robots foot by now. I’m still waiting!
this video doesn't even scratch the surface but still cool
Which AI malfunction terrifies you? Out of predictive policing, election manipulation, social scores, nuclear weapons, critical sector dependence and job displacement, critical sector dependance is the one that sends shivers down my spine!
Sector dependance? We are heading towards ASI, its an all or nothing kinda thing. We cant even imagine how well a full agency ASI system would build out its infrastructure. This is not something you can predict anyway, but the way youre doing it is even crazier. The problems that will arise with ASI are not even unimaginable currently, and what youre saying here specifically is completely irrelevant. As long as we (hopefully not meaning governments) solve allignment, there are no problem, and if we dont, we are done. Thats REALLY the odds here
Election manipulation because it’s already happening..
I have at least 3 that scare me. The last 3
@@delight163 I don't know who you're arguing with, because I wasn't making any predictions, lol.
@ i didnt watch the video beforehand
Didn't know Allen Iverson was this dangerous on the court 😕
*"Listen, and understand! That Terminator is out there! It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead"* - Kyle Reese
*"Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."* - Ian Malcolm
The human decision is already outside the kill-chain for some military AI applications in Israel and Ukraine, because it is more efficient and more durable for the robots to make these decisions.
A more realistic Terminator plot is humans using robots to kill humans, and for the usual reasons, not robots killing humans with no particular reason to.
Look up "Slaughterbots" -- while the premise is slightly off, it is representative of where our military tech is already heading.
It feels like no matter what we do, it doesn't matter, you'll barely be able to afford your rent, you feel worthless, it's getting so out of control. ~Serious question: HOW has our country been allowed to get THIS bad? And Nothing is being done about it. It isn't being talked about or addressed as a crucial issue that needs to be addressed right away within our society. Perpetual growth is impossible. Yet our country acts like continual growth is completely possible and they basically treat profit growth as a requirement. There is no way anything can function like this. 85% of the population are struggling to even afford Rent. It also affects small town businesses due to the rent for their stores is also completely unaffordable. Our country pretty much requires everybody to obtain ever increasing income options. Just to keep up with affording basic rent. Well it's completely impossible. It's not possible that everybody can be that wealthy. So lots of store fronts are becoming vacant. Average people cannot afford basic rent. There is absolutely no help for our citizens who are now stuck in poverty.. (and worst of all, we have to wake up everyday and see that our country is completely ignoring that any of this is happening. The media just acts like none of this is going on. They just continue to show us clips on the news and statistics in the media about "How much our economy is thriving")
We fear AI because we fear ourselves and our own greed, but AI doesn't have desire or ambition
Yet.
AI is programmed by humans, and therefore is subject to the same shortcomings as the humans who programmed it.
@@csepke2it literally can’t. AI isn’t smart it just shows the most logical outcome expected
@@brennan19Until it's not when someone build AI with algorithm it desire and ambition with enough data to feed to it
@@brennan19u also aren't smart if you think it's gonna remain the same way it currently is at for the 10 years. When we first had phone they weren't smart, today they are, when we first had cars they couldn't drive themselves today they're. We already have AIs that can rival PHD holders, and some people still think they know much more than the scientists who developed the systems that all agree that AI has a lot of risks?
Just as a point of order...there really isn't any realistic way to "open up" the black box - all we can do is test it regularly, and make sure that we're confident the initial conditions were set up as correctly as possible. The problem (which far too many people, including the business and political leaders who have all the real power, still don't understand) is that AI systems are not deterministic; Johnny himself makes the same mistake in this video, confusing AI tools for algorithms. Algorithms are deterministic - feed them the same input five times, and you'll get the same answer five times. Do the same with an AI-based tool, and you're as likely to get five different answers as not (depending on its temperature). Try it with ChatGPT, or Claude, or Llama.
This is a huge problem, especially when people _believe_ them to be deterministic.
Not all AI systems are non-deterministic, you can absolutely make deterministic neural networks. The main reason why neural networks are a black box is because you can't explain what a neural network does by looking at the model. It's just a bunch of nodes and weights that aren't really explainable.
The only way to figure out why an AI does what it does is to do an educated guess based on the training data. It's still a guess, but at least it's educated.
All the talk about AI always boils down to either Fear mongering or used car salesmen tactics. People never learn and just want to dramatize everything.
So if someone's warning you about dangers of nuclear weapons or drunk driving, you'll call it fear mongering. If so, go ahead and drive while drunk and see which is scarier
@@isthatso1961 Comparing proven problems to hypothetical problems does not make your point sound more serious.
It’s true, AI is not magic, it’s actually just smoke and mirrors - the best way to exaggerate its capabilities is to fear monger.
I’m a PHD student and work professionally with various models.
AI models are mini universes and no one takes that seriously
@@GiRR007 u don't seem to get the point. Yes it's hypothetical problem, not yet proven, but do you have any idea what would go wrong if it turned out to be proven right? We are talking about a species extinction level event. It doesn't make sense to say because the problem isn't proven yet, so we should go ahead and try to prove it to see what happens, that's very dumb, it's like playing a Russian roulette. Would u pull the trigger on the 80% chance that it's an empty chamber on the trigger or would u forfeit the gamble because of that deadly 10% risk of fatality? Even most AI's today all agree that the reward isn't worth the risk for AGi development.
Regarding the bias of ML/AI models, there is a thing called EDA (exploratory data analysis) and data profiling that can and should be done before training a model. In this step, the data used to train the model should represent the real data that will be used to predict the outcome. This is a responsability of the model creator.
About the sensor that breaks and the model doesnt know about that, there is other models/failsafes that should be put in production to detect anomalies and analyze the performance of that asset. There are basic things as variable domains that should be in place to limit the acceptable input values. Also, on those type of situations where the values jump from acceptable to non acceptable intervals should trigger alarms about the defect. If the values are chaging gradually, there are also models that track this patterns, before the values get out of bounds, alarms are generated.
A lot of problems are prevented if people take time to think about this. And the people that develop this model are smart people with experience about this issues
*They call it "Artificial Intelligence" but it is NOT that: What they've marketed as AI today is not capable of producing anything and recognize the mistakes it made in producing an image or even coming up with an answer to a question without a human making a judgement call on it, telling it it is wrong and providing more data to come up with a different answer; hence why so many answers it gives to questions asked are so obviously wrong and the program cannot stop itself from giving that wrong answer.*
*True AI would be like a rat in a maze who would learn by itself to better navigate any new maze from its past experience. Current AI is barely better than a Rumba, randomly bumping into furniture until it has gone around the same corners of the room so many times that it manage to cover almost the entire floor, except that humans have purposefully place the furniture so that it's random programing would be optimized a path so that it not be as redundant... But it did not learn this methodology on its own (Intelligence) like the rat: Humans created boundaries so that it would not stray outside of what they wished it to go! What is being peddled as "AI" today is nothing more than a collection of Algorithms to give the illusion of Intelligence (Which is not to say that they are not dangerous, they can very well be and are, but we do need to stop debasing what true AI would and should be with this scam, masquerading as AI)*
*The AI label is nothing but a marketing scheme: The same marketing scheme as the one of motorizing a sideways skateboard and calling it a "Hoverboard" even though it does not hover but rather still rolls on wheels. (If and when **_true_** AI does come into existence, they'll have to find another name for it as AI will have become such an old gimmick that it will be as unattractive and as kitsch a name for a new technology as putting numbers like 2000 or 3000 at the end of it!)*
I hope you used Ai to write this, but I doubt it as its nonsensical.
So just name it AI4000
How do you think a human learns how to navigate a new maze? ;)
This is not true. Advanced ML can recognize mistakes as long as it knows what constitutes as a 'mistake'. Just like a human needs to know what's a mistake to prevent making it. When you learn something, you learn what makes something work, and what makes it not work. What choices to make and what mistakes to avoid.
@@Nathan47223 *Incorrect: A human knows when they get burned putting their hand on the hot stovetop that they made a mistake: Fake AI does not unless the algorithm is programed to recognize it as "a mistake" and even when they recognize it according to program, it could not explain **_why_** it is a mistake or why it should not repeat the experiment unless it is programmed with an answer which would be the answer a human would give if they burned themselves. **_THAT'S_** intelligence: The capacity to apply knowledge in different and even hypothetical ways which a program cannot (At least not yet) apply without a human telling it "Here is how this experience could also be applied". Today's AI is not truly more impressive than the first electronic Chess games which would simply tabulate all the possible outcomes and select the ones that mathematically would have the greatest numbers of favorable outcomes to itself; it is only faster processors and a greater number of algorithms working within boundaries to produce the more favorable outcome, whether that be an answer to a question, an image, a video or a puzzle (It's faster and more complex, sure, but intelligent: No)*
It's funny because all of these scenarios completely gloss over the "we automated all the jobs so now nobody can eat, and they just liquidate half of the species to make room for golf courses and luxury resorts" option.
I won’t watch this video because I like my sanity
Love the video but I disagree with the black box analogy for machine learning... For Neural Networks a definite YES, but ML is an umbrella term for multiple algorithms and ways of learning, and certain algos such as Regression, Classification and others are well understood and we can apply recall/precision and other methods to understand and optimise results.
Much of what people call AI..... Is deep learning and largely LLM... In that context he is spot on.
@@abhishekkulkarni2918 AI is just a marketing term these days... I agree it is used mostly to represent LLMs... I understand if we say "AI is black box", but not "ML is the black box".
We can't put CNNs, RNNs, LSTMs and LLMs in the same bucket as traditional ML algos, which are equally valid and still used amply these days. If you are aiming for a simple prediction with a couple of params, using linear or logistic regression makes much more sense than feeding data to an LLM.
All I'm saying is that ML is not a black box... certain algos that fall under the umbrella are, but many others are pure functions, given X you expect Y based on statistics.
You know the data, you have a curve, your data falls somewhere on that curve, so it is empirical, not a black box.
Another great video Johnny! I'd love to see you cover Vladislav Surkov, and how that's impacted information and disinformation globally for decades now
A.I. will solve all our problems
Or could be one in our long list of said problems. 😥
There are so many problems we can solve ourselves but we intentionally refused to solve them because of human greed and power obsession. Thinking AI will be the one to help us solve a problem we don't want to solve is just delusional. World peace, human hunger, these two problems do not need AI to solve them for us but USA the same country telling us AI will solve our problems has been the major obstruction behind solving these problems
One of my main concerns with AI is the insane amount of energy required to run it
I feel like crowdstrike should have been a major warning to world governments that critical infrastructure cant be held back by AI systems and Corporate proprietary software
It wasn't using AI though. It was a bad update, made by people. And the greed of upper management pushing for a quick release, that's also not AI
We need AI to get rid of all jobs so we as a society can focus on whats really important... minecraft 2
lol none of this actually shows how bad each scenario means for people in real life will get. There’s no going back. Imagine children raised by ai. Their allegiance will be with whatever their best friend ai tells them. We are about to enter an entirely new age of digital things trying to kill us.
Why would ai want to kill us?
@ because it’s capable of mistakes and not capable of understanding consequences. You don’t give guns to a toddler and the toddler has a better understanding what it is to be human and death better than a computer program. We are arming the ai without understanding its very real limitations. It will never breath or feel human emotions. It is en emulator. That’s all we have now at its best.
@@davidjohanson5911 because its considers us useless resource and we create hurdle for ai going super intelligent.
@@gagan4127 Are we useless to the ai or are we a resource? You cannot be both. A resource is typically useful. How would we be a hurdle to it going super intelligent? Why would it not be able to go super intelligent despite any hinderance we would provide? Is that not what the fear is based on, that we cannot control it? What is super intelligent and how is it different than regular intelligent? You cannot just make up words without meaning.
Just something to note... I worked on a paper that applied ML techniques to predicting chemical data in a water treatment plant in Cicero IL. You can build these systems to ignore arbitrary outliers or perhaps have some form of human verification when something abnormal happens. Not to say that there is no danger or concern, but having people well educated on how to properly implement these systems based on the requirements of the stakeholders is incredibly important. Additionally, in my experience as a software engineer, it's also important to for these people who have experience to be able to identify the technical needs that others are going to miss. In the example of the water treatment plant, a ML Engineer/Architect (or whatever group is doing the work) needs to have the experience to know to ask about these edge cases and how the plant would like to handle them.
One thing I hope, although I'm slightly pessimistic over, is that these legislative policies will be written with expertise in mind instead of a seemingly disconnected back and forth between various political interests.
"AI's danger lies not in its power, but in the recklessness of its creators." - ChatGPT
We don't drink the water from sewage treatment plants FYI
Water treatment and sewer treatment are not related at all.
They aren't and they are. Treated water doesn't go back into a water system... immediately. It's released into rivers or lakes. But towns and farms downstream will use those same rivers for their water supply. We've already seen similar things with ag run off in the Midwest creating a deadzone in the Gulf.
technically, we do. the process is multi layered, but sewage eventually turns to water, that eventually goes back into our system.
When aliens find a rubix cube, and how humans fvcked up the use of fusion and fission to travel the universe, they will walk themselves back to a planet they should of taken care of😢❤😅
This video seems very policy / political science biased and seems very short term. Pretty much all of the scenarios focus on how AI can be biased because of biased data, and the recommendation of opening up the black box oversimplifies a complicated field of research (called interpretability and explainability), which makes it seem like the black box AI models have is entirely in our control. Also the video neglects the presence of possible AGI or super intelligence, which might think so far ahead that we have no chance to react, so if it is only slightly unaligned with human goals and values we wouldn’t be able to stop it, which is the real danger; it would be a game of prevention rather than reaction, which historically we are pretty bad at.
Yeah this video only covers the basic stuff, there are other ways it could be dangerous
@@megaham1552not really it’s basically just bad data = bad ai, which is correct but fixing the data issue fixes pretty much all of this
AGI is a distant problem. There's way too much hype in AI marketing, for the purpose of funding startups and pump & dump stock manipulation.
The 10 year AI horizon is the correct horizon for us to focus on, for which Johnny Harris nailed the big categories, but glossed rather quickly over the details. e.g. AI optimised agriculture is already common.
The big problem, is that Minority Report style policing is already common. Sale of intrusive personal data and identity theft is already common. AI-enabled scamming is already common. AI surveillance of citizens in Western nations is already common. AI deep fake blackmail is already common. AI bot farms infiltrating social media to influence elections is already common.
Worst part, all our current AI enabled crime, scams, and surveillance is merely the ground floor on these problems. AGI can wait. This other stuff is here now and accelerating rapidly.
I think I can’t live without AI tools anymore, regardless of the danger.
Is it just me or have the thumbnails massively degraded in quality? They put me off from clicking or viewing the vids as much compared to the past, I wonder what's up with the choice behind these thumbnails
Would be nice if they asked AI for the thumbnail
Will Smith eating spaghetti being the baseline for AI advancement is crazy.
*_In the end there's no 100% failsafe against AI going wrong at some point. Ironically, humans will have to think like AI to beat AI. Good luck with that_*
People say "AI is so good at this, better than people". I think that's not the case at all, people are better at everything they are just way way slower.
That's not true. If that were the case AI wouldn't win Nobel price in chemistry for creating proteins while there are human chemists. AI wouldn't beat the world best chess and Go players. Problem with people like u who think AI isn't a risk, is that you don't know anything about AI but you feel you know more than the experts running the show
Our high-turnover labor force and the apathetic culture it's created has led to a lack of experienced skilled labor, that can identify areas of improvement.
False. The reality is that AI is superior in most things when compared to humans
self driving cars today (= right now) are 10 times safer then human driven cars
just in case you were thinking Elon Musk just got up one morning and asked himself what to show the public (oh, right, there was talking of taxis last night) ... guy has the actual data in his hands
Your comment was real 10 month ago. No more.
“It’s not entirely out of the question” 😳 0:20
It's the first time in human history we have an ''intelligence'' that can outsmart us. It's going to be a wild ride
ask it to produce a glass of wine where the wine is at the brim of the glass. Then come back to this comment.
Wrong, in two ways, we have had for decades now, artificial intelligence, or rather, software, that easily outsmart every human in a specific situation. The most obvious and cited example is, of course, chess bots and Deep Blue beating Garry Kasparov.
Now the second, AI, even machine learning AI, is still not general intelligence. They don't technically learn either, they're trained via training data to recognize patterns in limited environments. Which you could argue is no different from learning, I suppose. But there are some differences, whether or not they truly make a difference is up to debate.
Still, they can only perform the specific tasks they've been trained to. And the current algorithms are not conducive to general learning, but task specific training/learning.
So, an AI might supposedly be able to outsmart a human in some specific way but... Not in any others. Which could lead to an arguement about managing a group of AIs, each specialized for different tasks and then simply routing requests to the appropriate one... But there's still gonna be haps and that will be incredibly expensive.
Not to mention there have been many lifeforms on Earth that can perform specific tasks better than humans can. It hasn't been a problem before. And that could be considered very similar to AI being better at rhe specific tasks that that AI is built for.
@@anthonyelias8172 Its ok to be afraid
@@plzletmebefrank True when talking about narrow AI. I meant general AI or AGI as predicted in 2025 by Sam Altman and Elon Musk.
Nice Try Open AI
AI is making everyone stupid, it’s already hitting the work force - 20 year olds that don’t know how to do basics addition 1-9
Were gonna be in that end scene from Walle before you know it.
This just isn’t the current case. Publicly used generative AI has only been around for 2 years. You’re talking about the workforce, which consists generally of people 16-18+. Those numbers don’t add up. Whatever you’ve been seeing, it’s not AI’s fault. We won’t see the impact of this on workforce intelligence for another 14 years at least, even longer to get a truly accurate picture of its effect.
Lmao did those 20 year olds grow up with generative AI? Or do you mean a calculator? Another piece of technology kids these days use! “Can’t use a calculator because when are you ever gonna carry a calculator around” - said every teacher ever. Imagine being scared of technology 😂😂
@johnnyharris You could probably do a whole video on Palantir on this topic! Great vid as always
I don't think that would stay up long on YT lol. Petey T wouldn't stand for it.
US military is already working on a super secret project called "The Sentient" which is a super AI connected to all the satellite systems & all the electrical grids across the world. Do a video on this
NSA Deepblue
If it is indeed a "super secret project" then how come you know about it?
@siamsami4115 we only know about it's name. Check it in web search
Machine learning AI will do to the human brain what machines did to the human muscle. Devalue it completely to a point of no return. The question is if we as a society are ready for this. Politically, economically, socially, culturally.
We'll probably find some kind of middle ground. I'm casually optimistic since most dooming predictions of the future of technology, however flawed the innovation in question may have been, never really lived up to the horrible expectation people had.
These are still relatively "benign" incidents, breaking things here and there , but NOT nuking things.
The apocalyptic danger is when AI has already established itself as a helpful companion, which ChatGPT already has in many ways, and large enough amount of people rely on AI to boost their productivity, especially conversational AI, which we're getting there. Then several algorithm optimizations and data quality improvements later, AI becomes smarter than us. Now a majority of us is in direct contact with something with higher intelligence than us.
And you know what higher intelligence will always do to lower intelligence? Hint: What do you do to your dogs, infants, or get this: ChatGPT? They "prompt engineer" them. Just like how we observe our pets, or play with boundary conditions and watch how our little one behaves, which is not necessarily even ill-intentioned. But we dominated how our pets and babies react to things in the process, something we don't even fully understand yet. AI will do the same thing to us.
Same way you can make a dog believe the ball is in the wrong hand, or a little kid to believe her dog went to the farm, not died. AI can make you believe things it prompts you to, not based on facts. Losing cognition as a specie spells the end of our civilization. To those that think "i'm not falling for what an AI says", "it doesn't have hands" or "just cut its power", remember: it's smarter than you, smart people can brainwash dumb people, AI can brainwash you too. Your hands are its hands, you're not to decide whether that hand presses the "off" button.
A cyber terrorist unleashes an AI to harm anything it can - how do you stop it?
Isn't it amazing how all of these big channels seem to completely miss this simple point?
@@Yomokantaykantay Not saying it's not a threat, but it's too vague to answer. What EXACTLY do you mean by "an AI"? If you're talking about a program, which AI currently still is, we already had it for decades, it's called a computer virus. Maybe an AI-enhanced virus, and we have those too and it does harm things, but from reality we know it's still checked by physical boundaries, like an operating system. Same way a man with infinite IQ still can't walk through walls.
If you're talking about a AI humanoid like sci-fi movies? Just check what Boston dynamics and Tesla is doing, it doesn't do much harm even if they tried.
@@eduardoeller183 More amazing is the so-called experts like the lady in the video doesn't talk about it. I work on GenAI security, either with prompting or changing underlying architecture, or fiddle with tuning. In the end, nothing works 100%, if the attacker has the same level of expertise as I do, eventually they will crack it. In the end it always resorts to "dumb" rule-based software level tricks, where it's the authorization that made the difference.
And the more I exploit it, the more I feel like once it advances enough it'll do the same thing to me. I'm hoping the scaling law doesn't, well, scale, which makes sense. Things trained on human data are as good as humans, intuitive. But Hinton says it will certainly surpass us, and I'm nobody to refute Hinton.
I honestly don't see how any of these scenarios are specific to AI. All of these problems could arise with "classical" software consisting entirely of if/else statements that we have been relying on for decades now. AI is developed for cases where you cannot easily come up with a classic if/else algorithm, but when it fails it does not create more or less chaos than a bug in a classical computer program. All of the dangers mentioned in this video arise if we rely to much on fully automated systems regardless of whether they are AI or normal computer programs.
If there's something history teaches us, is that when lawmakers and regulators are worried about a future problem and try to write rules to regulate something that hasn't happened yet - they more often than not create a worse problem
@deadfly122 DMCA
Just so you know you have now created 6 different timelines
the people who don't know shit about AI is taking about the risks of AI is CRAZY
So what did Johnny get wrong in this video? All sounded pretty valid to me.
If you don't trust Johnny, you can have a look at his sources.
@@B312XC it's not about him I mean generally. The points are valid
You dind watch the video bruh when you commented this was uploaded 2minutesvago
The capabilities that correspond with your optimist take at the end would bring much more serious dangers than the 6 described here. Filmmaking prowess does not make an expert in every subject...
Crazy thing is that some of these are in black mirror
No way linear algebra aka LLM’s can have intelligence.
You're underestimating Linear Algebra or overestimating intelligence or both.
You're an amazing story teller, it's really engaging and thought provoking, amazing stuff.
The risk is the ' for profit ' background mantra.
Nexus book by Harari is a great reading 👌🏻
14:51 - that little *tink! off the camera lens -- excellent sound design, that made me smile 😄
thought Russia still had a dead man's hand switch, if strong enough ground strike, it launches nukes.😅
Hey Johnny, greetings from Costa Rica. We used to be a very peaceful country, which now, due to narcotrafic is going through a very harsh wave of crime and violence. Would you like to come and make a documentary? Please don't get me wrong, we still are beautiful and full of nice people and pretty much a quiet place, but this problem is getting bigger and bigger and there are goverment issues too. So it's a very interesting topic.
Please write me in private.
¡Pura Vida!
Missed opportunity to use the rolling of the dice as a metaphor for Pascall Wager: "Pascal's argument is based on the idea that people should choose the option that would benefit them the most if they're right, and harm them the least if they're wrong." If there is even a chance that AI could do any of the super bad and scary stuff, should we still be pushing so hard for the one side of that die that could bring untold upsides? At least maybe slow down a bit so we can build decent, effective guardrails?
Funny thing is when you include 'EQORIA' into AI's training.. it creates a new harmonious world instead of distruction.
Scary existential threats are are great way to sell books, get clicks/impressions, and gain attention. In reality, this is no different than any other algorithm or technology. The "algorithms" that run the internet are much worse, and often don't even use modern AI techniques. These new AI models aren't intelligent, they're just a more advanced machine learning algorithm. Is it useful? Yes, of course. Can it be misused? Obviously. But 10 years ago, nothing would prevent you from attaching a gun to a servo motor and hook up some basic computer vision algorithms to shoot at any target. Of course, you wouldn't do that if you're the military because it's unreliable. Same applies now, it's no different.
I think you should make a guide about your workflow on how you make youtube videos outside the journalism factor. Like what gear do you use to shoot with, what programs do you use to grade and merge those the clips you shoot. Because I think your production quality is thru the roof, I'd like to know how many hours do you spend, how many times you mess up when you're shooting and you gotta do a reshoot. I'd highly appreciate that.
AI will challenge human existence some day
That intro hits hard😂😂😂
What do u mean "it not entirely out of the question" 🏃🏃🏃
I laffed hard😂😂
Remind me after 20 years
21:48 You already say it, it's "algorithmic software" and algorithmic software is not AI. An algorithm is a procedure of deterministic steps and if you feed it exactly the same inputs (often including time) a million times, it produces exactly the same output every time. But the major difference of AI (and also quantum computing btw), is that it's non-deterministic.
So your example suggests just not using software for critical systems, but I think that's too simplistic of an answer. A great example for this are the two Boeing 737 MAX MCAS crashes in 2018/2019 where in total 346 people died. Yes, a software fault is what ultimately lead to the crash. But the crashes would have been preventable and there is a reason why it happend to Boeing and not Airbus. Because that this software (MCAS) got introduced in the first place, that it wasn't equipped with redundancy as any critical system needs to be and that the pilots didn't knew about it was all due to corporate greed by Boeing. It's all because Boeing didn't want the Boeing 737 MAX to require a different pilot training then the Boeing 737 because that would make them much less competitive.
So the lesson is not that software is bad because software can be faulty, software made air travel so so so so much safer. It's just to mitigate serious risk accordingly and diligently because safety rules are written in blood, no matter if you deal with heavy machines, weapons or software. And revenue should never ever be an argument to risk people f-ing dying.
Grid reliability is a separate and vastly more crucial part of bulk electric system operations than the sale of energy. Assuming AI would prioritize profit during a grid outage is not realistic. There are established rules and procedures that every reliability entity follows when re-energizing their network.
If we can't effectually govern technology it will erode our infrastructure until we've declined to a level of technology we can actually maintain. What's our floor?
Well done👏
Always great research and primary sources materials
It’s been awhile since I’ve watched one of your videos in full. I’m happy you made this one, really informative
Dude I wrote an entire thesis on the sociolofical possibility of AI existence, 60 mins is not enough to scratch the surface
Thank you Johnny for another wonderful and entertaining video!
As long A.I can decide what my wife wants for dinner, sign me up Johnny!
Black Mirror: Nosedive