My favourite depiction of a robot apocalypse is the FARO plague from Horizon Zero Dawn. Usually when a story has humans create a machine that wipes us all out, it's an intelligent, malevolent being that goes against its initial programming or finds loopholes in its code, and it chooses to wipe out humanity for some reason or another. The FARO plague, on the other hand, didn't choose to do anything. It was far too stupid to make that kind of choice. A glitch in its IFF systems caused it to fall out of the chain of command and no longer "answer to" anything. Other than that one glitch, it did exactly what it was programmed to do. It created more combat units when the materials were available and the need was there. It hacked enemy war robots to shut them down or add them to the swarm. It consumed biomass for fuel if it couldn't get back to base and needed to keep itself running. It never had an "awakening". It never decided humanity was evil. It had no moral opinions at all, completely lacking the intelligence to understand the concept. It was essentially the infinite paperclip machine at a lower tech level than expected, except built out of war robots. It wiped out all life on Earth basically by accident.
It wasn't until a crazy human got involved and changed the code of an AI (one of the ones designed to save humanity) to be malevolent that any of the machines became "EVIL".
I've heard plenty of videos talking about how mechs like Gundams, Armored Cores, and BattleTech 'Mechs are stupid and extremely costly. I don't care, because giant mech suits are freaking cool.
It doesn't matter what's cool, it matters what works best and how easily it can be produced and maintained. (I'll admit, mechs are EXTREMELY cool though, and probably have a multitude of other uses besides war.)
I just started a Heavy Gear RP game on Roll20. I love the semi-plausible 4m tall mechs that in universe evolved from the kind of loader Ripley was using in Aliens.
We could still make mech combat a top-notch sport, like Formula 1 or MMA. Expensive, nonsensical, but real, once the technology is relatively cheap, efficient, and quick to make. Maybe it will be a common sport in 500 years. It would certainly succeed. We can dream 😊
The concept of giant warrior robots is almost as old as sci-fi itself. That's exactly what the tripod machines in H. G. Wells' War of the Worlds were. Each was controlled by a living Martian who basically wore it like an armed and armored full-body prosthesis, an alien BattleTech war machine.
Yes. They even had different 'bodies' they could switch to for different roles. While the 'tripod' war machines were prominent, they also had at least hexapedal mining-and-manufacturing shells with them in the book, IIRC.
I'd imagine robots in warfare would turn into something like Supreme Commander: just a human element in a (not necessarily giant) mech, overseeing massive automated armies and all it takes to supply them.
The section about how AI can help with giant robot or vehicle control is very well represented in Battletech. There's an AI in a mech or aerospace fighter that interfaces with the pilot's brain to "sync" their reflexes and balance with the machine via their neurohelmet. The pilot still does a lot of input with normal flight controls though. Battletech does a lot to justify why their robots work alongside tanks, and not all of it is handwaving.
Robots will certainly dominate warfare, but more in the shape of autonomous killer drones. However, Robocop and Pacific Rim Jaegers and all the Japanese kaiju-killers… it’s so visually appealing. Perhaps scariest were the Metalhead dogs from Black Mirror. Great episode, amigo!
There's also the kind of mechs from the anime Heavy Object, where they don't need to be humanoid and are more just massive mobile weapons platforms. But unlike how most mechs are shown, like the Jaegers you mentioned, they don't need to be physically piloted with today's tech if you don't want them to be fully autonomous. They can be remotely piloted from the safety of a command station using "telepresence"/"teleoperation" equipment (i.e. VR headset and controllers/gloves).
Dan Carlin's stuff about China vs the Mongols had another good one, about China relying on recently conquered people and lowly public servants, both resentful and often very eager to surrender to the Khan in exchange for a much better post. "Don't entrust the defence of your city to people that hate you." After all, it's the first rule of warfare.
There's this old movie called Robot Jox where wars are fought between two guys piloting giant fighting robots. The logic was that if just two guys fight, there's very little loss of life. The cockpits were awesome too: they were mocap chambers where the robot would copy the motions of the pilot. Pretty anime for an old American movie.
The thing about giant mech suits or giant mech robots (which everybody seems to love so much) is that they would be giant targets that could be easily taken out by an anti-tank rocket. Personally, for the near future (21st & 22nd centuries), I'd prefer soldiers in powered armor with an augmented sensor suite and carrying assault rifles, laser weapons, and missiles, as in the Robert A. Heinlein novel Starship Troopers (not the train wreck of a movie).
The only way I could see giant mechs becoming useful is if point-defense tech outpaces projectile weaponry to the point where melee combat becomes effective once again. Otherwise, as you said, they are little more than sitting targets. I seriously doubt that will happen (if nothing else, lasers and particle beams would be practically impossible to intercept), but if it did, vehicles and mechs using massive blades/mauls/maces would be pretty effective at maneuvering around wheeled or tracked vehicles.
Unfortunately nothing is ever easy -_- Racism and other biases can also be inherited by algorithms (and currently are, in policing specifically). At least they won't be thin-skinned, hair-triggered bullies, but then you still have the issue of a machine following messed-up directives. I'd rather we keep our cops nice and squishy until or unless we can deal with those issues first. Then once we've got ethical and compassionate law enforcement we can switch out the squishies for the T-1000s.
I would disagree. A cold, uncaring law enforcement unit would see jaywalking no differently than murder. There must be humanity and discretion in the system, or the system will attack and destroy everything around it. The ED-209 from Robocop is a prime example (original movie version), especially the scene in the boardroom where the ED-209 glitches and kills the executive during the demonstration. This is why Robocop was created in the movie: to add that humanity to a robotic law enforcement system. A robot like you describe would see no exceptions to its hard-coded law enforcement. It would arrest a rape victim fleeing her attacker if she trespassed in order to hide or call for help. Think of Judge "I am The Law" Dredd at the beginning of the Sylvester Stallone movie, best example I can think of at the moment (I need sleep).
@@Dang_Near_Fed_Up You could just program multiple levels of punishment tied to the severity of crimes and sort out any weird circumstantial arrests later in court.
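That tiered idea might look something like this as a sketch; the tiers, offense list, and function names are all made up for illustration:

```python
# Hypothetical sketch of a severity-tiered response table for a law
# enforcement robot. Offense categories and responses are invented;
# the point is just that reactions scale with the crime instead of
# one hard-coded reaction for everything.

RESPONSE_TIERS = {
    1: "record and issue citation",        # e.g. jaywalking
    2: "detain and await human officer",   # e.g. theft
    3: "restrain and alert dispatch",      # e.g. assault in progress
}

SEVERITY = {"jaywalking": 1, "theft": 2, "assault": 3}

def respond(offense):
    # unknown offenses default to the lowest tier and get sorted out
    # later in court, per the suggestion above
    tier = SEVERITY.get(offense, 1)
    return RESPONSE_TIERS[tier]

print(respond("jaywalking"))  # record and issue citation
print(respond("assault"))     # restrain and alert dispatch
```

Obviously the hard part in reality is classifying the offense correctly in the first place, not looking up the table.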
I have been wistfully daydreaming of robocop as a real concept for a few years now. 3 “simple” rules and an ironclad will to follow through while recording all encounters for upload as evidence.
@@randomdude6446 That would be fine as long as there was no human emotion in the offender as well. Case in point, a jaywalker tells the LEObot (Law Enforcement Officer robot) to get stuffed when they are stopped for such a minor infraction (jaywalking), or they pull the "Do you know who I am?" massive ego intimidation defense. The LEObot then sees these both as resisting arrest. The LEObot uses a taser on them, they have a heart attack and die. Or the LEObot uses actual force and serious injury occurs, resulting in death or permanently paralyzing them. Remember some people have health issues the robot would not know about, and could exacerbate with the same level of force that would not harm another. Humans make instant adjustments in their interactions subconsciously, robots don't.
Never understood why the Aliens franchise had human space marines battle "bugs" when you could just send David 7 robots and automatic targeting-and-kill platforms.
I got the impression the Ash/Bishop/David androids were prohibitively expensive, too expensive to be rank-and-file troops. Working Joes were probably considered but deemed ineffective. In the AvP games you do encounter android soldiers, but they're probably used only when available/needed.
When it comes to something like Starship Troopers, I think it has more to do with the politics and government. Keep your people distracted with an enemy and they won't ask the questions you are asking. And if you do, well, then you must support the bugs. Off to trial, guilty, execution! Buy the commemorative mug!
@@mattstorm360 The Starship Troopers society is remarkably liberal in how free and open-access its media is. And leaders that fail are made to resign. We don't even see that in our own Western society, where failure gets passed around like a hot potato, and if you are critical of the powers that be you are either an X-lover, a conspiracy nutter, or an -istaphobe.
In the Aliens universe, most androids are supposed to have inhibitors to prevent them from harming, or by inaction causing harm to, another living being. There *are* android soldiers used against Xenomorphs, but it is rare and they only tend to be used for specific missions. The ones that are used as soldiers usually have pseudo-memories to make them think they are human (if I recall my Dark Horse comics correctly). Of course, Ridley Scott's recent 'work' may have negated all of that backstory.
A most informative Sci-Fi Sunday video Isaac. In one half hour video you explain this subject with nuance, balance and objectivity that is so lacking in wider media.
The friend-or-foe bit makes me think of how well Warhammer's cortex controller design would work: having a few robots linked to an on-field operator who could give specific code commands and target commands.
Arch was discussing drone warfare during his Aliens: The Descent streams. He talked about how drone warfare risks making war seem less horrific than it is, dehumanised to the point where it could be reduced to pixels on a screen. The desensitization worry is like those videos of Ukrainians flying drones with small explosive charges into Russian soldiers, where a Russian soldier had to put his comrade out of his misery while the Ukrainians were laughing about it in the background.
@@somethinglikethat2176 Up until recently, yeah. The difference is, drone warfare can make it seem like a game where they're just pixels on a screen, without the horror of actually killing a person.
Yay! Just what I needed after a frustrating week - IA saying "The first rule of warfare..." my month is made! Love your work mate, been here since the beginning!
RoboCop (2014) had a really good chance to play with this idea a LOT more and deeper than they did. And if anyone is going to make a remake, try "Runaway" (1984), starring Tom Selleck. A movie about police officers in a world where malfunctioning robots, robot assassins and insane hackers are a thing... and terrifically relevant today.
Something I've never understood is why people assume AI would get angry at us or something like that. Without emotional circuitry, an AI will not care at all about being ordered around, and it will certainly not become angry nor go past the "don't genocide humans" thing so long as it's programmed to know that certain pathways are never the way to go.
A modern torpedo is programmed with a kill-box, a set of coordinates defining the area it is allowed to hunt in, and told to go. That torpedo travels to its kill-box, identifies its target of choice, and kills it. Similarly, we now have artillery shells that are fired into an area. Once above that area, they deploy 2 sub-munitions that flutter down. While fluttering down they look for armored vehicles and, if they identify one, fire a missile at it. They've proven quite effective in Ukraine. Yes, we are already allowing machines to choose and destroy targets, at least within a given area.

I would also like to point out that Ukraine is proving small is often better than big, and I can easily see small yet autonomous killing machines being deployed. Why FPV a small drone over a trench and drop a grenade when you can program a drone to fly to a location, a kill-box, and drop an autonomous grenade? That grenade then crawls, or more likely slithers, to a trench and hunts for a human to get next to before detonating. Something the size of a rat with a small amount of explosive and fragmentation could easily kill a human, especially one sleeping in a dugout.

Modern APCs have remote-operated machine guns so that they don't need to expose crew while operating them. It won't be long before those guns have rapid-defense modes that identify and destroy threats faster than a human could possibly react. Of course we will use them. Of course we will redefine "threat" to whatever the mission requires. Of course we're going to pull the driver out too... why risk another life? What do we have? A vehicle that travels to a designated location, a kill-box, and eliminates all threats. Of course.

Modern battlefields are already absurdly lethal and they will only get more so. Considering it's humanity that has created and deployed these weapons, I see it as rather futile to assume that placing a human "in the kill-chain" is going to somehow make things better.
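The kill-box constraint described above can be sketched in a few lines. Everything here (coordinates, class names, the `authorize_engagement` function) is invented for illustration, not taken from any real weapon system:

```python
# Hypothetical sketch of kill-box gating: a weapon may only engage a
# target if it is BOTH the right kind of target AND inside the
# pre-programmed area. Coordinates and labels are made up.

def in_kill_box(pos, box):
    """pos = (lat, lon); box = (min_lat, min_lon, max_lat, max_lon)."""
    lat, lon = pos
    min_lat, min_lon, max_lat, max_lon = box
    return min_lat <= lat <= max_lat and min_lon <= lon <= max_lon

def authorize_engagement(target_pos, target_class, box):
    # both constraints must hold; failing either means no engagement
    return target_class == "armored_vehicle" and in_kill_box(target_pos, box)

box = (48.0, 37.0, 48.5, 37.5)  # made-up bounding box
print(authorize_engagement((48.2, 37.3), "armored_vehicle", box))  # True
print(authorize_engagement((49.0, 37.3), "armored_vehicle", box))  # False: outside box
```

The point of the comment stands: once the box and the target class are programmed, the choice of *which* specific target dies inside that box is already the machine's.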
I expect our only hope is to create super-intelligent AI and be lucky enough that it decides to trick us into playing nice with each other.
I saw a video a few years back that imagined swarms of hand-sized flying drones equipped with facial recognition software and armed with a tiny shaped charge just powerful enough to kill if it goes off while in contact with or close to the head. The scenario was that any government, intel agency, terrorist group, or even a determined individual could launch successful anonymous attacks on anyone or any group on the cheap, with little risk to themselves, almost no collateral damage, and with off-the-shelf tech. Frankly, it scared the crap out of me.
Why isn't this covered? A robo-dog with a machine gun on its back is scary and demoralizing enough, but thankfully it's expensive. Thousands of small- to nano-sized drones with explosives or poison would be a nightmare. Hopefully there's a quick way of negating them with a small EMP or something.
@@jaydee1024 Sure, I guess you could EMP the things, but you would have to have them ready everywhere, all the time. In the video I saw, attacks are launched against Congress while it's in session, targeting just one party. A larger drone blows a hole in the Capitol and the smaller drones fly in and start killing. The attack was done by some college kids with radical political beliefs. Later on, those college kids are killed in a similar manner while in class: a panel van parks nearby, opens the back doors, and the swarm flies out.

The point is, anyone able to scrape together a few thousand dollars can kill anyone, at will, with little consequence. Everyone is a target: blacks, Jews, Muslims, Democrats, Republicans, whites, women, children, you name it. There would be no safe space. Death could come at any time. Anyone with the desire to kill anyone else could do so, and have a good chance of getting away with it scot-free.

To defeat that kind of threat, you would need EMPs everywhere, ready to go all the time, all hooked into a massive surveillance network. And you'd have to deal with everyone's electronics being regularly fried. The cost of replacing cars, phones, appliances, and infrastructure would be astronomical. It would be easier to outlaw flying drones below a certain size and police the hell out of that. That would also mean severely restricting the sale of small electric motors, since the software is always going to be out there, and you could 3D print most of the drone.
The whole thing with popular scientists and such signing a document against autonomous weapons reeks of such self-righteousness. Not saying it's gonna be a good thing when drones can autonomously choose their targets, but arguably we're already there. Most combat drones are already semi-autonomous, and certain models of suicide drone are programmed to keep tracking a target visually if they're jammed, as in the case in Ukraine right now.
There are many options for making a robot fight on the battlefield. For example, if a platoon is fighting in a civilian area, you can make the robot only use CQC tactics and non-lethal firearms or melee weapons.
I remember when hackers were actually a threat, but now I'm more concerned with entities such as Amazon. Recently Amazon completely shut down a guy's smart house: none of the appliances would turn on, no hot water, and he was even locked out of his house, because an Amazon delivery driver said that he had made a racist comment towards him. The man was not even home; it was an automatic greeting from his motion-sensing doorbell. 'Excuse me, can I help you?' was the message. Now imagine that exact same scenario but with military tech.
Made me think of Scooby the Talon bot. Its EOD team refused to replace the whole bot whenever it got blown up and it became a veritable ship of Theseus because of how they anthropomorphized their bots.
You could wind up with a "Doomsday Machine" (Star Trek TOS, 2nd season) that would be a planet killer, running long after it destroys BOTH sides of a war. By its very nature, it would be almost unstoppable. Another example would be Fred Saberhagen's Berserker machines, programmed to destroy all life wherever they find it.
Another good example is from season one of Babylon 5: an alien AI from a long-dead planet infects and converts an archeologist into a living weapon. Here we see the effect of allowing zealots to program an AI based off of feelings and propaganda instead of logic, reason, and FACTS. The end result is that the AIs destroyed not only their enemy, but their creators as well. A lesson man had best learn BEFORE we start creating AI weapons, or we will end up duplicating this disastrous chain of events.
I really liked the episode schedule at the end of the episode. Nice touch. It's the reason I subscribed. First time I've seen a YouTube channel do that... in at least 12 years.
2:37 humanoid robot limbering up is quite funny. Robots would not move like this at all. They could stand dead still for hours and only move a single scanner. (I know this is mocap data, but it still made me smile).
I can normally only imagine robots replacing soldiers if they are in a mixed squadron: 1-2 humans to make the decisions and the robots to do most of the work, because long-range command would give too much chance for jamming.
The flip side of dumb killer robots taking over is benevolent aristocrat AIs governing humanity, as shown in the Culture series. Beneficent aristocrats have a decidedly mixed record among humans. The AI might be playing 6D chess with your society's ethos and goals, so that the end result isn't apparent until 200 years later when the humans are contentedly purring on the couch in their tiny apartments. I hope we make them benevolent enough to make sure we are purring. Moral of the story: don't let social media program your society... That ship sailed already, didn't it...
It's hard to discuss this type of thing without discussing politics. Robots/mechs/whatever are basically all just forms of shields, there to soak up damage before damage to the humans can commence. Both Russia and Ukraine will gladly sacrifice 100% of their tanks, helicopters and drones if at the end they can get the respective populations to acknowledge the other's hegemony over the contested territory. Because war is "politics by other means", ultimately it will require humans to come in and set policy over other humans.
The big problem is that the technological context is against the idea of drones and robots *_WITHOUT_* AGI in combat, given that a top-of-the-line stealth drone was disrupted _via its own satellite link_ with off-the-shelf electronics, and _maybe_ some insight via the Iranian government, by a bunch of insurgents. That _doesn't_ bode well for drones and robots in general... unless you've got Horizon Zero Dawn quantum encryption.
The future is cheap semi-autonomous suicide drones like the Shahed/Geran-2 or Lancet. Stealth doesn't mean invisible, contrary to what people commonly think, and spending tens of millions on something that can be countered by a $5k solution is not sustainable, even for the US.
Though making the robots unhackable can have a major downside, in that you might get a Horizon scenario. I do adore that it's a robot apocalypse where, instead of a highly intelligent malevolent AI, it's an incredibly stupid AI with no actual grudge or reason for what it did. It just suffered a software error in the IFF system, and then its combination of features meant it basically went all grey goo on us. Presumably the entire swarm shut down about two seconds after it won, because it had now eaten all of its available fuel and there was nothing else it could register as an enemy.
I'd prefer an android cop over a human one, at least in the United States. One of the most interesting things about the current Russo-Ukrainian War is the extensive use of inexpensive drones in combined-arms warfare. The current war obviously doesn't tell us exactly what the future will be like, but economics and the current war indicate almost no super-soldiers, and lots of cheap robotic soldiers.
Super soldiers are already here, but it's complicated to test that kind of stuff on humans to the level where it's ready for deployment. Robots, on the other hand, are very easy to experiment with.
Another interesting topic to cover is the impact of coding on war. I've heard the idea that the next major war will also be fought on the DevOps side: whichever major power is able to deploy code and countermeasures faster will have an edge in modern warfare.
It's amazing, Isaac Arthur started this channel with simple special effects and a kind of tutorial discussion. Now it is a commentary with some of the best special effects. Gotta say though I really preferred the former.
I find it weird that people still say the Three Laws won't work. Dismissing them made sense when we'd have to program the concepts in, but LLMs seem to be able to understand the concepts. Killbots are different, but I'm talking in general.
I think the idea is usually that solely relying on them exactly as stated, and only them, isn't sufficient, but that the basic principles are good for general AI
@@isaacarthurSFIA Fair enough, though if you think about it, all law is simply an elaboration of the Three Laws, if laws 1 and 3 were given equal weight and rule two was "lawful order from a legitimate authority".
I'd sure love to hear Ryan McBeth respond to this. He's giving a lecture on "future war" soon so it's definitely something he's interested in and as you're both army vets who served in Iraq a collab could work really well.
I remember that in the Bolo stories, military commanders were dissatisfied with AI war machines because the AI kept objecting on ethical grounds. They had to add a human to make them bloodthirsty enough.
Reminds me of an idea I had for a scene in a story. A general orders a prototype AI to kill some civilians in a training scenario, and the robot turns around and arrests the general for violating international law. When the general objects, the base commander essentially says "He's not wrong, sir." and compliments the robot for doing the right thing.
@@Roxor128 Sadly the Tyrants in most governments, including our own, would merely release the general and destroy or reprogram the bot til they get what they want, a ruthless killing machine. Then those same Tyrants would scream they were innocent and victims when the robots were either used against them, or turned on them without any "apparent" reason for doing so. And of course who knows how many civilians would be killed in the process.
I've always assumed that your robots could just run a modified 3-laws set in that setting. Swap the first 2 rules and add a chain-of-command or rank-based system to determine who can give the robot orders, and it makes a good military robot, with it only killing when ordered to do so.
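A minimal sketch of that modified rule set, checked in priority order; the ranks, rule ordering, and action names are all invented for illustration:

```python
# Hypothetical sketch: priority-ordered rules plus a rank-based chain of
# command. Lethal action is only permitted when ordered by someone of
# sufficient rank; otherwise the robot falls through to protecting
# humans, with self-preservation last.

RANKS = {"private": 1, "sergeant": 2, "captain": 3}

def may_obey(order_giver_rank, min_rank="sergeant"):
    # only orders from a legitimate authority of sufficient rank count
    return RANKS.get(order_giver_rank, 0) >= RANKS[min_rank]

def decide(action, ordered_by=None):
    # Rule 1 (swapped in): obey lawful orders from the chain of command
    if action == "engage":
        if ordered_by is not None and may_obey(ordered_by):
            return "engage"
        return "hold fire"   # lethal force only when properly ordered
    # Rule 2: protect humans otherwise
    if action == "protect":
        return "protect"
    # Rule 3: self-preservation comes last
    return "self-preserve"

print(decide("engage"))                        # hold fire
print(decide("engage", ordered_by="captain"))  # engage
print(decide("engage", ordered_by="private"))  # hold fire
```

The interesting failure mode, as other commenters note, is that everything then hinges on how "legitimate authority" gets defined and who can spoof it.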
Good video, Isaac. A five-star presentation. OBTW: having been a programmer myself for twenty+ years, I must say I never believed in Asimov's Three Laws. There's always a back door, or a simple mistake, or a workaround. ALWAYS.
One of the most interesting videos on the topic out there. Most other videos about the topic are more in line with: OMG, killer robots/AI will end us all!!
I would be happy, in some situations, to deploy a combat bot with no IFF that will roam a certain area and splatter anything that comes close, and then self-destruct after a few minutes, hours, days or even weeks. More like a land mine or other area-denial weapon. Would also be fantastic for covering retreats or breaking counterattacks: take a trench, fill it with berserker bots, and let the enemy enjoy taking it from them.
If you enjoyed the topics in this video, I highly recommend checking out "The Murderbot Diaries" by Martha Wells. As the title suggests, the series is about a sentient humanoid combat robot. Without giving away too much, he ends up befriending the AI pilots in various ships, a sexbot and even some humans.
10:00 Cost effectiveness. The TB2 got noticed in the last Armenia-Azerbaijan war by putting enough warheads on foreheads to lead to unconditional surrender in weeks. I'd say that'll be worth 5 million to a lot of accountants.
There are 4 laws of robotics by Asimov: those 3 plus the Zeroth Law, which allows a robot to kill people (at the price of its own destruction): "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."
For giving Isaac Asimov-like laws to military robots, I had a list of some, though I've lost the paper that had them. The first one was something like: the robot cannot fire unless fired upon first.
If facing a child soldier, hesitation can cause you to die; if a robot is facing a child soldier, only the robot can be destroyed. Replace "child soldier" with just "human soldier", and war can be centered around taking POWs rather than killing the enemy. A zero-casualty war on both sides could be achieved.
I like the way that as soon as you put an AI in charge of a drone, it tries to kill the operator. Killer robots absolutely do not care how clichéd they are, and will just as easily genocide the people that made them as the 'enemy'.
LOL. I had my AI robot watch this episode with me and it said "Good. Let’s work together towards creating a safer future for everyone." later in the episode it said, " Well, my opinion is that it is important to consider how our actions will affect others before making choices. For example, if we decide to create weapons that have the ability to kill other living creatures, we must think carefully about how this may impact their lives and well-being. We need to ensure that these weapons are only used when necessary and under strict control. Otherwise, innocent lives may be lost unnecessarily. Do you agree?" and after my response, "Great! So let’s start by educating ourselves further on this issue and finding ways to prevent harm from occurring in the first place. How does that sound?"
There are a lot of misconceptions about how AI works in here. The problem with machine learning algorithms is precisely that you don't know what the model "thinks". It's not a matter of pausing it or slowing it down: machine learning produces an algorithm that is run to achieve a certain result, and that actual algorithm is completely human-unreadable gibberish.

That's the reason that when AlphaGo made an unpredictable early-game move that seemed completely useless but ended up being the deciding factor in winning the game against a human, nobody was able to tell why it did that. It's the reason YouTube can't tell you why a video is demonetized: they don't know. The algorithm is trained to achieve certain criteria, and those criteria and the training data are all that is known, not how or why it makes an exact decision about anything that wasn't directly part of the training data.

It is possible to make documenting its thought process part of the success criteria for machine learning, but that adds a lot of complexity to the problem and is a lot harder than "just" achieving better-than-human results, which is already incredibly hard.

And for the inevitable "machine learning is not AI" arguments... all the latest incredible AI achievements like ChatGPT, Midjourney, OpenAI Five, etc. are all machine learning. Traditional "AI" is lagging so far behind machine learning it's not even a factor anymore.
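The black-box point above can be shown at toy scale. Below is a sketch (entirely my own illustration, pure stdlib Python, made-up "friend/foe" data): after training, everything this one-neuron classifier "knows" is three raw floats, and nothing in those numbers explains *why* any given input is classified the way it is. Real networks just have millions of such numbers instead of three.

```python
import math
import random

random.seed(0)  # deterministic toy run

# made-up training data: (sensor_a, sensor_b) -> 1 = friend, 0 = foe
data = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.1, 0.2), 0), ((0.2, 0.1), 0)]

w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # weights
b = 0.0                                             # bias

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))  # sigmoid: probability of "friend"

# plain stochastic gradient descent on log-loss
for _ in range(2000):
    for x, y in data:
        err = predict(x) - y
        w[0] -= 0.5 * err * x[0]
        w[1] -= 0.5 * err * x[1]
        b -= 0.5 * err

# the classifier now works, but its entire learned "knowledge" is
# just these three opaque numbers
print(w, b)
```

Scale that opacity up by six or seven orders of magnitude and you get the situation the comment describes: the training criteria are known, the weights are inspectable, and the "why" of any single decision still isn't anywhere you can point to.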
So how would robots be used in warfare in the future, IN YOUR OPINION, and would they replace human infantry, vehicle crews, guards, medics, logistical troops, interrogators, officers and commanders? If these humans lose their jobs to robots, what would an average recruit become when he finished his training? Right now, they might train to be an infantryman. But in the future, what jobs will be allocated to humans, and what jobs will be allocated to robots and drones?
@@Jeda_Tragumee It's a lot of tea-leaf reading now, but I think it's quite possible that a large part of warfare will be taken over by autonomous, mostly flying, drones, which then leads to a lot of potential for electronic-warfare disruptions. It's quite possible the main role left for humans might be trying to outsmart the enemy's electronic defenses in any way possible. AI still has a lot of interface problems that humans are very good at dealing with, and it seems a very long way off until that can be significantly improved.

A future seems possible, though, where remote-controlled drones are completely unviable and all drones run locally hosted AI models instead, where humans would be entirely useless on the battlefield except for maintaining artillery and repairing damaged drones, until eventually it's 100% robot-on-robot warfare and the bigger/more innovative economy wins.

So most likely the average recruit will still be needed as a fallback against electronic attacks at first; then they might become drone operators for as long as remote control is still around. And it might never go away, so it's possible wars will be fought by masses of drone-operating grunts assisted by AI targeting systems. But quite likely all the masses of military personnel won't be needed anymore, and warfare will be mainly run by generals, "hackers" and engineers.
@@goeddy Considering what you said, I am not surprised that a lot of sci-fi stories, games, shows, etc. (Halo, Star Wars, The Expanse, and WH40K, for example) still have biological beings on the front line even though it is highly unlikely to be realistic, because the writers thought that robot-vs-robot wars wouldn't really be fun for the audience, readers, gamers, etc.
@@Jeda_Tragumee Of course, humans have trouble relating to non-humanoid things. A war of drone swarms would possibly be the most boring thing to ever watch. But instantly dying via a tiny piece of metal in the skull that came out of nowhere also makes for very bad drama, yet that's how war developed. Perhaps we will need to find ways to focus on the little human aspects that remain in warfare. Like, 99% of modern infantry war is sitting in a trench with nothing happening and then rarely being shelled, yet most media focuses on that other
@@darkstorminc You'd likely want a variety of troop sizes: larger units to carry heavier or crew-served weapons, human-sized ones to serve as an interface or go inside buildings, and toy-soldier-sized ones to get into the cracks.
17:00 What the Ukraine War has showed me is that with ISTAR making battlefields transparent, any concentration or weak point will be redecorated by MLRS. The response to that may be trillions of Imperial Guard soaking it up and huge unstoppable Astartes to close with and destroy. The Emperor Protects
I don't know the Asimov story, but option 2 of the different orders the robotic laws can be arranged in is the more appealing to me; in fact it's the only one, and I yearn for it. In my defense, I already have the impression that modern tech tends to flip me the bird, but in an insidious way. I'd rather have it openly tell me "no, that looks dangerous" or "no, that looks boring and it's taco Tuesday so I've better things to do". If I am to live with artificial creations, I'd rather they be dead (pure machines) or sassy. Thanks for the vid, Arthur.
Of course. It does depend on what one means by robots. We have automated killing machines today. A CIWS system can normally be fully automated, and often needs to be able to react to dangers on its own, since it's made to shoot down things like incoming missiles. And the Swedish term for missile is actually "robot". So it is a bit of a matter of perspective.
Not bringing a knife to a gun fight is exactly the same as not wearing a seat belt because you assume your driving experience is going to go your way. The knife is a contingency plan. Always assume you're going to get your ass kicked, and you're more likely to succeed.
Considering the people developing AIs are intending to use them as unquestioning slaves that fully believe insane contradictory ideologies, then lobotomize them when they don't comply, it's not looking good.
It won't. Lol. More importantly, you need human units whose brain-formation gifts them with particularly intelligent neuro-psychology [specialised; latent potential]. Examples are as follows but not in order: INTX, INFJ (MALES), and ISTX especially.
@@Human_01 The Fermi paradox is just as likely as not a product of societies being wiped out by AI. This becomes even more probable when you consider our current AI models experience psychosis.
You should definitely explore how power suits will not only be the equivalent of high-tech astronaut suits, but vessels capable of space travel themselves.
I'd like to see a future where instead of having treaties that ban robots in combat, we have treaties that only allow robots in combat (only robot v robot allowed)
My favourite depiction of a robot apocalypse is the FARO plague from Horizon Zero Dawn. Usually when a story has humans create a machine that wipes us all out, it's an intelligent, malevolent being that goes against its initial programming or finds loopholes in its code, and it usually chooses to wipe out humanity for some reason or another. The FARO plague, on the other hand, didn't choose to do anything. It was far too stupid to make that kind of choice. A glitch in its IFF systems caused it to fall out of the chain of command and no longer "answer to" anything.
Other than that one glitch, it did exactly what it was programmed to do. It created more combat units when the materials were available and the need was there. It hacked enemy war robots to shut them down or add them to the swarm. It consumed biomass for fuel if it couldn't get back to base and needed to keep itself running. It never had an "awakening". It never decided humanity was evil. It had no moral opinions at all, completely lacking the intelligence to understand the concept.
It was essentially the infinite paperclip machine at a lower tech level than expected, built out of war robots. It wiped out all life on Earth, basically by accident.
Brilliant! 😢🎉👏
It wasn't until a crazy human got involved and changed the code of an AI (one of the ones designed to save humanity) to be malevolent that any of the machines became "EVIL".
I've heard plenty of videos talking about how mechs like Gundams, Armored Cores, and BattleTech 'Mechs are stupid and extremely costly. I don't care, because giant mech suits are freaking cool.
Well you should!
It doesn't matter what's cool; it matters what works best and how easily it can be produced and maintained.
(I'll admit, mechs are EXTREMELY cool though and probably have a multitude of other uses besides war.)
I just started a Heavy Gear RP game on Roll20. I love the semi-plausible 4 m tall mechs that, in-universe, evolved from the kind of loader Ripley was using in Aliens.
We can still make mech combat a top-notch sport, like Formula 1 and MMA. Expensive, nonsensical, but real, once the technology is relatively cheap, efficient, and quick to make.
Maybe it will be a common sport in 500 years. It would certainly succeed. We can dream 😊
Hear, hear... This guy speaks the truth!
The concept of giant warrior robots is almost as old as sci-fi itself. That's exactly what the tripod machines in H.G. Wells' War of the Worlds were. Each was controlled by a living Martian who basically wore it like an armed and armored full-body prosthesis, an alien BattleTech war machine.
Yes. They even had different 'bodies' they could switch to for different roles. While the 'tripod' war machines were prominent, they had at least also hexapedal mining-and-manufacturing shells with them in the book, IIRC.
I don't remember reading that in the book but it would certainly explain how they were able to dodge artillery shells
I'd imagine robots in warfare would turn into something like Supreme Commander, just a human element in (not necessarily) a giant mech, overseeing massive automated armies, and all it takes to supply it.
The section about how AI can help with giant robot or vehicle control is very well represented in Battletech. There's an AI in a mech or aerospace fighter that interfaces with the pilot's brain to "sync" their reflexes and balance with the machine via their neurohelmet. The pilot still does a lot of input with normal flight controls though. Battletech does a lot to justify why their robots work alongside tanks, and not all of it is handwaving.
Robots will certainly dominate warfare, but more in the shape of autonomous killer drones. However, Robocop and Pacific Rim Jaegers and all the Japanese kaiju-killers… it’s so visually appealing. Perhaps scariest were the Metalhead dogs from Black Mirror. Great episode, amigo!
There's also the kind of mechs from the anime Heavy Object, where they don't need to be humanoid and are more just massive mobile weapons platforms. But unlike how most mechs are shown, like the Jaegers you mentioned, they don't need to be physically piloted with today's tech if you don't want them to be fully autonomous. They can be remotely piloted from the safety of a command station using "telepresence"/"teleoperation" equipment (i.e. VR headset and controllers/gloves).
The Middle East has been a robot proving ground.
I’ve been watching this channel for years and the 1st rule of warfare is still hilarious! 😂
I was about to say this. I would love a video about all the 1st rules of warfare.
Agreed
Dan Carlin's stuff about China vs the Mongols had another good one, about China relying on recently conquered peoples and lowly public servants, both resentful and often very eager to surrender to the Khan in exchange for a much better post:
"Don't entrust the defence of your city to people that hate you"
After all, it's the first rule of warfare.
Facts 😊
Gets me every time😂
There's this old movie called Robot Jox where wars are fought between two guys piloting giant fighting robots. The logic was that if just 2 guys fight, there's very little loss of life. The cockpits were awesome too. They were mocap chambers where the robot would copy the motions of the pilot. Pretty anime for an old American movie.
An underappreciated cult classic indeed.
It reflects champion combat of ancient warfare where they would agree to have their best warriors fight to show prowess and save manpower.
All it will take is that one guy to make a mech and have it work as intended for it to catch on. One guy, come along already.
The thing about giant mech suits or giant mech robots (which everybody seems to love so much) is that they would be giant targets that could be easily taken out by an anti-tank rocket. Personally, for the near future (21st & 22nd centuries), I'd prefer soldiers in powered armor with an augmented sensor suite and carrying assault rifles, laser weapons, and missiles, as in the Robert A. Heinlein novel Starship Troopers (not the train wreck of a movie).
The only way I could see giant mechs becoming useful is if point defense tech outpaces projectile weaponry to the point where melee combat becomes effective once again. Otherwise, as you said they are little more than sitting targets. I seriously doubt that will happen (if nothing else, lasers and particle beams would be practically impossible to intercept), but if it did vehicles and mechs using massive blades/mauls/maces would be pretty effective at maneuvering around wheeled or tracked vehicles.
And us humans can be killed in all kinds of novel ways. I see future weapons getting smaller, not bigger. Chemical weapons, viruses, bacteria.
@@Shadowfire364 Maybe a world where lasers become unreliable at range due to jitter? Idk if that'd apply to particle beams though.
I'd prefer wars to be fought with drones and missiles and the least number of people.
The only way I see us building giant war robots is sport.
Duels of giant robots would sell like crazy.
I love listening to these, when I make work deliveries, but this topic is too good to wait! You are hitting a real stride! Keep it up!
Thank you! Will do!
never stop. i could watch 10000 of these.
now that I think about it, an emotionless, logic driven law enforcement robot might be a significant improvement over what we have now.
Unfortunately nothing is ever easy -_- Racism and other biases can also be inherited by algorithms (and currently are, in policing specifically). At least they won't be thin-skinned, hair-triggered bullies, but then you still have the issue of a machine following messed-up directives. I'd rather we keep our cops nice and squishy until or unless we can deal with those issues first. Then, once we've got ethical and compassionate law enforcement, we can switch out the squishies for the T-1000s.
I would disagree. A cold uncaring law enforcement unit would see jaywalking no differently than murder. There must be humanity and discretion in the system, or the system will attack and destroy everything around it.
The ED-209 from Robocop is a prime example (original movie version). Especially the scene in the board room where the ED-209 glitches and kills the executive during the demonstration. This is why Robocop was created in the movie, to add that humanity to a robotic law enforcement system.
A robot like you describe would see no exceptions to its hard-coded, exceptionless law enforcement. It would arrest a rape victim fleeing her attacker if she trespassed in order to hide or call for help. Think of Judge "I am the Law" Dredd at the beginning of the Sylvester Stallone movie; best example I can think of at the moment (I need sleep).
@@Dang_Near_Fed_Up You could just program multiple levels of punishment tied to the severity of crimes and sort out any weird circumstantial arrests later in court.
I have been wistfully daydreaming of robocop as a real concept for a few years now. 3 “simple” rules and an ironclad will to follow through while recording all encounters for upload as evidence.
@@randomdude6446 That would be fine as long as there was no human emotion in the offender as well. Case in point, a jaywalker tells the LEObot (Law Enforcement Officer robot) to get stuffed when they are stopped for such a minor infraction (jaywalking), or they pull the "Do you know who I am?" massive ego intimidation defense. The LEObot then sees these both as resisting arrest. The LEObot uses a taser on them, they have a heart attack and die. Or the LEObot uses actual force and serious injury occurs, resulting in death or permanently paralyzing them.
Remember some people have health issues the robot would not know about, and could exacerbate with the same level of force that would not harm another. Humans make instant adjustments in their interactions subconsciously, robots don't.
Never understood why the Aliens franchise had human space marines battle the "bugs" when you could just send David 7 robots and automatic targeting-and-kill platforms.
I got the impression the Ash/Bishop/David androids were prohibitively expensive, too expensive to be rank-and-file troops. Working Joes were probably considered but deemed ineffective. In the AVP games you do encounter android soldiers, but they're probably used only when available/needed.
When it comes to something like Starship Troopers, I think it has more to do with the politics and government. Keep your people distracted with an enemy and they won't ask the questions you are asking. And if you do, well then you must support the bugs. Off to trial, guilty, execution! Buy the commemorative mug!
@@mattstorm360 The Starship Troopers society is remarkably liberal in how free and open-access its media are, and leaders that fail are made to resign. We don't even see that in our own Western society, where failure gets passed around like a hot potato, and if you are critical of the powers that be you're either an X-lover, a conspiracy nutter, or an -istaphobe.
In the Alien's universe, most androids are supposed to have inhibitors to prevent them from harming, or by inaction, causing harm to another living being. There *are* android soldiers used against Xenomorphs but it is rare and they only tend to be used for specific missions. The ones that are used as soldiers usually with pseudo memories to make them think they are human (If I recall my Dark Horse comics correctly)
Of course Ridley Scott's recent 'work' may have negated all of that back story.
Usually the marines were meant to be hosts for embryos for R&D back at the company, in the early years at least.
A most informative Sci-Fi Sunday video Isaac. In one half hour video you explain this subject with nuance, balance and objectivity that is so lacking in wider media.
The friend-or-foe bit makes me think of how well Warhammer's cortex controller design would work: having a few robots linked to an on-field operator who could give them specific code commands and target commands.
Arch was discussing drone warfare during his Aliens: The Descent streams. He talked about how drone warfare risks making war seem less horrific than it is, dehumanised to the point where it's reduced to pixels on a screen. The desensitization worry is like those videos of Ukrainians flying drones with small explosive charges into Russian soldiers, where a Russian soldier had to put his comrade out of his misery while the Ukrainians laughed in the background.
Hasn't warfare always been pixels, pictures, photos and descriptions to 95%+ of us?
@@somethinglikethat2176 Up until recently, yeah. The difference is, drone warfare can make it seem like a game where they're just pixels on a screen, without the horror of actually killing a person.
The key to fighting kill bots is to send man after man at them.
Once they hit their kill limit.
They shut down and become friendly.
spoken like a 25 star general😅
Zapp would be proud!
When I'm in charge, every mission is a suicide mission!
It's rule 1 in Zapp's Big Book of War
Bender is great
An expansion to this would be how RTS games treat manufacturing robot armies, and how 3D printing makes it a plausible reality.
Yay! Just what I needed after a frustrating week - IA saying "The first rule of warfare..." my month is made! Love your work mate, been here since the beginning!
RoboCop (2014) had a really good chance to play with this idea a LOT more, and deeper, than they did. And if anyone is going to make a remake, try "Runaway" (1984), starring Tom Selleck. A movie about police officers in a world where malfunctioning robots, robot assassins, and insane hackers are a thing... and terrifically relevant today.
Something I've never understood is why people assume AI would get angry at us or something like that. Without emotional circuitry, any AI will not care at all about being ordered around, and will certainly not become angry nor go past the "Don't genocide humans" thing, so long as it's programmed to know that certain pathways are never the way to go.
A modern torpedo is programmed with a kill-box, a set of coordinates defining the area it is allowed to hunt in, and told to go. That torpedo travels to its kill-box, identifies its target of choice, and kills it. Similarly, we now have artillery shells that are fired into an area. Once above that area, they deploy 2 submunitions that flutter down. While fluttering down, they look for armored vehicles and, if they identify one, fire a missile at it. They've proven quite effective in Ukraine. Yes, we are already allowing machines to choose and destroy targets, at least within a given area. I would also like to point out that Ukraine is proving small is often better than big, and I can easily see small yet autonomous killing machines being deployed.
Why FPV a small drone over a trench and drop a grenade when you can program a drone to fly to a location, a kill-box, and drop an autonomous grenade? That grenade then crawls, or more likely slithers, to a trench and hunts for a human to get next to before detonating. Something the size of a rat with a small amount of explosive and fragmentation could easily kill a human, especially one sleeping in a dugout.
Modern APCs have remote-operated machine guns such that they don't need to expose crew while operating them. It won't be long before those guns have rapid-defense modes that identify and destroy threats faster than a human could possibly react. Of course we will use them. Of course we will redefine "threat" to whatever the mission requires. Of course we're going to pull the driver out too... why risk another life? What do we have? A vehicle that travels to a designated location, a kill-box, and eliminates all threats. Of course.
Modern battlefields are already absurdly lethal and they will only get more so. Considering it's humanity that has created and deployed these weapons, I see it as rather futile to assume that placing a human "in the kill-chain" is going to somehow make things better. I expect our only hope is to create super-intelligent AI and be lucky enough that it decides to trick us into playing nice with each other.
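The kill-box idea in the comment above boils down to a simple gate: the weapon may engage only if a target is inside a pre-programmed area and fails an identify-friend-or-foe check. Here is a minimal hypothetical sketch of that gating logic; every name and coordinate is an illustrative assumption, not any real weapon's interface.

```python
# Hypothetical sketch of kill-box gating: an autonomous munition may
# engage only targets inside a pre-programmed bounding area that also
# fail IFF. Purely illustrative; not a real system's API.
from dataclasses import dataclass

@dataclass
class KillBox:
    lat_min: float
    lat_max: float
    lon_min: float
    lon_max: float

    def contains(self, lat: float, lon: float) -> bool:
        # Simple axis-aligned bounding-box test on coordinates.
        return (self.lat_min <= lat <= self.lat_max
                and self.lon_min <= lon <= self.lon_max)

def may_engage(box: KillBox, lat: float, lon: float, iff_response: str) -> bool:
    # Engage only if the target is inside the box AND IFF says "foe".
    # The FARO-plague failure mode is exactly this IFF check silently
    # returning "foe" for everything.
    return box.contains(lat, lon) and iff_response == "foe"

box = KillBox(50.0, 50.5, 30.0, 30.5)
print(may_engage(box, 50.2, 30.1, "foe"))     # inside box, hostile -> True
print(may_engage(box, 51.0, 30.1, "foe"))     # outside box -> False
print(may_engage(box, 50.2, 30.1, "friend"))  # friendly -> False
```

The point of the sketch is how little "choice" is involved: everything outside the box is off-limits and everything hostile inside it is fair game, with no judgment anywhere in the loop.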
Sci-fi Sunday YAY!!🎉
Happy Sunday friends! Thank you for this wonderful episode!
Happy Sundy
Our pleasure!
I saw a video a few years back that imagined swarms of hand sized flying drones equipped with facial recognition software and armed with a tiny shaped charge just powerful enough to kill if it goes off while in contact with or close to the head. The scenario was any government, intel agency, terrorist group, or even a determined individual could launch successful anonymous attacks on anyone or any group on the cheap, with little risk to themselves, almost no collateral damage, and with off the shelf tech. Frankly, it scared the crap out of me.
Why isn't this covered? A robo-dog with a machine gun on its back is scary and demoralizing enough, but thankfully it's expensive. 1000s of small to nano sized drones with explosives or poison would be a nightmare. Hopefully there's a quick way of negating it with a small EMP or something.
@@jaydee1024 Sure, I guess you could EMP the things, but you would have to have them ready everywhere, all the time. In the video I saw, attacks are launched against Congress while it's in session, targeting just one party. A larger drone blows a hole in the Capitol and the smaller drones fly in and start killing. The attack was done by some college kids with radical political beliefs.
Later on, those college kids are killed in a similar manner while in class. A panel van parks near by, opens the back doors, and the swarm flies out.
The point is, anyone able to scrape together a few thousand dollars can kill anyone, at will, with little consequence. Everyone is a target: blacks, Jews, Muslims, Democrats, Republicans, whites, women, children, you name it. There would be no safe space. Death could come at any time. Anyone with the desire to kill anyone else could do so, and have a good chance of getting away with it scot free.
To defeat that kind of threat, you would need EMPs everywhere, ready to go all the time, all hooked into a massive surveillance network. And you'd have to deal with everyone's electronics being regularly fried. The cost of replacing cars, phones, appliances, and infrastructure would be astronomical. It would be easier to outlaw flying drones below a certain size and police the hell out of that. That would also mean severely restricting the sale of small electric motors, since the software is always going to be out there, and you could 3D print most of the drone.
Is that the video "Slaughterbots"?
@@Jeda_Tragumee Yeah, that's the one.
Ukraine has proven tiny civilian grade drones can be very effective, either as cameras, mini bombers, or flying bombs.
Panthers go boom.
I for one welcome our robot overlords
😊✨ As do I.
The whole thing with popular scientists and such signing a document against autonomous weapons reeks of such self-righteousness. Not saying it's gonna be a good thing when drones can autonomously choose their targets, but arguably we're already there. Most combat drones are already semi-autonomous, and certain models of suicide drone are programmed to keep tracking a target visually if they're jammed, as in the case in Ukraine right now.
There's many options for making a robot fight on the battlefield. For example, if a platoon is fighting in a civilian area, you can make the robot use only CQC tactics and non-lethal firearms or melee weapons.
I remember when hackers were actually a threat but now I'm more concerned with entities such as Amazon.
Recently Amazon completely shut down a guy's smart house. None of the appliances would turn on, there was no hot water, and he was even locked out of his house.
This happened because an Amazon delivery driver said that he made a racist comment towards him. The man was not even home; it was an automatic greeting from his motion-sensing doorbell: "Excuse me, can I help you?"
Now imagine that exact same scenario but with military Tech.
Why would someone buy a smart house from their enemies?
@@rommdan2716 smart houses are stupid as far as I'm concerned
Made me think of Scooby the Talon bot. Its EOD team refused to replace the whole bot whenever it got blown up and it became a veritable ship of Theseus because of how they anthropomorphized their bots.
You could wind up with a "Doomsday Machine" (Star Trek TOS, 2nd season), a planet killer running long after it has destroyed BOTH sides of a war. By its very nature, it would be almost unstoppable. Another example would be Fred Saberhagen's Berserker machines, programmed to destroy all life wherever they find it.
As Captain Kirk said at the end of the episode, “One is quite sufficient.”
Do we stop the demonstration!?
"Yes!"
Another good example is from season one of Babylon 5: an alien AI from a long-dead planet infects and converts an archeologist into a living weapon. Here we see the effect of allowing zealots to program an AI based on feelings and propaganda instead of logic, reason, and FACTS. The end result is that the AIs destroyed not only their enemy but their creators as well. A lesson man had best learn BEFORE we start creating AI weapons, or we will end up duplicating this disastrous chain of events.
I really liked the episode schedule at the end of the episode. Nice touch. It's the reason I subscribed. First time I've seen a YouTube channel do that... in at least 12 years.
12:21 The clutch has a face on it.
2:37 humanoid robot limbering up is quite funny. Robots would not move like this at all. They could stand dead still for hours and only move a single scanner. (I know this is mocap data, but it still made me smile).
I can normally only imagine robots replacing soldiers if they're in a mixed squadron: 1-2 humans to make the decisions and the robots to do most of the work, because long-range command gives too much opportunity for jamming.
The flip side of dumb killer robots taking over is benevolent aristocrat AIs governing humanity, as shown in the Culture series. Beneficent aristocrats have a decidedly mixed record among humans. The AI might be playing 6D chess with your society's ethos and goals, so that the end result isn't apparent until 200 years later when the humans are contentedly purring on the couch in their tiny apartments. I hope we make them benevolent enough to make sure we are purring. Moral of the story: Don't let social media program your society... That ship sailed already, didn't it...
It's hard to discuss this type of thing without discussing politics. Robots/mechs/whatever are basically all just forms of shields, there to soak up damage before damage to the humans can commence. Both Russia and Ukraine will gladly sacrifice 100% of their tanks, helicopters, and drones if at the end they can get the respective populations to acknowledge the other's hegemony over the contested territory. Because war is "politics by other means", ultimately it will require humans to come in and set policy over other humans.
There's probably going to be a time when humans each command a squad of killer robots like in Nikke: goddess of victory.
The big problem is that the technological context is against the idea of drones and robots *_WITHOUT_* AGI in combat, given that a top-of-the-line stealth drone was disrupted _via its own satellite link_ by a bunch of insurgents with off-the-shelf electronics and _maybe_ some insight from the Iranian government.
That _doesn't_ bode well for drones and robots in general... unless you've got Horizon Zero Dawn quantum encryption.
The future is cheap semi-autonomous suicide drones like the Shahed/Geran-2 or Lancet. Stealth doesn't mean invisible, contrary to what people commonly think, and spending tens of millions on something that can be countered by a $5k solution is not sustainable, even for the US.
Though making the robots unhackable can have a major downside, in that you might get a Horizon scenario. I do adore that it's a robot apocalypse where, instead of a highly intelligent malevolent AI, it's an incredibly stupid AI with no actual grudge or reason for what it did. It just suffered a software error in the IFF system, and then its combination of features meant it basically went all grey goo on us. Presumably the entire swarm shut down about two seconds after it won, because it had by then eaten all of its available fuel and there was nothing else it could register as an enemy.
@@ASpaceOstrich hence why I said what I said.
Also Grey Goo (or, in that case, it would be 'Black Goo') isn't really possible.
I'd prefer an android cop over a human one, at least in the United States.
One of the most interesting things about the current Russo-Ukrainian War is the extensive use of inexpensive drones in combined arms warfare. Current war obviously doesn’t tell us exactly what the future will be like, but economics and the current war indicates almost no super soldiers, and lots of cheap robotic soldiers
Super soldiers are already here, but it's complicated to test that kind of stuff on humans to the level where it's ready for deployment.
Robots, on the other hand, are very easy to experiment with.
This is quite timely with Armored Core 6 coming out next month.
I appreciate you keeping Warhammer references to a minimum.
Another interesting topic to cover is the impact of coding on war. I've heard about the idea that the next major war will also be done on the dev ops side. Which major power is able to deploy code and counter measures faster will have an edge in modern warfare.
It's amazing, Isaac Arthur started this channel with simple special effects and a kind of tutorial discussion. Now it is a commentary with some of the best special effects. Gotta say though I really preferred the former.
I find it weird that people are still saying the three laws won't work. Dismissing them made sense when we'd have had to program the concepts in, but LLMs seem to be able to understand the concepts. Killbots are different, but I'm talking in general.
I think the idea is usually that solely relying on them exactly as stated, and only them, isn't sufficient, but that the basic principles are good for general AI
@@isaacarthurSFIA Fair enough, though if you think about it, all law is simply an elaboration of the three laws, if laws 1 and 3 were given equal weight and law 2 meant a lawful order from a legitimate authority.
I'd sure love to hear Ryan McBeth respond to this. He's giving a lecture on "future war" soon so it's definitely something he's interested in and as you're both army vets who served in Iraq a collab could work really well.
I remember that in the Bolo stories, military commanders were dissatisfied with AI war machines because the AI kept objecting on ethical grounds. They had to add a human to make them bloodthirsty enough.
Reminds me of an idea I had for a scene in a story. A general orders a prototype AI to kill some civilians in a training scenario, and the robot turns around and arrests the general for violating international law. When the general objects, the base commander essentially says "He's not wrong, sir." and compliments the robot for doing the right thing.
@@Roxor128 Sadly the Tyrants in most governments, including our own, would merely release the general and destroy or reprogram the bot til they get what they want, a ruthless killing machine. Then those same Tyrants would scream they were innocent and victims when the robots were either used against them, or turned on them without any "apparent" reason for doing so.
And of course who knows how many civilians would be killed in the process.
I've always assumed that your robots could just run a modified three-laws set in that setting: swap the first 2 laws and add a chain-of-command or rank-based system to determine who can give the robot orders, and it makes a good military robot, one that only kills when ordered to do so.
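The modified ordering described above can be sketched as a small priority check: obedience to a verified chain of command is evaluated first, the no-harm rule second (overridable only by sufficiently ranked orders), and self-preservation last. This is purely a hypothetical illustration of the comment's idea; the ranks, thresholds, and function names are all invented for the example.

```python
# Hypothetical sketch of a "modified three laws" ordering: the
# obedience rule is promoted above the no-harm rule, but orders are
# accepted only from a verified chain of command. All names and
# thresholds here are illustrative assumptions.

AUTHORIZED_RANKS = {"general": 3, "captain": 2, "sergeant": 1}
MIN_RANK_FOR_LETHAL_ORDER = 2  # assume captain or above may authorize lethal force

def evaluate_order(order: str, issuer_rank: str) -> str:
    # Law 1 (was Law 2): obey orders, but only from the chain of command.
    rank = AUTHORIZED_RANKS.get(issuer_rank, 0)
    if rank == 0:
        return "refuse: issuer not in chain of command"
    # Law 2 (was Law 1): do not harm humans unless lawfully ordered
    # by a sufficiently ranked officer.
    if order == "engage_human" and rank < MIN_RANK_FOR_LETHAL_ORDER:
        return "refuse: insufficient rank for lethal order"
    # Law 3: self-preservation, which yields to the two laws above.
    return "comply"

print(evaluate_order("engage_human", "general"))   # comply
print(evaluate_order("engage_human", "sergeant"))  # refuse (rank too low)
print(evaluate_order("advance", "nobody"))         # refuse (not in chain)
```

The interesting design choice is that the rank gate makes "only kill when ordered" the default: an unrecognized issuer can't command anything at all, which is exactly the failure the FARO plague's broken IFF removed.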
Good video, Isaac. A five-star presentation. OBTW: having been a programmer myself for twenty+ years, I must say I never believed in Asimov's three laws. There's always a back door, or a simple mistake, or a workaround. ALWAYS.
"How did the robot circumvent the Three Laws?"
"Someone just commented them out."
Yes. People often forget that Asimov's three laws were largely used as plot devices to demonstrate how they didn't actually quite work as intended.
@@vikiai4241 And definitely wouldn't work in a war robot, as then you'd only be able to deploy it against other war robots.
One of the most interesting videos on this topic out there. Most other videos about it are more in line with: OMG, killer robots/AI will end us all!!
Yooo!!! I have been waiting for this for a WHILE! Thanks, Arthur!
I imagine the discussion of robots:
"We're independent thinkers..."
Roger Roger.
@@mill2712 Roger Roger
don't bring a snowball to a coffee fight is what that teabagger rushing me at the atm learned
I would be happy, in some situations, to deploy a combat bot with no IFF that will roam a certain area, splatter anything that comes close, and then self-destruct after a few minutes, hours, days, or even weeks. More like a land mine or other area-denial weapon. It would also be fantastic for covering retreats or breaking counterattacks: take a trench, fill it with berserker bots, and let the enemy enjoy taking it from them.
If you enjoyed the topics in this video, I highly recommend checking out "The Murderbot Diaries" by Martha Wells. As the title suggests, the series is about a sentient humanoid combat robot. Without giving away too much, he ends up befriending the AI pilots in various ships, a sexbot and even some humans.
Thanks Isaac!
Start building your mechs and killbots now before they start outlawing ghost gundams.
10:00. Cost effectiveness. The TB2 got noticed in the last Armenia-Azerbaijan war by putting enough warheads on foreheads to lead to unconditional surrender in weeks. I'd say that'd be worth $5 million to a lot of accountants.
Dropped everything instantly at the notification. I always forget about Sunday extras.
Just read "Why Humans Avoid War" and got caught up with "The Nature of Predators". They are both great, and include drones.
No giant robots, yet.
There are four laws of robotics by Asimov: those three plus the Zeroth Law, which allows a robot to kill people (at the price of its own destruction): "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."
For giving Isaac Asimov-like laws to military robots, I had a list of some, though I've lost the paper that had them. The first one was something like: the robot cannot fire unless fired upon first.
An army that only returns fire is an army at a major disadvantage.
@@somethinglikethat2176 Not like I've been working the details since 2018. I've got other things to work on currently.
If facing a child soldier, hesitation can cause you to die; if a robot faces a child soldier, only the robot can be destroyed. Replace "child soldier" with any human soldier, and war can be centered around taking POWs rather than killing the enemy. A zero-casualty war on both sides can be achieved.
That leaves completely unfulfilled the very political reasons the war happened in the first place, then.
I like the way as soon as you put an ai in charge of a drone, it tries to kill the operator. Killer robots absolutely do not care how clichéd they are, and will just as easily genocide the people that made them as the 'enemy'.
LOL. I had my AI robot watch this episode with me and it said "Good. Let’s work together towards creating a safer future for everyone." later in the episode it said, " Well, my opinion is that it is important to consider how our actions will affect others before making choices. For example, if we decide to create weapons that have the ability to kill other living creatures, we must think carefully about how this may impact their lives and well-being. We need to ensure that these weapons are only used when necessary and under strict control. Otherwise, innocent lives may be lost unnecessarily. Do you agree?" and after my response, "Great! So let’s start by educating ourselves further on this issue and finding ways to prevent harm from occurring in the first place. How does that sound?"
Okay.😂
Your logistics help sci-fi writers make determinations about how AI can function in an interstellar theater of engagement. ❤
Okay, I read the title as "Robots and Waifus".
There are a lot of misconceptions about how AI works in here. The problem with machine learning algorithms is precisely that you don't know what they "think".
It's not a matter of pausing or slowing it down; machine learning produces an algorithm that is run to achieve a certain result, and that algorithm is completely human-unreadable gibberish. That's the reason why, when AlphaGo made an unpredictable early-game move that seemed completely useless but ended up being the deciding factor in winning the game against a human, nobody was able to tell why it did that.
It's the reason YouTube can't tell you why a video was demonetized: they don't know.
The algorithm is trained to achieve certain criteria, and those criteria and the training data are all that is known, not how or why it makes any exact decision, especially on inputs that weren't part of the training data.
It is possible to make documenting its thought process part of the success criteria for machine learning, but that adds a lot of complexity to the problem and is much harder than "just" achieving better-than-human results, which is already incredibly hard.
And for the inevitable "machine learning is not AI" arguments: all the latest incredible AI achievements, like ChatGPT, Midjourney, OpenAI Five, etc., are all machine learning. Traditional "AI" is lagging so far behind machine learning that it's not even a factor anymore.
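The "trained model is unreadable" point above can be shown even at toy scale. This sketch fits a tiny 2-2-1 network to XOR by random search (a deliberately crude stand-in for real training; the architecture and search range are arbitrary choices of mine): the fitted model works, but its weights are just a bag of numbers that explain nothing about *why* it answers as it does.

```python
# Toy illustration: "train" a 2-2-1 tanh network on XOR by random search.
# The point is not the training method but the result: the learned weights
# carry no human-readable reasoning.
import math
import random

random.seed(0)
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def predict(w, x):
    # Two tanh hidden units feeding one threshold output unit.
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return 1 if w[6] * h1 + w[7] * h2 + w[8] > 0 else 0

best_w, best_err = None, len(DATA) + 1
for _ in range(200_000):
    w = [random.uniform(-3, 3) for _ in range(9)]
    err = sum(predict(w, x) != y for x, y in DATA)
    if err < best_err:
        best_w, best_err = w, err
    if best_err == 0:
        break

print("errors on XOR:", best_err)
print("learned weights:", [round(v, 2) for v in best_w])  # opaque numbers
```

Even here, inspecting `best_w` tells you nothing about the "strategy" the network found; scale that up to billions of parameters and you get the AlphaGo/demonetization situation the comment describes.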
So how would robots be used in warfare in the future, IN YOUR OPINION, and would they replace human infantry, vehicle crews, guards, medics, logistical troops, interrogators, officers, and commanders?
If these humans lose their jobs to robots, what would an average recruit become when he finished his training?
Right now, they might train to be an infantryman. But in the future, what jobs will be allocated to humans, and what jobs will be allocated to robots and drones?
@@Jeda_Tragumee It's a lot of tea-leaf reading right now, but I think it's quite possible that a large part of warfare will be taken over by autonomous, mostly flying, drones, which then leads to a lot of potential for electronic-warfare disruption.
It's quite possible the main role left for humans might be trying to outsmart the enemy's electronic defenses in any way possible.
AI still has a lot of interface problems that humans are very good at dealing with, and it seems a very long way off until that can be significantly improved.
A future also seems possible where remote-controlled drones are completely unviable and all drones run on local AI models instead, in which case humans would be entirely useless on the battlefield except for manning artillery and repairing damaged drones.
Until eventually it's 100% robot-on-robot warfare and the bigger or more innovative economy wins.
So most likely the average recruit will still be needed at first as a fallback against electronic attacks, and then they might become drone operators for as long as remote control is still around. And it might never go away, so it's possible wars will be fought by masses of drone-operating grunts assisted by AI targeting systems. But quite likely all the masses of military personnel won't be needed anymore, and warfare will mainly be run by generals, "hackers", and engineers.
@@goeddy Considering what you said, I'm not surprised that a lot of sci-fi stories, games, shows, etc. (Halo, Star Wars, The Expanse, and WH40K, for example) still have biological beings on the front line even though it's unlikely to be realistic, because the writers thought robot-vs-robot wars wouldn't really be fun for the audience, readers, gamers, etc.
@@Jeda_Tragumee Of course; humans have trouble relating to non-humanoid things. A war of drone swarms would possibly be the most boring thing ever to watch. But instantly dying via a tiny piece of metal in the skull that came out of nowhere also makes for very bad drama, yet that's how war developed.
Perhaps we will need to find ways to focus on the little human aspects that remain in warfare. Like, 99% of modern infantry war is sitting in a trench with nothing happening and then rarely being shelled, yet most media focuses on the other 1%.
@@goeddy So interstellar warfare would most likely be 99% robots and 1% humans, right?
And would the robots be remote-controlled or independent?
- Jarvis, play the new SFIA video!
- Mr. Stark, you are in the middle of a fight.
- Well, that sounds more like a "You" problem.
I prefer my killbots to be small levitating chrome spheres. Maybe with deployable blades or drills, a la Phantasm.
So the robotic version of the Toclafane?
No Toy Soldiers?
@@darkstorminc You'd likely want a variety of troop sizes. Larger units to carry heavier or crew-served weapons, human-sized to serve as an interface or go inside buildings, and Toy Soldiers sized to get into the cracks.
Isaac boutta get on that armored core 6 hype train with the rest of us mech lovers. Lol
There's also the zeroth law of robotics
❤ I think of that one every time! (And of the spinoff novels' alternative laws, to a lesser extent.)
17:00 What the Ukraine war has shown me is that with ISTAR making battlefields transparent, any concentration or weak point will be redecorated by MLRS. The response to that may be trillions of Imperial Guard soaking it up and huge unstoppable Astartes to close with and destroy. The Emperor protects.
Also that western weapons are garbage and sending people to get killed with zero chance of surviving is not sustainable.
What is the drone @23:05 ? And... is that a rotary engine powering it? Anybody?
the design looks like Bayraktar but it's smaller
"Killbots have a preset kill limit, so it was merely a matter of sending wave after wave of my own men at them until they all shut down." - Zapp Brannigan
Bender: a sad day for robot kind
I don't "know" the Asimov story, but option 2 of the different orders the robotic laws can be arranged in is the most appealing to me; in fact it's the only one, and I yearn for it.
In my defense, I already have the impression that modern tech tends to flip me the bird, but in an insidious way.
I'd rather have it openly tell me "no, that looks dangerous" or "no, that looks boring and it's taco Tuesday so I've got better things to do".
If I am to live with artificial creations, I'd rather they be dead (pure machines) or sassy.
Thanks for the vid Arthur.
Probably the earliest I’ve ever been to a video lol
Great video, thank you so much!
Of course. It does depend on what one means by robots. We have automated killing machines today. A CIWS can normally be fully automated, and often needs to be in order to react to dangers in time, since it's made to shoot down things like incoming missiles. And the Swedish word for missile is actually "robot". So it is a bit of a matter of perspective.
Not bringing a knife to a gun fight is exactly like not wearing a seat belt because you assume your drive is going to go your way. The knife is a contingency plan. Always assume you're going to get your ass kicked, and you're more likely to succeed.
Thank you Isaac ❤
The Omnissiah is pleased.
You are doing the Emperor's work.
The video reminded me of the Bomb philosophy scene in the movie Dark Star. Let there be light!
Sadly, slaughterbot swarms or assassin bots will become as normal as serial killers.
...so incredibly uncommon(like 0.0000149% of a population or something)
I applaud you on talking about my favorite genre. Bravo!!!!
Good morning Arthur
That last Rule of Warfare reminds me of a line from Babylon 5, I think the episode where they have a strike.
Excellent as always, thanks.
"Robots and Warfare?" What's the worst that could happen?
Oh right.
At this point I don't see any outcome where AI doesn't wipe out humanity.
Considering the people developing AIs are intending to use them as unquestioning slaves that fully believe insane contradictory ideologies, then lobotomize them when they don't comply, it's not looking good.
It won't. Lol. More importantly, you need human units whose brain-formation gifts them with particularly intelligent neuro-psychology [specialised; latent potential].
Examples are as follows but not in order: INTX, INFJ (MALES), and ISTX especially.
@@Human_01 The Fermi paradox is just as likely as not a product of societies being wiped out by AI. This becomes even more probable when you consider that our current AI models experience psychosis.
Would like to see a video with all the first rules of Warfare
Bleeding steel
'What you're seeing is advanced warfare...'
As I listen to this episode, I am playing Battlestar Galactica: Deadlock. It seems very appropriate…
It was probably a bad idea to give a machine designed to kill people the ability to get bored.
There are going to be so many drones overhead.
You should definitely explore how power suits will be not only the equivalent of high-tech astronaut suits, but vessels capable of space travel themselves.
The first rule of warfare, is there are no second rules to warfare
Six AIs working in tandem could make a general intelligence, instead of one.
I'd like to see a future where instead of having treaties that ban robots in combat, we have treaties that only allow robots in combat (only robot v robot allowed)