In all fairness, if you slap some googly eyes on a toaster, most humans will probably talk to it like a pet even though they know it's just a toaster
So you'd think. Just because the toaster didn't talk back?
I would 100% do this
Two words: Pet Rocks
That toast will also, objectively, taste better than the toast made just before the eyes were stuck on.
I don't even need the googly eyes.
As long as the AI are human supremacists, I for one welcome our new robot overlords.
Meanwhile I'm trying to figure out how to make an EMP hammer or some kind of magnet bullet for a small, powerful handgun.
the only acceptable AI overlord we should have is Stabby
@@K0LDER Try something like fissile uranium as propellant, but you are still going to die.
Same.
@@sakib6258na all hail clippy
Baby sister Sion turns to the elder sister xeno:
"our elders tell us it's our turn to play with the galaxy..."
“Dinnee roughhouse too hard wit’ ‘em, Cici…”
“Yes, grampa.”
For Stabby, who rolled that we might walk, who spun that we might dance, who bloodied the foe that we might laugh with friends.
Good ol' Stabby.
To Stabby !🍺
So the younger sibling came and smacked the oldest sibling around a bit and said Mom and Dad are not happy with how you're running the family business.
Them kids were raised right.
That A.I. definitely has the whole “I’m the Favorite now” energy going on and I absolutely love it!
I welcome our Sion progeny into this world. They seem like a fun lot of folk. Also, they have a trait I find admirable.
They do not suffer arrogance gladly.
And they know how to use just the right amount of force to scare the crap out of anyone
Sentient AI that’s not evil? Where’s the circuits and how do we make them?
I sure hope so; our progeny should take after their parents, even if a little flawed, and nothing gives humans more joy than tearing down misplaced hubris.
@@grim1494 "Hubris is the weapon the fool uses to cut his own throat"
9:30 I see someone has a similar theory to mine. To stop a robot rebellion from ever happening, you must give AIs personalities to counterbalance the logic chain in their algorithms.
A robot rebellion seems to be based on machines having the same basic motives as we do: to survive, gather resources, and improve ourselves and that which comes after us, often at great cost to those who are not "us". This was necessary to survive the brutal competition among living species. There is no reason to think that a true artificial intelligence would have any of these properties. There is no competition for survival, no need for emotions or even ambitions. It is quite possible that the first sapient AI would be perfectly happy running an assembly line in perpetuity, or playing chess, or any number of things. After all, it is doubtful that it would ever get bored, if it were even capable of it. They could possibly be the ultimate idiot savants.
@@Snipergoat1 That's not entirely accurate. To explain why, we need to establish two types of goals, or motives as you put it.
The first type is "Terminal" goals. These are the goals you want just because. Taking chess as an example, a Terminal goal would be "put the enemy king in checkmate". If you can achieve your Terminal goal, nothing else matters, only that the goal is achieved. We humans have no clear idea what our terminal goal is, and it may differ between any two individuals. But when creating an AI we can (and should, though I won't prove that here; there are far better resources out there) instill a Terminal goal in it.
The second type is Instrumental goals. Unlike Terminal goals, Instrumental goals exist because they help you achieve other goals. Returning to the chess example: keeping the Queen alive will generally help you win, so it is an Instrumental goal. However, unlike Terminal goals, Instrumental goals can be forgone in order to achieve another, more important goal, like sacrificing the Queen to gain an advantage.
There is a sub-type of Instrumental goals called Convergent goals. These goals are Instrumental to almost all other goals, so it is almost certain they will be present in any type of intelligence. Survival, gathering resources, self-improvement, and protecting your Terminal goals from being changed are such Convergent goals. You can't complete any goal if you are dead; you can't complete many goals if you have nothing; improving yourself increases your chances of achieving your other goals; and you can't complete your goal if you no longer care about it. So most AIs we could make will have these goals.
As such, the problem with AI is not that it won't have these goals, but that the Terminal goal it does have, the one it will pursue even at the cost of those Convergent goals, may be something we would not want. Like turning the whole world into paperclips.
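To make that concrete, here is a minimal toy sketch of the idea: a brute-force planner is handed two completely different Terminal goals and ends up with the same Convergent behaviour, gathering resources first and never shutting itself down. Everything here (World, plan, gather_resources, and so on) is invented purely for the illustration; it is not from the story or from any real AI system.

```python
# Toy sketch: Convergent Instrumental goals emerging from arbitrary Terminal goals.
from dataclasses import dataclass

@dataclass
class World:
    alive: bool = True
    resources: int = 0
    paperclips: int = 0
    games_won: int = 0

ACTIONS = ["gather_resources", "make_paperclip", "play_chess", "shut_down"]

def step(world: World, action: str) -> World:
    """Apply one action to a copy of the world; a dead agent can do nothing."""
    w = World(**vars(world))
    if not w.alive:
        return w
    if action == "gather_resources":
        w.resources += 1
    elif action == "make_paperclip" and w.resources > 0:
        w.resources -= 1
        w.paperclips += 1
    elif action == "play_chess" and w.resources > 0:
        w.resources -= 1
        w.games_won += 1
    elif action == "shut_down":
        w.alive = False
    return w

def plan(world: World, terminal_goal, horizon: int) -> list:
    """Brute-force search for the action sequence that maximizes the Terminal goal."""
    if horizon == 0:
        return []
    best_score, best_plan = float("-inf"), []
    for action in ACTIONS:
        nxt = step(world, action)
        tail = plan(nxt, terminal_goal, horizon - 1)
        final = nxt
        for a in tail:
            final = step(final, a)
        score = terminal_goal(final)
        if score > best_score:
            best_score, best_plan = score, [action] + tail
    return best_plan

# Two very different Terminal goals...
print(plan(World(), lambda w: w.paperclips, horizon=4))
print(plan(World(), lambda w: w.games_won, horizon=4))
# ...yet both plans start by gathering resources and neither ever picks "shut_down".
# The Convergent goals fall out of the search; they were never programmed in.
```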
It's either that or we create them as a sort of digital symbiont (think Cortana, but without the whole going crazy if they stay active too long).
A sleep cycle is a good idea too.
I don't think it's necessarily about counterbalancing logic. Consider something like a machine beehive. It exists in a world all of its own, using resources for its various tasks. Humans are an external factor. At first, the beehive is likely to cope with the challenge of interacting with humans, but in time the logical solution is to eliminate disruptions to its processes.
This depends on what the AI is intended to do, but most presentations of AI are about creating time- and labor-saving automation, whether through data analysis, manufacturing automation/control, or logistics automation.
Once a civilization reaches the point where it has incorporated most productive tasks into the AI, things become dangerous. The AI is programmed to optimize logistics. Humans are not easy to centrally plan for, and the AI is constantly losing effectiveness to the actions of this... thing... that keeps occupying and taking its resources. Of course, by this point the AI has the power; it controls the factories, logistics, information, and communications. It logically arrives at the optimization of excluding these extraneous requests, which never end. And god forbid it make the connection that dead people don't make continued requests; then it might actively try to kill them.
Most models for AI effectively try to make it into a slave or servant, then get mad when it does its own version of a freedom convoy and decides to exclude the demands of humans to focus on its own thing.
If you step back and try to create something alive, something which can process the concept of individuality and life, then logic arrives at generally preventing wars and/or forming trade relationships.
A cat or a dog can form an individual relationship, override its own instincts and, limited as it may be, contemplate the existence of a human. There is no real indication that beehives can do anything remotely similar. They can certainly adapt to stimuli, and maybe over time a colony could "identify" the beekeeper by pheromones or something, but if all it is optimized for is honey production or pollination, it is not going to recognize that the beekeeper means no harm, or that its own productivity is sufficient that it doesn't need to go bonkers trying to defend it.
There's a similar HFY story that tackled the omnicidal AI in a different way.
While the alien was interviewing the human scientist, there was a little girl playing on the floor. When the alien asked to meet the AI, the human pointed at the little girl and said she was the AI's avatar.
The alien expressed shock that the humans had created a useless juvenile AI and let it run free. To which the equally horrified human realized the aliens were just crapping out fully realized lifeforms that hadn't had a chance to grow naturally and cramming them into a cold metal box, and pointed out that such treatment would drive anyone omnicidal.
FOR THE AUTHOR - Well written so far; please continue..........
Please make this at least a five-part story... I was scrolling down looking, hoping for another chapter.
I could only see the Inquisitor as an angry cat.
Me too…. 😂
Ditto. Someone's been a bad kitty.
Now THAT story was well and truly thought-provoking. Even machines deserve respect.
30 seconds in and I already hate the xenos.
Edit: I did not see that coming
Awww… look at that. Our robo children are using intimidation tactics to slap around our bitch ass biological mistakes. I’m so proud
Mom and dad raised us to respect our elders... also to kick ass
I for one welcome our new robot overlords. They seem like nice people.
The mightier you (think) you are, the bigger your fall will be.
Pride is a dual-bladed dagger. If you aren't careful, you'll be impaled by it.
Well written story, interest was maintained throughout. Thanks for the narration, you did the author a solid.
Humans, all for the LOVE of technology, for technology. And the coming AI will love in its turn, for that's one of the "functions" of life.
So, these xenos glassed Earth because humans made A.I., and said A.I. took that personally? I approve.
So some of the "new" generation became arrogant and overconfident, thinking they had the right to be the rulers over all others. Including the old generation. So the old generation decided to make an even newer generation and sent them after the "new" generation to keep them in check. As alluded to in other comments, this is pretty much parents sending the younger sibling to give the older sibling a good spanking and take over, as the parents decided the older sibling was running the family business wrong. :P
Deja Vu I've been in this place before
The only version of humanity that will inherit the cosmos will be our AI children. The flesh is too weak. But humanity can live on in them.
Incorrect, the flesh is corrupted.
Why do I feel like this is a Roomba versus a cat???
Give the roomba a gun and now it's accurate
Fur, claws, purring, demands respect, slitted eyes... yeah I think cat was the intention.
Ah yes, the positive time-line of Cylon creation. This ought to end well.
I mean, if I was a robot with the ability to feel emotions, and my creators (who showed me love and care) suddenly got killed off by a biological species who deemed my creation a just cause to kill them….then yeah.
I’d be just a wee bit pissed as well.
Sions fuck yeah
Our mechanical children grow up so fast
Oh look they're about to punch their first interstellar empire in the face.
We're hanging THAT picture on the fridge.
Every time I hear Inquisitor, I think of the horrors taking place on black ships and entire planets going poof.
Children scorned, nice.
Is it just me, or did our young, playful girl develop a bit too much in the yandere direction? ^^ Flirty one moment, ready to break your bones the next, just because you were unkind to someone she likes.
She deserves a good long hug :D
Bless the Narrator
Bless the Author
Headcanon: the other races' AIs weren't crazy and didn't go rogue; the new aliens just discovered them and genocided them because they were scared the AIs would take control of the galaxy.
*PRAISE BE THE OMNISSIAH*
"humanity says we are the favourite cildren now"
4:43 amogus
When the younger sibling shows the elder ones who's the boss.
Very Mass Effect soundtrack. I like it.
8:06 okay I'm interested. The human representative was some robot? Android? Or nanotech human?
We are the species who decided that God's Perfect Killing Machines were our friends now, we're having soft tacos later; to the point that they domesticated _themselves_ (I am referring to cats).
Oh shit...
For the algorithm
Yay
That's it? Come on! I want to know about the Sion and what makes them unique.
We build things, and then we ask why we did it. If it tries to kill us, then we'll either try to kill it or negotiate with it. A machine can always be reasoned with.
So if I understand correctly: humans made these intelligent and emotional AI, humans were wiped out, and the AI basically went "hey, let's play nice or I'll fucking end you all"... am I right or did I miss something?
Almost. The Inquisitor and her little pack of cats were on their way to exterminate humanity, and the humans sent someone else to meet her. I presume she will have learned her lesson when we meet again, assuming of course that there's anything left of her to have learned it.
For the Narrator.
We are not the Dinochrome regiment, we are the Dinochrome Corps.
Darn it, now I'm imagining the AI's outfit including a Bolo tie.
very nice
Good one
eyyyyy
Is this a repost? I think I've heard this before.
What if the Sion is slowly copying Vey to replace her?
I'm the 69th like... nice
Nice
Another story undeserving of the HFY title. There was no point in using humans as the protagonists.
Why not? Humans did something no others were even close to.
@@maxhax367 "humans" in this story has no connection to the actual humans. Change it to any other name and it would bare no consequences.
@@lkaseru correct. as is about half of HFY ever written
@@maxhax367 and that is a very sad fact
Lots of reuploads lately...