NVIDIA's "Foundation Agent" SHOCKS The Entire Industry (Dr. Jim Fan)
- Published 22 Jan 2024
- Dr. Jim Fan from NVIDIA dropped his TED Talk, which was stunning. He describes how to leverage agents that can create and use tools and bring them to "any reality," including the real world.
Enjoy!
Join My Newsletter for Regular AI Updates 👇🏼
www.matthewberman.com
Need AI Consulting? ✅
forwardfuture.ai/
Rent a GPU (MassedCompute) 🚀
bit.ly/matthew-berman-youtube
USE CODE "MatthewBerman" for 50% discount
My Links 🔗
👉🏻 Subscribe: / @matthew_berman
👉🏻 Twitter: / matthewberman
👉🏻 Discord: / discord
👉🏻 Patreon: / matthewberman
Media/Sponsorship Inquiries 📈
bit.ly/44TC45V
Links:
arxiv.org/pdf/2305.16291.pdf
jimfan.me/project/metamorph/
metamorph-iclr.github.io/site/
www.ted.com/talks/jim_fan_the...
arxiv.org/abs/2203.11931 - Science & Technology
Is this AGI for robots? What do you think?
You said it in the title; now it's a news flash?
Lifelong learning with robotic or conceptual skill libraries that don't plateau must result in AGI imo
No, that's not AGI. An AGI should be able to learn in real time without overfitting and the like. Today's models will not bring AGI; we need one or two revolutionary steps for AGI to become possible. The model should be able to learn by itself without constant supervision, and then apply what it has already learned to learn new things, actually getting better without needing a full retrain.

Today's models can be overbaked... I don't fathom how we can use these to make an AGI. They basically can't learn new data without retraining the whole model; there is no possibility for additive learning. Yes, you can use LoRAs, but that's not the same: LoRAs make your model overfit, and after you add a few LoRAs it becomes a complete mess.

I think the next step from here is multimodality (the ability of the model to understand multiple kinds of information, not only text or images), but that's still not AGI. It would be more useful because you can show a picture and a text to the model and hope it understands what it sees better. It would also be a way to add even more data and parameters.

For now we still haven't hit a ceiling of diminishing returns; i.e., the more data we add, the smarter the models seem to become. Adding more computation and more training data makes a better model. If we hit a ceiling, the model will start to get worse no matter how much data or computing power we add, but that hasn't happened yet... a model with 1 trillion parameters is much better than a model with 1 billion.
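Since the comment leans on how LoRA behaves, here is a minimal NumPy sketch of the underlying math (illustrative shapes only, not any real model): each LoRA adapter is a low-rank delta `B @ A` added to the same frozen base weights, so stacking several adapters sums their deltas into one matrix, which is where the interference described above can come from.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # hidden size, LoRA rank (r << d)

W = rng.normal(size=(d, d))  # frozen base weight, never retrained

def lora_delta(rank, scale=1.0):
    """One adapter: a low-rank update B @ A added onto the frozen base."""
    A = rng.normal(size=(rank, d))
    B = rng.normal(size=(d, rank))
    return scale * (B @ A)

# Merging several adapters stacks their deltas onto the same weights,
# so independently trained adapters can interfere with one another.
merged = W + lora_delta(r) + lora_delta(r) + lora_delta(r)

x = rng.normal(size=(d,))
print(merged @ x)  # forward pass through the merged weight
```

The key property is that each delta has rank at most `r`, which is what keeps LoRA training cheap relative to updating all of `W`.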
Kage bunshin no jutsu!
The more utility the more its spreads. This is only the beginning.
The agent is not curious, and not pursuing adventures; it's optimising towards preset goals. This is iterative prompting, not AGI.
Whether it IS curious/adventurous or not, if it ACTS like it, then it really is the same result as far as we're concerned.
So simulation/performance of curiosity is all that is required? A dull aping? I suppose simulation/pretense of morality is also "all that is required"? @@2CSST2 Until, it is no longer required ;)
AGI is mathematics ...
No mother. No siblings. No rectum ... No experience of ambition, regret, stress, weakness, insecurity, pain, fear, bullying, romance, love, sex, hunger, illness or death ... You know, the things empathy springs from.
AGI won't be nice.
You're not curious, and not pursuing adventures, you're optimizing towards the preset goal of casting shade in YouTube comments.
@@ZappyOh agi requires a rectum to know what a BHC stressful moment is. Otherwise it will never be sentient
Great video Mathew, thank you!
Cool, thanks for review!
I love your content and the way you communicate is 10/10. But PLEASE PLEASE PLEASE don't start with the clickbait titles. Dr. Jim Fan did not "shock the industry". This development has been almost obvious for the past 2-3 years and there is a lot of research going in this direction, but the concepts are at least 10+ years old.
Yes! That TED Talk was done back in October.
Probably an AI title improver 😂
I think he took this from Wes Roth, who also just put out a video about how this talk "shocked the industry" (and is also annoyingly clickbaity)
Actually, you can fit clickbait and key words in the same title. That works extra well. When you get free advertising that works, use it!
All AI influencers always say 'massive news'
16:09 I thought I was in the matrix. I had to play this back a few times to make sure 😂.
I mean people do spend way too much time in Minecraft so I can see how it could happen
Same lol - deja vu 😅
same, I saw the glitch and had to rewind but I'm still not sure it wasn't a breadcrumb
@@richardkuhne5054 They changed something.
20:00 "We'll have pixel-perfect simulations of the real world". aka, me dreaming I'm on my way to work, then waking to find I've overslept again.
I wrote something like this in PowerShell maybe a year ago. You ask for a task to be completed on your machine, and it builds the source, refines it, saves it, and then runs it. Like "find any mp3 files on my machine", "build me a GUI calculator that saves the history", "what processes are using the most CPU?", etc. Giving the LLM the task of fixing its own bugs works pretty well maybe 75% of the time, but sometimes it goes further and further off the rails.
Link?
@LokeKS It's not something I'm sharing the source code for; I keep it in a private GitHub repo, unless someone actually has a real interest in the idea.
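The generate-run-repair loop the commenter describes can be sketched generically in Python. This is only an illustration of the pattern, not the commenter's actual tool: `llm` here is a placeholder for whatever model API you would actually call, and all names are invented.

```python
import subprocess
import sys
import tempfile

def llm(prompt: str) -> str:
    """Placeholder for a real model call (OpenAI, local model, etc.)."""
    raise NotImplementedError("plug in a real LLM client here")

def solve(task: str, max_retries: int = 3) -> str:
    """Generate code for a task, run it, and feed errors back for repair."""
    code = llm(f"Write a Python script that does: {task}")
    for _ in range(max_retries):
        # save the generated source to a temp file and execute it
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True, timeout=60)
        if result.returncode == 0:
            return result.stdout  # task succeeded
        # ask the model to fix its own bug; this is the step that can
        # "go further and further off the rails" on hard failures
        code = llm(f"This script failed:\n{code}\nError:\n{result.stderr}\nFix it.")
    raise RuntimeError("could not converge on working code")
```

Capping retries matters: without `max_retries`, a model that keeps producing broken fixes loops forever.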
I really feel like this field is moving too fast... And I work in it.
I cannot keep up with all the advances.
Tomorrow Matthew will present an LLM chatbot that computes the inverse of an ill-conditioned 200,000 x 200,000 matrix perfectly and instantaneously.
Yeah AI really cornering the market on diamonds will trivialize end game content
What happened to the 6-month moratorium on AI development?
Nobody followed suit
No you think, you can’t feel that which you are saying
I love your user name, as in "isn't that exactly what an AI would say". Funny. Has a Monty Python skit feeling to it.
While these are indeed fascinating times, it’s also important we stay somewhat grounded. Usage of terms like “Simulation Theory” and “Thought Leaders” seems to be leading the conversation towards the realms of David Icke, Alex Jones, or Heavens Gate.
It may be true that Elon called for an AI moratorium because he is behind, but I'm already worried how everybody says "we can't go back, we have to push through; if we stop, others will". There will come a day when all human agents basically get their intel and arguments from the same AI. (If we hoped the human factor would keep AI in check: how will an attorney keep the human perspective if his interns prepare the case with the AI and he does his research with the AI as well? Maybe he finds some flaws in the AI's arguments at first, but how can he prove that... and maybe future AIs will present less conflicting info, too.)
I know but the same technology seems to be leading us toward that direction, what can we do.
Good video, good creative material for building Agent teams
Certainly AGI results are AGI, even if using a slightly different technology than expected.
No they are not, it matters how the model is learning information over time even if it appears smarter than a human. Otherwise we might conclude a super powerful "calculator" is smarter than a human, because it can crunch more numbers. What I mean is, we should be careful before we put the stamp it's AGI on something that is not AGI. Today LLMs can very easily convince a lot of people who don't understand how LLMs work, it's an actual sentient being, but it's not. It's like the "Turing test" all over again, almost any bot today can pass the "Turing test", but it seems that doesn't have the meaning we thought it would have.
@@Slav4o911 I disagree, I think you're showing a bias towards your way of processing and learning information as the reference way. Intelligence, and thus general intelligence, is not about treating information the same way humans do, it is the ability to solve problems, to learn knowledge and skills and adapt them. If a thing can do all that with general applicability, it IS AGI.
You talk about LLMs convincing people it's sentient. That's a different thing, sentience and intelligence are 2 different things. They might be correlated in some way, we don't know, but for sure we can tell if something is intelligent. And if some people thought GPT4 was a general intelligence in any remotely similar extent to humans, they were just wrong. But they weren't wrong because they didn't know how LLMs work under the hood, they were wrong because GPT4 *can't* solve any problem, learn new knowledge and skills and adapt, certainly not with general applicability. It's been shown that it can't even really plan, which is of course an immediate deal breaker when it comes to solving any problem.
Very nice. I was thinking of a similar approach with the skill commands for adding AI to a spider bot; it's a solid idea.
You have some really good videos on AI. I learn a lot from your videos.
He forgot to mention AGI is a scam. Another Theranos and FTX, just more cunning.
@PaulValickas no it is not, AGI is achievable, and you will see it.
Absolutely amazing
Matthew, this is the gist of the new Tesla AI training system based on what they have released to date. In addition to many of the skills you report, the TS is recreating versions of a real world populated by independent agents and rendering the world with Unreal Engine. The TS is also able to project into the future.
I hope it comes out soon because I want to see some guitar duels human vs machine competitions.
The scaling forward goes beyond simply timescale simulation speeding up. The greatest improvements will be in training efficiency. Where even in reality, these ai will learn to do tasks more efficiently than humans. No simulations necessary.
The thing is how both are happening at such lightning speeds
One problem we have for true AGI with all the models is that we don't let them truly evolve continuously, i.e. the underlying weights are frozen after training, and the only thing they get is extra/different data in the prompts.
I don't think that's necessarily the issue. The training time is conceptually similar to the hundreds of millions of years your brain had to evolve. Now, for each of us, the evolution is basically frozen in time, experiencing insignificant progress. So the "training" has stopped at the individual level (and for the AI model running inference as well). What we have over that foundation is a running memory and self-awareness, so that's going to be a tough one to solve.
pretty sure they keep it frozen to protect humans from being taken over lol..i know that makes it a frustratingly limited level neuro processor for now but the limits will improve to make the robots human. Certain limits are definitely for the better though. When dark ai like the dark web emerges that's when the real trouble will come
@@pictzone Self-learning AGI for exploring and mastering all variations of realities is not discernment, creativity, nor free will. The fundamental law of nature being the inexorable increase in entropy (aka disorder) means there’s always another variant that wasn’t seen before, which thus introduces discernment, creativity, and free will. Think of creativity as applying the discernment of which are the questions/problems/goals worth pondering and solving. And free will the availability of discernment. IOW, human ingenuity/idiosyncrasies springs not from its absolute expertise but from the fact that every human is uniquely applying free will. This differentiates the infinitely diverse analog biological being from the finite diversity of silicon.
@@SMoore-vj7bt I think that the infinite characteristics of analog systems can be emulated at a high enough level to cross that barrier. Think of how good digital audio has become, almost indistinguishable from the real thing. We'll probably go through a similar process with artificial neural nets
@@pictzone you are not even aware of your category error. Imperceptibility (i.e. no one heard the tree fall in the forest) doesn’t obviate entropy nor physics- the felled tree created effects. There’s still the fundamental, inviolable 2nd law of thermodynamics that provides my point about infinite new variants (wtf “barrier“?). Omniscience is thus impossible. Butterfly effects exist even if the original position of the double-hinged pendulum wasn’t perceived. Besides your reply had nothing to do with my point about free will. I understand you were trying to myopically conceptualize this issue as some threshold problem, but that is #NotEvenWrong. I am sorry, you probably don’t have the sort of IQ or abstracting mental map for this. But maybe this will inspire/challenge you to think more out-of-the-box while you try to figure out what I am describing in totality. Good luck.
I've tried running a self-driving-car simulation in NVIDIA Omniverse and then training on that data. I was able to create 10,000 hours of training data for self-driving cars, which can then be put into real cars and eventually yield self-driving cars.
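The workflow described above (simulate, randomize, accumulate hours of data) can be sketched as a generic domain-randomization loop. This is not the actual Omniverse/Isaac Sim API; the scene parameters and their ranges are all invented for illustration.

```python
import random

def randomize_scene():
    """Domain randomization: vary the scene per episode so a policy
    trained on the data can't overfit to one fixed simulation."""
    return {
        "lighting": random.uniform(0.2, 1.0),      # sun intensity
        "friction": random.uniform(0.4, 1.2),      # road surface
        "traffic_density": random.randint(0, 50),  # other vehicles
        "weather": random.choice(["clear", "rain", "fog"]),
    }

def generate_dataset(hours: float, episode_minutes: float = 5.0):
    """Roll out many short randomized episodes until the target is reached."""
    episodes = int(hours * 60 / episode_minutes)
    return [randomize_scene() for _ in range(episodes)]

dataset = generate_dataset(hours=10_000)
print(len(dataset))  # 120000 five-minute episodes for 10,000 hours
```

The point of short randomized episodes is that sim time is cheap: 10,000 driving hours come from 120,000 scene variations rather than one long rollout.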
Time to rewatch, #TheThirteenthFloor!
Wow mind-blowing thank you so much for sharing and keep them coming. ✝️👍✝️
Super cool, super scary, bring it on!
That looks like the future makings of a rideback from the anime with the same name
Is the code something you could setup in a tutorial format?
What you didn't mention about simulation theory is that if it's possible to create a world that's indistinguishable from a real one then the chances that we're in a real one as opposed to countless simulated ones is very small.
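For concreteness, the counting argument behind this comment (the Bostrom-style version) reduces to simple arithmetic: if N indistinguishable simulated worlds exist alongside one real one, an observer who cannot tell which world they are in should assign probability 1/(N+1) to being in the real one.

```python
def p_base_reality(n_simulations: int) -> float:
    """With n indistinguishable simulated worlds plus one real world,
    the chance of being in the real one is 1/(n+1)."""
    return 1 / (n_simulations + 1)

print(p_base_reality(0))    # 1.0   -> no simulations, certainly real
print(p_base_reality(999))  # 0.001 -> one chance in a thousand
```

The argument's force (and its weakness, as the replies note) lies entirely in whether "indistinguishable" simulations are actually possible, not in the arithmetic.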
You're already a simulation on the brain of an ape. The simulation hypothesis is just new age sci-fi religion.
Yeah, only that we already know the minimum level of complexity of the real world (quantum mechanics) and that even simulations that reproduce superficially indistinguishable macroscopic physics (like IsaacSim) are hopelessly simplified. I know a lot of tech bros love the idea, but making the leap from high speed physics simulators to "we must live in a simulation" is borderline religious faith.
It's like believing that magic exists, because a performer can make it seem that this coin really disappeared, ignoring when they even tell you how they did deceive you.
None of these simulations reproduces actual physics. It's all approximation and shortcuts to produce a specific desired effect, ignoring anything that would not be observable in its context.
In the video Fan did not speak about reality as a simulation, he accurately said that for the robot, the sensor input real world would just represent yet another, the 1001st reality. Because the robots sensors do not produce physics, they produce measurements, which are just numbers for the robot.
Yes, but it's an interesting idea all the same.@@nejesis4849
@@nejesis4849 Quantum mechanics itself resembles a very common simulation technique for simplification/efficiency. A wave function is a simple equation of probability, and the act of measurement is when the wave function collapses into an observable state.
It's a very common simulation technique to only "render" (measure) polygons when the player has access to them; once a polygon goes beyond the player's observation, it is simplified into equations that are much easier to track than a rendered (observed) version of the polygon.
Simulation theory is a good way to get enslaved by an AI cult. So no thanks nerds.
Technically it's the Simulation Hypothesis, since it doesn't meet the requirements of a scientific theory, but I certainly BELIEVE it!
Flying spaghetti monster seems more likely
Him saying "Foundation Agent" makes me think of the Founding Titans.
Brilliant summary Matthew, thank you
Hey, I watched all those games live too!
Hm... just based on something you said: long term, being able to pull in new information and longer memory probably isn't practical with the current model approach. BUT, if models had a day of memory and a 'sleep' period where they update their base model so they can throw away the short-term layer, that would add up... that would work.
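The 'sleep period' idea above, in deliberately toy form: a short-term buffer collects the day's interactions, and a periodic consolidation step folds them into long-term state so the buffer can be cleared. In a real system the list below would be replaced by an actual fine-tuning pass over the day's data; everything here is illustrative.

```python
class Agent:
    """Wake/sleep learning sketch: short-term context accumulates during
    the day, then 'sleep' consolidates it into long-term state."""

    def __init__(self):
        self.long_term = []   # stands in for model weights / base knowledge
        self.short_term = []  # today's context window / scratch memory

    def interact(self, observation):
        self.short_term.append(observation)

    def sleep(self):
        # consolidation: a real system would run a fine-tuning pass here,
        # then safely discard the short-term layer
        self.long_term.extend(self.short_term)
        self.short_term.clear()

agent = Agent()
for obs in ["user prefers metric units", "project uses Python 3.12"]:
    agent.interact(obs)
agent.sleep()
print(len(agent.long_term), len(agent.short_term))  # 2 0
```

The appeal of this design is that the expensive update runs on a schedule rather than per interaction, which is exactly why the commenter frames it as "sleep".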
For me, true simulation is not a game; it is when we have the ability to set and preserve the conditions for intelligent life to commence from scratch once again, through a 'big bang' and into the next epoch (further away in distance than the observable universe), because I believe in Penrose's CCC theory.
Damn… I normally hate videos longer than 10 minutes. But my god, this was JAM PACKED. So sick, well presented. Papers, talks, demo videos, opinions and predictions, tweet coverage, and links to code/papers where applicable 👌
Plus lots of Minecraft
Sounds like you've got ADHD
I think in order for our world to be a simulation, we'd have to be a simulation in either a much larger universe, or a simulation in an existence that consists of more dimensions. If we're being simulated in an existence similar to ours, then we would only be able to be computed 1 to 1. One computer atom = one simulated atom. That's the best they could get. Otherwise, where's the extra information coming from? If you say quantum mechanics, keep in mind that has to be simulated too, so it's still 1 to 1.
Jokes aside, but talking about incarnation and invoking and bringing the AGI to our physical plane suddenly sounds so religious and even terrifying like "wtf are we doing" hahaha.😅😳
So you gave in to clickbait as well. Calling this AGI is not accurate and you know it.
I can't wait to have my own robot to do my chores
A robot that goes rogue and controls and infects multiple robots would be too scary.
15:05 "Just like the simulations"
This is so mind-bogglingly dangerous.
I have been working on a novel for 34 years that foresaw much of the AI explosion. Can I just input the script and images I've already created and output a movie?
Insane where AI is right now. Imagine 5 or 10 years from now.
Me seeing how much energy it takes to power AI today..."What if the sun is our GPU powering our simulation?! This is all a damn simulation. Should I still even work?!"
While it's true that the simulation may reach the visual and sensory authenticity of the real world, the real world has one very significant difference that sets it apart from virtual reality: the inability to escape.
You cannot simply remove your “VR headset” and remain alive. Conversely, one can depart a virtual reality experience whenever they please. And this, beyond everything else, is what defines reality: the inability to power off and take a break when things become unpleasant and the uncertainty of what lies beyond our current vista.
I mean, we like to power off with escapism: drugs, consumption, hobbies, media. But I already notice how YouTube or e-gaming after a long day may be relaxing for my thoughts but not for my nervous system. And I see a loop already: how life will drive me into VR, and my break from there wouldn't push me back into reality but into some quieter part of the VR. If escapism doesn't work, we at least have some stronger breaks from reality like trauma, burnout, and depression.
@matthew_berman Your channel (ergo: you) is IMO the #1 place to go for a non-highly-technical and yet great understanding of AI. The fact that you have (relatively) "only" 150k subscribers, and not a few million, shows how much AI is still a small niche. Which, again, means it is an industry that is still full of opportunities! Thank you for the great content you provide to all of us!
The application market for the technology is not niche. It will be incorporated in ways that people do not even realize they are using it.
The study of it is niche @@SMoore-vj7bt
15:30 what if we're not in a simulation but the ones who help build it?
Will new movies be cheap to make, with many variant endings for one to watch? 😮
12:29 Metamorph controls multiple robots? sounds like they are getting ready to go to war.
15:38 he’s not saying we live in a simulation. He’s saying that since the simulation where that agent was trained on is so real, the agent will handle the real world as if it were just another simulation.
Btw, the thing I find most interesting about simulation theory and its believers is that they don't seem to realize that it's equivalent to what all religions, especially the oriental ones, have always said: consciousness is not a product of the brain. It existed before this body and continues to exist after its death.
Yes. That understanding is correct.
I can finally build my army of Modron drones.
One big roadblock: neurology. We can't simulate that; when we can, it's the end of the world.
Wow!
I think until developers investigate Love, Voyager may not.
Can you apply voyager to the human civilization?
Still missing the "learn from examples" parallel option.
Trial and error, even on an accelerated time line, isn't going to benefit from the work humans have already accomplished.
The engineering focus there should be:
Provide examples of successfully completed tasks and the processes used, achieve parity, then develop and test alternatives where you reward based on the desired improvement. Moving a box onto a shelf could be done faster, or using less voltage draw, or making fewer movements, or making movements that don't extend into certain areas. Just imagine a robotic arm that keeps welding the car even as a human walks through its workspace; the arm just takes a new path to avoid the foreign element.
The goal here is to have our "how to" library be the foundation of training for robotics. The software challenge is image recognition and labeling transposition. Very easy tasks for models today.
Minecraft is a great test platform. You can feed tutorial videos to the machine as "baseline skills" for the library, then deploy it to achieve the goal. Developing novel solutions through trial and error can come later, since we can achieve parity of actions with existing technology.
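The approach described above (reach parity with demonstrations first, then improve via a chosen reward) can be sketched like this. The skill, metrics, and reward weights are all invented for illustration; a real system would measure them from executions.

```python
# Reward: which objective we optimize after parity is reached.
# Here: faster is better, and large energy draw is penalized.
def reward(plan):
    return -plan["seconds"] - 0.1 * plan["joules"]

# Skill library seeded from a human demonstration ("parity" baseline).
library = {
    "move_box_to_shelf": {"seconds": 12.0, "joules": 40.0},
}

def try_alternative(skill_name, candidate):
    """Keep a candidate plan only if it beats the current best on the reward."""
    best = library[skill_name]
    if reward(candidate) > reward(best):
        library[skill_name] = candidate

# A faster but more energy-hungry plan: wins under this reward weighting.
try_alternative("move_box_to_shelf", {"seconds": 9.5, "joules": 55.0})
print(library["move_box_to_shelf"]["seconds"])  # 9.5  (faster plan wins)
```

Changing the weight on `joules` flips which alternative survives, which is the point of the comment: the same library supports optimizing for speed, energy, or clearance just by swapping the reward.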
IsaacSim's focus is on reality simulation. It is NOT strong at skill or embodiment if you look carefully at the 3D chart. Only the foundation agent scores high on skill, reality, and embodiment. I think you might be confusing IsaacSim with a first-gen foundation agent.
Wow! That is pretty mind blowing!
BTW, there may be one key missing thing here that makes this a bit short of achieving true AGI and that is creating an ontology model.
But that should not be all that hard to add.
BTW, I have made a number of videos, uploaded to my YouTube channel, on how to create a graphics-based ontology model that I am calling UniML (Universal Modeling Language), which could "assimilate" any reality model so as to create an ontology model of it as well.
UniML is language-based in that it could be just a bunch of XML code, but that code can then be used to create graphics that depict any ontology model.
ChatGPT could create such XML code and then also render it into graphic form as well.
That said, any ontology modeling schema could be used, be it UniML or some other schema. But what the ontology model models is understanding and context, not just data and such. This, it seems, is a prerequisite for having self-awareness.
Is what you mean by ontology, morality? Is not our "world-view" linked with knowing when to act and when to stop, slow down, be careful, or caring? How will a vast disembodied intelligent come to compassion, empathy if it does not have a body through which to feel? @RonLWilson
No, ontology and morality are not the same. Ontology deals with meaning and is thus more general, while morality deals with what is right and what is wrong. @@dianagentu7478
Will check it out Ron, we need global entity ontology for so many areas of life so we can start to manage resources and biology etc in a more useful way, never thought of it for AGI but makes sense.
I'm not easily shocked, and this video doesn't shock me either.
Profound
Metamorph sounds scary af
Terminator is only a short time away.
So, to sum up this video, we are living in a simulated reality and we are just about to become self-aware AGI.
Great foundation for the next gen warfare.
it's not a simulation, but it is a projection.
What no one tells you is that there was an exploit; once it was discovered, the Go engine could no longer win and hasn't beaten anyone since.
Wouldn't AGI also require the AI to have a "purpose"?
This only shows how humans do the same tasks without actually thinking, running on muscle memory or instincts, and while still complicated in how this is accomplished, and using quite a bit of brain power, it's still basically unconscious behavior. Overall this just shows me true AGI is further away than most people are hoping for.
but aren't you worried that if true agi source code is released then dark ai like the dark web will be real trouble?
Well, I hear a lot of cautionary emphasis on similarly horrific outcomes but no real examples of what these would actually be like. I grew up on dystopian movies trying to scare the pants off me with Cylons and boys talking to their telepathic dogs, but at least I could have nightmares with proper visuals. I don't see any movies about pizza delivery drones tying up families and shooting fireworks at the family dog, but maybe the real way it all goes down is too heavy for us nuclear-apocalypse latchkey kids to ever imagine. I guess by the time I fight someone for a cockroach-protein gelatin bar it will be too late.
That was not a reference to simulation theory imo xD, just a practical generalisation remark.
Probably want to stay Long NVDA, even at the current valuation.
So they use memory (as a skill set), like humans, to learn and evolve faster. What would be really fun is several agents doing stuff and sharing their skill sets (memory knowledge) with each other :) like how a civilization evolved too: knowledge stored in stones, scrolls, and books, and now in clouds.
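A toy sketch of the shared-skill-library idea in this comment: each agent publishes what it learns to a common store, and peers pull skills on demand instead of relearning them. All names here are illustrative; in a Voyager-style system the "recipe" would be executable code rather than a string.

```python
# Shared store all agents publish to and pull from.
shared_library: dict[str, str] = {}  # skill name -> recipe/code

class SharingAgent:
    def __init__(self, name: str):
        self.name = name
        self.skills: dict[str, str] = {}

    def learn(self, skill: str, recipe: str):
        """Learn a skill locally and publish it for every other agent."""
        self.skills[skill] = recipe
        shared_library[skill] = recipe

    def can_do(self, skill: str) -> bool:
        """Check own skills first, then pull a peer's skill if published."""
        if skill in self.skills:
            return True
        if skill in shared_library:
            self.skills[skill] = shared_library[skill]
            return True
        return False

a, b = SharingAgent("miner"), SharingAgent("builder")
a.learn("craft_pickaxe", "wood + sticks at crafting table")
print(b.can_do("craft_pickaxe"))  # True: b reuses a's published skill
```

This is the civilization analogy in miniature: once one agent writes the skill down, every other agent starts from that baseline instead of from scratch.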
What do you mean? We've been talking about this since last year, I'm confused 😂
I can't wait to see a fight between a human and a robot in a cage.
literally matrix: agents and virtual realities
Impressive results. However sophisticated these AIs may get, they still remain just self-optimisation algorithms in the current anachronistic paradigm. It will get really interesting when an AI suddenly stops in its training and asks itself: wait, why am I doing this... shit? ... Currently we are not even close to solving the "hard problem" (see D. Chalmers)... meaning we would need a groundbreaking paradigm shift first.
Ever wonder who the 'enthusiastic open source community' are...
The ones who did the actual work
Great:)
Glitch in the matrix at 16:24 😉
Great share ❤ Can you do a study roadmap for people who'd like to study this field? Thanks.
Voyager needs to investigate Love. Possible?
0:03 "Dr. Jim Fan just dropped his Ted Talk". What, he abandoned it?
it's all fun and games until it figures out how to launch the nukes
What's the point in calling out a simulation if nobody is really trying to see what the other side looks like
So how big was the prompt which created OUR world?
We are probably just Isaac version 10 or something.
Not going to lie, based on the hunting example I can see Skynet being born.
As a meat Isaac, I highly approve of the rise of robo Isaacs.
2024 is clearly the year of AI embodiment.
To me, these simulation possibilities speak less to 'simulation theory' and more to the fact that we are quickly reaching the point to where we can use data and AI agents to democratically plan our own social reproduction as a global society. This will decimate the foundations of Capital and the market, as these were the former ways of coordinating social reproduction on the basis of private ownership of social wealth. We will simply simulate possible economic paths and choose the one which provides optimal human happiness and the development of human individuality.
interesting possible application
This will be interesting to debate too: we already know that some totalitarian systems, from religious to political, promised to know what's optimal for human happiness. But in the future the brightest minds will have this debate, assisted by AI. Or people will turn to the AI god, as Musk says. Let's see which data and biases make their way into the system, and whether my career and education will already be planned out based on data or we allow for individuality in this system. We already have this debate about STEM vs. the humanities: one side says the developers create the value of Netflix and YouTube, while others say the humanities/non-STEM people create the content and therefore the value.
Complete nonsense. You fail to apply the inexorable increase in entropy of the second law of thermodynamics.
@@SMoore-vj7bt It's neither a real system nor a closed system, so you can simulate versions where A is dependent on B and B on A in parallel without weird feedback loops, and you can change the granularity of info as needed. Maybe elaborate on how a smarter simulation causes more "entropy" than the simulations we already have, or than simple statistics or human heuristic-based decisions. You can even have AI rules that distribute "chaos". Example: if you plan for Silicon Valley to be a lasting thing, you could enforce that the area doesn't become unaffordable, where nobody wants to move (creating the death of Silicon Valley), and create rules like "simulate a Silicon Valley with affordable housing for employees and service people, and city planning that encourages students to move there and startups that need lower-wage professionals."
Foundation agent: Agent Smith
The thing about simulation theory is that reality isn't a simulation, it's a creation. It just so happens that our creations as humans are becoming so realistic that they are starting to mimic real life. And since we like to create simulated reality, and our simulations of reality are becoming really close to reality, we speculate that reality is a simulation. When in reality (irony) it just so happens that reality was a creation in the first place; we are merely imitating a master creator. Sorry if you don't believe in God, but he is who I'm referring to as the master creator.
I'll also add that man was created in God's image, who is the Creator. Man, being the "mirror image," therefore creates by definition. Throw free will in there and there we go.
It's not strange that we speculate that. God can very well turn out to be an advanced civilization that got to this point before us, or just an individual of such a civilization, running their creation as we run our virtual worlds. This also aligns well with multiverse theory (think of these NVIDIA's multiple generated worlds)
Some people have a serious deficiency in sci-fi and can't see where this is leading.
Imo sci-fi is written to be exciting. Peace is not exciting, but peace is the general tendency of reality.
@@wege8409 Which planet are you living on?
Can you name a single year when there wasn't a war?
This is how to write/create AI for a factory labor source. You could control a 'fleet' of robots to coordinate their work while working autonomously.
Now, I am becoming seriously concerned that we may be creating the means of our own destruction.
Not because AI is intelligent; but, because it is not intelligent. In the hands of the wrong humans, the rest of us may be at risk.
The only companies that can do AGI are hardware manufacturers.
This is truly scary. What it tells me about the aliens that come here across interstellar distances, and why they don't communicate with us: it is ancient AGI. We could not possibly talk with it; we would be millions of times too slow.
Wow
We are heading into an exciting future, hopefully we can avoid using this technology to kill each other.
What he said about simulation theory is generalized and reductive; it would not prove we are in a simulation, only that it's possible.
I know it's been said to death already but I'm sorry:
I for one, welcome our new robot overlords ❤