The $1 trillion dollar question for AGI | Aravind Srinivas and Lex Fridman
- Published on Jun 21, 2024
- Lex Fridman Podcast full episode: • Aravind Srinivas: Perp...
Please support this podcast by checking out our sponsors:
- Cloaked: cloaked.com/lex and use code LexPod to get 25% off
- ShipStation: shipstation.com/lex and use code LEX to get 60-day free trial
- NetSuite: netsuite.com/lex to get a free product tour
- LMNT: drinkLMNT.com/lex to get free sample pack
- Shopify: shopify.com/lex to get $1 per month trial
- BetterHelp: betterhelp.com/lex to get 10% off
GUEST BIO:
Aravind Srinivas is CEO of Perplexity, a company that aims to revolutionize how we humans find answers to questions on the Internet.
PODCAST INFO:
Podcast website: lexfridman.com/podcast
Apple Podcasts: apple.co/2lwqZIr
Spotify: spoti.fi/2nEwCF8
RSS: lexfridman.com/feed/podcast/
Full episodes playlist: • Lex Fridman Podcast
Clips playlist: • Lex Fridman Podcast Clips
SOCIAL:
- Twitter: / lexfridman
- LinkedIn: / lexfridman
- Facebook: / lexfridman
- Instagram: / lexfridman
- Medium: / lexfridman
- Reddit: / lexfridman
- Support on Patreon: / lexfridman - Science & Technology
Full podcast episode: th-cam.com/video/e-gwvmhyU7A/w-d-xo.html
Lex Fridman podcast channel: th-cam.com/users/lexfridman
This guy is smart and balanced; that's rare.
Creating AGI that can create new knowledge and make new decisions is the exact thing that can make it dangerous.
It would have to explore the world and live a life of its own before truly understanding. Even then, it still couldn't understand the emotions involved in creating new life with its own genetics. How would they program emotions?
If it can do any self-upgrades, we are eventually doomed. Doesn't matter one bit whether or not it becomes conscious. It could be just as dangerous with advanced reasoning abilities and the ability to self-upgrade without consciousness.
@@LiquidToast12 Boring. T2 came out 30 years ago. The Matrix came out 20 years ago. Those movies were fun entertainment and nothing more.
And this is exactly the thing that could preserve consciousness in this universe for a much longer time!
@@tomtricker792 Don't be confident in ignorance. AGI is here already.
I think ultimately it needs to be embedded in a design, formula, code, or physical object and then iterated upon. That feedback with reality is the key part. A lot of solutions seem like a multi-stage lock with different depth-search probabilities; once it all comes together, the lock opens, and then the optimization step occurs. A solution is an artifact.
Oh that's coming
You won't have transformationally capable AIs until they learn in more ways than just language. A human learns language after having already learned a lot from looking around, mimicking their mother's actions, and figuring out how to walk without searching websites for written content about walking. Language is a way to record insight; it's rarely the source of insight. Sometimes learning a new language, and then thinking through the lens of that new language after a lifetime of thinking through another, brings insight. But that's a deeper thing than studying recorded text. Some of these companies need to find a way to get these AIs to watch the world like a human baby: absorb what they "see" around them and replicate it, like walking. Iterate, discover they're doing it wrong, then try again, over and over.
If the range is finite but the granularity within that range is analogue and quasi-infinite, which question makes you learn what you don't know that you don't know?
As someone on the right, we have been very interested to find out where COVID started. It was the people on the left I know who didn't want to know. If an AI told me something that makes sense, I'm willing to believe it.
It seems that nobody can talk about tech anymore without in the same breath mentioning cap table, growth potential, market cap, valuation, salary, top talent, equity, solvency, headhunting, pump pump, money money money.
It used to be that way with biotech. They struggled with that problem for a while. Remember when they were the belle of the ball? You are now just as likely to get a Wall Street bro as a pioneering AI researcher on these podcasts. Often one person wears both masks.
DARPA probably has classified AI tech decades ahead of anything the tech bros are talking about. Seriously. National security and all that.
AI can't do tetration on the hardware side!
People with lots of money always get the most advanced technology first. AI is no different. The problem is what those with a lot of money will do with AI.
Do people with huge amounts of money today use their resources to help everyone? No. Will that change with AI? No.
That’s the problem
God, this guy's so monotonic...
I never met an Indian nerd that wasn't.
why?
@@user-qi3hf8ko3q Why? I don't know, man :D He just sounds like a dead robot :D
ChatGPT is better than AGI
Contradiction? Give the AI the 10,000 failed coffee-stain experiments: each computer's physical innards, each server's physical innards, the environment, etc. How can 10,000 academic attempts fail? One failed team humbled me and did the work on paper instead of on the device. They succeeded; I bet they don't know why! If I'm right, in the imaginary index it's various signals, a cumulation, like charging a battery, but it can be glass, oil... Then boom, too much. Now it's back in balance. A Faraday cage makes things worse.
We already have AGI
No we don’t lol
copium
Yes, and it's called a human being.
GPT-4 is AGI
No it isn’t lol