"Think back to Galileo-someone who got in a lot of trouble. The reason he got in trouble was he invented the telescope, popularized it, and made some claims that we were much smaller than everyone wanted to believe. We were supposed to be the center of the universe, and it turns out we weren’t. And the better the telescope got, the more obvious it became that we were small. Large language models are the telescope for the mind. It’s become clear that intelligence is larger than we are, and it makes us feel really, really small, and it’s scary. But what happened over time was as we realized the universe was larger than we thought and we got used to that, we started to realize how beautiful it was, and our place in the universe. I think that's what’s going to happen. We’re going to realize intelligence is more vast than we ever imagined, and we're going to understand our place in it, and we're not going to be afraid of it.’ ---- Jonathan Ross
Please make a video about your thoughts on the impact of LPUs on NVIDIA. Will all inference be directed towards LPUs? The stats are just crazy on just 14nm chips.
Good to see you've been here, Alex although I'm surprised I didn't get this information from you first. My cousin was happily buying AMD the other day. I'm pretty sure I didn't have the heart to tell him he was beating a dead horse. I don't know if you're a car guy but this is how I explain Processors to people now. CPUs --Cadillacs (big/powerful/cumbersome) GPUs --Muscle Cars (Smaller/more powerful but not nimble) LPUs --(Grand Prix Cars)
34:00 min - fascinating take that intelligence is a telescope for the mind, in that we realize that we are small, while then also opportunity to see intelligence is vast and to not be afraid of it.
Unfortunately, Jonathan got cut off after telling us about Groq’s speed increase over Nvidia but not about its economy and power consumption. I’d love to hear those figures.
I want to start my own company with an emphasis on building and deploying supercomputers like stargate dogwood and cactus. No way to lose on a compute buildout I guarantee people will need it going forward
I don't think speed should be the top goal for these AI assistants. Even if different LLMs give the same answer, I'd rather have one that feels like it's been carefully considered.
liked everything but the end - don't jerk off to computers, life is the shizz 🌀👎🏽 not good to jump to conclusions before you have all the information 💡
@@schwajj There is a thriving secondary market in PE/VC. Early investors exit by selling to later investors - usually dumb money like pension funds (Canadian), middle eastern sovereign funds etc.
@@tvm73827 I see. I’m unfamiliar with that. I don’t think this is a pump and dump. This video reads like an advertisement, I agree. But I’ve heard another interview with the founder where they go much deeper into technical issues. There is a lot of real technology here, and they will have a lot of key patents.
"Think back to Galileo-someone who got in a lot of trouble. The reason he got in trouble was he invented the telescope, popularized it, and made some claims that we were much smaller than everyone wanted to believe. We were supposed to be the center of the universe, and it turns out we weren’t. And the better the telescope got, the more obvious it became that we were small. Large language models are the telescope for the mind. It’s become clear that intelligence is larger than we are, and it makes us feel really, really small, and it’s scary. But what happened over time was as we realized the universe was larger than we thought and we got used to that, we started to realize how beautiful it was, and our place in the universe. I think that's what’s going to happen. We’re going to realize intelligence is more vast than we ever imagined, and we're going to understand our place in it, and we're not going to be afraid of it.’ ---- Jonathan Ross
The last answer to the question was phenomenal
🤯
What if our "place" in the universe of intelligence is extinction? Will we be happy that we learned that?
Very insightful! Inference vs Training 🤔
Excellent content. This channel has so much potential!
Wow, this dude is impressive. Chamath is smart to attach himself to this.
Interesting, all I heard was subtle gloating and almost everyone is looking down at their phones
Masterfully conducted and insightful!
Please post this somewhere on All-In!
Please make a video about your thoughts on the impact of LPUs on NVIDIA. Will all inference be directed towards LPUs? The stats are just crazy on just 14nm chips.
Good to see you've been here, Alex, although I'm surprised I didn't get this information from you first. My cousin was happily buying AMD the other day. I'm pretty sure I didn't have the heart to tell him he was beating a dead horse. I don't know if you're a car guy, but this is how I explain processors to people now: CPUs -- Cadillacs (big, powerful, cumbersome). GPUs -- muscle cars (smaller, more powerful, but not nimble). LPUs -- Grand Prix cars.
mic not working
34:00 - fascinating take that large language models are a telescope for the mind: we realize that we are small, but we also get the opportunity to see that intelligence is vast and not to be afraid of it.
Can I master the audio and re-upload?
Just do it
Super amazing, love the ending!
Unfortunately, Jonathan got cut off after telling us about Groq’s speed increase over Nvidia but not about its economy and power consumption. I’d love to hear those figures.
Phenomenal Jonathan Ross! ⚡️🦾
33:24 - 34:28 Best description I have ever heard of the relationship of humans to Ai
I want to start my own company with an emphasis on building and deploying supercomputers like Stargate, Dogwood, and Cactus. No way to lose on a compute buildout; I guarantee people will need it going forward.
Chamath keeps interrupting him
Interrupting interviewer is the norm!
Chamath also keeps funding them lol
Who is the Turing Award winner not named by the CEO at ~6:50? 🙂
Probably David Patterson: en.wikipedia.org/wiki/David_Patterson_(computer_scientist)
Didn't know Social Capital had a YouTube channel, followed
Personalized chip service to model allows the real AI Race to begin because people can innovate for less $$$$ than follow the big guys!…. 🙌🏽🧨👊🏾
He sounds EXACTLY like Ken Griffin
Groq and speed of inference will only be useful for big agent setups with 10-trillion-parameter models
volume way too low
did he say 50% of all inference on groq by next yr? wow!
He said 50% of available compute. Not 50% of all inference
University cannot compete with tech or any innovations!
Relationships can be derived from “Relationship” skills in real world experience… 😊
Great conversation
How did I miss this event?
Audio is really low.
I don't think speed should be the top goal for these AI assistants. Even if different LLMs give the same answer, I'd rather have one that feels like it's been carefully considered.
Very cool, thanks
Crazy origin story
The rising star in the AI chip world is CEREBRAS.
For training ?
Yes, training, and they are working on inference.
Is there a SPAC for Groq?
What are they going to do when Nvidia, AMD, and Intel build this?
Audio is lazy af. Reupload this
Groq is NVIDIA killer
23:00 - the experience must be under 250-300 ms on desktop
🔥🔥🔥
Cheater of the century - Biggest fraud
Please elaborate
liked everything but the end - don't jerk off to computers, life is the shizz 🌀👎🏽 not good to jump to conclusions before you have all the information 💡
Short nvidia rn
On what time frame? Still need NVidia for training, Groq is currently only for inference
5-10x faster.
Pump and dump??
How? the company isn’t publicly traded
@@schwajj There is a thriving secondary market in PE/VC. Early investors exit by selling to later investors - usually dumb money like pension funds (Canadian), middle eastern sovereign funds etc.
@@tvm73827 I see. I’m unfamiliar with that. I don’t think this is a pump and dump. This video reads like an advertisement, I agree. But I’ve heard another interview with the founder where they go much deeper into technical issues. There is a lot of real technology here, and they will have a lot of key patents.
@@tvm73827"robo-advisors"?
fraudulent gymnastics with the numbers
A Llama 🦙 almost ate my child at the zoo so I’m taking issue with Meta’s AI Models simply by their lame name! 🤨🙄🧟
Hahahahah... Love it