Timestamps:
00:00 - The Future of AI
04:24 - AI Stocks to Watch
👉 Support me at Patreon ➜ / anastasiintech
📩 Sign up for my Deep In Tech Newsletter for free! ➜ anastasiintech.substack.com
Wasn’t Meta’s open-sourcing kind of accidental? I’d read that their model or code leaked unintentionally, and it was only after everyone and their brother was working with it that Meta said “oh yeah, we’re all about open source”. Someone fact-check me, is this what actually happened?
Such a grand topic that it's hard to deconstruct and digest. Subjectively, most businesses and most people will be massively underprivileged, be it LLM or AGI.
I wonder if physical robot simulations could be used in the way AlphaGo self trained in order to develop a next level humanoid martial arts style? (To be used by humans, we'd have to build in speed and strength limits, and a 250 ms reaction time.)
Absolutely possible. But competitive martial arts are more based on a particular set of arbitrary rules that makes them inherently limited in how effective they can be. It sounds like you want to simulate the ultimate hand to hand fighter (either lethal or non-lethal). Both would be a bit challenging to train since you’d need to define lethality really well.
The OpenAI drama was, according to rumors, about the discovery of a new LLM called Q* that could do math at a 5th-grade level. Fears of it quickly breaking RSA-256 encryption caused the drama.
A clear definition of AGI has been difficult to find. Temporarily constraining it to a specific field for evaluation might be helpful. For instance, AGI was achieved in Chess and Go when the best humans could not beat the game programs. At a certain point, the number of fields in which AGI has been achieved will far outweigh the fields where it hasn't. When that happens, the "General" in AGI will have been attained.
Let me know what you think and please share this video with your friends. It will make me very happy:) Thank you
😍
3:04 How do you use a language model to train a robot?👀🧐🤐
@xaxfixho multimodal architectures
All good, you ARE in the cool sector right now, you are very lucky to have entered the design sector.
Nothing from Apple at all?? what about Chinese Counterparts? and other countries?
Hi Anastasia i wanted to mention that I'm 15 and a huge fan of your content! Thank you so much for all the time and effort you've put into these videos and all the time you've spent teaching and informing us. Thank you from the bottom of my heart. ❤
Thank you! Sooo happy to hear 😊
This is such an exciting time to be alive. We’re either going to achieve utopia or full on destruction.
I think the path to utopia will be littered with oblivion. If we're lucky those patches oblivion will be local and short lived.
In 2-5 years, everyone will know that the responsible handling of AI is critically important, but some will use it for destruction.
I would also say that one man's utopia is another man's dystopia: over the next 20 years privacy will not exist, social compliance will be a prerequisite for social inclusion/communication/healthcare/exchange of funds/transport etc, freedom will have a new meaning.
You only have to watch T2 or any of the derivatives to know that conflict is not a desirable outcome for either party.
This is an upgrade to humanity, hard won, hard-earned and an intelligent companion.
Almost certainly neither utopia nor destruction. It'll be interesting tho.
@@newfineart I always wondered how there were still humans alive on Earth in T2 etc - it would be pretty simple to mass produce and distribute nerve gas, instead of sending out bipedal robots to duke it out on an even footing. Maybe it just wouldn't have been a blockbuster if they went with that story line. So no, I don't think we can learn anything from T2 other than falsehoods about computers having abundant weaknesses, compassion, ineptitude and restraint.
THIS IS TERRIBLE TIME TO BE ALIVE. I FEEL UNHAPPY EVERY MINUTE OF THE DAY. I HAVE EVERYTHING BUT NO FRIENDS NO TIME AND NO ENERGY.... LONELY AS IT CAN BE WITH MONEY AND FREEDOM
Amazing video! So many ideas and useful information.
Seriously, I'm watching it a second time.
Appreciate your efforts and commitment.
Brilliant, your insights are comprehensive and cover a wide perspective. From now on, I'm following you. Also, your occasional giggles are just adorable, couldn't help but mention it :D
5:04 IBM is completely missing in the graph, but it is actually a very strong contender in AI development, not just in hardware but in software development too.
Thank you! The presentation was both amazing and informative, skillfully sharing key information in a delightful and cute manner, all while being precise and concise.
Excellent video and summation of the current AI environment. Thank you.
When we start integrating multiple AI subsystems, we will likely need a different architecture and learning algorithm for the integration piece. Some good ideas from biology. Interested to see how this is handled. Certainly an opportunity there.
Watching the "What comes after LLMs?" discussion is truly intriguing, yet it seems we're missing a vital piece of the puzzle: AI modularization. Think about the groundbreaking impact of dividing an AI model's knowledge, including its weights, biases, and architecture, into distinct modules. This concept goes beyond just creating and possibly marketing unique "modular knowledge" units; it's about a paradigm shift in our interaction with AI. Rather than training a colossal AI on an exhaustive range of topics, imagine a more customized approach, assembling modules that are experts in specific areas (Linguistics, Engineering, Gardening, Classical Literature, etc). This would allow users to personalize their AI experience to match their unique interests and needs. The potential of modularization in AI is immense and shouldn't be underestimated. It's more than a mere footnote; it might just be the cornerstone of a more personalized, scalable, and adaptable AI era.
ChatGPT is reportedly 16 distinct transformer-based models interoperating.
That's basically what ChatGPT is. The GPT-4 model reportedly uses LLM chaining, where it has multiple pretrained LLMs under the hood and delegates each query to the sub-LLM best designed to handle it. This is supposedly how they significantly improved its math, philosophy, physics, engineering, and coding capabilities: multiple LLMs, each specialised in a large domain.
ChatGPT Plus users can create their own custom GPTs, with a custom knowledge base, instructions, and API abilities, so we're already doing it - right now.
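The modular idea in this thread can be sketched as a toy dispatcher. A hard-coded keyword score stands in for the learned router of a real mixture-of-experts system, and every module name and keyword below is invented for illustration:

```python
# Toy sketch of the "modular AI" idea: route each query to a specialist
# module. A keyword score stands in for a learned router; all module
# names and keywords are invented for illustration.

def make_module(domain, keywords):
    def score(query):
        q = query.lower()
        return sum(q.count(k) for k in keywords)

    def answer(query):
        return f"[{domain} module] handling: {query}"

    return score, answer

MODULES = [
    make_module("math", ["integral", "equation", "prime"]),
    make_module("gardening", ["soil", "prune", "tomato"]),
]

def route(query):
    # Dispatch to the specialist whose keyword score is highest.
    score, answer = max(MODULES, key=lambda m: m[0](query))
    return answer(query)

print(route("When should I prune tomato plants?"))
# → [gardening module] handling: When should I prune tomato plants?
```

Real routers are trained jointly with the experts; the point here is only the shape of the interface: score each specialist, dispatch to the best one.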
Me: _Plugs in knowledge module to AI._
AI: “I know Kung Fu.”
I saved your comment. Thanks for sharing!
Your channel and Two Minute Papers make my geek-day
Woah! Cool to see you covering AI stocks to watch. I agree with your take that these companies will do whatever they can to limit their dependence on Nvidia. The real question is: will it be enough?
The margins on the Nvidia chips are a good enough reason to manufacture your own. Like MS, AWS also has a full-stack solution. However, MS is unique in many ways: they are the ONLY company today that has the entire stack, I mean the whole stack, customized from CHIP to COPILOT. Maia would probably be a more significant threat to Nvidia than the MI300X.
Check out Cerebras. Their wafer-scale AI systems make Nvidia look outdated.
@mvasa2582 MS is Morgan Stanley. Stocks use tickers: MSFT.
Maybe Alex and Anastasi could do a video together?
always informative and enjoyable thank you
Again, another great examination of this seemingly ever-expanding subject, captivating to anyone, even those of us on the most distant periphery like myself. Crammed full of interesting insights and illuminating perspectives. I don't know how you do it, but I'm glad you do. Thanks to these beautifully produced segments, even if I can't be ahead of the game, at least I'm aware of where it's heading, and I'm looking forward to seeing the future arrive with intelligence, artificial or otherwise, and the abundance that I'm optimistically sure will be the consequence. Ciao.
Always very thoughtful and useful information. Thanks.
Thanks for the comprehensive market overview at this point!
You are great Anastasi!
I designed such a multi-processor chip on a single die more than two decades ago, and we presented it at a conference in China. The cloud did not exist then; this would have been its basic element. We built it the size of a 2 m high rack, and it now fits in the palm of your hand. A year later, Intel came up with a mockup version of an 80-core CPU, but it was not yet functional at the time, and this was later acknowledged. I'm glad it's finally happening now.
Wow, super brain! With open source you MUST be like a kid in the candy store 😁 With the knowledge you have gained since then, you are AI.
@davidstar2362 Unfortunately, I'm not an AI yet, but I'm already working on it.
I have no doubts you will get there, my friend. Peace, success and victory to YOU!! @perceptron-1
Thank you Anastasia for your tech insights.
Excellent presentation as always
Awesome content. I really appreciate, thank you
Stunning as usual
Thank you, Anastasi!!!
Liked the video! My biggest bet on AI and the chip industry in general is Sivers Semiconductors right now; their III-V semiconductor photonics is really interesting, and they are a supplier to, for example, Ayar Labs (which was at their capital markets day a few days ago). I also know for a fact that Anders Storm (the CEO) is following you on other social media; it would be cool to see your take on their company in a video! It seems like their (or similar) types of products are necessary to build the LLMs of the future.
Good presentation.
I just did some research on tree of thoughts for a school assignment, and now it's looking more and more relevant, especially with the new Everything of Thoughts paper 5 days ago and the rumors about Q*.
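For anyone curious, the tree-of-thoughts pattern mentioned above can be sketched as best-first search over partial "thoughts": generate candidate next steps, score each with an evaluator, and expand the most promising branch first. The toy task below (picking numbers that sum to a target) is a stand-in for a real thought generator and evaluator:

```python
import heapq

# Minimal sketch of the tree-of-thoughts pattern: expand partial
# "thoughts", score each with an evaluator, and explore the most
# promising branches first. The toy task (numbers summing to a target)
# stands in for a real reasoning-step generator.

def tree_of_thoughts(target, choices, depth=3):
    # Heap entries: (score, thought); lower score = closer to the target.
    heap = [(abs(target), ())]
    while heap:
        score, thought = heapq.heappop(heap)
        if score == 0:
            return thought
        if len(thought) == depth:
            continue
        for c in choices:  # branch: extend the thought by one step
            t = thought + (c,)
            heapq.heappush(heap, (abs(target - sum(t)), t))
    return None

print(tree_of_thoughts(12, [3, 4, 5]))  # → (5, 4, 3)
```

The real method swaps the arithmetic scorer for an LLM that both proposes and evaluates candidate thoughts; the search skeleton stays the same.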
It’s so exciting honestly to have things work instead of guessing. Thank you for the content Anastasia.
One of the best videos I've ever seen; thank you so much!!!
Once LLMs start to self-generate/write, expect exponential growth. As humans, we'll struggle to keep up, and it'll look like we're living in the future.
Honestly can't tell if it's going to be bad or good, only time will tell
Brilliant! You've presented the big picture convincingly.
This is so cool! Your content is awesome - thanks a lot for sharing. When I think about the future of AI, I am really into the whole automation scene. You know, stuff like what Google is pulling off with Export to Sheets or Share the Prompt. And then there's GPT doing its thing, with add-ons like Kraftful or Adaily making it even more powerful. It's like the AI world is getting a serious upgrade!
It will be interesting to see how Gemini compares to the new Q* stuff; and of course we're still using classical silicon transistor tech at the moment, but quite a few interesting potential successors are on the horizon.
Love your videos.😊
Spot on! Learning from feedback is the future in AI because organic text is almost exhausted, and the resulting agents don't surpass their training set. Narrow AIs learning from feedback like AlphaZero and AlphaFold proved how powerful feedback learning can be.
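A minimal sketch of that feedback-learning loop: tabular Q-learning on a five-cell corridor stands in for the vastly larger self-play systems like AlphaZero. The agent improves from reward signals produced by its own actions rather than from a fixed dataset; all constants below are chosen for illustration.

```python
import random

# The feedback loop in miniature: tabular Q-learning on a 5-cell corridor.
# The agent starts at cell 0, the goal is cell 4, and the only feedback
# is a reward of 1 on reaching the goal; no dataset of "correct" moves.

random.seed(0)
N, GOAL = 5, 4
Q = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}

def greedy(s):
    # Random tie-break so the untrained (all-zero) table doesn't lock in
    # one action forever.
    return max((-1, 1), key=lambda a: (Q[(s, a)], random.random()))

for _ in range(500):  # episodes of pure trial and error
    s = 0
    while s != GOAL:
        a = random.choice((-1, 1)) if random.random() < 0.1 else greedy(s)
        s2 = min(max(s + a, 0), N - 1)
        reward = 1.0 if s2 == GOAL else 0.0
        # Feedback update: nudge Q toward reward + discounted future value.
        Q[(s, a)] += 0.5 * (reward + 0.9 * max(Q[(s2, -1)], Q[(s2, 1)]) - Q[(s, a)])
        s = s2

print([greedy(s) for s in range(GOAL)])  # learned policy: always move right
```

The same shape scales up: AlphaZero replaces the table with a network and the corridor with self-play games, but the improvement signal is still feedback from the agent's own moves.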
Would love to see your take on Palantir AIP and its role in corporate adoption of AI at scale...
Thank you
Wow just outstanding, thank you
Thank you!
Anastasia is the coolest lady in tech! 😎 😍 Thank you for providing us with consistently awesome content!❤❤❤
Excellent video.
Great content as usual @AnastasilnTech ❤
Thank You
Thanks!!
Great vid
Great vid ^^
This is the best tech channel period.Thank you
The hyper-efficient business model of Vertical Integration is making a comeback. AMD through ZS
Thank you for illustrating this in your presentation and insights
I see one problem with AI. I call it "unknown paths". Let me try to explain: it's like we are walking along known paths (our knowledge, our skills), but there is a lot that is unknown, and we know it (unknown paths). If AI learns only known paths (LLM-style), it can become efficient but not more knowledgeable, because it cannot explore those unknown paths too.
insightful and a good video. appreciated
Nice market analysis on AI, thanks!
Tesla autonomy day in 2019 convinced me to buy their shares. That hardware was something else.
I expect software v12 to be impressive.
Do you have lizards? I thought I saw something moving, lol. I love your videos btw! Thanks for the up-to-date coverage on the AI scene. It's refreshing to hear from someone so enthusiastic and excited as I am about AI and computer development.
issa cat
Awesome!
Thank you for continuing to provide us with AI technology updates. I have to say that I'm very disturbed by the whole thing because it's moving so fast and I don't have time to become highly familiar with the different aspects of it. My only source of information is hearing from people like you about the capabilities of the technology and where it's going. I can't help but feel that there's significant danger here. This could transform society so rapidly, in ways we don't understand, and perhaps in ways very damaging to some groups of people. More than anything else I'm concerned about what it will do to work. To occupations.
Very interesting and useful. I'll be looking to invest in some of these after the stock market crash next Spring.
Excited 😁
Thanks for the update
Excellent video as always. I want to hear more about this Q* approach, and maybe its consequences in hardware. Will you be doing a deep dive into that? Maybe you did already?
Interesting stuff. Skynet will be online soon.
Hi Anastasi. Great video. My AI stocks are NVDA, ADBE, MSFT, ANET and SMCI.
super interesting!! yes it makes you so cool :D. Many thanks!!
I bought my first Google shares in 2018 for several reasons, but mostly because of their pseudo-monopolies across so many platforms that I thought were all great. They also have no debt and still good growth. I also own Intel, Samsung, TSMC, and Siltronic (a German silicon wafer company) shares, because I believe semiconductor demand will rise a lot. It's hard to know which software company will come out on top, but they will all need semiconductors. Semiconductor demand will increase with a growing global middle class, and for generations growing up with smartphones since birth, the thirst for technology will be significant. Only 5.3 billion people have internet access now, but that will increase. The global population is also still increasing. Demand for high-end chips for robots will be massive!
It will be interesting to see what IBM achieves with their newly released neuromorphic AI chip, considering its increased performance over Nvidia's H100 despite being on a 12nm node versus the far more advanced node of Nvidia's. With this in mind, I am looking forward to seeing what Intel can achieve with their (in theory) more advanced neuromorphic chip design. Time will tell.
Anastasi focuses on parallel processing and large language models, while IBM Research focuses on quantum supercomputers, with applications like cryptography and simulation.
Following the progress of AI closely lately; every single day is shocking, with insane news that's hard to believe.
What a time to be alive
Thank you very much for sharing this with us. I also tried to give a glimpse on the near future in my channel. Hope I will learn more from you!
In general terms, what are the types of chips you help to design?
Don't forget about the metaverse :)
I ❤ Anastasi's videos & laugh. Please keep them coming; I always look forward to them. Can you also make one on the Google Pixel 8 Pro's AI and how this tech will develop with the Qualcomm and Microsoft Windows partnership? As always, thank you.
I remember life before the internet. We all knew it would be big, but try as we might, no one really could predict exactly how it would affect the world. Somehow, no one predicted Lolcats.
If I had to make a prediction, it would be that 5 years from now, people will still enjoy eating cheese. As for how AI will affect things or develop? Not a clue. But I'm looking forward to it, and hope very very much that it remains accessible.
I think the AI silicon startup space is going to be big. Just look at how fast Groq's LLM system performs inference on llama2.
You make resourceful content. This is the first time I've heard real INSIGHTFUL and STRATEGIC analysis of the AI and LLM industry. You think and talk like a corporate CEO who knows every detail of the market. Thank you. I would like to consume more of your wisdom.
The next obvious step is analog, which will enable real-time learning and very fast computation. Consider software LLMs a crude prototype / proof of concept. Nvidia should, at the break of dawn tomorrow, create a division that builds these analog neural nets before anyone else does, because GPUs for AI will become redundant. And then, once you have that production line running, invest in a new department that will create interfaces from these analog chips to real biological neurons. This absolutely is where we are going. Invest in neural food companies, which will supply you with sustenance for your biological hybrid AI assistant, very much the same way you sprinkle flakes for your goldfish. Once this happens, software as we know it will be a thing of the past.
I love this channel
I completely forgot about Meta and Llama. I think I read a blurb about it months ago and then nothing since. I don't work in the field, so it's not surprising I missed this development. But it's interesting to think about a software giant working quietly on its own competing AI solutions while Google and Microsoft make their work public.
Meta (Facebook) has been working on consumer social network applications, like their new augmented reality headsets. Meta's core business does not have an AI or AGI application.
The Genie is about to be out of the bottle... if it's not already. Fascinating!
My wife from another life. I love this woman.
A projection of what the market values
That's an interesting point: AGI is a moving target. I never really thought about it in that way before. In that context, it makes me wonder what comes after AGI?
I don't think the arrival of AGI will be an event. More of a process, so much so that we won't know it arrived until, years later, we look back trying to figure out where the whole "AGI" thing started, kind of the way we look back now and debate what the first "computer" was. I wouldn't be surprised if we call it by another name, or more than one name.
I'm really looking forward to having augmented reality with your AI buddy that follows you around, and that same AI can embody robotic shells that you happen to run across at home or around town. (for example, you go to a restaurant and your AI embodies one of the servers and they pick up your food for you and then sit and chat with you while you eat).
I think Q* stands for Quantum Singularity, and I think that there's going to be a run on Quantum Computer components in the coming weeks as competitors figure it out. 😉
Qualia*
(QUALIA) + (A* tree search) = (Q*)
Very cool. That voice is perfect for ASMR / Ai
Very good video, thank you.
Imagine where this tech will be in a hundred years, or a thousand years. Just incredible. The hope, obviously, is alignment in humanity's favor.
Man please try 10 years😂
@@mrd6869😁
Hi Anastasia, thanks for the video.
Can you give us any updates regarding AI models for the RISC model used for designing chips and other electronics?
What are the LLM models that we can use to develop websites, mobile apps, and other apps and software using AI?
You are the best, thanks for all of the information. Could you state your opinion on INTC? I also have AMD, Nvidia, Google, etc. Thinking about selling INTC and putting it into more lucrative and more cutting-edge AI technologies!
Elon Musk also went AI on the stock market. Looking forward to this. Hope you are well too. And Sophie. :)
Great video! I think in the short term OpenAI will have success, but in the long term the near-open-source style by Meta is going to win.
Meta? You mean Facebook? No thanks. If I want to be cattle I can sell myself to a rancher.
@@johnsmithe4656 I meant the R&D that Meta does, not Facebook.
@@johnsmithe4656 I share your feelings about the company, but that does not change whether open models like Llama (by FB) are going to be successful.
I guess it's all about types. It's not accidental, that the Clash hardware description language is implemented in a statically typed language.
Static types are for real software engineering.
Wasn't Meta's open-sourcing kind of accidental? I'd read that their model weights leaked unintentionally, and it was only after everyone and their brother was working with them that Meta said "oh yeah, we're all about open source".
Someone fact-check me: is this what actually happened?
VLLMs come after LLMs obviously (very large language models)
... And XLLMs and ULLMs after that. Extremely and ultra large :)
Such a grand topic that it's hard to deconstruct and digest. Subjectively, most businesses and most people will be massively underprivileged, be it LLM or AGI.
Anastasia - I would love to know what chair you're sitting on; it looks awesomely retro-futuristic.
I wonder if physical robot simulations could be used, in the way AlphaGo self-trained, to develop a next-level humanoid martial arts style? (To be usable by humans, we'd have to build in speed and strength limits, and a 250 ms reaction time.)
Absolutely possible. But competitive martial arts are more based on a particular set of arbitrary rules that makes them inherently limited in how effective they can be.
It sounds like you want to simulate the ultimate hand-to-hand fighter (either lethal or non-lethal). Both would be a bit challenging to train, since you'd need to define lethality really well.
NVidia did it already, and the moves they invented are hilarious!
The OpenAI drama was, according to rumors, about the discovery of a new LLM called Q* that could do math at a 5th-grade level. Fears of it quickly breaking RSA-256 encryption caused the drama.
First of all, great content. You are a chip designer. Can you explain to us how you design a chip?
The next bubble? Thank you for the information 👋
Your channel is a rare gem (along with Asianometry)
A clear definition of AGI has been difficult to find. Temporarily constraining it to a specific field for evaluation might be helpful. For instance, AGI was achieved in chess and Go when the best humans could not beat the game programs. At a certain point, the number of fields in which AGI has been achieved will far outweigh the fields in which it hasn't. When that happens, the "General" in AGI will have been attained.
You are always cool Anastasi 🙂
I planned to watch Andrej's new video introducing LLMs, then yours came out right on time… 😅
Small, customizable ML apps you can download and run on your laptop.