For a young, mid-income, short-term-minded person with a constant need for cash, are these still worth investing in? I am new to all of this and have incurred so many losses that I am beginning to think I am not doing what is good for me, just following people blindly.
Following you has been an amazing journey; you have shown me the best way to earn much better profits despite the bad economic situation. God bless you with more knowledge, gladdis harriet...
Great to see you guys talking of gladdis. Trading on your own can be very dangerous; I can testify to that. This woman changed the game for me.
Any info on how I can liaise with her? I'm new at this.
She often interacts on Telegram.
Harriet715
AGI is a philosophical question
They cut him off; I want to hear Andrew's emails.
Excellent discussion
The guest speaker is misinformed. The logic sounds correct on its face: a human being doesn't need to know Java to write Shakespeare. But LLMs are not humans. It is widely established that their remarkable capabilities come from hoovering up vast amounts of high-quality data. Capabilities increase with the amount of data they are pretrained on, which includes code. So in a sense, yes, they do need to know Java to write better Shakespeare.
Small models will turn out to be highly useful for research in the long run, but not commercially applicable. There will probably always be some cases in which models deployed at the edge need to be small, but small is relative. Fast forward to 2030 and those "small models" at the edge may look like GPT-4o or GPT-5.
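The data-scaling argument above can be made concrete with a Chinchilla-style scaling law. This is an illustrative sketch only: the coefficients are the published Chinchilla fits, and the model sizes and token counts are assumptions, not measurements of any real model.

```python
# Chinchilla-style scaling law: predicted loss falls as a power law in both
# parameter count N and pretraining tokens D. Coefficients are the published
# Chinchilla fits, used here purely for illustration.
def loss(n_params: float, n_tokens: float) -> float:
    e, a, b, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return e + a / n_params**alpha + b / n_tokens**beta

# Holding model size fixed at 7B params, 10x more pretraining data
# (100B -> 1T tokens) still lowers the predicted loss:
assert loss(7e9, 1e12) < loss(7e9, 1e11)
```

Under this fit, more pretraining data (code included) monotonically lowers loss at fixed model size, which is the direction of the commenter's claim.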
The host wants to shine more than the sun 😂
He is a journalist, king. Check out "Too Big To Fail".
@@matthew.m.stevick He isn't a king... please don't put average journalists on a pedestal.
I think the answer to Andrew's question about LLM investment vs. smaller domain-specific LMs might be, at least in the US and Europe, that there are a lot of large private companies (the GEs, IBMs, and Pfizers of the Fortune 500, for example) that have amassed their own industry- and operations-specific private data sets over decades, data that will never be part of the LLMs that largely resulted from scraping the public internet.
Bro interrupting is annoying
The guest talks like he knows something while stating the obvious.
Disagree on small language models. Sure, they're faster, but having one LLM with access to all your data is crucial.
Nvidia has no competition anytime soon.
Andrew, you were getting close. The answer is that the baseline intelligence from which to create and reason is available open source and is as good as GPT-4. Fine-tuned LLMs become agents. Open-source LLMs allow us to create agents, which have tools, memory, planning, and actions. That is what LLMs will be. This is the application layer. The incumbents will be like Google on steroids; they are just better thinking machines. You just need a little more understanding of the technology to see where the market is going. Happy AI trails!
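The tools-memory-planning-actions loop the comment describes can be sketched in a few lines. Everything here (function names, the "FINISH:" convention, the stub LLM) is a hypothetical illustration, not any real framework's API.

```python
from typing import Callable

# Minimal ReAct-style agent loop: the model plans, optionally calls a tool,
# observes the result, and appends everything to memory. All names are
# hypothetical sketches.
def run_agent(llm: Callable[[str], str],
              tools: dict[str, Callable[[str], str]],
              task: str, max_steps: int = 5) -> str:
    memory = [f"Task: {task}"]                     # running context ("memory")
    for _ in range(max_steps):
        plan = llm("\n".join(memory))              # planning step
        if plan.startswith("FINISH:"):             # model decides it is done
            return plan.removeprefix("FINISH:").strip()
        name, _, arg = plan.partition(" ")         # e.g. "calc 2+2"
        obs = tools.get(name, lambda a: f"unknown tool: {name}")(arg)
        memory.append(f"Action: {plan}\nObservation: {obs}")
    return "gave up"

# Stub LLM that first calls a tool, then finishes.
replies = iter(["calc 2+2", "FINISH: the answer is 4"])
result = run_agent(lambda prompt: next(replies),
                   {"calc": lambda a: str(eval(a))},
                   "add two and two")
assert result == "the answer is 4"
```

The point of the sketch: the "agent" is just a loop around a base LLM plus tools and accumulated context, which is why open-weight models are enough to build one.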
I heart 💚🖤 Becky 🇺🇸📈
The guest didn't answer the question. It is a good question.
This host, Andrew, has a habit of cutting off others and rambling. We don't want to hear your opinions, Andrew. We want to hear the real expert on the issue, Michael Fertik!
I think Nvidia can build on its lead by enhancing mobile data processing and keep that lead.
As it is, even with small language models, it's faster to ask a cloud-based model for the answer than to do the computation locally.
Imagine a future where we have to register computational capacity like cars or firearms. A new vision for the Ministry of Truth. Thought police reimagined.
Let the guy finish talking
Waiting for AI to block all the phishing and pathetic scamming posts under every single financial news video.
Because small models are great for on-device use. Jeez, man... why is CNBC letting this guy interview people?!
You can't call AGI a philosophical question if there is a likelihood that it's already in the lab.
If it were anywhere close, Sam Altman would be the first to tell everyone from the top of the building, lol.
@@md.mohaiminulislam9618 Then why is he so secretive about the Q* thing?
I bought before the AI hype and will buy more after the 10-for-1 stock split. But in my experience, the stock goes down and stays there for a while, then jumps up suddenly. It would be cool if I could buy $100k of the stock and have it turn into $1 million 25 years from now. Then I'd sell it, buy dividend-paying stocks, and have a monthly income for retirement. But I'm not holding my breath.
But you need larger models to emulate human reasoning and intelligence by brute-force learning. You need 100,000 tokens of text, 1,000,000 tokens of images, and 1,000,000 tokens of sensor data for a robot to pick up an egg on the first try without breaking it, and you need very, very reliable models in order to trust the model.
Nvidia has gone up too much. For whoever buys it from here: will it go up to $5 trillion in market cap?
GameStop amc baby! :)
NVDA to the Moon $.
Dude, how about saying something we don’t know.
holy interrupting is so annoying
More free commercials
Lol, no mention at all of bitcoin mining, which is the real reason behind Nvidia's explosion in value!
Dave Grohl moonlighting as analyst
Nvidia is a monopoly.
He doesn't know where it's going; he's a clown.
#KARRAT business partner with #NVIDIA massive pump around the corner 💥💥💥💥 soon 50$ for one coin 🚀🚀🚀🚀🚀 #KARRAT project number one definitely soon massive blast 🌋🌋🌋🌋🌋🌋🎉🎉
I have seen AI jesus
This host is very rude.
You have a serious problem talking over your guest 🤢