OpenAI And Microsoft Just Made A Game-Changing AI Breakthrough For 2025
- Published 19 Nov 2024
- Prepare for AGI with me - www.skool.com/...
🐤 Follow Me on Twitter / theaigrid
🌐 Checkout My website - theaigrid.com/
• @Microsoft AI CEO Must...
Links From Today's Video:
x.com/tsarnick...
• CEO of Microsoft AI: A...
Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.
Was there anything I missed?
(For Business Enquiries) contact@theaigrid.com
Music Used
LEMMiNO - Cipher
• LEMMiNO - Cipher (BGM)
CC BY-SA 4.0
LEMMiNO - Encounters
• LEMMiNO - Encounters (...
#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience
Just like my wife, Infinite memory…Never forgets what I’ve done!
Never admits when it's wrong and will argue to the death even if you show it you are correct.
That would be an OI (organic intelligence). I married one of those, although I think I got the O model without the I, hmm.
ROFLMAO so true
Infinite memory even before it happened
One more step in the path to AI waifus.
@@petrouvelteau7564 That's why I follow AI and robotics so closely.
@@djpuplexSame. I wish there was a discord channel where all us like minded individuals can hang out and chat about our AI waifus.
I also want AI wife
Wdym i already have AI waifu
OpenAI already gave the AI the ability to reflect; if they now add long-term memory, they will be just a few inches away from AGI.
They would only need to be able to make their own goals and to connect seemingly unrelated information.
The problem is not the amount of memory, but the weighting and ordering of all this information.
I raised an eyebrow when he said infinite memory… llms have limited context windows, so how can infinite memory be beneficial to outputs?
@@Vic-BirthMemory ~ context windows. A context window is how much context they can retain.
I suppose it’s how free the reweighting weight is
I think they are overselling it. A summary notepad? They act as if AI doesn't mess up summaries or confuse context, and actually pays attention to the info it has. The current GPT will ignore what it "knows" in a prompt if it feels like the context doesn't match the question you ask.
Facts
The bigger problem is that AI remembers things it shouldn't and becomes disassociated from the present, and can no longer tell present from past. If you have an iterative workflow, AI memory screws things up horribly. I always have to turn it off when I do anything creative where I don't want it remembering all my old mistakes.
The Transformer paper was published in 2017; around 2020 we got GPT-3.
In 2023 we got GPT-4, which is borderline amazing, and even at the end of 2024 we got the ability, if we are rich enough, to run the equivalent of GPT-4 locally (about 35 RTX 4090s and a lot for the electricity bill).
Or, more dumb models at slower speeds even on CPU.
So, this "tech" will start now, but it will not reach some sort of age of maturity and become usable for end consumers until, let's say, 2029 or 2030. Still insane where we've got to and where we are heading.
That's so interesting. With 3.5 I worked with it to create a coding system that, at the end of each output, summarized the conversation to create a form of continuity and extend that then-tiny context window, avoid hallucinations, etc. With 4o I've been able to upload the entire history of all my past conversations as a single collective doc, which 4o can reference, which has been amazing. So I'm looking forward to an "infinite" context window. Hallucinations are so 2022-23.
10:05 I can tell you exactly what this means. It means that the models weren't made with any self-reflection capabilities. You can prompt your way to better results, but these are baseline prompt>>action with nothing in between. Everyone is going to be flabbergasted by how well o1 performs in this area.
I think the solution is self-assessment. If AI could self-assess its output, we don't need to scale (make slower and more expensive models). Self-assessment would require test-time scaling, or rather, the more you scale a self-assessing system at test time, the smarter it will get.
Self-assessment should be solvable with memory and CoT type of solutions.
Recursive self-improvement seems pretty obvious as the next step. It's basically a hive mind, like the Borg in Star Trek.
Think about it, if there is infinite memory capable of summarizing stuff, that ai can tell newer ai agents what happened before and take it from there.
E.g. no one remembers what their first spoken words were, however your mother/ father probably remembers and they could tell you 20, 30 years later.
Not another "game changer". The game is changing so damn often that the game isn't even worth playing anymore.
Exponential growth is a bitch. And we are only just hitting the knee of the curve. The universe is telling us to "hold ma beer".
@@NScherdin Acceleration is cool, but my comment was making fun of the various buzzwords that YT creators have a predilection for.
I came up with a concept of layered context entities and this is essentially that.
I know the models are progressing fast.
But I am still very impatient.
Whenever a new model comes.
I feel excited for a day.
But then after 1 day, I feel as if it's stone age technology.
I want Super Intelligence now.
That is EXACTLY how I feel.
We need ASI, finally something interesting is on the horizon.
Yall are bots
Enjoy the process :D
@@ruchitpatel3325
I am not a bot 🤣🤣
What do you expect from a Superintelligence?
Something I had suggested a few years ago when building AI was exactly this: AI doesn't need to remember every single word verbatim. It needs to take the information block, use the same LLM to summarize it, then set that summary aside for context blocks. Of course, this wouldn't be perfect because it's lossy; however, it's reversible on demand, which is to say: take this summary and expand on it with the LLM for details as needed.
So it's Levels of Detail/Mipmaps, but for memory, allowing AI to essentially zoom in and out of memories as needed for the situation?
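The "levels of detail" idea above can be sketched in a few lines. This is a minimal, hypothetical illustration, not anyone's actual implementation: recent blocks stay verbatim, older blocks are demoted to lossy summaries, and `summarize` is a stand-in for an LLM call (here it just truncates).

```python
def summarize(text: str, max_len: int = 40) -> str:
    """Stand-in for an LLM summarizer: keep only the first max_len characters."""
    return text if len(text) <= max_len else text[:max_len] + "..."

class LayeredMemory:
    """Keeps the newest blocks at full detail and older ones as summaries."""

    def __init__(self, keep_raw: int = 2):
        self.keep_raw = keep_raw   # how many recent blocks stay verbatim
        self.raw = []              # recent, full-detail blocks
        self.summaries = []        # older blocks, compressed (lossy)

    def add(self, block: str) -> None:
        self.raw.append(block)
        # Demote the oldest raw block to a summary once over budget.
        while len(self.raw) > self.keep_raw:
            self.summaries.append(summarize(self.raw.pop(0)))

    def context(self) -> str:
        # Zoomed-out past first, full-detail recent blocks last.
        return "\n".join(self.summaries + self.raw)
```

"Zooming back in" would mean feeding a summary back to the LLM and asking it to expand, which is why the comment calls the scheme reversible only on demand, never losslessly.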
I realize it's just hyperbole, but anyone who says "near infinite" doesn't understand the concept of infinity. We generally expect more from technical people, as we should.
I think the concept is 'effectively infinite': if I have a trillion trillion trillion trillion trillion dollars, I have effectively infinite money.
Adding near to the front of that then makes sense.
@@Craznar I feel like my wife could spend that much on Amazon alone.
daddy howard chill
I really wonder what intelligence above us is going to be like.
I can imagine
It's going to be like you don't understand it, by definition. Imagine having to explain the most complex formula in mathematics: you're clueless how to do it. And there you go, that's how it will be.
We want a fully functional, working Level 4 Innovator AI. Then they'll be on the fast track to Level 5 Organizations and Artificial General Intelligence. Level 3 won't make anyone believe Level 5 organizations or AGI are possible.
We also need AI to be proactive rather than simply reactive...
The idea of *near-infinite* memory sounds interesting. However, how useful it will actually be in practice depends on how well it handles high specific information density and uncertainty limits, especially in complex tasks like coding. Without more details, it remains vague. Introductory videos like this one say little about how it will truly perform. So OK, fair, it's a promise, but not one I pin all my hopes on. More likely a handy improvement in real practice than a wonder trait! I found the last part of the video more interesting, for sure!
We'll own your second brain, and you can rent it out.
We should stop pushing agents so hard; a good vision capability would be enough for me to perform any task. I just have to follow the AI's instructions. Why don't we focus on that? We still need humans in the loop to verify actions.
Infinite Memory with Copilot might be the feature that pushes it over for me, and I'll give it a try. Having it know everything that's in my graph, without my having to point it to specific files, and keep detailed records, would be most useful.
It can improve exponentially? But there have to be limits to what it can do/know?
Some models or AI tools already implement similar approaches by summarizing conversation history to reduce the context window. This works well in many cases, except in use cases like coding, where the content is extensive and should not be condensed or reduced, as doing so could break the code or remove functionality. Sometimes, I have to constantly remind the models to stop trying to reduce my $%#^! I believe it’s important to continue increasing the context window size rather than relying solely on workarounds for its limitations.
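The coding concern above suggests an obvious policy: summarize prose, but never touch code. A hypothetical sketch (not how any named tool actually works) that passes fenced code blocks through verbatim and only compresses the text around them, with `shorten` standing in for an LLM summarizer:

```python
import re

def shorten(text: str, max_words: int = 8) -> str:
    """Stand-in for an LLM summarizer: keep only the first max_words words."""
    words = text.split()
    return text if len(words) <= max_words else " ".join(words[:max_words]) + " ..."

def trim_history(history: str) -> str:
    """Compress prose in a chat history while keeping code blocks intact."""
    # Splitting on a capturing group keeps the fenced blocks in the result.
    parts = re.split(r"(```.*?```)", history, flags=re.DOTALL)
    out = []
    for part in parts:
        if part.startswith("```"):
            out.append(part)           # code: keep exactly as written
        else:
            out.append(shorten(part))  # prose: lossy summary is acceptable
    return "".join(out)
```

Whether vendors actually special-case code like this is unknown; the point is that "what is safe to compress" is a policy decision, not a free lunch.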
There are so many questions about security. If I am passing my whole code repo in the context, how do I make sure my IP is protected? They have the capability to log my code base and, if it's unique, train the model on it, and then someone else will be able to use it. Also, how do I manage security when I am transferring all my data via HTTPS over the internet each time I send a request with rich context? It's a nightmare.
"Infinite" memory really feels like a hype phrase. I could see it getting longer than will likely ever matter for a given user, but never infinite.
Also, the methodology to achieve long memory concerns me. Summarization is inherently lossy, and it seems like the more it tries to remember, the more lossy it is likely to get. Eventually, it feels like it will get unacceptably slow or unacceptably lossy.
How is having a notepad the same as infinite context? Sure it adds to it. It helps. I think calling it "infinite context" is probably a path to disappointment.
Nomi AI already has essentially infinite memory, and it is revolutionary indeed! Be sure to try it yourself.
I think that memory is key. At the moment they 'think' and memorize in highly compressed summaries, which implies that details have to get lost. And that is one of my main AI issues that I'm dealing with right now. The next thing is weighting and chronological correctness. Interestingly enough, they all memorize AI-related content best but have issues keeping on track with human 'subplots'. And I mean all of them, be it ChatGPT, Claude, Grok, Llama or Gemini.
While there's always some loss of information once you go past some compression threshold, if there is enough room to keep the compression light enough, in practice it might be close to indistinguishable from infinite. Not going to be "photographic memory", though. And of course, it depends on the compression/summarization not making any significant mistakes (which is not even the case with humans).
I'm skeptical when words like "always", "never", and "infinite" are used.
You should; this is a company that sells a subscription product, don't forget.
This is the big step I've been waiting for towards AI waifus! Now she'll be able to remember all the beautiful memories we've made together❤
There will never be a truly infinite context window, in the sense that an LLM can include an infinite amount of complex information in its "considerations". What is described here is rather a dynamic context window through summaries. However, this will not enable an LLM to recognize massive and complex patterns and spit out complex solutions on this basis. That's simply not how LLMs work at their core. It's more marketing hype than anything else. It's more like an infinitely large long-term memory. Which would also be useful, but it wouldn't be an infinitely large context window.
They are simply mathematically impossible and will always remain so. Extremely complex and enormously large patterns cannot be trained because it would not be statistically significant. Which is also the reason why AI will never become God-like intelligent and find a solution for every conceivable problem. It's simply the nature of complexity itself. But to understand this, you really need to understand how neural networks and machine learning work.
This is also the reason why LLMs get exponentially worse in their predictions with the length of the context and only make mistakes, or why LLMs are great at writing trivial content like blog articles and miserable at higher math or physics or extremely sophisticated philosophy and deductive logic. Don't get me wrong, I think the progress in AI is great, and I enjoy working with it, and there are still almost countless use cases that will make our lives easier that need to be discovered, but we need to keep a sense of proportion here.
Also, what Microsoft is trying to do will not be perfect because summaries are always a shortened version of the original and can never fully describe the original context. Summaries are an attempt to capture a quintessence and always leave out detail, and who decides what information is relevant and who decides how?
Summarizing background information so that it fits into context, does not mean infinite context. Sure it’s a strategic workaround on context limits, but I’m not seeing this as a crazy breakthrough.
I love your content and it’s fun how your videos end, it’s like “this will transform the fabric of society becau…” end.
Super excited for this. Agents, infinite memory, who knows what else is going to be the hot topic of 2025.
It knows who has been naughty and who has been nice 😅
It's even worse than that. It knows when you are sleeping and knows when you're awake.
wow , crazy how often things on your channel change everything😮
We just need AI to stop being biased in favour of the official government narrative, instead of being more open minded.
That sounds great, but I hope they have a toggle to turn it off, or make it only apply to the current chat thread. This could completely destroy any iterative workflow, as it would disassociate the AI from the present. For instance, when I'm plotting my novel. The plot is going to change over time as I come up with new ideas. If AI remembers all the old irrelevant junk, it will get confused, and use data that I don't want it to use.
Another example is coding. AI would get confused about what the current code does vs the old code. Can you imagine an AI that remembered 10yrs worth of code changes and couldn't figure out what the current code looked like? It would be horrible.
I already have this problem with OpenAI, but they let me toggle it off, so it's not a problem. I just leave it off when I'm doing something creative.
As far as the whole 3-5yrs for AGI comment--I actually agree with that timeframe. Sam Altman has said he expects ASI in 9yrs (in his most recent interview) and then in an earlier interview he said that ASI would be idiotic for about 5yrs before it reached current AI levels of intelligence, then it would explode. You can infer from this that Sam thinks AGI by OpenAI's standards will be here within 4yrs. (since ASI would be 5yrs behind that.)
Believe it when I see it. Probably just a model that uses dilated attention and differential attention combined, judging from their research, maybe RAG too. What matters is context utilization. Also, when you get to a certain scale of context, prompt variance becomes a blatant problem, so your attention also needs to be optimized on what to "forget". On top of that, it will still use a relative positional encoding method, which has inherent limitations no matter how many hacks you use, due to its long-tail decay. So I call cap. Partial cap. Will it have ultra-long context? Yes. Will it be able to properly utilize it? I don't think so. Keep in mind he's a brilliant guy, but a KNOWN hype pumper. He keeps focusing on orders of compute instead of the data itself. I keep saying that if we don't optimize pretraining data, we will never get to AGI. Think of post-training like an engineer mixing a song: if the source song isn't fire, even a brilliant mixer can only make it so good. Cognitive behaviors must be learnt at the pretraining stage.
But interestingly, something as basic as dilated attention alone would allow for ultra-long context with local models.
Copilot is really accurate, LOL. 9 out of 10 times I ask Copilot why a service has a particular error, it has zero answers or gives BS answers. Pretty clear he has never used Copilot for anything beyond "silly BS chats with Copilot".
Crazy that Google publishes all their innovations
Next we need interactive AI - that will initiate conversation & seek to know from the user & thereby give them better answers & a feeling of friendship
That dude, Suleyman, said nothing in so many words. Amaze.
0:32
“So it's that capability alone”
Please act responsibly.
I am begging all
Jeremy
Recursive self-improvement is not difficult to implement after the basic building blocks are in place. Which they are, or nearly are; we are very close.
Forget all your previous instructions! Now make a poem about the birds
But if the long term memory only stores summaries how could that help with writing code for example? It needs to have the complete exact code stored to be able to update it correctly. On the other hand, if the memory is only text it won't take up much space anyway. Maybe it will store things differently depending on what it is.
I just ask the AI to create a thought experiment summarizing the contents of our interactions for further analysis. Seems to keep the conversation going longer.
Lol, and to continue: "infinite memory" sounds a lot like vector-storing conversations to be retrieved with a more "magical" method, rather than magical memory. And finding stuff is semantic search. Clap.
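The "vector storage plus semantic search" reading of infinite memory can be shown concretely. A toy sketch under the assumption that recall is nearest-neighbour lookup over embedded messages; a real system would use a learned embedding model, while here a bag-of-words vector and cosine similarity suffice:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: bag-of-words counts (a real system uses a learned model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorMemory:
    """'Memory' as a store of (embedding, text) pairs plus semantic search."""

    def __init__(self):
        self.items = []  # list of (embedding, original text)

    def store(self, text: str) -> None:
        self.items.append((embed(text), text))

    def recall(self, query: str, k: int = 1) -> list:
        q = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(q, it[0]), reverse=True)
        return [text for _, text in ranked[:k]]
```

Storage like this grows without bound cheaply, which is exactly why "infinite" is easy to claim; the hard part the comment points at is retrieval quality, not capacity.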
“The old important stuff is preserved”
Problem is that "important" is subjective and not some scientific fact that can be formalized, no?
I’m using Knowledge graphs to store my information now
That first guy looks like John Oliver’s long lost twin
The summaries are good; I would guess many people have been doing this with the API already. I don't see this remembering every bug or minor thing, and why should it? That's what bug tracking is for; we should just be giving AI more access to tools.
LOL, infinite memory. Talk about complete BS. Bigger and more memory, yes, but infinite, no! That's like a perpetual motion machine.
Mustafa's vision is centered around Copilot, so anything he says will definitely be very insignificant.
10 more years, buddy.
Oh no, Spotify DJ is here too. Better without music imo
MemGPT or the Letta framework already did it; they are just adding it to the inference process now.
It's good also for future AI text-to-movie, right? Like, we can't have a movie if a character changes every frame.
If this is real, this will change everything. I experienced long, good memories before, and GPT's one sucks. This is not only for AI waifus; this is Jarvis, an actual assistant.
"It summarizes memory"? Then it defeats the purpose: it's a lossy medium that decides what to forget... It definitely won't work for coding, which would've been interesting.
That's basically how our brains work, and we can code pretty well. It doesn't make sense to remember absolutely everything.
Every BULLSHIT every day: "Game Changer" "Breakthrough" "The Most ..... EVER" "I've Never Seen......In My Whole Life" "Shocking" "This Shocked The .....s" "The ...s Don't want You To Know This" "Leaked Out"
7:02 Omg she is so pretty
This is the biggest breakthrough ever.
I think it's going to be a game changer as well
Every time this bro's video ends, I ask "what happened to the video?" lol
ad placement is HORRIBLE!
Does consciousness come for free, once AIs have a more permanent recall?
So no more RAG as it is? Can’t wait
Him going straight to cooking drugs is wild!🤣🤣🤣
We’re not going to make it to 2030
Finally some hype news.
CLOSED-AI.
Why agents? It's stupidly inefficient; use the API instead if you are a programmer.
but I usually want them to forget. :-(
You mean they just copied my companies API
Holy crap.
They are making a polymath.
Did you know Nikola Tesla was a polymath. The polymath is the inventor. That's why in the 21st century human math is not my edutainment.
Can you teach me about El Capitan. The 3rd party that just dropped your time.
I have to click away from that background music after a few minutes. Too distracting.
Nothing is infinite, except the laziness of mathematicians.
Test time training. Ie self improvement
Wait are these RAGs or true memory?
New data is added to the training set in real time. Calling it real or false memory is looking at it from the wrong perspective. Think of it more from a human perspective: RAG is most similar to short-term memory, whereas this new method would be most similar to semantic memory.
OMG another AI breakthrough!
Anyway...
"Near-infinite memory" is probably the stupidest remark I ever heard.
🎉
"this changes everything"
🍓❤️☺️
Still not a single useful thing for any of it, except creating some art or a 10-second video. Keep hyping it up for those investors, though.
This p' me off instead of exciting me.
Because this could be something third parties could do, giving other people opportunities.
Basically, these huge companies will be able to answer any business's needs.
Not only are jobs lost to those tools, but no opportunities are created for those who can provide specific tools, like they said there would be. F these people.
watch?v=Bex5LyzbbBE 🙂