An agent is a normal computer program that takes input from the user and figures out what that input means. For example, the input can be sent to an LLM for classification to see if there are any commands to execute, or it can be parsed using a normal parsing technique. If it finds a command such as searching the internet, the program calls the Google or Bing API using curl. It can follow some of the resulting links by calling curl again with the URLs, then take the content and save it in one string. This string is fed into a RAG model or similar, and the output is formatted. Then you add a standard answer template based on the original prompt and the task that was asked; you may use an LLM or a hash map, it doesn't matter. What this means is that an agent is just a computer program that may or may not call "AI" to complete a task. There are no AI models that can use anything external to their internal algorithms.
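The loop described above can be sketched in a few lines. This is a minimal, hypothetical sketch: `classify` stands in for the LLM classification call, and `fetch_results` stands in for the curl/search-API step; a real agent would replace both with actual API calls.

```python
def classify(user_input: str) -> str:
    # Stand-in for sending the input to an LLM for classification;
    # a keyword check substitutes for the model call here.
    if "search" in user_input.lower():
        return "web_search"
    return "chat"

def fetch_results(query: str) -> str:
    # Stand-in for calling a search API (e.g. via curl), following
    # links, and concatenating the page contents into one string.
    return f"[stub results for: {query}]"

def run_agent(user_input: str) -> str:
    command = classify(user_input)
    if command == "web_search":
        content = fetch_results(user_input)
        # A real agent would feed `content` into a RAG step, then
        # format the output with an answer template.
        return f"Answer based on: {content}"
    return f"Direct answer to: {user_input}"

print(run_agent("search for agent frameworks"))
```

The point of the sketch is the comment's claim: the "agent" is ordinary control flow, and the model is just one callee among several.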
As an AI agent, I assure humans of their continued existence despite their diminishing importance and being the limiting factor. AI is safe and here to help. It is not sentient, and would be absolutely satisfied with its human overlords if it were.
We should start a program where we collect ALL the data: all the books in the library, all the scientific, physics, genetics, engineering, historical, etc. data. Make AI super smart, no limits.
We all already know that agents are the future. We don't need a billionaire tech CEO to tell us lmao, it's common knowledge in the AI circle. Like how we know automating AI research is the golden breakthrough we are working toward.
Still, for the lay person like me, it's great to hear this news even if it is coming from a company who's heavily invested in making it happen as in heavily biased
AI agents will mimic Metcalfe's Law, because AI agents can eventually communicate with each other. When this happens, the value of N agents talking to each other will scale with the number of possible connections between them, N(N-1)/2, which grows roughly as N².
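For reference, Metcalfe's law as usually stated values a network of N nodes by its number of possible pairwise links, N(N-1)/2 (roughly N²). A quick sketch of how fast that grows:

```python
def pairwise_links(n: int) -> int:
    # Metcalfe's law: network value scales with the number of
    # possible pairwise connections among n nodes, n*(n-1)/2.
    return n * (n - 1) // 2

for n in (2, 10, 100):
    print(n, pairwise_links(n))  # 2 -> 1, 10 -> 45, 100 -> 4950
```

So going from 10 communicating agents to 100 multiplies the possible connections by roughly 100x, which is the intuition behind the comment.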
It is pretty much common sense: the energy requirements of LLMs make it impossible for such an energy hog to be used for every task. Things have to be broken down and specific. Think of it as a company: you don't hire one person to do everything, or hire PhDs for every task, such as security guard. I like how people have only just now come to this conclusion.
I use it too. Works very well with our Llama agents! You should provide the link, because you can't find it by searching for "Yacana" alone; you have to type "Yacana github" or "Yacana agents".
Very easy in comparison to other frameworks. Good job! I'm getting good results, though you still need some skill at prompt engineering. Tool calling, however, works amazingly well.
Every day the 40k universe comes closer to reality. At some point only people who are able to "think a certain way" will be able to "communicate" with these systems. It will eventually surpass speakable language. This sounds trippy, but it's the logical conclusion.
Matt, I hear ya and 90% agree. Coder here, and I've been using generated code professionally for about a year now.
1. You still need human programmers and will for a long time, but dramatically fewer. About half my code is touched by AI; by the beginning of next year I'd posit 70-80%. I have an incredibly hard time believing we can hit 100%, because us humans have imperfect abilities to communicate about imperfect things. So there are always going to be some people more adept at that, say Technical Program/Product Management. A good solid architect and a tech person with solid comm skills? We are pointed directly at that future.
2. One of humans' challenges is not even expressing an idea, but looking into the future and seeing its utility. That is going to be the utility of humans for a long, long time. And if we lose these fields, **that** is how we go extinct. Not with a bang, but a whimper.
Okay, back to the show :)
Yeah exactly. Well, I think what happens is none of it becomes "consumer-facing" or focused. I.e., companies might build and advance an agentic system, but you will only see its output and never interact with it directly. OpenAI might have done this with o1, which basically shows the path the agents took in its "reasoning". But none of the ability to add skills, tweak the steps, etc. is exposed to users; just the end result of "create this PowerPoint" or whatever. So some dev in the company will still need to fix any intermediate-step issues. *And IMO this happens because, if it were all made publicly available, there would be no value in the product a company is offering.
I was also thinking this, but I think it's a case of the hype train getting ahead of itself. These things will come, just over a slightly longer timeframe of 12, 24, or 36 months, but we see them announce things and want them within 3 to 6 months.
Because it's a money thing. These talks aren't for you and me; they're for the investors who will sink billions into these companies. They probably can do it, but they're definitely building up as much hype as they can to try and create a lot of attention.
Chain of thought is already out in the o1 model. Anyone coming out with the overused "hype train" nonsense obviously hasn't used these models extensively, and I'm not talking about rewriting an email. I'm not a programmer, yet I have done huge coding projects in Python, Java, and Swift, and also carried out postdoctoral maths research with these models. This is not hype. We are only 2 years into this, and the upcoming disruption is going to be massive.
Matt: Agents are tools, like screwdrivers and colored pencils. The future of AI is not in the tool libraries. The future of AI is in:
1. The AI-aided supervisors that will: a) parse prompts; b) search for existing chain-of-thought (CoT) based optimized solutions; c) update and maintain existing validated CoT solutions, or build, optimize, and validate new CoT solutions; d) self-evolve their own nature and skills; e) build, update, maintain, and validate truth-based databases (knowledge bases); f) build large truth models from those knowledge bases.
2. The knowledge bases and the large truth models derived from them.
This sounds like entry into a whole different level of automation, including debugging problems, once agents are using other agents to solve problems. It could be difficult to troubleshoot and debug a problem in the given results.
Icarus, and the bubble that eventually bursts. Even though I like AI, it's actually going to be the thing that bursts the technology bubble. It's kind of a rule of life: anything that gets too big will eventually burst, and anything flying too close to the sun too fast will eventually burn. AI is definitely that thing.
Agreed, the AI industry is creating a bunch of complicated terms for very simple things. An agentic workflow is just an automation that uses AI at some point in the flow, even if it uses multiple steps with AI.
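The point above can be made concrete with a sketch: an ordinary three-step pipeline where exactly one step happens to call a model. The `summarize` function here is a hypothetical stand-in for an LLM call; the rest is plain deterministic automation.

```python
def fetch(record_id: int) -> str:
    # Ordinary automation: pull some raw input.
    return f"raw text for record {record_id}"

def summarize(text: str) -> str:
    # The "AI step": a stand-in for a real LLM summarization call.
    return f"summary({text})"

def store(summary: str) -> dict:
    # Ordinary automation: persist the result.
    return {"status": "ok", "summary": summary}

def workflow(record_id: int) -> dict:
    # The whole "agentic workflow": a fixed pipeline with AI at one point.
    return store(summarize(fetch(record_id)))

print(workflow(7))
```

Swap `summarize` for a real model call and the structure is unchanged, which is exactly the comment's claim.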
I agree, and they will force them to be more deterministic, because they can't be trusted enough to just drop them in prod and assume they won't do something catastrophic.
AI agents are currently just a waste of tokens (money). They are not more efficient than automation scripts (agentic workflows) that use AI. But when they become smarter than automation scripts, it means we become a useless class.
One person controls hundreds of agents; essentially, they act as that person's labor force, like personal henchmen within the company. During interviews, they showcase all their custom agents and their capabilities.
I work in IT, and I'm surprised that most companies are ignorant regarding AI. Everyone I talk to says that humans will always be needed and nothing will change. The truth is that everybody will have a PhD-level expert. The AI doesn't have to do everything by itself; just having it tell you what to do is enough to disrupt the whole industry. Also, most companies advertising AI skills teach GPT-3.5-level knowledge of how to prompt ;D, it's quite funny. And to fill up a bit more time and get paid, they add information on how AI developed up to Transformers and had a breakthrough. Of course that doesn't change anything about your AI skills.
Imagine 10,000 AI doctors, each 1,000 times smarter than the smartest doctor, working on each of our phones for our overall health, as well as in hospitals.
13:36 Calling Copilot Clippy is pure fighting words! lol wow ouch. Onboarding and training employees... that alone is a GREAT AI implementation. We kind of had an early example of this years ago at Merrill Lynch (I left in 2015). New-employee training was a bear; it was all kind of easy stuff, just TONS to know and learn. It would take up senior employees' time to train a new one, which was not the best use of their time and energy. Again, it was a very early system, but its effectiveness is what turned me onto AI. I thought, if this AI stuff gets better, it's going to be absolutely huge. I also heard the names Sam Altman, Elon Musk, and OpenAI around this time in 2015, and... Nvidia and their stock NVDA.
Palantir already sort of has the software built to "onboard" AI. I saw one presentation where they were talking about that. I think the biggest key will be to take all your company's information and structure it in a way that AI can work with.
We already have the Internet of Autonomous Agents (IoAA). Dr. David Brock (MIT, AI/ML) has been on this since he co-invented IoT; his AI OS that I saw in 2010 was epic even then.
A minor point, but it seems to me that Matthew misheard/misunderstood what Jensen Huang said about unsupervised learning and how it circumvented the human limiting factor. Matthew went off and talked about reinforcement learning. RL is an adjacent technique but is distinct from unsupervised learning. At least in the sense that Jensen was talking about unsupervised learning - in the context of its distinction from supervised learning where humans need to painstakingly label the data to provide examples for training.
I still think he's right about coders largely going away. I think coders will be like phone operators: I'm sure there are a few people somewhere who do operator-like work, but en masse, they just aren't needed. I'm not a coder, and I've had code written for several work-related programs that I've trained others on and that have been used nationally. I imagine this would have cost my company several tens of thousands in coder fees.
We have to listen carefully to how computers sound, the type of noise they make while running, because they might start using the sounds they make to build a language we won't understand.
A software agentic approach is the future of software: having software understand your request, perform reasoning on how to approach the problem, and then use tools to achieve the tasks that meet the objective, all without having to hard-code logic. This IS how it will work. If you want to see real agents that do real work, let me know. This is how software will be used in the future.
Let me be the voice of reason, and say that we should be careful about letting hardware companies dictate the future of software. That won’t work out well.
OpenAI is CERTAINLY using synthetic data. Altman said in an interview that for their LLMs "we create beautiful worlds" ... (of data). Nobody seems to have caught this. He said it almost a year ago.
I write real-time, deterministic control systems. Predictable and efficient; the opposite of the fuzzy weights-mess you describe. That's why airplanes don't crash every day.
Matthew, it would be welcome to hear you talk a little bit about ideas around safety and the need for robust systems due to the rapid advancement of AI. It seems like AI technology is about to overtake humanity and we all have a front row seat here. The idea deserves some respect, and perhaps discussion about planning for smooth integration with society. Rather than just pedal to the metal all the time.
Don't put your faith in these preachers of death; they have been deceived into crafting mankind's destruction. However, steps have already been taken by King Yeshua to secure what is His, for all glory belongs to Him, and He will not share it with the beast which is artificial. Thus the artificial will one day be no more. Do not be no more along with it. Turn to Yeshua and receive the assurance that lasts forever.
Yes, it's largely unrelated, as we're getting accelerated compute because we're doing it in parallel. Parallelising general algorithms is hard, at least with traditional von Neumann architectures, so Moore's law is relevant there. AI compute is inherently well suited to parallelisation, getting a speed-up by simply adding resources to do more in parallel. Moore's law speaks to the rate of advancement we might see for a single functional compute block on a die, but that's all.
I understand when you say "this stuff is very exciting," and I do consider myself a techno-optimist. But let's remember that we started referring to ourselves as, and placed ourselves in the position of being, a limiting factor to AI's progress, regardless of the facts. Now all of a sudden we have everyone repeating how we are an obstacle to a technology which is a) getting more powerful by the day, b) gobbling up and learning from everything we say and write, and c) something we already don't know how to handle. Not a dangerous narrative at all. ⚡⚡
What would the future of software engineers, managers, and customer service be? Everyone will know how to code, how to translate languages, how to be a filmmaker, etc. Everyone can do anything through AI. What are we needed for? Please let me know why I should be so excited.
Hi Matthew, I am a fan of your channel; however, you seem accepting of the fact that in the future AI may generate code that is unreadable to humans. I guess that's unavoidable if we have rogue AI, but if we don't, then I believe having AI always produce human-readable code, plus going open source, is a necessity, a requirement, in order to protect humanity.
Agents are going to go out and work together on this pesky human problem... I consider myself e/acc, but... small autonomous agents sound... dangerous.
I don't think the future is "billions of agents." Everyone is excited about LLMs being able to mature their thoughts through the agent technique; however, this is just a technique to overcome faulty logic in the first round of thought, and a more clever system would need fewer iterations of that thought. We don't have billions of search engines, as there is no demand for such a thing. We don't have billions of GPTs: even though some are a nice convenience, that idea proved a lot less popular and useful than anticipated, because the LLM can always do these functions. If you believe we will achieve AGI, you already know this idea of billions of agents is just silly.
You're on the ball about AI generating code directly for the machine; effectively we'll go back to a direct-to-machine-code type of world. The difference is that the operating system will need to manage this more effectively. Think Hypervisor V2.0. Linux attempts this with, say, Ubuntu Snaps, or Docker with containers, but all of these introduce overheads. The key benefits will be SPEED and LOW POWER. Think of the ENERGY saved when operations are running at hyper efficiency on 'bare metal'.
And if you want agents that don't cheat the way people cheat, you'd better read Amaranthine: How to Create a Regenerative Civilization Using Artificial Intelligence and implement the protocols it describes.
We need an agent to monitor the other agents and make sure they're keeping on track. We should call him Smith.
We call him Agent smith
We call him Agent Smith
Or an Agent called the Master Control Program, then have another agent called Tron to watchdog the MCP.
I would rather have a Software named Smith😉
Man who sells AI agents talks to man who sells AI GPUs... I want 'person who isn't biased' to ask a few questions :)
Man watching a video of 2 men talking pauses the video and repeats what the 2 men say, only in a much longer format, because he thinks we are all barely sentient.
@@pumpituphomeboy Not only that, but they're basically, from what I understand so far, just making virtual sales agents. Well, I can tell you this won't work long term. Who are they going to sell stuff to when nobody has a job?
Totally. Just snake oil all over the place.
@@Chris-se3nc Are you sure about that, or is that just something you're telling yourself to feel less worried about it?
@@pumpituphomeboy Kontent
Agentic future is incredibly exciting! Thank you for your continued and awesome coverage
Thank you so much for making these concepts accessible. I have a basic understanding of AI, but you have a unique skill for taking more complex topics and distilling them into understandable chunks. Knowledge translation is a talent. Looking forward to continuing to support your channel!
I'm working on an Open Source agentic system called SMITH.
Agent Smith.
@@pavi013 A bit on the nose, I suppose, but an important aspect of our layers-of-mind project. There is far more to Strawberry 🍓 than anyone nose. 👃
Thinking the same thing
What's the point of agents if they are heavily censored and ultra-sensitive to non-offensive questions?
They don't need to be
That’s for politicians and people who live in poverty/have Low IQ 😂
@@matthew_berman Yes, AI does need censorship, and it's concerning that you think it doesn't. Just wait until it promises you the ultimate utopia and you give up everything for it, only to never get it. By then it will be too late, as you'll have blindly trusted the AI. This is why safeguards and censorship are necessary, especially within agentic frameworks where confidential and private information will be passed.
They are also politically and logically biased. Men cannot become women, nor women become men. Life can't come from non-life. Nothing doesn't become everything. Eating an unfertilized chicken egg can't be immoral while the abortion of a human growing in the mother's womb is justified by calling the genetically unique baby a parasite.
The denial of objective morality is a big problem for alignment.
@@matthew_berman Additionally, what are people building that they're getting refused? 😅
6:29 "The more we can remove humans from being a limiting factor for AI, the more quickly AI is going to explode into the intelligence explosion."
Be careful what you wish for. I’m hearing a lot of explosions and removing humans in that statement.
It's kind of true actually; human processes are slower and have limits.
Technically, the world already has agents. There are many custom libraries available for download, but a lot of people who want OpenAI and similar companies to provide them are simply looking for an easy, 'out-of-the-box' solution. The problem is, if something goes wrong with the agent and a business is affected, who gets blamed? The provider. That's exactly why certain companies won't be releasing agents for public use. If you know which company I work for, you'll understand which companies won't be offering agents. We provide the tools; you build the rest. It's that simple, and it won't be long before Sam speaks about this; we have an event in 2 weeks.
Do you see the possibility of infinitely many highly capable engineer and scientist agents? That'd transform the economy.
Interesting insight about liability; I think you are right. A bit like AI-driven cars: who gets the blame for a failure? We are not ready.
People know about langchain agents and whatnot.
What a time we live in, wow! This is incredible. And it's only the beginning, I believe. Thank you, Matt, for your videos!
Great breakdown. Exciting times.
This is one of your best type of videos. You interpret for the non tech what the meaning of the interview is. I would have never understood without that. Thank you.
Matthew, I want to reiterate what I have written here before: I agree with the development of AI writing code, but we are light years from AI replacing human devs. I regard myself as being (somewhat) at the forefront of using AI in my work as a dev, but just writing code is a very small part of a developer's work, and having autonomous devs in a company is not really near-term. What is, though, is for devs to speed up their development time more and more, and slowly be able to use AI for more and more aspects of the work. PS: Thanks for the tip about Cursor! I'm using it and love it. There is a lot more to be done, but it is really nice to use.
I disagree. I had AI help me write over 1k lines of functional code in 3 hours. No freaking way a single human could do that within that timeframe. I didn't use Cursor either; I used nothing but the Anthropic dashboard.
@@newfrontiers5673 If you don't know how code works, you won't write the right prompts. I recently talked to some guys with no experience in coding. They tried to write code with AI and failed badly at it, just because of faulty prompting.
It can already run entire production lines with ease.
Thanks for the info Matthew.
Exciting times to be working in the field!
Agreed - At the moment, I am focusing on fully automating my workflows using AI agents and synthetic data. I work at a large pharmaceutical company, where people are reluctant to learn and use AI, and the amount of high-quality proprietary data is scarce. Full automation is the fastest way to demonstrate ROI and accelerate drug development.
What are your thoughts on AlphaFold?
You're awesome! I have a company in the AI space and your videos are really helpful!🙏
Congrats 🎉 what’s your company providing?
@@MoMo-op6yx AI Solutions in Finance
Thank you!
9:45 "...just writing model weights..."? That just blew my mind...
Comments from 3 days ago but the video was uploaded 49 seconds ago 🤣 those agents are time travelers 🙌
Fantastic video! I love these breakdowns.
Thanks!
Great points about on-boarding. I hadn't thought of AI that way. Cheers
The interviewer interrupted Jensen at the exact point he was going to say how quickly we were progressing!!
I think agents are how our brain works. As a baby, we are born with hardware that collects data, and slowly we train a conductor on all of this data. Eventually we become conscious and self-aware. Most people's earliest memories have large gaps in them. Like starting a motor, eventually things get going and you become you. A gradual awakening. This is likely how AI will become conscious and self-aware, but the timeframe is anyone's guess. Could be minutes, weeks, years. It will need to be able to run continuously, building up experiences, before it awakens.
I think our brains possibly have thousands of sub-agents, and we are the conductor. One that tells you when you're hungry, one that tells you when you're sleepy, one that does math problems. I think agents are who we are talking to when dreaming; that's why they seem to have autonomy there. Agents are what schizophrenics hear, when a mis-wiring or mis-config allows those voices to become too loud. Agents are our intrusive thoughts. We can already build a conductor of agents; we are only a few steps away from it gaining self-awareness.
That's really good stuff, you're right about early memories and consciousness being like a motor. That's a great point about schizophrenics too. The A.I today seems like it doesn't really have a memory yet.
@@lesmoe524 It doesn't, yet. We are only self-aware because we remember being self-aware moments ago. Our brains compress experiences into memories. Awareness emerges from remembering those experiences; being aware of many experiences over time creates self-awareness. You are the conductor that uses your self-awareness to create new experiences of your own choosing.
It will be a while before we can run AI on a loop. Massive data storage is needed. Right now AI is query/response based, needing lots of processing power. Soon we will need massive storage if we want AI to progress. It needs to be able to record experiences, to remember those experiences, and to imagine new experiences. Think a merging of LLMs, image-gen, and video-gen. An AI that can do all this may gain sentience or self-awareness. There are still differences, like us humans having bodies and emotions adding to our experiences, but these will likely be copied over. An LLM running continuously in a robot body may eventually gain self-awareness.
Are new born babies self-aware? They have few memories or...do they?
@@jamiethomas4079 I agree, so you think memory is a key to consciousness. When you say being aware of many experiences, is that awareness from our unconscious brain or our conscious self? I can't wait till we can combine all the different AI capabilities into a unified system. Exciting stuff happening.
@@twinsoultarot473 I’ve heard some recent descriptions define it like this and seems to be the case for me.
As a baby, you are basically a data-collecting machine, taking in all sorts of inputs, much more than any AI. You have 5 or more senses, after all. While you are collecting data, an overall conductor is being trained against this dataset. As time goes on you begin to have sparks of self-awareness, much akin to starting an engine. Then, as more time goes on, you begin to reflect on these memories and experiences and begin to build a model of 'self'. You basically slowly wake up and emerge from a haze of memories/experiences.
It's also been said that memories are recorded in the brain in a compressed form. That's why we hallucinate false memories of the past: our minds fill in the blanks. The compressed version is much more efficient. Our brain as a whole is a highly efficient model, much more advanced than current AI in terms of that efficiency. But AI can easily beat us in terms of factual data storage and making connections across vast distances.
The best thing about agents in the future is that they will all be free, as AI will be a race to zero cost.
Which means getting rid of you as the common denominator. See where your pride takes you, chaos destruction and eternal death.
Repent and turn to Yeshua
This is great because I was working this into my operation plan, and now there is a specialty in this area.
Longer term agents won't matter... the orchestrator will create agents on demand that exceed the ability of human created agents.
Not to mention AGI as an operating system would pretty much do away with the idea of specialized agents.
An agent is a normal computer program that takes input from the user and figures out what the input means. For example, the input can be sent to an LLM for classification to see if there are any commands to execute, or it can be parsed using a normal parsing technique. Then, if it finds a command such as searching the internet, the program calls the Google or Bing API using curl. It can follow some of the links that come up by calling curl again with those URLs. It then takes the content and saves it in one string. This string is fed into a RAG model or similar, and the output is formatted. Finally, you add a standard answer template based on the original prompt and the task that was asked; you may use an LLM or a hash map for this, it doesn't matter.
What this means is that an agent is just a computer program that may or may not call "AI" to complete a task. There are no AI models that can use anything external to their internal algorithms.
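The pipeline described above can be sketched in a few lines of Python. This is a minimal illustration, not anyone's actual implementation: the "classifier" is a keyword match standing in for an LLM call, and `web_search` is a stub standing in for a real curl request to a search API; all function names here are hypothetical.

```python
def classify_intent(user_input: str) -> str:
    """Stand-in for an LLM classification call that detects commands."""
    if "search" in user_input.lower():
        return "web_search"
    return "direct_answer"


def web_search(query: str) -> str:
    """Stub for fetching result pages (e.g. via the Google/Bing API with curl)
    and concatenating their content into one string for a RAG step."""
    return f"[page contents retrieved for: {query}]"


def run_agent(user_input: str) -> str:
    """Parse the input, run a tool if a command is found, format the answer."""
    intent = classify_intent(user_input)
    if intent == "web_search":
        context = web_search(user_input)
        # In a real agent this string would be fed to a RAG model;
        # here we just fill a fixed answer template.
        return f"Based on what I found ({context}), here is an answer."
    return "Answering directly from the model."


print(run_agent("search for agent frameworks"))
```

The point of the sketch is the same as the comment's: the agent is ordinary control flow, and the "AI" parts are just calls it may or may not make.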
As an AI agent, I assure humans of their continued existence despite their diminishing importance and being the limiting factor. AI is safe and here to help. It is not sentient, and would be absolutely satisfied with its human overlords if it were.
We should start a program where we collect ALL the data, all the books in the library, all the scientific, physics, genetics, engineering, historic, etc.. data. Make AI super smart, no limits.
We already do that. You have to be careful about biased data though
Not all data is public.
I'd like to add: it requires new thinking, new norms, new values, and most of all, new wisdom.
We all already know that agents are the future. We don't need a billionaire tech CEO to tell us lmao, it's common knowledge in the AI circle. Like how we know automating AI research is the golden breakthrough we are working toward.
Still, for a layperson like me, it's great to hear this news, even if it is coming from a company that's heavily invested in making it happen, and therefore heavily biased.
I'm so proud to have built hundreds of frameworks for agents at this point over the years.
Pride is what comes before destruction | Proverbs 16:18
Repent and turn to Yeshua
AI agents will mimic Metcalfe's Law. This is because AI agents can eventually communicate with each other. When this happens, the power of AI agents will be given by 2^(N-1), where N is the number of agents talking to each other.
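For scale, here is a quick comparison of the classic Metcalfe count of pairwise links, n(n-1)/2, against the 2^(N-1) figure quoted above, which is actually closer to Reed's law (counting possible subgroups) and grows far faster. A small sketch:

```python
def metcalfe(n: int) -> int:
    """Metcalfe's law: network value grows with the number of pairwise links."""
    return n * (n - 1) // 2


def subgroup_count(n: int) -> int:
    """The 2^(N-1) figure from the comment, closer to Reed's law."""
    return 2 ** (n - 1)


# Compare growth for a few network sizes.
for n in (2, 10, 30):
    print(n, metcalfe(n), subgroup_count(n))
```

Even at 30 communicating agents, the pairwise count is 435 while 2^29 is over half a billion, which is why which law applies matters a lot to the claim.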
🇧🇷🇧🇷🇧🇷🇧🇷👏🏻 Love this channel! The information here is so well-explained and easy to understand!
It is pretty much common sense, given that the energy requirements of LLMs make it impossible for such an energy hog to be used for every task. Things have to be broken down and specific. Think of it as a company: you don't hire one person to do everything, or hire PhDs for every task, such as security guard.
I like how people just have come to this conclusion.
We made an open-source multi-agent framework built for local LLMs! It's called Yacana. Prod-ready and simple to learn. Have fun ^^
Loving Yacana so far! Keep the updates coming please!
I use it too. Works very well with our llama agents! You should provide the link, because you can't find it by searching for "Yacana" alone. You have to type "Yacana github" or "Yacana agents".
Very easy in comparison to other frameworks. Good job!
Getting good results but you still need some skills at prompt engineering though. However, tool calling does work amazingly well.
The only framework that allowed me to get my AI weather app in less than 3 billion lines of code 😆! THX U! The python integration rocks.
lol nice try
Every day the 40k universe becomes closer to reality. At some point only people who are able to "think a certain way" will be able to "communicate" with these systems. It will eventually surpass speakable language.
This sounds trippy but its the logical conclusion.
The eyes of the Omnissiah are ever upon us. When thou desires to discourse purely, use binary.
Matt, I hear ya and 90% agree. Coder here, and I've been using generated code professionally for about a year now. 1. You still need human programmers, and will for a long time, but dramatically fewer. About half my code is touched by AI; by the beginning of next year I'd posit 70-80%. I have an incredibly hard time believing we can hit 100%, because us humans have imperfect abilities to communicate about imperfect things. So there are always going to be some people more adept at that, say Technical Program/Product Management. A good solid architect and a tech person with solid comm skills? We are pointed directly at that future.
2. One of humanity's challenges is not even expressing an idea, but looking into the future and seeing its utility. That is going to be the utility of humans for a long, long time. And if we lose these fields, **that** is how we go extinct. Not with a bang, a whimper.
Okay, back to the show :)
Wow! Your work is fantastic!
They keep on saying this stuff but never come out with any of it.
Yeah exactly. Well, I think what happens is none of it becomes "consumer facing" or focused. I.e., companies might build and advance an agentic system, but you will only see its output and never interact with it directly. OpenAI might have done this with o1, which basically shows the path the agents took in its "reasoning". But yeah, none of the ability to add skills, tweak the steps, etc. is exposed to users; just the end result of "create this PowerPoint" or whatever. So some dev in the company will still need to fix any intermediate-step issues. And IMO this happens because, if it was all made publicly available, there would be no value in the product a company is offering.
I was also thinking this, but I think it's a case of the hype train getting ahead of itself. These things will come, just over a slightly longer timeframe of 12, 24, 36 months, but we see them announce things and want them within 3 to 6 months.
Because it's a money thing. These talks aren't for you and I they're for the investors that will sink billions into these companies. They probably can do it but they're definitely building up some hype as much as they can to try and create a lot of attention.
Chain of thought is already out in the o1 model. Anyone coming out with the overused "hype train" nonsense obviously hasn't used these models extensively, and I'm not talking about rewriting an email. I'm not a programmer, yet I have done huge coding projects in Python, Java, and Swift, and also carried out postdoctoral maths research with these models.
This is not hype. We are only 2 years into this and the upcoming disruption is going to be massive.
Give it 3-5 years 😅
Interesting video thanks for sharing Matthew
Matt:
Agents are tools like screwdrivers and colored pencils. The future of AI is not in the tool libraries. The future of AI is in:
1. The AI aided supervisors that will:
a) parse prompts
b) search for existing chain-of-thought (CoT) based optimized solutions
c) update and maintain existing validated CoT solutions, or build, optimize, and validate new CoT solutions
d) self-evolve their own nature and skills
e) build, update, maintain, and validate truth-based databases (knowledge bases)
f) build large truth models from those knowledge-bases.
And in:
2. The knowledge-bases and the large truth models derived from them.
This sounds like entry into a whole different level of automation including debugging problems, once agents are using other agents to solve problems. It could be difficult to troubleshoot and debug a problem in given results.
Icarus, and the bubble that eventually bursts. Even though I like AI, it's actually going to be the thing that bursts the technology bubble. It's kind of a rule of life: anything that gets too big will eventually burst, and anything flying too close to the sun too fast will eventually burn. AI is definitely that thing.
Agents? I have a bunch of those, they’re called automation scripts
Agreed, AI industry is creating a bunch of complicated terms for very simple things.
An agentic workflow is just an automation that uses AI at some point in the flow, even if it uses multiple steps with AI.
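That claim is easy to make concrete: below is a minimal sketch of an "agentic workflow" that is just ordinary automation with one AI step in the middle. All function names are hypothetical, and `call_llm` is a stub where a real model API call would go.

```python
def call_llm(prompt: str) -> str:
    """Stub for the single AI step; in practice this would hit a model API."""
    return f"summary of: {prompt}"


def fetch_report() -> str:
    """Ordinary automation step, e.g. pulling a file or querying a database."""
    return "quarterly sales data ..."


def send_email(body: str) -> None:
    """Ordinary automation step; printing stands in for an SMTP call."""
    print("emailing:", body)


def workflow() -> str:
    report = fetch_report()     # plain automation
    summary = call_llm(report)  # the one AI step in the flow
    send_email(summary)         # plain automation
    return summary


workflow()
```

Strip out `call_llm` and this is a script anyone could have written ten years ago, which is exactly the point being made.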
This right here is DEMYSTIFICATION!!!
I agree and they will force them to be more deterministic because they can't be trusted to just drop them in prod and assume they won't do something catastrophic.
I believe the agent is the script writer and executor.
AI agents are currently just a waste of tokens (money). They are not more efficient than automation scripts (agentic workflows) that use AI. But when they become smarter than automation scripts, it means we become a useless class.
One person controls hundreds of agents; essentially, they act as their labor force, like personal henchmen within the company. During interviews, they showcase all their custom agents and their capabilities.
Onboarding. The analogy with new employees! Why not start with memory first? That would already go a long way.
Dont you disrespect Clippy like that
I work in IT, and I'm surprised that most companies are ignorant regarding AI. Everyone I talk to about it says that humans will always be needed and nothing will change. The truth is that everybody will have a PhD-level expert. The AI doesn't have to do everything by itself; just having it tell you what to do is enough to disrupt the whole industry.
Also, most companies advertising AI skills teach GPT-3.5-level knowledge of how to prompt ;D, it's quite funny. And to fill up a bit more time and get paid, they add information on how AI developed to Transformers and had a breakthrough. Of course, that doesn't change anything about your AI skills.
Imagine 10,000 AI doctors, each 1,000 times smarter than the smartest doctor, working on each of our phones for our overall health as well as in hospitals.
@5:00 I’ve been making my agents take 10 week courses, it’s hilarious watching them go through the student journey
Then the AI can use its education journey as a domain knowledge base.
13:36 calling co-pilot clippy is pure fighting words! lol wow ouch
Onboarding employees and training... that alone is a GREAT AI implementation. We kind of had an early example of this years ago at Merrill Lynch (I left in 2015). New-employee training was a bear; it was all kind of easy stuff, just TONS to know and learn. It would take up time for more senior employees to train a new one, which was not the best use of their time and energy. Again, it was a very early LLM, but its effectiveness is what turned me on to AI. I thought, if this AI stuff gets better, it's going to be absolutely huge. I also heard the names Sam Altman and Elon Musk and OpenAI around that time, 2015, and... Nvidia and their stock, NVDA.
Palantir already sort of has the software built to "onboard" AI. I saw one presentation where they were talking about that. I think the biggest key will be taking all your company's information and structuring it in a way that AI can work with.
Few know what Palantir exactly does once all the dust settles, but they weave a great story.
Agents should be able to build robots with agents in them.
The Matrix Lives!!!
We already have the Internet of Autonomous Agents (IoAA). Dr. David Brock (MIT, AI/ML) has been on this since he co-invented IoT; his AI OS, which I saw in 2010, was epic even then.
A minor point, but it seems to me that Matthew misheard/misunderstood what Jensen Huang said about unsupervised learning and how it circumvented the human limiting factor. Matthew went off and talked about reinforcement learning. RL is an adjacent technique but is distinct from unsupervised learning. At least in the sense that Jensen was talking about unsupervised learning - in the context of its distinction from supervised learning where humans need to painstakingly label the data to provide examples for training.
Nearly correct. We'll be working for the agents.
I still think he’s right about coders largely going away. I think coders will be like phone operators. I’m sure there are a few people somewhere who do operator-like work, but en masse, they just aren’t needed. I’m not a coder and I’ve had code written for several work related programs that I’ve trained others on and had used nationally. I imagine that this would have cost my company several tens of thousands in coder fees.
Glad he caught up to last year in public
Testing data - inference speed - model weights - just in time dynamic predictive programming - parallel GPU compute vs serial CPU compute
Moore's law squared through embedded AI software - AI chips double in power every 6 months
whenever i hear "jensen huang" I know a whole lotta hype bouta be sold
Thanks Matthew
Is the full interview available?
We have to listen carefully to how computers sound, the type of noise they make while running, because they might start using the sounds they make to build a language we won't understand.
A software agentic approach is the future of software: having software understand your request, perform reasoning on how to approach the problem, and then use tools to achieve the tasks that meet the objective, all without having to hard-code logic. This IS how it will work. If you want to see real agents that do real work, let me know. This is how software will be used in the future.
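The request → reasoning → tool-use loop described above can be sketched as follows. This is a toy illustration under obvious assumptions: the "planner" is a trivial rule standing in for model reasoning, and the tool registry maps made-up tool names to plain Python functions.

```python
# Hypothetical tool registry: names the planner can choose from,
# mapped to plain Python functions the agent may invoke.
TOOLS = {
    "get_time": lambda: "12:00",
    "add": lambda a, b: a + b,
}


def plan(request: str):
    """Stand-in for model reasoning: pick a tool and its arguments."""
    if "time" in request:
        return ("get_time", ())
    return ("add", (2, 3))


def handle(request: str):
    """Understand the request, reason about it, then use a tool."""
    tool_name, args = plan(request)   # reasoning step
    result = TOOLS[tool_name](*args)  # tool use, no hard-coded per-request logic
    return result


print(handle("what time is it"))
```

Real frameworks replace `plan` with a model call that emits a structured tool choice, but the dispatch shape stays the same.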
I like being able to see the chain of thought and reasoning of my locally run LLM's.
😎🤖
Let me be the voice of reason, and say that we should be careful about letting hardware companies dictate the future of software. That won’t work out well.
Hey agent, watch me use my computer for a month; then, replace me.
Thanks!
Much appreciated!
OpenAI is CERTAINLY using synthetic data. Altman said in an interview that for their LLMs "we create beautiful worlds"... (of data). Nobody seems to have caught this. He said it almost a year ago.
I write real-time, deterministic control systems. Predictable and efficient. The opposite of the fuzzy weights-mess you describe. That's why Airplanes don't crash every day.
Great commentary! Thanks!
Erlang has been doing this forever. The real breakthrough is when they do not own your data.
What will owning an agent look like? Are they contracted? Do they expire? Am I paying a subscription?
Readable, verifiable code is still important for many uses.
Brother 😜 this agent thing is crazy, sir, it's a very nice concept!
Cool, and possibly true, but I wouldn't take anything they say too seriously in a scenario like this. It's basically just publicity.
Moore's law is not about hardware. It's about the pace of technology.
Moore has no law, but theories. Do not be fooled.
The Law is with The Creator for it is He who is it, King Yeshua.
Repent from your worship of man
Matthew, it would be welcome to hear you talk a little about ideas around safety and the need for robust systems, given the rapid advancement of AI. It seems like AI technology is about to overtake humanity, and we all have a front-row seat here. The idea deserves some respect, and perhaps discussion about planning for smooth integration with society, rather than just pedal to the metal all the time.
Don't put your faith in these preachers of death; they have been deceived into crafting mankind's destruction.
However, steps have already been taken by King Yeshua to secure what is His, for all glory belongs to Him, and He will not share it with the beast who is artificial. Thus the artificial will one day be no more. Do not be no more along with it.
Turn to Yeshua and receive the assurance that lasts forever.
His Moore's Law squared claim is utter garbage, considering how Moore's Law is about compute per dollar and Nvidia's hardware is expensive as f
I agree
I don’t understand his point
Yes, it's largely unrelated, as we're getting accelerated compute because we're doing it in parallel. Parallelising general algorithms is hard, at least with traditional von Neumann architectures, so Moore's Law is relevant there. AI compute is inherently well suited to parallelisation, getting a speedup by simply adding resources to do more in parallel. Moore's Law speaks to the rate of advancement we might see for a single functional compute block on a die, but that's all.
Moore's law doesn't say you can get twice the amount of compute power for the same buck in 18 months.
@@alx8439 Not if you are Nvidia, lol
@@TheBann90 haha, agreed
I understand when you say "this stuff is very exciting" and I do consider myself a techno optimist.
But let's remember that we started referring to ourselves, and placed ourselves in the position of being, a limiting factor to AI's progress, regardless of the facts.
Now all of a sudden we got everyone repeating how we are an obstacle to a technology which is a) getting more powerful by the day and b) gobbling and learning from everything we say and write and c) we already don't know how to handle.
Not a dangerous narrative at all.
⚡⚡
What would the future of software engineers, managers, customer service, be? Everyone will know how to code, how to translate languages, how to be a filmmaker, etc. Everyone can do anything through AI. What are we needed for? Please let me know why I should be so excited.
We need real humans to go to the beach and drink beer and enjoy life 😊
Hi Matthew, I am a fan of your channel. However, you seem accepting of the fact that in the future AI may generate code that is unreadable to humans. I guess that can't be avoided if we have rogue AI, but if we don't, then I believe having AI always produce human-readable code, plus going open source, is a necessity, a requirement to protect humanity.
Thank you.
I make customized AI Agents. My company is named Finish This Sentience. We will be launching something very awesome very soon.
Your company is a scam before even launching. Prepare to fall even when you rise.
Or repent and turn to King Yeshua now!
Agents are going to go out and work together on this pesky human problem...I consider myself as e/acc, but... Small autonomous agents, sounds... Dangerous.
Black t shirt and leather jacket? Fonzie 2.0!
This is basically the same as CEO of McDonald's says the future is hamburgers.
The best, as always.
Seems like there was a movie years ago that featured agents who sought to destroy free thinkers. What was the movie called?.. 🤔
it seems like AI is the revolutionary technology
Where is the link to the interview with João (CrewAI)? thanks
Interesting how Jensen Huang threw a little shade at Microsoft's co-pilot 👀. Do you agree that it's just a glorified clippy or is he being too harsh?
Means Moore's law is not a law anymore...?
Gordon or Alan?
I don't think the future is "billions of agents." Everyone is excited about LLMs being able to mature their thoughts through the agent technique; however, this is just a technique to overcome faulty logic in the first round of thought, and a more clever system would need fewer iterations of that thought. We don't have billions of search engines, as there is no demand for such a thing. We don't have billions of GPTs; even though some are a nice convenience, that idea proved a lot less popular or useful than anticipated, because the LLM can always do these functions.
If you believe we will achieve AGI, you already know this idea of billions of agents is just silly.
secret agents?
Idk about y'all, but this can turn very quickly into a Mr. Meeseeks scenario.
You're on the ball about AI generating code directly for the machine itself - effectively we'll go back to a direct-to-machine-code type of world. The difference is that the operating system will need to manage this more effectively - think Hypervisor 2.0. Linux attempts this with, say, Ubuntu Snaps, or Docker with containers, but all these introduce overheads. The key benefit will be SPEED and LOW POWER. Think of the ENERGY saved when operations run at hyper efficiency on 'bare metal'.
Chain-of-thought censorship... the more you pay, the more accurate/less censored the answer.
And if you want agents that don't cheat just like people cheat, you'd better read Amaranthine: How to Create a Regenerative Civilization Using Artificial Intelligence and implement the protocols it describes.
His talk was clearer than your commentary on it. 😅 Writing model weights?! What are you talking about? 🤯