*Rethinking AI Agents as Neuro-Symbolic Systems*
* **0:00** *Introduction:* James Briggs introduces his perspective on redefining AI agents beyond the current narrow view of LLMs interacting with tools.
* **2:04** *ReAct Agents as a Foundation:* He revisits the concept of ReAct agents, which involve an LLM going through reasoning and action steps, including calling external tools, to answer a question.
* **7:28** *Expanding the Definition of Agents:* Briggs argues that the common perception of agents as solely LLM-driven systems with tool interactions is too limited, both practically and in the broader AI literature.
* **8:37** *Neuro-Symbolic AI as a Framework:* He proposes the neuro-symbolic framework from the MRKL paper as a more comprehensive way to understand agents.
* **9:27** *Symbolic AI:* Symbolic AI represents the "good old-fashioned AI" approach using handwritten rules, logic, and ontologies, similar to the syllogistic logic example provided.
* **12:48** *Connectionist AI (Neural AI):* Connectionist AI, the foundation of modern neural networks, was inspired by the structure of the human brain. Rosenblatt's perceptron is highlighted as a key development.
* **17:23** *Neuro-Symbolic Systems Blend the Best of Both Worlds:* Neuro-symbolic systems combine the learning capabilities of neural networks with the reasoning and structure of symbolic AI.
* **21:09** *Agents Beyond LLMs:* Briggs emphasizes that agents can leverage various neural network models beyond just LLMs, citing semantic router as an example where embedding models guide routing to either RAG pipelines or LLMs (a minimal sketch of this pattern follows the summary).
* **25:21** *Practical Implications of a Broader Definition:* He demonstrates how a neuro-symbolic perspective enables more flexible agent design, where different intents can trigger specialized toolchains and workflows.
* **27:34** *Conclusion:* Briggs concludes by advocating for a more encompassing understanding of AI agents as neuro-symbolic systems, allowing for greater adaptability and pushing beyond the confines of purely LLM-centric approaches.
I used gemini-1.5-pro-exp-0827 on rocketrecap dot com to summarize the transcript.
Cost (if I didn't use the free tier): $0.03
Input tokens: 20966
Output tokens: 433
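For anyone curious how the routing described at 21:09 might look in code, here is a minimal sketch assuming the semantic-router library's `Route`/`RouteLayer` API (newer releases may have renamed things); the route names, utterances, and both handler stubs are hypothetical:

```python
# Minimal sketch of embedding-based routing with semantic-router.
from semantic_router import Route
from semantic_router.encoders import OpenAIEncoder
from semantic_router.layer import RouteLayer

def rag_pipeline(query: str) -> str:
    return f"[retrieved + generated answer for: {query}]"  # placeholder

def llm_chat(query: str) -> str:
    return f"[direct LLM answer for: {query}]"  # placeholder

# Each route is defined by example utterances; an embedding model
# scores incoming queries against them.
docs = Route(
    name="rag",
    utterances=[
        "what does the quarterly report say about revenue?",
        "summarize the onboarding document for me",
    ],
)
chitchat = Route(
    name="chitchat",
    utterances=["how's it going?", "tell me something interesting"],
)

# OpenAIEncoder reads the OPENAI_API_KEY environment variable.
router = RouteLayer(encoder=OpenAIEncoder(), routes=[docs, chitchat])

query = "what does the report say about Q3 margins?"
choice = router(query)  # returns a RouteChoice; .name is the route (or None)

# Deterministic dispatch on the routing decision: intent -> toolchain.
answer = rag_pipeline(query) if choice.name == "rag" else llm_chat(query)
print(answer)
```

The embedding model handles the fuzzy intent classification, while the dispatch at the end is the symbolic half: a plain, deterministic branch to a toolchain.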
Loved this video! Would be great to have more theoretical videos like this one.
Especially around the history of AI leading up to today.
Love your videos and walkthroughs - really easy to understand!! Thank you, James.
The GOAT is back!
good to see you're still here!
Hi James, a fantastic video. Thank you for the validation. We are building an AI system that uses the semantic router, and our system resembles your diagram.
I look forward to the next video on this topic.
Cheers
Chris
awesome, yeah we almost always pair semantic router with our agents nowadays - I'm glad we spent the time on it initially and have had some great contributors
we're working on some big upgrades for SR - I'll share more soon
@ I would enjoy learning more advanced techniques on how you and your team use the semantic router - for example, detecting single-hop vs. multi-hop queries.
great, we have a few SR videos in the pipeline :)
By the way, do check out Max Tegmark's recent paper "The Geometry of Concepts: Sparse Autoencoder Feature Structure" - it points in the same direction.
very cool - probably I will try and reproduce something similar, thanks for sharing!
That's a very keen observation about boxing yourself in if you're only using an LLM. One thing I'm curious about: how do you envision the semantic router handling non-LLM inquiries alongside LLM-based function calling, as one cohesive strategy that also incorporates APIs, potentially as microservices? How would this fit together in a kind of AI middleware, similar to Llama Stack? Also, do you see integration between the semantic router and the logic around the routes you create, using NLP and potentially knowledge graph frameworks? Finally, I'm interested in whether you see any framework for spontaneous creation of agents: an API request comes into an agent that doesn't have all of the internal services to accommodate it, so it builds a new agent on the fly, creating a more dynamic workflow.
The way I am starting to think about neural logic circuits (logic = a dog has 4 legs, 2 ears, a long nose, etc.) is as a matrix of real numbers housing the weights that make up the "rule/logic". You would start off with something like: is this an object? That would be a cell at a certain location in the matrix, valued from 0.0 to 1.0. Is this object organic or inorganic, the next cell; is it a mammal; and so on. This would look like a grayscale m×n image, for example. The image would represent a dog, not in the way a CNN identifies a dog, but in a graphical sense: an n×n perceptron. It's another type of embedding, just not a word embedding. You could experiment with an MNIST autoencoder, but instead of encoding graphical representations of digits, it would encode rules for things.
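A minimal sketch of what that encoding could look like (the attribute slots and their values are entirely hypothetical, just to make the grid idea concrete):

```python
import numpy as np

# Hypothetical attribute slots - each cell holds one "rule" activation
# in [0.0, 1.0], so a concept becomes a small grayscale grid.
ATTRIBUTES = [
    "is_object", "is_organic", "is_mammal",
    "has_4_legs", "has_2_ears", "has_long_nose",
    "barks", "is_pet", "has_fur",
]

def encode_concept(activations: dict[str, float]) -> np.ndarray:
    """Pack rule activations into a 3x3 grayscale 'rule image'."""
    vec = np.array([activations.get(a, 0.0) for a in ATTRIBUTES],
                   dtype=np.float32)
    return vec.reshape(3, 3)

# "Dog" as a graphical rule embedding - not a pixel picture of a dog.
dog = encode_concept({
    "is_object": 1.0, "is_organic": 1.0, "is_mammal": 1.0,
    "has_4_legs": 1.0, "has_2_ears": 1.0, "has_long_nose": 0.8,
    "barks": 1.0, "is_pet": 0.9, "has_fur": 1.0,
})
print(dog)  # an autoencoder could be trained to compress grids like this
```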
I've been saying this for months. Building LLM-centric agents is extremely limiting and dramatically reduces reliability. You don't need o1 to build good agents. A GPT-4-level model does a good job when you break the problem down into smaller pieces and only use LLMs in places where deterministic code can't do the job.
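A rough sketch of that decomposition (the task, helper names, and the stubbed classifier are made up): handle the structured parts with plain code and reserve the model call for the one genuinely open-ended step.

```python
import re

def extract_order_id(text: str) -> str | None:
    # Deterministic: an order ID has a fixed format, so regex suffices.
    m = re.search(r"\bORD-\d{6}\b", text)
    return m.group(0) if m else None

def classify_sentiment_with_llm(text: str) -> str:
    # Only this open-ended step warrants an LLM; stubbed here - in
    # practice this would call whatever chat model you use.
    return "negative" if "refund" in text.lower() else "neutral"

def handle_ticket(text: str) -> dict:
    order_id = extract_order_id(text)              # deterministic code
    sentiment = classify_sentiment_with_llm(text)  # neural step
    return {"order_id": order_id, "sentiment": sentiment}

print(handle_ticket("Customer ORD-123456 is asking for a refund."))
```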
thanks - great video:)
you're welcome
Congrats on the son!
Yeah, that's the way... But I do not agree with the symbolic treatment... It would be deeper using another "symbolic carrier"...
symbolic carrier in what sense? I'm curious
Congrats on the birth of your son 🎉
thanks! 🙏