AI Agents as Neuro-Symbolic Systems?

  • Published on 21 Jan 2025

Comments • 24

  • @wolpumba4099 · 2 months ago · +4

    *Rethinking AI Agents as Neuro-Symbolic Systems*
    * *0:00 Introduction:* James Briggs introduces his perspective on redefining AI agents beyond the current narrow view of LLMs interacting with tools.
    * *2:04 ReAct Agents as a Foundation:* He revisits the concept of ReAct agents, which involve an LLM going through reasoning and action steps, including calling external tools, to answer a question.
    * *7:28 Expanding the Definition of Agents:* Briggs argues that the common perception of agents as solely LLM-driven systems with tool interactions is too limited, both practically and in the broader AI literature.
    * *8:37 Neuro-Symbolic AI as a Framework:* He proposes the neuro-symbolic framework from the MRKL paper as a more comprehensive way to understand agents.
    * *9:27 Symbolic AI:* Symbolic AI represents the "good old-fashioned AI" approach using handwritten rules, logic, and ontologies, similar to the syllogistic logic example provided.
    * *12:48 Connectionist AI (Neural AI):* Connectionist AI, the foundation of modern neural networks, was inspired by the structure of the human brain. Rosenblatt's perceptron is highlighted as a key development.
    * *17:23 Neuro-Symbolic Systems Blend the Best of Both Worlds:* Neuro-symbolic systems combine the learning capabilities of neural networks with the reasoning and structure of symbolic AI.
    * *21:09 Agents Beyond LLMs:* Briggs emphasizes that agents can leverage various neural network models beyond just LLMs, citing semantic router as an example where embedding models guide routing to either RAG pipelines or LLMs.
    * *25:21 Practical Implications of a Broader Definition:* He demonstrates how a neuro-symbolic perspective enables more flexible agent design, where different intents can trigger specialized toolchains and workflows (see the routing sketch below).
    * *27:34 Conclusion:* Briggs concludes by advocating for a more encompassing understanding of AI agents as neuro-symbolic systems, allowing for greater adaptability and pushing beyond the confines of purely LLM-centric approaches.
    I used gemini-1.5-pro-exp-0827 on rocketrecap dot com to summarize the transcript.
    Cost (if I didn't use the free tier): $0.03
    Input tokens: 20966
    Output tokens: 433
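
    A minimal sketch of the embedding-based routing idea from 21:09 and 25:21. This is not the semantic-router library's API, just the underlying pattern: an embedding model (no LLM) scores a query against example utterances per intent and picks a toolchain. The route names, utterances, threshold, and model choice are assumptions for illustration; it assumes sentence-transformers and numpy are installed.

    ```python
    # Toy intent routing: embed example utterances per route, embed the query,
    # and dispatch to the most similar route -- no LLM in the routing decision.
    # Route names, utterances, handlers, and the 0.3 threshold are invented.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    # Each "route" is an intent with hand-written example utterances (the
    # symbolic side); its handler could be a RAG pipeline, an LLM, or plain code.
    routes = {
        "product_search": ["find me a laptop", "show me running shoes", "search for headphones"],
        "chitchat": ["how are you today", "tell me a joke", "what's up"],
    }

    # One vector per route: the centroid of its utterance embeddings.
    route_vectors = {
        name: model.encode(utterances).mean(axis=0)
        for name, utterances in routes.items()
    }

    def route_query(query: str, threshold: float = 0.3) -> str:
        """Return the best-matching route name, or 'fallback' if nothing is close."""
        q = model.encode(query)
        scores = {
            name: float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
            for name, v in route_vectors.items()
        }
        best = max(scores, key=scores.get)
        return best if scores[best] >= threshold else "fallback"

    print(route_query("any discounts on wireless earbuds?"))  # likely "product_search"
    print(route_query("hey, how's it going?"))                # likely "chitchat"
    ```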

  • @anshulbhide · a month ago · +1

    Loved this video! Would be great to have more theoretical videos like this one.

    • @anshulbhide · a month ago

      Especially around the history of AI leading up to today.

  • @simonoliverhansen7307 · a month ago

    Love your videos and walkthroughs - really easy to understand!! Thank you James

  • @billykotsos4642 · 2 months ago · +5

    The GOAT is back!

    • @jamesbriggs · 2 months ago

      good to see you're still here!

  • @chrismaley6676 · 2 months ago · +1

    Hi James, a fantastic video. Thank you for the validation. We are building an AI system that uses the semantic router, and our system resembles your diagram.
    I look forward to the next video on this topic.
    Cheers
    Chris

    • @jamesbriggs · 2 months ago · +1

      awesome, yeah we almost always pair semantic router with our agents nowadays - I'm glad we spent the time on it initially and have had some great contributors
      we're working on some big upgrades for SR - I'll share more soon

    • @chrismaley6676 · 2 months ago

      @ I would enjoy learning more advanced techniques on how you and your team use the semantic router - for example, detecting single-hop vs. multi-hop queries.

    • @jamesbriggs · 2 months ago · +1

      great, we have a few SR videos in the pipeline :)

  • @olimoz · 2 months ago · +1

    By the way, do check out Max Tegmark's recent paper "The Geometry of Concepts: Sparse Autoencoder Feature Structure" - points in the same direction

    • @jamesbriggs · a month ago

      very cool - probably I will try and reproduce something similar, thanks for sharing!

  • @GeorgeFoxRules · 2 months ago

    That's a very keen observation about boxing yourself in if you're only using an LLM. One thing I'm curious about is how you envision the semantic router handling non-LLM inquiries alongside LLM-based function calling, to create a cohesive strategy that also incorporates APIs, potentially as microservices - and how you'd see this fitting together in a kind of AI middleware similar to Llama Stack. Also, do you see integrating the semantic router and the logic around its routes with NLP and potentially knowledge-graph frameworks? And I'm interested in whether you see any framework where agents are created spontaneously: an API request comes into an agent that doesn't have all the internal services to accommodate it, so it builds a new agent on the fly, creating a more dynamic workflow.

  • @stevecoxiscool · 2 months ago

    The way I am starting to think about neural logic circuits (logic = a dog has 4 legs, 2 ears, a long nose, etc.) is as a matrix of real numbers that houses the weights making up the "rule/logic". You would start off with: is this an object? That lives at a certain location in the matrix, with a value from 0.0 to 1.0. Is this object organic/inorganic goes in the next cell, is it a mammal, and so on. This would look like a grayscale m×n image. The image would represent a dog not in the way a CNN identifies a dog, but in a graphical sense - an n×n perceptron. It's another type of embedding, just not a word embedding. You could experiment with an MNIST autoencoder, but instead of encoding graphic representations of digits, it would encode rules for things (see the sketch below).
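
    A toy numpy sketch of this "rule image" idea, assuming an invented attribute ordering and made-up degree-of-truth values; it only illustrates the encoding the comment describes, not a trained system.

    ```python
    # Lay out a hand-written attribute ontology (the symbolic part) as a
    # fixed-length vector of 0.0-1.0 values and reshape it into a small
    # grayscale grid: a "rule image" an MNIST-style autoencoder could be
    # trained on. The attribute list and values are invented for illustration.
    import numpy as np

    # Fixed attribute ordering: each slot always means the same thing.
    ATTRIBUTES = [
        "is_object", "is_organic", "is_mammal", "has_four_legs",
        "has_two_ears", "has_long_nose", "barks", "can_fly", "is_domesticated",
    ]

    def encode_rules(facts: dict) -> np.ndarray:
        """Turn attribute -> degree-of-truth pairs into a square 'rule image'."""
        vec = np.array([facts.get(a, 0.0) for a in ATTRIBUTES], dtype=np.float32)
        side = int(np.ceil(np.sqrt(len(vec))))
        grid = np.zeros(side * side, dtype=np.float32)
        grid[: len(vec)] = vec
        return grid.reshape(side, side)  # e.g. a 3x3 grayscale "image" of rules

    dog = encode_rules({
        "is_object": 1.0, "is_organic": 1.0, "is_mammal": 1.0,
        "has_four_legs": 1.0, "has_two_ears": 1.0, "has_long_nose": 0.8,
        "barks": 1.0, "is_domesticated": 0.9,
    })
    print(dog)
    # Stacking many such grids gives a dataset an autoencoder could compress,
    # analogous to MNIST but over rules rather than digit pixels.
    ```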

  • @practical-ai-engineering · a month ago

    I've been saying this for months. Building LLM-centric agents is extremely limiting and dramatically reduces reliability. You don't need o1 to build good agents. GPT-4-level models do a good job when you break the problem down into smaller pieces and only use LLMs in places where deterministic code can't do the job.
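
    A small sketch of that split, with hypothetical handlers and a stubbed call_llm: deterministic code answers what it can, and only the leftovers reach a model.

    ```python
    # Deterministic-first dispatch: cheap, reliable handlers run first, and
    # only unmatched requests fall through to an LLM call. The handlers,
    # intents, and call_llm stub are hypothetical; swap in whatever model
    # or client you actually use.
    import re

    def handle_order_status(text: str):
        m = re.search(r"order\s+#?(\d+)", text, re.IGNORECASE)
        if m:
            return f"Looking up status for order {m.group(1)}"  # deterministic lookup
        return None

    def handle_business_hours(text: str):
        if re.search(r"\b(opening|business)\s+hours?\b", text, re.IGNORECASE):
            return "We are open 9am-5pm, Monday to Friday."      # static answer
        return None

    def call_llm(text: str) -> str:
        # Placeholder for an actual LLM call (API client, local model, etc.).
        return f"[LLM handles open-ended request: {text!r}]"

    HANDLERS = [handle_order_status, handle_business_hours]

    def respond(text: str) -> str:
        for handler in HANDLERS:             # deterministic code first
            answer = handler(text)
            if answer is not None:
                return answer
        return call_llm(text)                # LLM only as a last resort

    print(respond("Where is order #1234?"))
    print(respond("What are your opening hours?"))
    print(respond("Can you summarise my last three support tickets?"))
    ```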

  • @micbab-vg2mu · 2 months ago

    thanks - great video :)

    • @jamesbriggs · 2 months ago

      you're welcome

  • @natecodesai · 21 days ago

    Congrats on the son!

  • @sleeplessforawhile · 2 months ago

    Yeah, that's the way... But I do not agree with the symbolic treatment... It would be deeper using another "symbolic carrier"...

    • @jamesbriggs · 2 months ago

      symbolic carrier in what sense? I'm curious

  • @nikob4228 · 2 months ago

    Congrats on the birth of your son 🎉

    • @jamesbriggs · 2 months ago

      thanks! 🙏