Graph-of-Thoughts (GoT) for AI Reasoning Agents

  • Published on Dec 29, 2024

Comments • 17

  • @wesleychang2005 • 1 year ago • +17

    🎯 Key Takeaways for quick navigation:
    00:26 🤯 Graph of Thoughts (GoT) is a non-linear approach to reasoning for AI agents, using interconnected nodes and edges to represent the thought process.
    01:18 📊 The Tree of Thoughts method suffers from inefficiency, requiring hundreds of queries to solve a single problem.
    02:36 🎯 An AI agent is defined as an entity that can perceive its environment, make decisions, and initiate actions based on a control cycle and a reward function.
    05:39 🌐 The latest research focuses on AI agents augmented by Large Language Models (LLMs) for more intelligent and autonomous behavior.
    08:43 🤖 LLM-augmented AI agents can interact with and learn from their environment, making them more adaptive and capable.
    12:45 📝 Explanation fine-tuning of LLMs (Large Language Models) is guided by GPT-4's own reasoning explanation, serving as a blueprint for development.
    13:34 🕸️ The "Graph of Thoughts" allows for a flexible approach to reasoning, where multiple chains of thoughts can be pursued and evaluated simultaneously.
    16:50 🎛️ The application of graph theory in AI involves the use of graph attention networks and various encoding techniques to manage both visual and textual data.
    22:46 📊 A scoring mechanism is used to assess the LLM's replies for accuracy and relevance, aiding in quality control of the model's output.
    24:16 🎮 A "Controller" manages the entire reasoning process, using a "Graph of Operations" (GoO) to dictate the execution plan for tasks, making the reasoning adaptable and structured.
    25:35 🌍 Graph-of-Thoughts (GoT) can be used for planet classification tasks. The speaker uses a simple example where an AI system decides whether a planet is habitable based on attributes like distance from the sun and atmospheric conditions.
    27:32 🛠️ In GoT, each node in the 'Graph of Operations' (GoO) represents a specific task (e.g., check distance from the sun). The 'Graph Reasoning State' (GRS) records and updates the system's understanding as nodes are executed (see the code sketch after this list).
    29:30 📝 The speaker describes a more complex example involving multiple types of planets and a list of features for classification. He emphasizes the need for a specialized Large Language Model (LLM) trained in astrophysics.
    32:56 🎯 Scoring and validation are essential for assessing the reliability of the AI's responses. The system assigns a confidence score to its classification decision.
    35:48 🔄 The GoT system can incorporate human feedback, iterating through multiple loops to refine its reasoning process and improve classification outcomes.
    36:57 🛠️ The Graph-of-Operation (GoO) framework lays out how AI operations interact and depend on each other in a sequence, from initial query to final output.
    38:18 🙋‍♂️ Human domain expertise is essential for designing the reasoning flow within the GoO, as it's not automatically generated by the AI system itself.
    39:18 🤔 GPT-4 suggests that future AI systems like GPT-5 could potentially engage in meta-learning or self-improvement, opening the possibility for AI to design its own GoO structure.
    39:43 📊 Adequate training data is crucial for advanced AI systems to learn diverse tasks in multiple domains and potentially design complex GoO structures.
    40:07 📈 Mathematical graph theory could help in constructing multiple graphs for specific problems, setting the stage for training more advanced AI systems.
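
    To make the Controller / Graph of Operations (GoO) / Graph Reasoning State (GRS) flow described above concrete, here is a minimal Python sketch of the planet-habitability example. Every name in it (Operation, Controller, check_distance, and so on) is an illustrative assumption rather than the API of the actual GoT framework, and the hard-coded checks stand in for what would really be LLM calls plus a scoring step.

    # Minimal, illustrative sketch (hypothetical names, not the real GoT framework's API).
    # Hard-coded checks stand in for LLM calls; a real system would score LLM replies instead.
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class Operation:
        name: str                                  # e.g. "check_distance"
        run: Callable[[dict, dict], dict]          # (planet, reasoning_state) -> node result
        depends_on: List[str] = field(default_factory=list)

    class Controller:
        """Executes a Graph of Operations (GoO) and records results in the GRS."""
        def __init__(self, goo: List[Operation]):
            self.goo = goo
            self.grs: Dict[str, dict] = {}         # Graph Reasoning State: result per node

        def execute(self, planet: dict) -> dict:
            done, pending = set(), list(self.goo)
            while pending:
                # Pick any operation whose dependencies have already been executed.
                op = next(o for o in pending if all(d in done for d in o.depends_on))
                self.grs[op.name] = op.run(planet, self.grs)   # update the reasoning state
                done.add(op.name)
                pending.remove(op)
            return self.grs

    def check_distance(planet, grs):
        ok = 0.95 <= planet["au_from_star"] <= 1.4             # rough habitable-zone band
        return {"in_habitable_zone": ok, "score": 0.9 if ok else 0.2}

    def check_atmosphere(planet, grs):
        ok = planet["atmosphere"] == "N2/O2"
        return {"suitable_atmosphere": ok, "score": 0.8 if ok else 0.3}

    def classify(planet, grs):
        scores = [grs["check_distance"]["score"], grs["check_atmosphere"]["score"]]
        confidence = sum(scores) / len(scores)                  # naive confidence aggregation
        return {"habitable": confidence > 0.6, "confidence": round(confidence, 2)}

    goo = [
        Operation("check_distance", check_distance),
        Operation("check_atmosphere", check_atmosphere),
        Operation("classify", classify, depends_on=["check_distance", "check_atmosphere"]),
    ]
    result = Controller(goo).execute({"au_from_star": 1.0, "atmosphere": "N2/O2"})
    print(result["classify"])                                   # {'habitable': True, 'confidence': 0.85}

    In the more complex variant described at 29:30–36:57, each operation would be implemented by an astrophysics-trained LLM, the scores would feed a validation step, and human feedback would loop the whole execution until the classification is acceptable.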

    • @kevon217 • 1 year ago • +2

      Thanks!

  • @gustavstressemann7817 • 1 year ago

    Thank you for this detailed immersion. Great!

  • @leding5475 • 1 year ago • +1

    I wonder whether the LLM risks making a mistake in the middle of the whole thought workflow; if so, it seems hard to guarantee the final result of the GoT.

  • @andydataguy • 1 year ago • +1

    This is amazing! Thanks for this. I hope you go deeper into Graph 🙏🏾💜

  • @angstrom1058 • 1 year ago • +1

    Unfortunately, the Action->Observation link is attenuated and subject to external influences. That is, if the agent takes an Action and makes an Observation, the Observation may have nothing to do with the Action but rather with other things happening in the real world. The agent can arrive at the wrong conclusion and keep taking Actions that make no sense in the real world.

  • @kevon217 • 1 year ago • +1

    Great breakdown!

  • @SamantShubham • 1 year ago • +1

    Thanks for the explainer video 🙂

  • @BoominGame • 11 months ago

    Let's be intellectually challenging!! Let's!!! ROFL!

  • @project-asgard • 1 year ago • +1

    If we keep going like this, we'll soon have an artificial human brain... or likely something way better 🤯

  • @gunterstrubinsky9452 • 1 year ago • +2

    I am not impressed, or maybe I misunderstood. This appears, IMHO, to be nothing more than classic programming at a higher abstraction level, and a step backward:
        if this then ...
        else ...
        case: {1, 2, 3}
    You get my drift. Did I misunderstand so grossly?

    • @wesleychang2005 • 1 year ago • +5

      The Graph-of-Thoughts (GoT) approach to AI reasoning described in the video seems to be a way to improve machine learning models' ability to reason by structuring their thought process as a graph. The structure incorporates various components like a controller, prompter, parser, and scoring system to manage the whole reasoning process.
      It might look like "classic programming at a higher abstraction level," as you mentioned, but the intent seems to be about building a more dynamic, context-sensitive reasoning architecture for AI. While classic rule-based systems might follow a similar "if this then that" logic, they're generally not as adaptive or scalable as what GoT is purportedly aiming for.
      In classic rule-based systems, the rules are often hardcoded and don't adapt over time unless manually updated. The GoT approach seems to aim for a more nuanced and adaptive system capable of evaluating its own outputs and adjusting its reasoning paths dynamically. The use of graphs to connect various thoughts or reasoning steps also allows for more complex and flexible decision-making compared to a linear or tree-based model.
      You have a point that abstraction levels in programming can sometimes appear to be "steps backward," especially when they introduce complexity without clear benefits. However, this level of abstraction might be necessary for solving more complex reasoning tasks that simpler models struggle with.
      So, while the system incorporates elements of classic programming logic, it seems designed to be a step forward in terms of AI reasoning capabilities, rather than a step back.
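
      To make that contrast concrete, here is a small illustrative Python sketch (my own assumption of the mechanics, not code from the video or the GoT paper): a hardcoded rule always takes the same path, while a GoT-style step generates several candidate thoughts, scores them, and keeps only the best for the next round.

      import random

      # 1) Classic hardcoded rule: fixed logic that never re-evaluates itself.
      def rule_based(planet):
          return "habitable" if planet["au_from_star"] < 1.4 else "not habitable"

      # 2) GoT-style step: branch into candidate thoughts, score them, keep the best.
      def generate_thoughts(question, n=4):
          # Stand-in for n LLM calls that each propose a partial reasoning step.
          return [f"candidate thought {i} about: {question}" for i in range(n)]

      def score(thought):
          # Stand-in for an LLM- or heuristic-based scorer (accuracy / relevance).
          return random.random()

      def got_step(question, keep=2):
          ranked = sorted(generate_thoughts(question), key=score, reverse=True)
          return ranked[:keep]              # survivors become parents of the next round

      print(rule_based({"au_from_star": 1.0}))
      print(got_step("Is the planet habitable?"))

      Because the generation and scoring steps can change without rewriting any rules, the reasoning paths can adapt in a way a hardcoded if/then block cannot.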

    • @BAkedStakes • 1 year ago

      Man once used his feet to travel, then the horse, then the car. Same concept, yet with a bigger impact: it's not one narrow subject, it's an overall advancement in the way AI is used.

    • @acortis • 1 year ago

      I am afraid I also failed to see the point. At some point I thought we were going to see some Genetic Programming on the Graph, but ... no :(

    • @KCM25NJL • 1 year ago • +2

      From what I understand, the process regresses LLM thoughts into arbitrary scoring matrices, effectively building a state of mind in which the LLM, or the architecture incorporating the LLM, builds a node-based structure almost akin to an extra layer in the network, with connections weighted by the accuracy and relevance of the output. It turns a simple prompt/response LLM into a "considered" LLM. I can actually see a lot of potential in using the graph structure as a very real input for LLM fine-tuning.
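
      A rough sketch of that graph-as-fine-tuning-input idea might look like the following (entirely hypothetical, just to illustrate the comment): store each thought with its score and parent edges, then export only the well-scored edges as prompt/completion pairs.

      import json

      class ThoughtGraph:
          def __init__(self):
              self.nodes = {}                       # id -> {"text": ..., "score": ...}
              self.edges = []                       # (parent_id, child_id, weight)

          def add(self, node_id, text, score, parents=()):
              self.nodes[node_id] = {"text": text, "score": score}
              for p in parents:
                  self.edges.append((p, node_id, score))   # edge weighted by the child's score

          def training_pairs(self, min_score=0.7):
              # Keep only well-scored reasoning steps as candidate fine-tuning examples.
              return [{"prompt": self.nodes[p]["text"], "completion": self.nodes[c]["text"]}
                      for p, c, w in self.edges if w >= min_score]

      g = ThoughtGraph()
      g.add("q", "Is the example planet habitable?", 1.0)
      g.add("t1", "It orbits inside the habitable zone.", 0.9, parents=["q"])
      g.add("t2", "It has no measurable atmosphere, so probably not.", 0.3, parents=["q"])  # low-scoring branch is filtered out
      print(json.dumps(g.training_pairs(), indent=2))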

    • @kevon217 • 11 months ago

      @KCM25NJL well put