Agentic AI: Orchestrating Autonomous Agents for Complex Task Execution • Talk @ UofSC • Nov 8, 2024

  • Published on Nov 25, 2024
  • "Agentic AI: Orchestrating Autonomous Agents for Complex Task Execution" • Invited Talk at the University of South Carolina ‪@UofSC‬ • Seminar in Advances in Computer Science (CSCE791) • November 8, 2024
    • Relevant Primers:
    agents.aman.ai
    rag.aman.ai
    llm.aman.ai
    • Overview:
    The talk explored the concept of Agentic AI, a growing area of research in the field of Artificial Intelligence. It covered how LLMs, when augmented with capabilities such as tool use, memory, and reflection, can effectively act as agents, enabling them to perform tasks in a more dynamic and sophisticated way.
    • Detailed Agenda:
    The definition of an AI agent and its autonomous ability to combine decision-making with action-taking via an Agentic Workflow.
    A framework for the components of Agentic AI, including the Agent Core, Short-Term and Long-Term Memory, Planning, and Tool Use (a minimal sketch appears under Illustrative Sketches below).
    Examples of Agentic workflows in different fields, such as software engineering, where the LLM can leverage external tools and resources to accomplish a particular task.
    The most common Agentic Design patterns across various applications:
    Reflection: The agent evaluates its own work, identifying areas for improvement and refining its outputs based on this assessment. This process enables continuous improvement, ultimately leading to a more robust and accurate final output (see the Reflection sketch below).
    Tool Use: Agents are equipped with specific tools, such as web search or code execution capabilities, to gather necessary information, take actions, or process complex data in real time as part of their tasks (tool dispatch is shown in the agent-loop sketch below).
    Planning: The agent constructs and follows a comprehensive, step-by-step plan to achieve its objectives. This process may involve outlining, researching, drafting, and revising phases, as is often required in complex writing or coding tasks.
    Multi-agent Collaboration: Multiple agents collaborate, each taking on distinct roles and contributing unique expertise to solve complex tasks by breaking them down into smaller, more manageable sub-tasks. This approach mirrors human teamwork, where roles like software engineer and QA specialist contribute to different aspects of a project (see the multi-agent sketch below).
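    • Illustrative Sketches:
    A minimal sketch of the agent loop outlined above, covering the Agent Core, Short-Term Memory, Planning, and Tool Use. The `call_llm` helper, the tool registry, and the reply format are hypothetical placeholders for illustration, not an API from the talk:
```python
# A minimal agent loop. `call_llm` is a stand-in for any chat-completion API,
# and the tools, prompts, and reply format are illustrative assumptions.
from typing import Callable

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call; wire this to your provider of choice."""
    raise NotImplementedError

# Tool Use: the Agent Core can invoke registered tools by name.
TOOLS: dict[str, Callable[[str], str]] = {
    "web_search": lambda query: f"<search results for {query!r}>",  # stub
    "run_code": lambda code: f"<output of running {code!r}>",       # stub
}

def run_agent(task: str, max_steps: int = 5) -> str:
    short_term_memory: list[str] = []  # rolling context kept for this task only
    # Planning: ask the model for a step-by-step plan before acting.
    plan = call_llm(f"Break this task into concrete steps: {task}")
    short_term_memory.append(f"PLAN:\n{plan}")

    for _ in range(max_steps):
        # Agent Core: decide the next action given the task, plan, and memory.
        decision = call_llm(
            f"Task: {task}\nMemory: {short_term_memory}\n"
            f"Available tools: {list(TOOLS)}\n"
            "Reply with 'TOOL <name> <input>' or 'FINAL <answer>'."
        )
        if decision.startswith("FINAL"):
            return decision.removeprefix("FINAL").strip()
        _, name, tool_input = decision.split(maxsplit=2)
        observation = TOOLS[name](tool_input)  # act, then remember the result
        short_term_memory.append(f"{name}({tool_input}) -> {observation}")

    return "Stopped after max_steps without a final answer."
```
    Long-Term Memory, which typically persists across tasks (for example, in a vector store), is omitted here for brevity.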
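    A minimal sketch of the Reflection pattern, under the same assumption of a hypothetical `call_llm` placeholder: the agent drafts, critiques its own output, and revises until the critique passes:
```python
# Reflection pattern: draft, critique, revise, repeat.
def call_llm(prompt: str) -> str:
    raise NotImplementedError  # hypothetical placeholder for an LLM call

def reflect_and_refine(task: str, rounds: int = 3) -> str:
    draft = call_llm(f"Complete this task: {task}")
    for _ in range(rounds):
        # The agent critiques its own output...
        critique = call_llm(
            f"Task: {task}\nDraft:\n{draft}\n"
            "List concrete problems with the draft, or reply 'LGTM' if none."
        )
        if critique.strip() == "LGTM":
            break
        # ...and revises the draft to address the critique.
        draft = call_llm(
            f"Task: {task}\nDraft:\n{draft}\nCritique:\n{critique}\n"
            "Rewrite the draft to address every point in the critique."
        )
    return draft
```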
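    A minimal sketch of Multi-agent Collaboration, mirroring the software engineer / QA specialist hand-off mentioned above; the role prompts and control flow are illustrative assumptions:
```python
# Multi-agent Collaboration: role-specialized agents hand work back and forth.
from typing import Callable

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # hypothetical placeholder for an LLM call

def role_agent(role: str, instructions: str) -> Callable[[str], str]:
    """Each agent is the same LLM conditioned on a distinct role prompt."""
    return lambda message: call_llm(f"You are a {role}. {instructions}\n\n{message}")

engineer = role_agent("software engineer", "Write code that satisfies the request.")
qa = role_agent("QA specialist", "Review the code; list defects or reply 'APPROVED'.")

def build_feature(request: str, max_rounds: int = 3) -> str:
    code = engineer(request)
    for _ in range(max_rounds):
        review = qa(code)
        if "APPROVED" in review:
            break  # QA signs off; the hand-off loop ends
        code = engineer(
            f"Revise the code below to fix the review comments.\n"
            f"Code:\n{code}\nReview:\n{review}"
        )
    return code
```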
    • Relevant Links/Papers:
    ➜ Reflection
    Self-Refine: Iterative Refinement with Self-Feedback: arxiv.org/abs/...
    Reflexion: Language Agents with Verbal Reinforcement Learning: arxiv.org/abs/...
    CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing: arxiv.org/abs/...
    ➜ Tool Calling
    Gorilla: Large Language Model Connected with Massive APIs: arxiv.org/abs/...
    MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action: arxiv.org/abs/...
    Efficient Tool Use with Chain-of-Abstraction Reasoning: arxiv.org/abs/...
    ➜ Planning
    Chain of Thought Prompting Elicits Reasoning in Large Language Models: arxiv.org/abs/...
    HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace: arxiv.org/abs/...
    Understanding the Planning of LLM Agents: A Survey: arxiv.org/abs/...
    ➜ Multi-Agent Collaboration
    ChatDev: Communicative Agents for Software Development: arxiv.org/abs/...
    AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation: arxiv.org/abs/...
    APIGen: Automated Pipeline for Generating Verifiable and Diverse Function-Calling Datasets: arxiv.org/abs/...
    AutoAgents: A Framework for Automatic Agent Generation: arxiv.org/abs/...
    MetaGPT: Meta Programming for Multi-Agent Collaborative Framework: arxiv.org/abs/...
