Great work! I really like all the ideas and motivations behind it!
Great Work and demo
This is smart. Been waiting for a nice framework to contribute to.
Love your attitude, and I agree with the critique of the other multi-agent framework. Curious to see how the journey goes on.
Great project. Congratulations.
How would adding RAG look in this workflow?
Keep an eye on the repo for new examples! I'll probably add a RAG example really soon, but it'll just look similar to the web search or deep research examples, but with a vector DB instead of a web search
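In the meantime, here's the rough shape of that swap - a minimal sketch using ChromaDB as a stand-in vector DB (the collection name, documents, and query are all made up for illustration); the retrieved chunks would be fed to the agent right where the web search results go in the existing examples:

```python
import chromadb

# Stand-in vector DB (ChromaDB, in-memory). In the web search example this
# slot is a search tool; for RAG it becomes a query against your own documents.
client = chromadb.Client()
collection = client.create_collection(name="docs")  # hypothetical collection

collection.add(
    documents=[
        "Atomic Agents is a modular framework for building agent pipelines.",
        "Tools live in the atomic-forge folder of the repo.",
    ],
    ids=["doc-1", "doc-2"],
)

# Retrieve the most relevant chunk for the user's question; this is what gets
# handed to the agent instead of web search snippets.
results = collection.query(query_texts=["Where do the tools live?"], n_results=1)
print(results["documents"][0][0])
```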
@KennyTheAIGuy just following along, any update on the RAG-based example?
What do you think about flowise?
Very interesting development in Agent Based frameworks! 👍
Question: can I run Atomic Agents with full capabilities (such as downloading tools etc) from within a Google Colab notebook? That is usually where I start trying out things. I am not sure if the CLI of Atomic Agents will work from within a Colab notebook.
Heya, good question, thanks for asking...
While the CLI will not work in a notebook, each of the tools is still a standalone thing - you can easily grab the code from the repo directly, without the CLI.
You can find them right here: github.com/BrainBlend-AI/atomic-agents/tree/main/atomic-forge/tools
Cool!
1. What benefit do you see in having this kind of framework over ordinary, everyday code that calls a few APIs?
2. I’ve seen agents described as “pipes, memory and tools”, where memory enables long-running context - I couldn’t see it here, but maybe it just wasn’t part of the demo? 🙏
Heya, sorry for the delayed reply here, things sure have been busy lately!
1) It depends. If you are working on your own and you are sure you'll only ever use a single provider, then go ahead and do your own thing. Atomic Agents, like most (good) frameworks, is mostly about getting people in teams to speak the same language, follow the same conventions, and write code in a way that is well thought-out and maintainable by veterans and new hires alike. I like to believe that Atomic Agents provides this, as well as a way to easily switch between LLM providers like OpenAI, Google, Mistral, or fully local with Ollama, while also having a big focus on developer experience and maintainability.
2) The best project to see every moving part in full action would be the deep research example here, I think: github.com/BrainBlend-AI/atomic-agents/tree/main/atomic-examples/deep-research
Or this example with persistent agentic memory: github.com/KennyVaneetvelde/persistent-memory-agent-example
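And since the provider-switching bit comes up a lot: under the hood the agents take an Instructor-wrapped client, so swapping hosted OpenAI for a local Ollama model is basically a one-line change. Rough sketch below - the model names are examples and the config line is from memory, so check the repo examples for the exact usage:

```python
import instructor
from openai import OpenAI

# Hosted OpenAI - reads OPENAI_API_KEY from the environment.
client = instructor.from_openai(OpenAI())

# Fully local via Ollama's OpenAI-compatible endpoint instead; the API key is a
# dummy value and "llama3" is just an example model you'd have pulled locally.
local_client = instructor.from_openai(
    OpenAI(base_url="http://localhost:11434/v1", api_key="ollama"),
    mode=instructor.Mode.JSON,
)

# Whichever client you pick gets handed to the agent config, e.g. something like
# BaseAgentConfig(client=client, model="gpt-4o-mini") - only this part changes
# when you switch providers; the schemas, tools, and prompts stay the same.
```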
Can the agents pick what tools they want to use by themselves?
Sorry for the late reply on here, though I did answer you on Reddit about this. In case anyone else is wondering: yes, through the use of a Union type - there is an Orchestrator example in the examples folder.
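The gist of the Union trick, if you don't want to dig through the example right away: the orchestrator's output schema has a field typed as a Union of the tool input schemas, so the model picks a tool by choosing which schema to fill in. Minimal sketch with plain Pydantic - in the real example these are BaseIOSchema subclasses and the actual tool schemas; the names below are made up:

```python
from typing import Union
from pydantic import BaseModel, Field


# Hypothetical tool input schemas - placeholders for the real tool schemas.
class SearchToolInput(BaseModel):
    """Parameters for a web search tool."""
    query: str = Field(..., description="Search query to run")


class CalculatorToolInput(BaseModel):
    """Parameters for a calculator tool."""
    expression: str = Field(..., description="Math expression to evaluate")


class OrchestratorOutput(BaseModel):
    """The agent picks a tool by filling in whichever input schema fits."""
    tool_parameters: Union[SearchToolInput, CalculatorToolInput] = Field(
        ..., description="Input for the tool the agent decided to call"
    )


# Downstream you dispatch on the concrete type the model chose.
def dispatch(output: OrchestratorOutput) -> str:
    if isinstance(output.tool_parameters, SearchToolInput):
        return f"search: {output.tool_parameters.query}"
    return f"calculate: {output.tool_parameters.expression}"
```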
Is it compatible with langchain tools?
If you really wanted to, you could simply wrap the tool into an Atomic Agents tool; it won't take too much code:
github.com/BrainBlend-AI/atomic-agents/blob/main/atomic-forge/guides/tool_structure.md
github.com/BrainBlend-AI/atomic-agents/tree/main/atomic-forge/tools
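To give an idea of how small that wrapper is, here's a rough sketch - the BaseTool/BaseIOSchema import paths and the run() signature are from memory of the tool structure guide linked above, and the schemas and names are made up, so treat it as an outline rather than copy-paste code:

```python
from pydantic import Field
from atomic_agents.lib.base.base_io_schema import BaseIOSchema
from atomic_agents.lib.base.base_tool import BaseTool


class WrappedToolInput(BaseIOSchema):
    """What the agent passes to the wrapped LangChain tool."""
    query: str = Field(..., description="Text handed to the underlying tool")


class WrappedToolOutput(BaseIOSchema):
    """What the wrapped LangChain tool returns."""
    result: str = Field(..., description="Raw string result from the tool")


class LangChainToolWrapper(BaseTool):
    """Thin adapter exposing any LangChain tool as an Atomic Agents tool."""
    input_schema = WrappedToolInput
    output_schema = WrappedToolOutput

    def __init__(self, langchain_tool):
        super().__init__()
        self.langchain_tool = langchain_tool

    def run(self, params: WrappedToolInput) -> WrappedToolOutput:
        # Recent LangChain tools accept a plain string via .invoke().
        return WrappedToolOutput(result=str(self.langchain_tool.invoke(params.query)))
```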
Agents should be able to use a computer. Can this do that? Can it go from installing PyCharm to deploying a product to AWS?