Awesome video as always Dan. Love the no bullshit approach you have. 🎯 Key Takeaways for quick navigation:
00:00 *💡 Introduction to Two-Way Prompts*
- Introduces the concept of two-way prompts for building agentic workflows.
- Two-way prompting is a common occurrence in real collaborative environments where agents prompt each other to drive outcomes.
- Explains the importance of incorporating two-way prompts into agentic tools to enhance communication between users and AI agents.
01:07 *🤖 Concrete Example with Ada*
- Demonstrates a practical example using Ada, a personal AI assistant, to create example code based on a provided URL.
- Shows the back-and-forth interaction between the user and the AI assistant, highlighting the value of two-way prompts in generating useful outputs.
03:40 *📝 Reviewing Generated Code and Workflow Overview*
- Reviews the generated code from the example workflow, showcasing its accuracy and utility.
- Provides an overview of the agentic workflow structure and the role of two-way prompts in driving the process.
05:06 *🛠️ Creating a View Component*
- Illustrates another example of using two-way prompts to generate a view component based on user input.
- Shows how the AI assistant interacts with the user to gather necessary information and make modifications based on user feedback.
07:52 *🔄 Comparison with Human in the Loop and Future Predictions*
- Compares two-way prompts with traditional "human in the loop" approaches, highlighting the versatility and flexibility of two-way prompts.
- Predicts the evolution of agentic workflows towards more complex interactions where AI agents proactively prompt users for input, leading to greater automation and efficiency.
09:43 *🔮 Guidance for Building Agentic Workflows*
- Emphasizes the central role of users in agentic workflows and the importance of incorporating two-way prompts to drive results effectively.
- Advises developers to focus on user-centric design and gradually integrate more advanced agentic features while leveraging user input to enhance automation.
You were quite right - obvious when you say it - but the example at the end had me constantly pausing to think about the applications. Awesome stuff.
I like your videos because they're short and packed with practical value, without all the blah-blah filler of other AI videos.
Nothing new for me though, but I'm subscribed and looking forward to seeing how you push it further.
Great job dude. 👍
Maybe I missed something, but: how exactly did you manage to make your assistant prompt you? Using human in the loop, okay, but is there something around it? Some prompts? Middle agents that reason about what's missing before it can perform?
Note: I mostly just listened to your voice, so if it was on screen I probably missed it, but given the quality of your content, you would have talked about it if you were showing it.
Thank you for your massive work, can't wait to free up my time to dive into trying your techniques!
I think I heard that he has a video or the files somewhere.
Pro tip: use json outputs (basic example):
You must answer in JSON:
{
  "response": "<your answer>",
  "followups": ["<questions you need the user to answer>"]
}
All the best
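A minimal Python sketch of this JSON followups pattern (the `call_llm` function here is a hypothetical stand-in for any real model call, not code from the video):

```python
import json

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: a real implementation would send `prompt`
    # to an LLM along with the "You must answer in JSON" instruction.
    return json.dumps({
        "response": "Here is a draft.",
        "followups": ["Which framework should the example target?"],
    })

def two_way_turn(user_prompt: str) -> dict:
    # Parse the structured reply: "response" is the answer so far,
    # "followups" are questions the agent is asking the user back.
    return json.loads(call_llm(user_prompt))

reply = two_way_turn("Build me an example component.")
for question in reply["followups"]:
    print("Agent asks:", question)
```

Because `followups` is a plain list, the calling code can loop over it, collect the user's answers, and feed them into the next turn, which is the two-way prompt loop in miniature.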
Powerful insights at the end. Focuses us on where the ball will be, vs. where it is. Good stuff.
High level quality like always
Thank you for sharing. It seems the editor.py file is missing. Can you explain what's in the editor file, or share the code please? Thank you.
This is pretty cool. I’ll bet this is pretty useful for exploratory tasks, such as API exploration, dataset analysis or research.
Great Job Dan!!
Love it, thanks for the video.
This is amazing. Thank you.
What's the motivation behind building out everything from scratch for Ada instead of using something like LangGraph, CrewAI, or Phidata? From looking through the code, it seems like your "agents as functions" mental model is precisely what these tools landed on as well.
As someone also working on an AI framework (AAA+ Advanced Atomic Agents), I think it comes down to whether you just want a supported out-of-the-box solution, or want to actually create new types of flows, building blocks, and features. All these frameworks are great but tend to lock you in: you need to use their classes exclusively and play by their rules. If you want something new that you fully control, you've got to build it.
Though I hope there will be more unification efforts to take the best of each FOSS framework and compile the ultimate global FOSS AGI #WholesomeAGI 💜
Great Q, this topic is worth a whole video, but the tl;dr is (#3 is most important):
1. To avoid premature abstractions.
2. Control your prompts (lots of magic starts happening when you use LangGraph / CrewAI / Autogen tools).
3. Understand what's going on. It's too early in the "LLM game" to let a framework do the work for you. Do you know what all 50 libraries in your requirements.txt are doing? Nope, neither do I. LLMs, prompts, prompt chains, and agentic workflows as a whole are too valuable to hand off to a library. It's too soon.
4. I have strong opinions on simple, elegant, easy-to-read code. With abstractions, this always breaks down at some point and you have to read all the library's (often not great) docs to know exactly what's happening.
(&scrapForVideoAGWorkflow)
You have more control, can use private models, and can mix models optimized for their specific tasks. Your codebase isn't dependent on a framework's abstractions and limitations, or on the direction it takes in the future, and you can control the accuracy of your output for your use case. Lastly, and most importantly, you get to learn how the technology works, which is useful when new models come out.
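The "agents as functions" mental model mentioned in this thread can be sketched without any framework. All names below are illustrative, and `call_llm` is a placeholder for a real model call:

```python
def call_llm(prompt: str) -> str:
    # Placeholder for a real model call (OpenAI, a private model, etc.).
    return f"[llm output for: {prompt.splitlines()[0]}]"

# Each "agent" is just a plain function: prompt in, text out.
# Every prompt stays visible in your own code; no framework classes.

def summarizer_agent(text: str) -> str:
    return call_llm(f"Summarize in one sentence:\n{text}")

def reviewer_agent(summary: str) -> str:
    return call_llm(f"Review this summary for accuracy:\n{summary}")

# An agentic workflow is then ordinary function composition.
result = reviewer_agent(summarizer_agent("Two-way prompts let agents ask users questions."))
print(result)
```

Swapping models, adding logging, or mixing private and hosted models becomes a change inside `call_llm` alone, which is the kind of control the comments above describe.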
Great video as always dan.
Your video thumbnail is insane!! What image generation model is that?
Ngl it's some pretty deep iterative Midjourney prompting. Lmk if you're interested in image-prompting content.
This was fire!
She sounds way too much like she's not safe for work; I couldn't focus 🙃 Btw, glad I found this channel, seems like gold 💚
Help me solve this problem. Maybe you have a step-by-step installation guide with solutions to common problems?
(myenv) F:\assistent\2>python main9_ada_personal_ai_assistant_v02.py
Traceback (most recent call last):
  File "F:\assistent\2\main9_ada_personal_ai_assistant_v02.py", line 19, in <module>
    from modules import editor
ImportError: cannot import name 'editor' from 'modules' (F:\assistent\2\modules.py)
(myenv) F:\assistent\2>
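One likely cause of the traceback above: Python is resolving `modules` to a single `modules.py` file instead of a `modules/` package directory that contains `editor.py`. The layout and stub below are a guess, since the real `editor.py` isn't shared:

```python
# Expected layout for `from modules import editor` to work:
#   F:\assistent\2\
#     main9_ada_personal_ai_assistant_v02.py
#     modules\
#       __init__.py
#       editor.py   <- the ImportError says this is missing
#
# A placeholder modules/editor.py so the import succeeds
# (the real editor module from the video is not public):

def open_editor(path: str) -> str:
    # Stub only: a real implementation would open `path` in an editor.
    return f"would open {path}"
```

With that package layout in place, `from modules import editor` resolves to the directory instead of the stray `modules.py` file.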
1st :)
I am working on 3-way prompts :)
(You/ManagerAgentClient)
Part of the Advanced Atomic Agents (AAA+) Framework
All in on open-source AGI #WholesomeAGI
That's next, next level.
@@indydevdan would love to share my AGI Bingo Board
@@indydevdan where are you picking Ada's voice from? Thanks
TL;DR? What's the gist?
Dude, none of this works. Not this one or the first one, which explains why you never actually go over the code. All smoke and mirrors.
What are you talking about?
This is awesome! Love it!😎🤩🦾