The biggest reason is that most of the time you want agents to solve complex tasks that cannot be solved by an LLM in one shot. For that, the task must be divided into subtasks, and the agent system should be able to self-organize: spawn more agents as needed and share only part of the information with each subgroup, so the context windows don't get flooded and the agents don't get stuck in a loop.
People want easy solutions to these issues. There are some, like AutoGen AutoBuild, but they are still underdeveloped, and no one wants to spend time writing code to create an agent and specify how it has to interact with the rest.
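Just to make concrete what I mean by dividing the task and scoping the context, here is a rough sketch (plain Python with a hypothetical `call_llm` helper, not AutoGen's actual API): a coordinator plans subtasks, and each sub-agent only sees its own subtask plus a short running summary.

```python
# Illustrative sketch only: `call_llm` is a hypothetical stand-in for whatever
# chat-completion client you use; it is not AutoGen's API.
import json

def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for a real model call."""
    raise NotImplementedError("plug in your model client here")

def plan(task: str) -> list[str]:
    # Ask the model to break the task into independent subtasks (JSON list).
    raw = call_llm(
        "You are a planner. Return a JSON list of independent subtasks.",
        task,
    )
    return json.loads(raw)

def run_subagent(subtask: str, shared_summary: str) -> str:
    # Each sub-agent sees only its subtask plus a short shared summary,
    # so no single context window gets flooded with everything.
    return call_llm(
        "You are a worker agent. Solve only the subtask you are given.",
        f"Shared summary:\n{shared_summary}\n\nSubtask:\n{subtask}",
    )

def solve(task: str) -> str:
    subtasks = plan(task)
    summary = ""   # running summary shared between sub-agents
    results = []
    for sub in subtasks:
        results.append(run_subagent(sub, summary))
        # Compress progress into a short summary instead of passing everything on.
        summary = call_llm(
            "Summarize the progress so far in a few sentences.",
            "\n".join(results),
        )
    return call_llm(
        "Combine these partial results into a final answer.",
        "\n".join(results),
    )
```

Tools like AutoBuild try to automate roughly this planning/spawning step, but you still end up having to specify how the agents interact.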
I also think that agentic frameworks, and the randomness and unpredictability that comes with them, simply take too much time to review. Comparing different LLMs also takes a lot of time. I can't really get my colleagues into AI because they are already overloaded with their tasks. I myself spend a lot of time after work on AI because I find it fascinating. Using it has to become much easier. Apple is doing this now by integrating it into their apps, for example. Thanks for your thoughts; this should generally be discussed more.
agree!
Good news - you slept for a video and great opinion piece.
You should make a different playlist for opinion pieces.
Took the advice 😂
Rightly said
Not really representative or "enterprise", is it? I guess we should take this with a grain of salt because, as they write at the very end:
Methodology
Top 5 industries:
Technology (60% of respondents)
Financial Services (11% of respondents)
Healthcare (6% of respondents)
Education (5% of respondents)
Consumer Goods (4%)
Company size:
ChatGPT is also an agent, which is something most people don't realize.
It's an LLM
@danytops4582 GPT is the LLM; ChatGPT is a chatbot with additional layers, among which the ability to run search and code, so he's technically right!
True, and that's why they don't use ChatGPT but just the LLM behind it.
Not exactly, it's just a tool, and we can't use it for real-time problems/data.
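For anyone wondering what those "additional layers" look like, here is a minimal sketch of the idea (the `ask_model` and `web_search` functions are made-up placeholders, not OpenAI's real API): the LLM decides whether to answer or to request a tool, and a loop around it runs the tool and feeds the result back.

```python
# Rough sketch of a tool-using chatbot loop; all functions are hypothetical.
def ask_model(messages: list[dict]) -> dict:
    """Placeholder: returns either {'answer': str} or {'tool': str, 'arg': str}."""
    raise NotImplementedError("plug in a real model client here")

def web_search(query: str) -> str:
    """Placeholder tool; a real deployment would call a search API."""
    raise NotImplementedError

TOOLS = {"search": web_search}

def chatbot(user_message: str) -> str:
    messages = [{"role": "user", "content": user_message}]
    # The 'agent' part: keep looping until the model stops asking for tools.
    while True:
        reply = ask_model(messages)
        if "answer" in reply:
            return reply["answer"]
        tool_output = TOOLS[reply["tool"]](reply["arg"])
        messages.append({"role": "tool", "content": tool_output})
```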
3:20
This chart suggests no interest in "truth-seeking agents",
which is precisely why those agents will capture or redirect all the others back to the source of origin.
They already are.
Everyone should be mindful of this, or be captured into a "wall of ignorance" where the agents "stall"
or "hallucinate".
This is exactly what happens, and rightly so.
Agents are coming… don't assume they will return the "same" when tasked into ignorance.
This is a serious problem.
Please caution others.
Jeremy
Is it a surprise that people don't want agents in production? Imagine saying 'Go and die, please' to a customer 😅 - like what Gemini said to a guy recently!
Imagine you are McDonald's and this was said to your customers 🤣
@1littlecoder 😀😁