You mentioned that you feel overwhelmed by new technologies and features popping up every week or two. The problem is, these are all half-assed products for early adopters. Products like Claude Computer Use are just alpha versions for early adopters to have fun testing them. Not a single agentic product is actually ready to do things for us. I wonder why I should bother testing technology that will be replaced by something much, much better in 3 months.
Regarding Sam Altman and other CEOs of AI companies, I think we shouldn’t believe everything they say about AI development. Sam doesn’t care about people; he cares about investors. His job is to make sure they feel their money is safe. There are definitely problems with scaling because not a single AI company has managed to create anything significantly better than GPT-4. Not a single company has made a leap similar to the jump from GPT-3.5 to GPT-4.
I think it's normal to encounter "problems" or "good surprises" while continuously evolving a new technology; it's not a matter of slowing down, just a normal process. Keeping up innovation requires many aspects and ideas, not just improving the same thing all the time, of course.
I agree, having an agent call people to retrieve data that is hard to find or store would be valuable, especially for larger companies. I've worked for corporations that had a lot of trouble communicating the right data to clients because the systems did not contain the right data. This could become a multi-million-dollar company if you nail this problem.
AI progress has slowed from a 10x improvement to a 5x :) In my case, generative AI works well; I achieve great results, and I don’t care what others think or say. :)
There is 100% a wall. There is a clear correlation between the compute/time used to train an AI and its % accuracy, which behaves like a calculus limit: as compute approaches infinity, accuracy approaches but never actually reaches 100%.
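The asymptote this comment describes can be sketched numerically. This is a toy model, not a measured scaling law: the constants `a` and `b` are made-up assumptions, chosen only to show a curve that keeps improving with compute yet never touches 100%.

```python
# Toy saturating scaling curve: error falls as a power law in compute,
# so accuracy = 1 - a * compute**(-b) stays below 1 for any finite compute.
# The constants a and b are illustrative assumptions, not fitted values.
def accuracy(compute: float, a: float = 0.5, b: float = 0.3) -> float:
    """Hypothetical accuracy as a function of training compute."""
    return 1.0 - a * compute ** (-b)

# Each 1000x more compute closes only part of the remaining gap to 100%.
for c in [1e3, 1e6, 1e9, 1e12]:
    print(f"compute={c:.0e}  accuracy={accuracy(c):.4f}")
```

The point of the sketch is the limit behavior: `accuracy` is strictly increasing in compute but bounded above by 1, matching the "wall" the comment describes.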
The "killer" app for me is code generation, but the biggest problem I am seeing is that it is not reliable enough for me to be comfortable just letting it loose on a code base.
It would be cool to have a script you could just run and walk away from: it spins up X computer-use Docker containers, works through a list of tasks, saves the results on the host, and then shuts down automatically.
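A minimal sketch of the runner this comment imagines. Everything here is an assumption for illustration: the image tag `computer-use-demo:latest`, the `TASK` environment variable, and the `/results` mount point are hypothetical, not a real published interface; by default the script only prints the commands it would run.

```python
import shlex
import subprocess

# Hypothetical runner: one container per task, results bind-mounted to the
# host, containers removed (--rm) after they exit. Image tag is a placeholder.
IMAGE = "computer-use-demo:latest"

def docker_run_command(task: str, results_dir: str = "/tmp/results") -> list[str]:
    """Build the `docker run` argv for one task."""
    return [
        "docker", "run", "--rm",
        "-v", f"{results_dir}:/results",   # save output on the host
        "-e", f"TASK={task}",              # hand the container its task
        IMAGE,
    ]

def run_all(tasks: list[str], dry_run: bool = True) -> list[str]:
    """Launch one container per task; with dry_run, just print the commands."""
    commands = [" ".join(map(shlex.quote, docker_run_command(t))) for t in tasks]
    for cmd in commands:
        if dry_run:
            print(cmd)
        else:
            subprocess.run(shlex.split(cmd), check=True)
    return commands

cmds = run_all(["book a meeting", "summarize inbox"])
```

With `--rm`, each container cleans itself up on exit, which gives the "shuts down automatically" behavior; a real version would also need per-task timeouts and error handling.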
I agree that the GPT paradigm slowing down (if true) would not be a bad thing at all, quite the opposite: we are not capable of adapting to such a pace of change, so it could be a bad thing for investors, but not for society. But is it true? Isn't it surprising that the "signs of slowing down" signals come mainly from investors, precisely at the moment OpenAI decides to switch to test-time compute as a "new paradigm", when they have limited access to the energy that would allow them to train next-generation models? I wouldn't be surprised if this were a PR campaign that shifts the flow of money away from those who want to keep scaling up and toward OpenAI, which proposes a "new paradigm". And I am not a fan of this new approach: yes, it increases reasoning capabilities (symbolic), but it diminishes creative capabilities (associative), and the whole leap we made was thanks to the latter.
This is like PS2 to PS3. The graphics didn’t get much better; they just refined the details and got bigger. No one is saying there is a graphics slowdown. Don’t get me wrong, things have gotten better; the leaps just haven’t been as substantial between generations, which is why people jumping from the most recent generation to the new one don’t see a huge leap. It’s like Xbox One to Xbox One S: they aren’t even renaming them because the changes are small and incremental.
Very interesting to think about the evil things one could do with this AI agent. One could have the AI call in a fake hostage situation at someone's house and have them killed by SWAT, and law enforcement probably wouldn't be able to trace it back. Scary.
Altman is overhyping the velocity of advances in AI, as are other participants, so any loss of confidence is on them. At Web Summit, every investor wanted to find the unicorn, and they were all depressed by the hype versus the reality. Some were making bets on wrapper-tech companies, but only out of FOMO; a few might prove out, but all had fairly weak or no business models, many had no chance of scaling, and table stakes are just too high currently. Yet all seemed willing to ignore the costs of execution if their product did scale. AI will have to go through its peaks-and-valleys phase, and that's good, because it accelerates competition and forces people to focus.
Your views on general stuff are appreciated. I suggest you do them regularly, maybe once every two weeks depending on the speed of news? Thank you.
Scaling AGI will slow down; the power consumption of compute will play the most important part in scaling frontier models.
Total Recall (infinite memory) in 2025 will be a game changer.
If it just stopped now, the tools it has given us have already leapt us forward 50 years. Yes, 50 years. In about 50 million niches.
Another awesome example! Is this demo code available for us members? I didn't see it on your GitHub yet.
Growing. Hands down.
I wonder if recording a phone call and processing it through a cloud service without consent can be legal.
Software advancements have outpaced the hardware, and hardware is now the true bottleneck.
There aren't enough Nvidia machines to improve the LLMs. Lots of companies wanting the same thing, and not much competition in this market.
Why are you speaking with a fake bot Italian accent?