Should I do a follow-up tutorial and show how to do everything locally?
Of course. Now that you have a powerful AI machine, I don't understand why you even bother using a paid API.
Full local would be awesome sauce
Absolutely. I am going to try this next week. I have a 4060 Ti (16GB) GPU and am loving running Llama 3 8B uncompressed. So good! Hope I can get this running. I've not set up a Python environment before by myself, so it should be interesting.
Of course, we would like to see a detailed demo.
No, but you should always post the link to the product or service you're demoing. I know you think that by not posting the link it forces people to come to your website and then find the link, but in reality people just google the name and go directly to it.
An Ollama tutorial would be amazing. Please consider doing it. Also, can you go through the STORM paper? That could be an interesting series going through key AI papers like ReAct, STORM, etc.
yes, please
Nice! It would be good to know how much the API calls cost for the dogs wiki just for a sense of scope.
🎯 Key points for quick navigation:
00:00 *🧠 Introduction to STORM AI*
- New Stanford project for creating comprehensive Wikipedia pages on any topic
- AI agents research and reference sources automatically
- Can be run locally with limited functionality
02:05 *🛠️ Installation and Setup Process*
- Cloning the GitHub repository
- Creating and activating a Conda environment
- Installing required dependencies
- Setting up the secrets.toml file with API keys (rough command sketch after this summary)
04:50 *🖥️ Running STORM AI*
- Using the Streamlit-based UI for easier interaction
- Demonstrating the research process on the topic "dogs"
- Showcasing the generated article with references
06:53 *🏠 Local Setup Options*
- Possibility of running without OpenAI API
- Support for vLLM with Mistral
- Potential for using Llama (not fully implemented)
Made with HARPA AI
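For anyone wanting to follow the setup steps summarized above, here's a rough sketch of what they look like in a terminal. The repo URL, environment name, Python version, key names, and demo path are my assumptions, so check the README in the repo before copying:
git clone https://github.com/stanford-oval/storm.git
cd storm
conda create -n storm python=3.10   # a commenter below reports 3.10 resolved missing-module errors
conda activate storm
pip install -r requirements.txt
# create secrets.toml in the project root with entries along the lines of:
#   OPENAI_API_KEY = "sk-..."
#   YDC_API_KEY = "..."   # assuming You.com is the search backend, as in the video
# then launch the Streamlit demo UI (exact path may differ, see the repo)
streamlit run frontend/demo_light/storm.py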
Incredible, what he was able to achieve in a short time is 99% usable. As always, you have to read it and correct it, but it was damn good.
I've been using it for a couple of days and it's amazing. You need to rewrite it of course but it can write amazing articles
I found it awful. Asked it about a specific fish and it wrote about another animal altogether yet claimed it was about the animal I asked for, faked references that lead to 404 pages, and added articles that don't even mention the animal while claiming they do. 😅 Nice idea, terrible execution.
Amazing!!! Thesis research is never gonna be the same!!!
I have been loving STORM. Although I wish I could modify the format further and not just generate Wikipedia-style output.
I bet you could relatively easily attach a web front-end model to generate a better looking output. He was raving about the wiki output, and I was thinking it looks like a blob of text; but if the information is good, then formatting should be fairly easy to solve.
This is awesome! I have a goal to get some AI tools running locally, but I'm still learning how to use them on the web.
OooooooOOOoooo yeah need this local. This structure is really helpful for a few other greater implementations.😮
Stop asking and DO FULL TUTORIALS !!!
Yea, full local tutorials please. Even though Matt has a beast comp and I don't.
Hehe, it's easy engagement call to action fodder... Carrot on a stick if you will. If he doesn't ask I am let down. 😂
Engagement... Blame the Google smooth brains for this useless algorithm.
The smooth brains are the ppl like y'all who encourage it. This is a cancer.
At least pretend to have some self respect
This is great. Amazing technology and a wonderful quick video showing it in action.
Your videos are always enjoyable. I have some critiques of the app related to authoritativeness, but the guys smart enough to create the app are almost certainly also smart enough to anticipate such issues. Quibbles aside, it's potentially a Wikipedia killer.
I really like the way you speak, and you always try to be as objective and impartial as possible. Thank you! 😅
Yes a local tutorial for this would be great!
Super cool, but a wiki is not Wikipedia; Wikipedia is a wiki. Please don't mix up these terms.
Difference?
Use this prompt to test reasoning with a known quantity with contextual inference
John observed two snakes headed for his yard from the driveway. One was red and the other was blue. The blue snake passed his yardstick and was wider than the yardstick. The red one passed the yardstick and was longer but not wider. John caught one of the snakes and it measured 9.75 inches. Which snake did John catch?
Thanks Matthew, appreciate your kind efforts. Hope to see the local video made by you; it would be great ❤️
Thanks, great as usual. I'd sure like to see the full local setup (local model) of that project.
This is fantastic! I can already think of using this for blogs and articles. Hopefully the sources it references aren't full of misinformation! 🤣
I ran into a problem indicating a couple of modules were not available. I tried to install them but found they only existed in Python 3.10. When I recreated the conda environment with python=3.10, everything worked fine. Great results!
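In case it helps anyone hitting the same missing-module errors, the fix described above amounts to recreating the conda environment on Python 3.10 and reinstalling the dependencies, roughly like this (the environment name is just an example):
conda deactivate
conda remove -n storm --all        # drop the old environment
conda create -n storm python=3.10  # recreate it pinned to Python 3.10
conda activate storm
pip install -r requirements.txt    # reinstall the project dependencies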
There is a good bit of overlap between this and Perplexity AI. I use Perplexity non-stop!
Same, Pro is a beast. I stopped paying for the GPT sub and rock with Perplexity, Claude Sonnet, and the Claude API with Claude Engineer non-stop.
Perplexity is awesome
i talk to perplexity more than i do humans nowadays lmao
It was awful a few months back, all incorrect info, and their devs got very defensive when I highlighted it. Did they finally manage to get it working with some level of accuracy?
Excellent data, I'm impressed by the versatility of SmythOS's pre-built AI templates. It's like having a personal helper! #SmythOS
Hi bro, I am watching the Groq+CrewAI Rerun and didn't know where to comment, but just saying THANK YOU!
Thanks! Do you know approximately what the total token usage or API cost is to run each research instance?
If they make a tool that uses papers from top journals as references, it could be valuable.
full local tutorial would be great!
Race ya to a fully local version :)
This is great, I'd love to know how to create an agent swarm workflow with this included.
Thanks for this video, looking forward to using ollama and local models.
It would be extremely useful for literature review when writing articles. I can't wait to see how AI is going to change the way journal papers are published.
It's not good. Try it on their website: it looks good until you read the content of the references and find it's unrelated info it tries to claim as relevant (well, the ones that actually exist and aren't 404 links to imagined ones).
Yes, please do local with ollama... I want to then see it running as a tool for a mixture of agents within something like AutoGPT
Great find! Let's see if you can do a local version; using Ollama with something like Llama 3 would be amazing!
Thanks! Please do local installation!
Great video Matt, can you please do a video on STORM working with Mixtral and Ollama?
Please create more videos about STORM. Thank you
What I want to know is whether it can fact-check itself so its output is reliable. How much does it suffer from hallucinations and errors?
If an article is wrong, the result might be wrong too I guess.
Really bad, just from trying their website demo. Hallucinated references leading to 404 pages and gave incorrect info. I asked it to write about the lunare wrasse and it wrote most of the article copying a blog post about a six-line wrasse 😅 then banged on about marine mammal conservation when this is a fish, and stitched my fish into articles that don't mention it at all. 2 out of 10.
I would appreciate it if you could also disclose the rates of these API calls. You have an amazing hands-on channel, and costs are an important factor in production; with your entrepreneur background you understand 😉
Wow, impressive, the most valuable work is done by yet another AI tool.
Yes please do the local tutorial!
Yea, local. Thanks for all the great work you do. All the best.
Matt, please do an Ollama or llama.cpp tutorial, I really want to run this totally locally and I don't have an OpenAI API key anyway.
Extremely helpful! Thank you! 😎🤖
FULL LOCAL, including SCRAPE pretty please
But did anyone manage to point it at another "knowledge base"? The DSPy framework under the hood is still a black box for me. But the approach is really awesome; I like to cite STORM as an example of what content can be created by AI.
Thanks for the video! Let's try to run it with LM Studio :)
While this is cool, what is the use case here? Just building another wiki page that sources wiki pages? I think modifying this to web scrape and build blog posts or knowledge bases for internal company training would be interesting, but it needs a good bit of work to modify.
Would be interesting to see how good the results are using a smaller local 7B-9B parameter model. Do the smaller models affect the quality of the subtopics or reasoning the program uses to generate the research pieces, or how well the paper is cleaned and deduplicated?
How many OpenAI credits did you end up spending on the dog research wiki?
I suspect as many credits as he spent on the meth research article.
In for the full tutorial
Absolutely..Local!!
4:40 that is a problem. Sorry, but I can't trust an API service that doesn't allow for the deletion of keys. That is pretty basic functionality that should be there.
Imagine trusting an LLM, especially GPT-4o, and comparing it with Wikipedia; that's mental. You guys trust LLMs too much.
Nah we are good. Just remember: “LLMs can make mistakes, double check important info.”
Hi sir, can you please pin this message and create a video on how to use micro-agent, GPT Engineer, and Agency Swarm with open-source LLMs like Groq, Gemini, and Ollama?
Why are my comments getting deleted that mention the search engine requires you to provide a card and says it's free for 60 days? What happens after that?
Not sure why... and after 60 days my guess is you have to pay.
@matthew_berman I guess so, but how much? Paranoid me says 1 million, still... I had to try it, so I am in the 60 days now. 🙈
The YT comment filter has been f'ing up a lot recently; there are some channels I'm not even allowed to comment on anymore despite the channel owner being completely fine with me, and on other channels I have to keep checking in a private window whether my comments have survived or not. To make matters worse, direct links to comments don't always work anymore, so I sometimes have to sort by recent and scroll through to see if I can find them, and sometimes there's a delay before they show up or before they're gone...
AIpedia is cool. )))
Please, make a full tutorial. It'll be interesting.
Hi Matthew, how was your visit to Qualcomm? Any video on that?
Hi Matthew. Do a tutorial using Gemma 2 9B
I would like to see a full tutorial, for example with Groq.
Amazing! what a time to be alive
If they add SearXNG, you can even self-host the web search.
Local tutorial yes
I created a blog from your video using my AI agent platform in minutes...
Amazing!
FULL LOCAL NOW
As a conspiracy buff, I'm interested in everything from pseudoscience to the more cringe fringe theories. Think how cool it would be to scrape the internet, categorizing and sorting information. It's almost like panning for hidden clues. Can't wait! :D
“Computer, show me the Early Life section for the 100 largest donors to the RNC”
nice. I would run this in a heartbeat if it works with ollama
Wow! How much RAM would this take, please??
Thanks again. Local version if you can please.
How does it compare to GPT Researcher? I would say that this is probably more comprehensive.
Not sure, but this generated an article for me that was full of hallucinated 404 references and incorrect info. I asked it about a specific fish, the lunare wrasse, and it wrote about another fish entirely, mostly from one person's blog about that other animal, yet claimed otherwise.
How does the research this performs compare to doing research with Perplexity AI’s Pro Search with a paid plan?
Hey, why would I need it locally? I just tried it - it works as is. Do I get better results when it's local?
is there a tool like this but for creating code documentation for your applications?
A comment for Ollama tutorial!😄
Would you happen to know if it handles SerpAPI as opposed to You.com?
Does it work with Groq??
Please, full local tutorial!
LOCAL! LOCAL! LOCAL!
pls make a deep local tutorial
I feel like you should've shown making an article that doesn't exist already
Fix your audio please, it's very quiet in the last few videos. When I put it in DaVinci and set it to the standard level, I have to add +10 or +12 dB to make it sound normal like your other videos.
Some black magic shit we got coming down the pipeline 😂 let’s gooo baby
Please upload a video on local LLM support with an Ollama setup.
The ironic thing is that installing any AI project feels like something from the '80s.
I know, right? It's pathetic
hey! what about VectorVein ?
I tried the demo, and even though I asked it to do the research with an Italian term, it replied to me in English. The contents were excellent, but is there a way to keep the original language?
What is your terminal setup?
Amazing,
ADD THE PUBLIC TEST LINK TO THE DESCRIPTION
Instead of running STORM locally, you can try it out on the website, storm.genie.stanford.edu/
Tried the demo site; it never completed my request even after waiting all night.
Worked today, but the article was full of inaccuracies when I read the referenced articles, which are often not related to what it's paraphrasing; if he checks some of his article links, they might actually be about cats, not dogs. It also imagines links which go to 404s.
I wonder if this could work with LM Studio and Firecrawl. 🤔
Dope
I think it needs to output in paragraph form
Where's the Ewok you promised?
Not impressed. Of course, I only used the online demo and I have very high standards (30+ years as a magazine writer and editor), but it failed to follow the details of my prompt and went its own way on the general topic. I'll install locally (please do that tutorial!) and keep testing, but the output I got wouldn't pass a high school English class. Maybe good enough for the average blog but not much more. Eager to see how it develops.
You didn't say how much it would cost on OpenAI?
local would be great yes please
please create full local tutorial.