I have been using Perplexity for a while now. Over the last two weeks I've noticed my search results lean very ideologically left. Very disappointing.
The results look too busy. They should put the sources at the bottom instead of the top, and only show them if you click on something, to clean it up some.
Content creators, especially those that write, should put their content behind paywalls then. Sounds like embedded ads are dead. Platforms like Medium and Substack are for people that want real writing that a shallow AI cannot produce: new ideas, new interpretations of abstract concepts, life lessons through personal experience, etc. Informational articles that complement those should be put behind paywalls as well (e.g. a quitting-alcoholism life story and the complementary "top ten foods to repair liver" article). It was already going in this direction, but it should be accelerated.

I'm biased, as I do write, and generative-AI-written content is so obviously bad to anyone with a shred of sophistication in their taste. I've been envisioning the hype train of generative AI coming to a halt, and I feel it starting to happen. People are starting to see through this stuff. The diversity in "ideas" it comes up with is starting to flatline. These things are eventually going to have to create something from nothing: engage in negative entropy of information without an outside causal force to order it into something new (i.e. new human-made content will be put behind a paywall and made unavailable to the AI).

I suppose abstract art generated with AI image models is pretty cool. It's especially useful for generating throwaway thumbnails for written articles. But how can that be monetized by anyone long term? There are only so many ways to mash up abstract art before it is like shovelware. Maybe product design? Abstract product design is about as useful as abstract writing: only so much utility in transforming existing ideas and concepts.

The whole industry looks more and more like a tulip-bulb mania with each passing day, mainly designed to sell GPUs, justify layoffs, and (whether intentional or not) dumb down society even more than social media already has with its digital echo chambers.
I think their ultimate goal is to get people complacent with the idea of a machine making their decisions for them, so that they give in to a court system and government run by a supposedly infallible AI. A ruse, since that "perfectly rational" AI will be directly controlled by an oligarchy of human operators. And you'll never be able to question them and their decisions, because everyone will be convinced the machine is always right.
One thing that sorta annoys me with these projects is they are basically taking content that someone else produced and then giving it directly to the user. How is anyone supposed to make money and produce the content if an AI is just going to come along and take it? I have a feeling there will need to be some sort of protocol to stop these types of projects from being able to scrape your website. This will end up killing off content creation.
Well, that was the problem with early-age patents: they respected human value, but we can see how such a perspective stops progress in every field. Just look at the amount of content created in so little time once AI took over; it shows we need other ways to respect creators than having them get creator cuts, because in reality it's possible a creator never has another great idea, and it stays stuck with them until shared.
As for building a fake OpenAI API, you could reference this (th-cam.com/video/voHTS9Nk5VY/w-d-xo.html&pp=ygUcaG93IHRvIGJ1aWxkIGZha2Ugb3BlbmFpIGFwaQ%3D%3D), but wouldn't LM Studio work just fine?
A man actually providing value in a sea of click baiters. Thank you sir
embeddings are already updated to be local. This project is moving fast!
Really
Thank you so much for featuring this, Matthew. Another amazing video as always, so well done! ❤
For those interested in Ollama support - I made a push with some updates yesterday that should make setting up local inference more clear! ❤
@@DevelopersDigest Does that include the embeddings that Matt is referring to at 6:33?
@@DevelopersDigest Still not working
absolutely fantastic job
I've tried a lot of the projects you've featured. Thank you so much for staying on top of this stuff and making these presentations, I really appreciate it!
This is LONG overdue. Thanks!
Thank you! I love your videos about new open-source projects! I got bored of the AI news content lol. Hope you have more to come!
Perplexity is awesome, but I still prefer Krawl AI over Perplexity. It's more work-focused and I can get a lot more things done using it because of the "tools" concept they introduced. Plus they are adding new tools to it almost every other week.
Use LM Studio instead. That might be better; it also uses the OpenAI API endpoint format.
Matthew you are on fire! Really. Love your videos!
Great instruction as always Matthew. Thanks 😀
Useful. Not click bait. Thank you. Much appreciated
UPDATE:
Now there is an "app/config.tsx" file where you can specify which AI engines to use, and Ollama can be set as the default if you enable it.
Please READ the comments in the file for setting up/using other models/services.
So near, Matthew, and yet so far. Good luck getting it sorted.
Wow, I didn't know Perplexity had advanced to do web search like that now.
Thank you very much! I love your open source tutorials! ^^
Let's hope this gets momentum and with some community help they manage to get rid of some of the paid APIS. By the way, since your video release the github repo almost doubled its stars! :D
We need an open source project like Chromium or Firefox that allows us to use a local LLM.
It's LangChain; you can do Nomic or SFR embeddings.
THANK YOU!!! Please keep making videos!
1:57
So it's not actually open source: it uses Serper.
The barrier to entry on any kind of search is always the search API.
Until we have a reliable approach to breaking Google's terms of service with open source search bots, open source search will not be a thing.
Edit the configuration file `app/config.tsx`. You can modify the following values:
useOllamaInference: true,
useOllamaEmbeddings: true,
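For reference, here is a hypothetical sketch of what that part of `app/config.tsx` might look like. The two flag names come from this thread; everything else in the sketch is an assumption, so check the real file in the repo:

```typescript
// Hypothetical sketch only; the real app/config.tsx in the repo is authoritative.
// The two flags below are the ones mentioned in this thread.
export const config = {
  useOllamaInference: true,  // run chat completions against a local Ollama server
  useOllamaEmbeddings: true, // compute embeddings locally instead of via a paid API
  // other settings (model names, endpoints, etc.) presumably live here as well
};
```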
Thanks!
Thank you!!
So I still need OpenAI and Groq for this? Paying to surf the web? Then what's the advantage over Perplexity? What am I missing here?
Yep, you got it right. However, if someone has a few days of free time, you can take the code and fix it to use ollama + duckduckgo search + local embeddings.
[edit] Perhaps turning off streaming in ollama would help.
You can run it with local models. The embeddings are the only thing that cost money
No idea, you could do RAG in Langchain forever, search the internet as well.
Actually there is a flag you can change if you want to run the embeddings with Ollama as well.
The configuration file is app/config.tsx. You can modify the following values:
useOllamaInference: false,
useOllamaEmbeddings: false,
@@endlessvoid7952
@@endlessvoid7952 any guess what the economics would work out to be? What percent of the load is taken by the local model and what percent by the embeddings?
Wouldn’t it be better just to use a custom search engine (CSE) with Gemini? It’s free that way.
Off topic, but I cancelled my GPT Plus subscription for Perplexity Pro, at least until GPT-5 is released. You get GPT-4 access in Perplexity too; the only thing missing is Code Interpreter. But Claude's Opus is a bit better than GPT-4 for reasoning and planning. Great vid Matthew, thanks.
Other channels have warned that Mistral is sketchy because they don't provide enough clear information about its sources. I wouldn't want to use it as part of my query.
Open source, but costly with all the APIs you have to subscribe to. It's not a search engine replacement in my mind because of this.
Yeah, this is what we see in the LocalLLaMA subreddit... people don't get the keyword "local"...
"All the APIs"? It's ONLY ONE API, and you can get a Groq API key for free (you would know that if you had seen the video about it on this channel). Also you can run it through your own LM installed locally. Are you actually sure you saw the video? Because I think you only saw a small piece of it and threw lies into the comments.
@@rootor1 The Serper API is paid. The OpenAI API is paid. For the Grok API you have to be an X Premium subscriber. If I am wrong then point me in the right direction. You obviously have no idea what a lie is. The above was my opinion.
@@binaryatlas Groq, not Grok. Completely different.
@@Joel_M yeah my spelling leaves room for improvement
Oh, I love this so much. Our only other alternatives before this were Searx and GoDaddy, which are not built for LLMs, just hacked into LLMs. This is awesome.
Great video. Please consider making more long form videos like you did with CrewAi. Keep it up Sir
💥 Matthew, I'm super grateful that you used LM STUDIO as the interface to install your latest projects and LLMs. As a person who doesn't know how to code, like billions of others (I'm not even a developer, I'm a musician), I like to try these new AI tools and test them for myself. Some other methods are too complex for beginners like me. I think the more you cater to beginners, the better your channel will grow. Your channel shouldn't be only for developers; if you want success, you need to teach and explain everything even to a beginner. And thank you for that. 🎉❤❤
But LM Studio is sadly still closed source, so that is a drawback from full openness.
Another excellent video.
Thank you very much. it is extremely helpful
Yeah ❗ Open Source "All the Things" 👍
It's truly impressive how much effort the developers are pouring into this project. When considering the scale and depth of their work, providing an installer seems like it would be a reasonably small addition by comparison. Yet, it's perplexing why they haven't opted to do so.
It makes one wonder about the challenges or considerations they might be facing that we, as the audience, are not privy to. Is it a matter of prioritizing resources, or perhaps there are technical hurdles that complicate the inclusion of an installer? It would be insightful to understand their perspective or any potential limitations they might be navigating.
Nonetheless, incorporating an installer could significantly enhance the user experience, making the project more accessible to a broader audience. It’s an aspect that, albeit small in the grand scheme, could fundamentally shift how users interact with their work from the outset. I hope this feedback reaches the developers and that they might consider the immense value such an addition could bring to their already commendable project.
Using this project already requires you to set up 4 other API keys, so you need to open a few pages in the browser and create 4 accounts. As simple as this is, nontechnical people would find it too complicated and would not even bother. And that's just going through a simple sign up.
Technical people don't have problems with it but then they also don't really need an installer. What you want is for the devs to "productize" this. But I don't think their goal is to make money. Turning an idea into product is a whole lot of work and there already are existing products in this space (perplexity, iask, consensus etc). Usually, the "installer" in open source projects is as simple as a list of commands in the readme file.
Often people opensource their work because they are interested in sharing their ideas and they want to get feedback and be involved in a like-minded community. I'm building a similar project myself and my main goal is to learn how these tools work, peek under the hood. I don't really need anyone using my tools but since money isn't my goal, I also don't mind sharing my code with the world. But that sharing comes with caveats - I've spent a lot of time writing this and I expect others to spend some time learning as well if they want to use it and don't know how.
It boils down to your interests. If you want to be on the cutting edge, you need some fundamental programming and engineering skills, at least enough to follow the readme steps or look inside the code and figure out how to run it on your own. If you don't want to learn these skills, that's completely fine, but your only option is to use the products that are maintained at a higher level. The same way I depend on Ikea for my furniture because I have no carpentry skills.
honestly, for tech people this is as simple as it gets
Super excited about open source agents for automating repetitive tasks like Rabbit R1 does. I mean we all want an autopilot for windows, doing the same tasks again and again is just too draining for the human soul...
Thanks a lot for the video. Can we run this with the Serper API alone? The Brave API is costly! Please help with this query.
Use litellm to expose ollama as OpenAI compliant API
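To make that concrete: litellm's proxy (and other OpenAI-compatible front ends for Ollama) accepts the standard chat-completions request shape. A minimal sketch of building that request body; the model name and URL below are placeholders, not something from the video:

```typescript
// Build an OpenAI-style chat-completions request body that an
// OpenAI-compatible proxy (e.g. litellm in front of Ollama) accepts.
// The model name and URL below are placeholder assumptions.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatRequest(model: string, messages: ChatMessage[], maxTokens = 512) {
  return {
    model,                 // e.g. "ollama/mistral" when routed through litellm
    messages,
    max_tokens: maxTokens, // cap on the response length
    stream: false,
  };
}

const body = buildChatRequest("ollama/mistral", [
  { role: "user", content: "What is an answer engine?" },
]);
// then POST JSON.stringify(body) to something like
// http://localhost:4000/v1/chat/completions
```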
Thanks! Great tutorial. I have a question though! I'm running a web server on Ubuntu (HestiaCP); how would I go about installing this on that server and accessing it through a domain name? If that is even possible?
Can you do a video that explains in a bird-eye-view fashion all the AI ecosystem and possible combinations that one can do?
Open Source for the world !! No more proprietary corporate crap please..
So no more OpenAI? No Claude3? Sometimes you get what you pay for.
I just wonder when Matthew will discover Pinokio, the ultimate open source AI tool.
😍🤟Thanks as always for your thorough explanations-you really are the best! I noticed that Perplexity has either changed its operations or introduced a new feature that makes it resemble a multi-agent model. Could you explore this change in an upcoming video? 🧠😎
/v1 probably will not work with ollama. Try /api ...which probably won't work for a different reason. I don't think that it supports openai format directly. I heard that it was going to, so maybe still try.
ps - I would love to get your take on the privacy policies of the big API services. They appear to take an unlimited license to use anything you put in there. That they would even keep the contents of your prompts (rather than perhaps a token count, which I would still begrudge) should be alarming in its own right.
Thank you for your videos ❤
Could it be that the context window of Mistral is too small? To display useful text the code pulls info from the web first, and the context window assumed for that may be bigger. Personally, the times I have had an LLM receiving requests but not returning, with Mistral, have been with prompts exceeding the context window.
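If that's the cause, a crude guard is to estimate tokens and trim the retrieved web context before prompting. A sketch (the 4-characters-per-token ratio is a rough rule of thumb, not a real tokenizer):

```typescript
// Rough guard against overflowing a model's context window:
// estimate tokens at ~4 characters each and trim the retrieved
// web context so prompt + context stays within budget.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function fitContext(context: string, prompt: string, windowTokens: number): string {
  const budget = windowTokens - estimateTokens(prompt);
  if (budget <= 0) return ""; // the prompt alone already fills the window
  const maxChars = budget * 4;
  return context.length <= maxChars ? context : context.slice(0, maxChars);
}
```

With something like this the app would at least degrade to truncated context instead of silently hanging on an oversized prompt.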
makes sense
So cool, thanks for sharing. Running completely locally, except for web search, would be really nice. The Ollama issue, not explored yet: could it be the response setting, JSON or not?
Matt, have a look at Perplexica (open Source).
People should type like they write: capital letters, and a full stop at the end of each sentence.
Tone is everything. Tone can be improved in a few hours. Tone is what makes the difference.
Tone can be improved.
I think openai could not be easy replaced with mistral via ollama. For example pythagora nedds a proxy. Try LM Studio instead. Thanks for you work!
This is a dumb question. When you say you're using Mixtral on Grok,
what do you mean? Aren't those two different models that you just run independently? I'm new to this and I don't understand it very well.
Mixtral on Groq, not Grok. Groq is an inference engine.
@@abinzacharia thanks. That makes sense now
I bet the issue is that the LLM hasn't been given a max_tokens limit or any terminal condition. So it keeps answering and answering, and for some reason it doesn't flush the output.
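For what it's worth, Ollama's request options do expose knobs for exactly this: `num_predict` (its max-tokens cap) and `stop` sequences. A small sketch of building such options; the specific values are arbitrary examples:

```typescript
// Ollama's request "options" support num_predict (a cap on generated
// tokens) and stop (strings that terminate generation when emitted).
function ollamaOptions(maxTokens: number, stop: string[]) {
  return {
    num_predict: maxTokens,
    stop,
  };
}

const opts = ollamaOptions(256, ["</s>"]);
// include as { model: ..., prompt: ..., options: opts } in the body
// sent to Ollama's /api/generate endpoint
```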
Nah, you could do open source free search... something like SearXNG for search aggregation would be free and privacy-respecting.
Thanks for sharing! Can you please explain how an answer engine can be better than CrewAI/AutoGen for questions/tasks like market research?
So it still uses Google search and you pay via an API instead, so you just have to like the interface, I guess? And why OpenAI and Groq?
Could be a CORS issue; try sending proper headers from ollama serve.
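If it is CORS, two common fixes: Ollama reads an `OLLAMA_ORIGINS` environment variable to allow extra origins, or you can proxy through your own backend and attach the headers yourself. A minimal sketch of the latter; the values here are just permissive examples:

```typescript
// Build permissive CORS headers for a backend route that proxies
// browser requests to a local Ollama server.
function corsHeaders(origin: string): Record<string, string> {
  return {
    "Access-Control-Allow-Origin": origin,
    "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type",
  };
}

const headers = corsHeaders("http://localhost:3000");
// attach these to the proxy route's responses, including the
// OPTIONS preflight
```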
It's not free. Each API call costs $$$$.
No, you can use a local model on your computer.
Not if you use local Ollama.
@@Fiqure242 No: Serper costs money, Brave costs money, and with a local model the results suck. Good luck.
@@joefawcett2191 Local models cost energy, and that is money.
They all have a free plan with a limited offer; try it.
Matthew you boosted the stars 😂
why do we need embeddings?
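Short answer: embeddings turn text into vectors so the engine can rank which retrieved web snippets are semantically closest to your query. The ranking itself is usually just cosine similarity:

```typescript
// Cosine similarity between two embedding vectors: 1 means same
// direction (very similar text), 0 means orthogonal (unrelated).
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

cosineSimilarity([1, 0], [2, 0]); // → 1 (same direction)
cosineSimilarity([1, 0], [0, 1]); // → 0 (unrelated)
```

Real embedding vectors have hundreds or thousands of dimensions, but the scoring works the same way.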
so is this a nail in the coffin for informational websites that relied on search engine traffic for their sustenance?
yeah those are 100% dead
@@pedrocintra8915 right. unless they can pull traffic from somewhere else. Social media, links on other sites, ads. Seems the entire landscape of search is about to change.
It's the birth of a new model (quite literally). I suspect SEO will become even more important than it is now since it's based on similar algorithms to LLMs. The fight will be to show up in LLM searches instead of paginated engines like Google.
@@ezeepeezee Very interesting, and I agree. Do you think LLM search will veer towards citing its sources? The clickthrough on the sources will be low if the meat of the answer is on the results page, though, so it doesn't incentivize content creation. Also, the LLM-based search engine will itself need to use a crawled index in order to pull up results. ps. My comments are being deleted, will this one stick?
You can expect Boomers+ to continue using Google just like they continue to watch legacy corporate media for news. The shift will take 3-5 years or so.
Appreciate this. Not working for me as it's trying to use Azure. Presumably they're in the process of adding Azure support at the moment: ⨯ Error: Azure OpenAI API instance name not found
at eval (./app/action.tsx:52:18)
at (rsc)/./app/action.tsx (/Users/j/src/llm-answer-engine/.next/server/app/page.js:4561:1)
at __webpack_require__ (/Users/j/src/llm-answer-engine/.next/server/edge-runtime-webpack.js:37:33)
at fn (/Users/j/src/llm-answer-engine/.next/server/edge-runtime-webpack.js:318:21)
at eval (./app/layout.tsx:14:65)
at (rsc)/./app/layout.tsx (/Users/j/src/llm-answer-engine/.next/server/app/page.js:4583:1)
at __webpack_require__ (/Users/j/src/llm-answer-engine/.next/server/edge-runtime-webpack.js:37:33)
at Function.fn (/Users/j/src/llm-answer-engine/.next/server/edge-runtime-webpack.js:318:21)
What are the hardware specifications needed to run it on a local PC?
Without this knowledge (is it even possible?) it's pointless declaring it open source if everyone is using multiple APIs from Microsoft- and Google-owned companies.
Do we have to use OpenAI, or can we roll with Groq only?
npm install didn't work well; it threw lots of errors. I tried bun install and it worked 100%.
So how is it different from Phind?
You have to use the OpenAI service. How come this is open source?
Does it work on Windows as well? If so, does it have a lot of issues?
The API key for Ollama is N/A.
We need full disclosure before the video starts if any paid keys etc. are required. But other than that, good video.
googles to afraid to loose advertisers to help people
too afraid to lose you mean
This is not going to happen; it costs money for all these APIs, and Google is free.
Can't we use LM studio for the server instead of Ollama?
Yes
missing the last 'e' for the code link
thank you!
@@matthew_berman you should pin this 😉
I love these projects but I'm so tired of setting myself up to be beholden to openai
How to integrate in laravel php website? Please guide me 🙏🙏
what's the best way to install each of these FOSS LLMs in a 'PC' on the cloud, instead of on my local machine?
Few YouTubers are keen to cover actual open source AI, because their sponsors won't be happy. This 'open source' is rather pointless since all the APIs it uses are closed source and owned by Google and Microsoft. Clickbait on a technicality. Following the logic used in the title, you could say all AI and the entire internet is open source if you use Firefox or Chromium to view it.
@@mq1563 Still, it is good to experiment with different projects on a VM, no?
@@Nuninecko Closed source AI, nearly all of it owned by Google, Microsoft, or Amazon, isn't exciting; it's terrifying.
Fully local and I'm in 🎉
tell us what you did
how?
Highlight the text area. The text is the same color as the background. That is what happened to me.
Make this easier
LOL what channel did you think you were watching?
you can try lm studio openai api emulator instead of ollama
Won't install on my Pi 5. One day. When AI runs well on low-power SBCs, then worry about robots taking your jobs; but they all need power. Solar and wind?
Do you think all this can be dockerised ?
There's a dockerfile in the source code now 👍
OpenDevin? Is this an upcoming video? Or that sponsor from a few videos ago?
it's called Devika
@@joefawcett2191 No, actually they are two different projects; they aim to do the same thing but come from different people. Perhaps they could join efforts to evolve faster.
@@rootor1 I meant Devika is an open source version similar to Devin.
@@joefawcett2191 Sorry, but no way in the world can "it's called X" be read as "I mean X is similar to Y". If you want to change your mind, you can start by editing your own comment (and then I would be pleased to delete my own response). But what you said in the first comment of your thread simply isn't true.
@@rootor1 They said "OpenDevin", so I said it's called Devika, because Devin isn't open source. Get over it.
Perplexity has quietly stopped providing unlimited Claude Opus. It is now limited to 50 per 24h.
Should search via presearch instead
The Serper API and embeddings are not free, whereas Perplexity is free. So I don't fully understand what the buzz is about. This is local but paid, AFAIK. Perplexity is not; only the pro version is paid. Open source embeddings and DuckDuckGo would make it fully free and open source. Or am I missing something?
He didn't say it's free. It's just open source.
He did say free. “You can essentially have perplexity for free”
You can use Ollama locally for free.
So it's an LLM, RAG, and search, likely with agents... What's new here?
Following that train of thought, there is nothing "new" since the Big Bang.
'Open Source' is misleading; it uses multiple proprietary AI APIs. Might as well say Perplexity is open source because I'm running it in Firefox.
You can use local alternatives, which would probably freeze your computer. Can you run the entirety of Llama 70B on your computer?!?
Chroma DB is the answer, and Ollama to make it free.
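To expand on that: once you have vectors from a local embedding model (e.g. via Ollama's embeddings endpoint), ranking retrieved sources against a query is just cosine similarity; nothing about that step needs a paid API. A minimal sketch, purely illustrative and not taken from the repo:

```typescript
// Given two embedding vectors from a local model, cosine similarity
// scores how close they are: 1 = same direction, 0 = unrelated.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

You would embed the query and each scraped passage locally, score each passage with this function, and keep the top few; a vector store like Chroma just does the same comparison at scale.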
the free version of perplexity is already very code so i don't know why i will use this
Everything is code, also what you and i write here.
I have been using Perplexity for a while now. Over the last two weeks I've noticed my search results are very ideologically left-leaning… very disappointing.
Same
AI in general is left leaning, ya know because they're trying to be ethical? A right wing AI would spell doom for the planet
Ah finally, the end of Google has arrived.
Nice haircut!
The results look too busy. They should put the sources at the bottom instead of the top, and only show them if you click on something, to clean it up some.
I don't understand why every one of your videos needs a subscription to something. It would be okay if it were free, but every subscription is premium :/
Well, saying that it is completely free is misleading, because for OpenAI and Serper you have to pay for the usage of those services.
Content creators, especially those who write, should put their content behind paywalls then. Sounds like embedded ads are dead. Platforms like Medium and Substack are for people who want real writing that a shallow AI cannot produce. New ideas, new interpretations of abstract concepts, life lessons through personal experience, etc. Informational articles to complement those should be put behind paywalls as well (ex: quitting alcoholism life story and the complementary "top ten foods to repair liver" article). It was already going this direction, but it should be accelerated. I'm biased, as I do write, and generative AI written content is so obviously bad to anyone with a shred of sophistication in their taste.
I've been envisioning the hypetrain of generative AI to come to a halt, and I feel it starting to happen. People are starting to see through this stuff. The diversity in "ideas" it comes up with are starting to flatline. These things are eventually going to have to create something from nothing. Engage in negative entropy of information without an outside causal force to order it into something new (i.e. new human-made content will be put behind a paywall and made unavailable to the AI).
I suppose abstract art generated with AI image models are pretty cool. They're especially useful to generate throwaway thumbnails for written articles. But how can that be monetized for anyone long term? There are only so many ways to mashup abstract art before it is like shovelware.
Maybe product design? Abstract product design is about as useful as abstract writing. Only so much utility in transforming existing ideas and concepts.
The whole industry looks more and more like a tulip bulb mania with each passing day. Mainly designed to sell GPUs, justify layoffs, and (whether intentional or not) dumb down society even more than it has been since social media created digital echo chambers of ideas.
I think their ultimate goal is to get people complacent with the idea of a machine making their decisions for them, so that they give in to a court system and government being run by an infallible AI. A ruse, since that "perfectly rational" AI will be directly controlled by an oligarchy of human operators. And you'll never be able to question them and their decisions, because everyone will be convinced that the machine is always right.
Why would someone use it if there's a better tool than ChatGPT: Bard?
You have to change the URL inside the OpenAI class; otherwise it will never work.
One thing that sorta annoys me with these projects is they are basically taking content that someone else produced and then giving it directly to the user. How is anyone supposed to make money and produce the content if an AI is just going to come along and take it? I have a feeling there will need to be some sort of protocol to stop these types of projects from being able to scrape your website. This will end up killing off content creation.
Well, that was the problem with the early patent age: it respected human value. But we can see how such a perspective stops progress in every field. Just look at how much content has been created in so little time since AI took off; it shows we need other ways to respect creators than having them get creator cuts, because in reality it's possible a creator never has another great idea, and it stays stuck with them until shared.
As for building a fake OpenAI API, you could reference this (th-cam.com/video/voHTS9Nk5VY/w-d-xo.html&pp=ygUcaG93IHRvIGJ1aWxkIGZha2Ugb3BlbmFpIGFwaQ%3D%3D)
But wouldn't LM Studio work just fine?
Open Source doesn't equate to safe
Censored
This is nothing new. Classic search is not going away anytime soon.