I Switched to this AI Coding Assistant with Open Source Models
- Published Oct 14, 2024
- #ai #aiassistant #copilot #codeeditor
🎉Our Newsletter is live! Join thousands of other developers
islemmaboud.co...
🐦 Follow me on Twitter: / ipenywis
-- Special Links
✨ Join Figma for Free and start designing now!
psxid.figma.co...
👉 ✨ Join Figma For Professionals And Start Designing with your Team ✨
psxid.figma.co...
-- Watch More Videos
🧭 Build Login/Register API Server w/ Authentication | JWT Express AUTH using Passport.JS and Sequelize
• Build Login/Register A...
🧭 Turn Design into React Code | From prototype to Full website in no time
• Turn Design into React...
🧭 Watch Tutorial on Designing the website on Figma
• I Design a onecolor We...
🧭 Watch Create a Modern React Login/Register Form with smooth Animations
• Create a Modern React ...
🧭 Debug React Apps Like a Pro | Master Debugging from Zero to Hero with Chrome DevTools
• Debug React Apps Like ...
🧭 Master React Like Pro w/ Redux, Typescript, and GraphQL | Beginner to Advanced in React
• Master React Like Pro ...
🧭 Learn Redux For Beginners | React Redux from Zero To Hero to build a real-world app
• Debug React Apps Like ...
🧭 Introduction to GraphQL with Apollo and React
• Introduction to GraphQ...
💻 Github Profile: github.com/ipe...
Made with 💗 by Coderone
I've been using it to deepen my understanding of all kinds of code, taking advantage of my own GPU. It's absolutely killer.
I really like the idea of using a local AI model. I'm wondering if this setup allows you to fine-tune a model and run it locally? I'd also be interested in using a RAG setup to help with more specific or use-case-driven scenarios.
$100/year is not a point of contention for me. If it's worth it, it's worth it. I am testing Copilot after spending the last few months copy/pasting stuff to GPT. So far, I'm happy with the results. Given that I use the tool for work, I need to run it on a company-provided laptop, which is not a platform for LLMs :) Unless you have a way to set up a local server?
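On the local-server question above: one common approach is to run an OpenAI-compatible server with Ollama on a spare machine and point the editor plugin at it. This is a sketch under the assumption that the `ollama` CLI is installed and `llama3.1:8b` is an available model tag; it is not a setup the video itself prescribes.

```shell
# Start the Ollama server (exposes an OpenAI-compatible API on port 11434),
# then pull a small model that fits modest hardware.
ollama serve &
ollama pull llama3.1:8b

# Any OpenAI-style client can then talk to it:
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.1:8b", "messages": [{"role": "user", "content": "hi"}]}'
```

Running the server on a personal box on the same network sidesteps the company-laptop constraint, since the laptop only needs to make HTTP requests.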
Not obsessed with the local stuff because I'm more interested in battery life and ease of use, but I set up Continue with an Anthropic and a Mistral API key, and this baby is great.
I was coming from Copilot but this setup has a much better UX and it's way smarter. Definitely sticking with this
Can you guide me please? How do I do this, please?
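For anyone asking how to replicate that Continue setup: below is a minimal sketch of `~/.continue/config.json`. The field names (`models`, `tabAutocompleteModel`) follow Continue's documented JSON config format, but the specific model IDs (`claude-3-5-sonnet-latest`, `codestral-latest`) are assumptions; verify them against the current Continue and provider docs before copying.

```json
{
  "models": [
    {
      "title": "Claude (chat)",
      "provider": "anthropic",
      "model": "claude-3-5-sonnet-latest",
      "apiKey": "<YOUR_ANTHROPIC_KEY>"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Codestral (autocomplete)",
    "provider": "mistral",
    "model": "codestral-latest",
    "apiKey": "<YOUR_MISTRAL_KEY>"
  }
}
```

The split mirrors the setup described in the thread: a strong hosted model for chat, a cheaper fast model for tab autocomplete.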
Nice. Which models are you using from Mistral and Anthropic? I was thinking of using Llama 3.1 70B from OpenRouter.
I actually switched to using Zed, but back then it was Mixtral for autocomplete and Anthropic's top model; now I'm using Claude and Copilot in Zed.
Ah! I knew my Ryzen 9 would come in handy soon enough!
I already have one. With that, I consider myself lucky.
I pray with all my heart that you get one soon! 🙏🙂
I need one but have no money.
For this stuff you’d be better off investing in a good GPU with 16GB+ VRAM.
Why should we try a locally hosted AI model when there are cloud solutions available for IDEs?
Cost
Try "Qwen 2" model. It is very good.
Thank you for this tutorial. This will be helpful for my commercial projects.
Are there benchmarks for any of this?
LM Studio does not work on my Windows PC. The error message after trying to load is "not compatible with the system". Any ideas?
Has anyone tried using the OpenAI APIs and calculated whether the cost is less than Copilot?
Since it's pay as you go, it would really depend on how much you use it to code.
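The pay-as-you-go comparison above can be made concrete with simple per-token arithmetic. The prices below are illustrative placeholders, not quoted OpenAI rates; the $10/month figure is Copilot's individual plan price.

```python
# Rough sketch: pay-as-you-go API spend vs. a flat subscription.
# The per-token prices are ILLUSTRATIVE ASSUMPTIONS, not real quotes.
PRICE_IN_PER_1M = 0.50    # assumed $ per 1M input tokens
PRICE_OUT_PER_1M = 1.50   # assumed $ per 1M output tokens
COPILOT_MONTHLY = 10.00   # Copilot individual plan, $/month

def monthly_api_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate one month's API spend for the given token usage."""
    return (input_tokens / 1_000_000) * PRICE_IN_PER_1M \
         + (output_tokens / 1_000_000) * PRICE_OUT_PER_1M

# A moderate month: 4M tokens in, 1M tokens out.
cost = monthly_api_cost(4_000_000, 1_000_000)
print(f"${cost:.2f}")  # $3.50 -- cheaper than the flat fee at this usage
```

The crossover depends entirely on volume: at these assumed rates, heavy users who push tens of millions of tokens a month would overtake the flat fee.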
what is your browser?
Use this instead of VS Code; it's faster and better. I'm using only this in my daily job.
Thx for sharing this! This is very handy! Now I might quit using Copilot or ChatGPT!
Nicely explained 👍
This is good. But this puts a heavy load on system
Not if you run a small model
Very true. I just use cloud solutions, like the Codeium plugin for VS Code.
I'll stick to IDX
By Google? 😂 I'm sure they'll kill it in three years' time.
Try OpenDevin
It makes many errors.