100% Offline ChatGPT Alternative?

  • Published on Dec 29, 2024

Comments • 812

  • @AmazingArends
    @AmazingArends 1 year ago +356

    Wow, I have a lot of saved documents, articles, and even e-books on my computer. The idea of my own local chatbot being able to reference all of this and carrying on a conversation with me about it is almost like having a friend that shares all my own interests and has read all the same things that I've read! Amazing how the technology is advancing! I can't wait for this!

    • @uberdan08
      @uberdan08 1 year ago +17

      In its current state, I think you will be underwhelmed by its performance unless you have a pretty powerful GPU.

    • @Raylightsen
      @Raylightsen 1 year ago +2

      @@lynngeek5191 In simple English, what are you even talking about, dude?

    • @fluktuition
      @fluktuition 1 year ago +4

      @@Raylightsen I think what he meant is that his GPU is not good enough, since he couldn't use the 8k token version.

    • @fluktuition
      @fluktuition 1 year ago +9

      @@lynngeek5191 You said the desktop version is limited to 2,000 tokens, but that is not true; you have the option for 8,000. However, you need a GPU that can handle it (like an RTX 4090, an RTX A6000, or a Quadro RTX 8000 card).

    • @MiaX-l2b
      @MiaX-l2b 1 year ago +6

      @@Raylightsen OK dude, here it is for you: he's trying to say that this shit ain't free, homie. Apparently far from it, according to @lynngeeks5191.

  • @LaHoraMaker
    @LaHoraMaker 1 year ago +171

    I didn't know that you were working for H2O, but I am happy for you all. You're doing great work making open-source LLMs more accessible and friendly!

    • @robmulla
      @robmulla  1 year ago +25

      Thanks! I just started there. I'll still make my normal YouTube content, but this open-source project is exactly the sort of stuff I would normally cover. Glad you liked the video.

    • @lionelarucy4735
      @lionelarucy4735 1 year ago +2

      I’ve always liked H2O, I used to use their deep learning framework a lot. Will definitely check this out.

    • @sylver369
      @sylver369 1 year ago

      @@robmulla You work for this?... Sad.

    • @MattinShoarinejad-i9q
      @MattinShoarinejad-i9q 1 year ago +3

      @@sylver369 get off his back

    • @Aaronius_Maximus
      @Aaronius_Maximus 1 year ago

      @@sylver369 If you worked for anyone "better", then why are you here? 🤔

  • @chillcopyrightfreemusic
    @chillcopyrightfreemusic 1 year ago +18

    This is amazing and very well put together! You have one of my favorite channels on all of YouTube, and I've been able to follow what you teach. It's an honor to learn from you! Thank you!!

    • @robmulla
      @robmulla  1 year ago +1

      Wow. Thanks for such kind words. I appreciate the positive feedback.

  • @mb345
    @mb345 1 year ago +28

    Great video. I hope this gets a lot of views because it is relevant to so many different use cases that need to protect source data. Love the demo of how easy it is to load and vectorize your own local docs with LangChain (a rough sketch of that step follows below).
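    A minimal sketch of the kind of LangChain ingestion step referred to here, assuming the langchain, pypdf, sentence-transformers, and faiss packages are installed (import paths assume a classic langchain install; newer releases move these into langchain_community). The file path, model name, and chunk sizes are illustrative, not the exact setup from the video:

    ```python
    # Hypothetical example: load a local PDF, split it into chunks, embed the chunks
    # locally, and store the vectors on disk so nothing leaves the machine.
    from langchain.document_loaders import PyPDFLoader
    from langchain.text_splitter import RecursiveCharacterTextSplitter
    from langchain.embeddings import HuggingFaceEmbeddings
    from langchain.vectorstores import FAISS

    docs = PyPDFLoader("my_local_doc.pdf").load()                       # illustrative path
    chunks = RecursiveCharacterTextSplitter(
        chunk_size=1000, chunk_overlap=100                              # typical values, not from the video
    ).split_documents(docs)

    embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")   # runs locally, no API key
    db = FAISS.from_documents(chunks, embeddings)
    db.save_local("my_index")                                           # index stays on local disk

    hits = db.similarity_search("What do my notes say about roller coasters?", k=4)
    print(hits[0].page_content)
    ```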

    • @robmulla
      @robmulla  1 year ago

      Thanks for the feedback! Glad you liked it. Please share it anywhere you think other people might like it.

  • @nash......
    @nash...... 1 year ago +1

    This is awesome! Definitely going down this rabbit hole

  • @Mark_Rober
    @Mark_Rober 1 year ago

    Dude, you are not even being biased. THIS IS THE BEST INVENTION EVER!!!
    Open source??? AND it runs locally???? Even without the file access feature, this would've been the coolest piece of software I've ever encountered!

  • @WeTheLittlePeople
    @WeTheLittlePeople 1 year ago +15

    Awesome!!! Here I was losing hope about AI/GPT ever being more transparent about the biases already getting trained ("baked") into popular chatbots, and about the lack of real privacy users have over what they discuss ("there is no privacy"). And blammo, you guys already had this out in just a few months. Super cool!! Thanks to all who went in on this!

    • @Charlesbabbage2209
      @Charlesbabbage2209 1 year ago

      The last thing they want is AI “noticing” patterns in modern western society.

  • @johnzacharakis1404
    @johnzacharakis1404 1 year ago +6

    YES PLEASE make another video where you set all of this up in a cloud environment instead of locally. Excellent video, thank you very much.

  • @friendfortheartists
    @friendfortheartists 1 year ago

    With as many searches as I've done on YouTube, your channel came up today. I'm really impressed. Great job!

  • @siva_geddada
    @siva_geddada 1 year ago +2

    I love the content, even though hardly anyone knows about this. Very, very useful content; we are expecting a cloud version demo also. Thank you.

  • @surfernorm6360
    @surfernorm6360 1 year ago +2

    This is one of the best discussions of building an AI locally that I have seen. Bravo!! BTW, the tutorial is excellent: it's clearly enunciated, the print is very big and readable for old fogies like me, and he goes slow enough to follow, describing and showing what he is doing so noobs like me can keep up. Also, don't forget the transcription button gives details about every minute and a half. Very well done; anybody who is patient will like this. Thank you, Rob Mulla.

  • @irfanshaikh262
    @irfanshaikh262 1 year ago +1

    5:30 Yes Rob, yes. Please. It will be an all-round approach if you start teaching Python in a cloud environment. Much awaited, and thanks for everything your channel has offered me to date.
    Especially love your YT Shorts.

  • @pancakeflux
    @pancakeflux 1 year ago +2

    This was great! I'm in the process of setting up LangChain locally with OpenLLM as a backend, but I think I'll try this as a next step. Thanks for sharing!

    • @robmulla
      @robmulla  1 year ago

      Glad you enjoyed the video! Thanks for the feedback.

  • @eliotwilk
    @eliotwilk 1 year ago

    Great work. I was looking for a tutorial like this for a long time.

  • @WilliamCaban
    @WilliamCaban 1 year ago +1

    Thanks!

    • @robmulla
      @robmulla  1 year ago +1

      Thanks a ton! Glad you liked it.

  • @onandjvjxiou
    @onandjvjxiou 1 year ago +4

    I would also like to see another video from you about setting all of this up in a cloud environment. Thanks for sharing your knowledge.

  • @MrNemo-un2fn
    @MrNemo-un2fn 1 year ago

    This is the first video I've noticed that highlighted the actual subscribe button. It looked really clean.

  • @MeatCatCheesyBlaster
    @MeatCatCheesyBlaster 1 year ago +1

    This is really cool. I just installed it and tried it. It actually runs pretty fast on my CPU.

    • @spacegirlkes
      @spacegirlkes 1 year ago

      How did you get it to work with your CPU? I keep getting token limitations on the answers. Did you follow the documentation?

  • @MacarthurPark
    @MacarthurPark 1 year ago +1

    Thanks for all your efforts teaching brotha

  • @BryanEnsign
    @BryanEnsign 1 year ago +3

    This is exactly what I was looking for to develop a model for internal use at my company. Thank you!

    • @tacom6
      @tacom6 1 year ago +1

      Any specific use case?

    • @BryanEnsign
      @BryanEnsign 1 year ago +2

      @@tacom6 I write inspection and test plans, inspection test reports, and standard operating procedures for industrial engineering and industrial construction. For instance, if we need to write a procedure and checks for the correct way to flush HDPE or terminate an electrical panel, etc. Currently I can paste SOPs or our checklists and ask if anything was missed, ask about new ideas to add, or new things entirely. It's great for asking about ASTM codes instead of looking them up or buying the PDF. I'm using Claude and Perplexity currently. My company does everything internally; it doesn't want the data hosted with another company. I'd like to make something for us to use internally. I believe using AI language models has sped up our procedure drafting and checksheet drafting by about 40% so far. It's been game-changing. But I'm using 3rd-party sites, and I have to scrub out client details, names, etc. If I had an in-house model, I could let it retain all client or confidential data, and others could run requests against it also. I have a bot running that I've made through Poe using Claude as the base, but I can't make it public for colleagues to use.

    • @tacom6
      @tacom6 1 year ago +1

      @@BryanEnsign Sounds fantastic. My interest is similar but with a focus on cybersecurity. Thanks for sharing!

    • @BryanEnsign
      @BryanEnsign 1 year ago +2

      @@tacom6 That's awesome. So many possibilities. Best of luck to you, brother!

  • @AmericanSystems0101
    @AmericanSystems0101 1 year ago

    Great video; your voice and pace are perfect. Thank you.

  • @jannekallio5047
    @jannekallio5047 1 year ago +15

    I've been using chatbots to write a tabletop RPG campaign for my friends, but having the main story in separate files has been a problem. If I can use the material I already have as my own training material, it might be way easier! This chatbot might be exactly what I need! Cool, I will give it a go!

    • @XX-yu4zz
      @XX-yu4zz 1 year ago +1

      update?

  • @Krath1988
    @Krath1988 1 year ago

    Thanks for the video. I really appreciate and support the open-source community!

  • @voiceofreason5893
    @voiceofreason5893 1 year ago

    Fascinating stuff. So important to figure out ways of using these tools in a way that allows us to retain some privacy. Subscribed.

  • @tenkaichi412
    @tenkaichi412 1 year ago

    Amazing! Thanks for the detailed guide. Will definitely be using this for future projects!

  • @henrikgripenberg
    @henrikgripenberg 1 year ago

    This was a really transformative experience and I really appreciate that you did this video!

  • @ΝίκοςΜόσαλος
    @ΝίκοςΜόσαλος 1 year ago

    You got the like just for including the last part, 14:15 and after.
    The whole video is decently good.
    Keep up the good work; the last part of info is really what people should get into their heads.
    BRAVO!!!
    Thank you for saying it.

  • @neoblackcyptron
    @neoblackcyptron 1 year ago +1

    Wow, an open-source GPT model. This is freaking awesome. I am working on building some AI products; this is a lifesaver. I am excited to play with this big time. Throw in some vector memory databases to add context on top of this, and I can get my first AI product out real soon. I can easily build some text-to-speech and computer vision models of my own in TensorFlow to get something big to happen. Man, Christmas has really come early for me this year.

    • @Runefrag
      @Runefrag 1 year ago

      It does have many limitations, such as requiring fairly beefy hardware (or accepting really restrictive limits on questions) and a ton of storage space.

    • @neoblackcyptron
      @neoblackcyptron 1 year ago

      @@Runefrag Do you think an Nvidia RTX 4090 can handle this? When you say beefy hardware, what do you have in mind?

  • @cerebralshunt2677
    @cerebralshunt2677 1 year ago

    Yes, as requested, I am letting you know that I am interested in any of the potential future videos you mention in this video! You are giving gold away for free!

  • @AnimousArchangel
    @AnimousArchangel 1 year ago +20

    The best explanation so far. I have experience using GPT4All, self-hosted Whisper, Wav2Lip, and Stable Diffusion, and I also tried a few others that I failed to run successfully. The AI community is growing so fast and it's fascinating. I'm using an RTX 3060 12GB and the speed is okay for a chatbot use case, but for real-time AI game-engine character responses it is slow. I recently got hold of an RTX 3080 10GB, and in this video I see you are using an RTX 3080 Ti, which has 10,240 CUDA cores vs. my 8,960. It's the first time I've seen that you can use additional cards (in your case a GTX 1080, in mine a GTX 1060) to run the LLM. Very informative video!

    • @hottincup
      @hottincup 1 year ago +2

      You should refit the model using LoRA to get a smaller, more narrow model for in-game usage; that way it's more optimized. (A rough sketch of that is below.)
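      For what it's worth, a rough sketch of what such a LoRA refit looks like with the Hugging Face peft library; the base model id and hyperparameters below are placeholders, not a recipe from the video:

      ```python
      # Hypothetical LoRA setup: wrap a base causal LM with low-rank adapters so that only
      # a small fraction of the weights is trained; the saved adapter is tiny next to the model.
      from transformers import AutoModelForCausalLM, AutoTokenizer
      from peft import LoraConfig, get_peft_model

      base_id = "your-base-model-here"                  # placeholder model id
      tokenizer = AutoTokenizer.from_pretrained(base_id)
      base = AutoModelForCausalLM.from_pretrained(base_id)

      lora_cfg = LoraConfig(
          r=8,                                          # adapter rank
          lora_alpha=16,
          lora_dropout=0.05,
          target_modules=["q_proj", "v_proj"],          # attention projections; varies by architecture
          task_type="CAUSAL_LM",
      )
      model = get_peft_model(base, lora_cfg)
      model.print_trainable_parameters()                # typically well under 1% of the full model
      # ...train on your in-game dialogue data, then save just the adapter:
      model.save_pretrained("npc-lora-adapter")
      ```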

    • @CollosalTrollge
      @CollosalTrollge 1 year ago

      Would AMD cards work or is it a headache?

    • @AnimousArchangel
      @AnimousArchangel 1 year ago

      @@CollosalTrollge Currently they are using CUDA, which requires Nvidia cards.

    • @AnimousArchangel
      @AnimousArchangel 1 year ago

      @@hottincup Good suggestion, I will try it.

    • @ScandalUK
      @ScandalUK 1 year ago

      If you're trying to shoehorn a full-fledged LLM into powering some NPCs, then you're doing it wrong.
      All you need is basic chat personalities and the ability to train based on in-game events; this requires very little processing power!

  • @smartstudy286
    @smartstudy286 1 year ago

    Awesome, knowledgeable video; this is so useful. Keep making videos like this.

  • @kevinmctarsney36
    @kevinmctarsney36 1 year ago +11

    Could you do a video on the “fine tuning” you talk about near the end? I like the privacy attribute of running open source locally and the fine tuning would be really beneficial.

  • @Robert-vj6up
    @Robert-vj6up 1 year ago +1

    Fantastic tutorial and superb framework! Congratulations to you and the H2O team! 🔥🔥🔥

  • @humanist3
    @humanist3 1 year ago

    Excellent content, especially the LangChain part, thank you!

  • @bentomo
    @bentomo 1 year ago

    Finally a video that gives me a "hello world" for an attainable local GPT-like chatbot. Now I can actually LEARN how to fine-tune a model.

  • @TranslatorsTech
    @TranslatorsTech 1 year ago

    Mighty nice video, very useful! Giving local context is interesting. The question about roller coasters was a clever way to demo the feature. Thanks! 😊

  • @davida199
    @davida199 1 year ago

    This is extremely helpful. Thank you for sharing this.

  • @shukurabdul7796
    @shukurabdul7796 1 year ago

    You got me to subscribe!!! Wow, thanks for explaining things step by step in all of your content. Keep it up.

  • @logicnetpro
    @logicnetpro 1 year ago

    Very informative video on how to create your own private chatbot and have it learn from your context. Genius! I look forward to further development.

  • @IItiaII
    @IItiaII 1 year ago +1

    Very interesting. I stood it up on a VPS with 10 CPUs; it's painfully slow, but it works!

  • @incometunnel7168
    @incometunnel7168 1 year ago

    Nice work buddy. Keep it up

  • @malsmith3806
    @malsmith3806 1 year ago +2

    Thank you!! I’ve been stumped on building a model to generate an entire script for a Seinfeld episode based on my personal scores of each episode and I think this video just unlocked it for me!!

    • @olivierdulac
      @olivierdulac 1 year ago +2

      You are now master of your domain!

  • @jamesrico
    @jamesrico 11 months ago

    Great content, I enjoyed your video.

  • @kearneyIT
    @kearneyIT 1 year ago

    I will be trying this, can't wait!!

  • @doughughes257
    @doughughes257 1 year ago

    Thank you so much. This is a clear guide for us to begin experimenting with our vision for a new application, and the last 4 minutes is a great executive summary for me to show to my management.

  • @Chaicowski
    @Chaicowski 1 year ago +1

    I can't say THANKS enough. This is exactly what my team is looking for!

    • @robmulla
      @robmulla  1 year ago

      So glad you found it helpful.

  • @lionellvp_
    @lionellvp_ 1 year ago

    Thank you very much; finally some reasonably sensible documentation. The whole CUDA topic is also a real pain. I wanted to set the whole thing up as a Docker container to keep the system more scalable; again, a pain.

  • @dimitriskapetanios294
    @dimitriskapetanios294 1 year ago +9

    Rob, the video is awesome! Great content as usual 🤩
    Would love to watch a version utilizing a spun-up instance from a cloud provider too (for those of us without a GPU 😊)

    • @robmulla
      @robmulla  1 year ago +8

      Thanks for watching! Will def look into a video with steps to set it up on the cloud.

    • @deucebigs9860
      @deucebigs9860 1 year ago

      @@robmulla Definitely interested in the cloud provider video too.

  • @kshitizshah7482
    @kshitizshah7482 1 year ago +1

    Great video! I'm a fan of H2O. Really impressed with the Driverless AI performance; it helps me benchmark my own code! Gonna try this out later this week, thanks Rob!

  • @TylerMacClane
    @TylerMacClane 1 year ago +45

    Hello Rob,
    I liked your video very much. I wanted to suggest that you consider making a video on how a translator or voice-to-text transformation can become a tool for everyone based on an open language model. It would be an interesting topic to explore and could benefit many people. Thank you for your great content!

    • @pirateben
      @pirateben 1 year ago

      There are plugins for this; find an open-source one and use that.

    • @andysPARK
      @andysPARK 1 year ago

      @@pirateben What's it called, Ben? Linky pls.. :)

    • @musp3r
      @musp3r 1 year ago

      Look for "Faster Whisper" (faster-whisper); maybe it could help you in creating transcriptions and translations from audio to text. I've had great success using it to transcribe YouTube videos and create subtitles for them. (A rough usage sketch follows.)

  • @yourockst0ne
    @yourockst0ne 1 year ago

    Great work, thanks for the video 🎉

  • @KBProduction
    @KBProduction 1 year ago

    Really enjoyed the video.

  • @Zeroduckies
    @Zeroduckies 1 year ago +2

    Can't imagine what sort of AI we could build if we had all those Ethereum miners ^^

  • @syedshahab8471
    @syedshahab8471 1 year ago

    You explained everything very clearly. Thanks!

  • @marioschroers7318
    @marioschroers7318 1 year ago +2

    Just checked the thing out as soon as it was mentioned (luckily, this video was suggested to me by YouTube).
    Being a tech enthusiast and a translator, I ended up spending an hour discussing the technical dimensions of my profession. Loved this!
    I will definitely look into this in more detail later. The thing is also pleasantly polite. 😊

  • @kompila
    @kompila 1 year ago

    That was a good presentation.
    Thanks!

  • @SkateTube
    @SkateTube 1 year ago +7

    I am more interested in how this can turn into features for 3D model making, music making, 3D/2D game making, or even software programming. Some tests of what it can generate and other stuff would be nice.

  • @AhaAiPro
    @AhaAiPro 1 year ago

    Great, I have learned so much about how to use open-source AI and AI models. I am glad to build the project myself on my local computer. The PDF-reading ability is so good, I will try it!

  • @DiegoSilva-dv9uf
    @DiegoSilva-dv9uf 1 year ago +1

    Thanks!

    • @robmulla
      @robmulla  1 year ago

      🙏 I appreciate it!

  • @thefrub
    @thefrub 1 year ago +3

    These language models are improving so fast that by the time you have one installed and working, there are three better ones.

  • @electrobification
    @electrobification 1 year ago +2

    THIS IS A GAME CHANGER!! FOSS FOR THE WIN!

  • @richardblanchard2743
    @richardblanchard2743 10 months ago

    Very clear explanation of the program. Great video. I wonder how people create these open-source programs and can still put food on the table. They must have day jobs.

  • @brunaoleme
    @brunaoleme 1 year ago

    Amazing video.
    Thank you so much.
    I would also love to see a video about the setup on a cloud provider.

  • @pedrojesusrangelgil5064
    @pedrojesusrangelgil5064 1 year ago +5

    Rob, thanks a lot for this video. Please make a video on how to get a GPU in the cloud.

    • @robmulla
      @robmulla  1 year ago +1

      Thanks for asking. Others have been asking for this too so I am planning to work on it.

  • @RahulCarnaliousSnal
    @RahulCarnaliousSnal 6 months ago

    Man! Having your own version of GPT will definitely help in doing personal research or study by providing it accurate resources. We already know online ChatGPT often gives inaccurate results, but by giving this offline GPT accurate resources, like at the end of this video, maybe this problem can be tackled......

  • @heck0272
    @heck0272 1 year ago

    This is cool and brilliant. Great tutorial

  • @erikbahen8693
    @erikbahen8693 1 year ago

    Nice! It's working.. I'm excited

  • @HometownHustle
    @HometownHustle 1 year ago

    Thank you very much for providing the beautiful resource..

  • @EckhartAurelius
    @EckhartAurelius 1 year ago

    Very helpful video. Thank you!

  • @WvlfDarkfire
    @WvlfDarkfire 1 year ago

    Crickin SCARY! I love it! Hail our new overlords!

  • @fredimachadonet
    @fredimachadonet 1 year ago +3

    Awesome content! I noticed the audio dropouts from time to time. I had a similar issue this week when recording some videos myself, and the culprit for me was the Nvidia Noise Removal filter in OBS. I changed it back to RNNoise and it worked like a charm. Don't know if yours is related, but if it helps, then happy days! Cheers!

  • @c3rb3ru5d3d53c
    @c3rb3ru5d3d53c 1 year ago +3

    One suggestion: this would be more popular with everyone if there were an installer like GPT4All has, so those who have no command-line experience can still use it.

  • @WifeWantsAWizard
    @WifeWantsAWizard 1 year ago +24

    You should explain up front that it's an ad.

  • @aneeshusman1129
    @aneeshusman1129 1 year ago

    Thanks

  • @RasmusSchultz
    @RasmusSchultz 1 year ago +2

    I decided to try this out, and I don't feel like the document feature really works? I uploaded a few smaller markdown files, and I wanted a summary of everything that was discussed in those documents; instead, it picks two of those documents and ignores everything else.
    It's not clear to me how this was implemented, or even how this would work. Best guess, it creates a *separate* index (using BERT or something similar) and queries that for relevant files, then includes the contents of those files in the actual query, or however much it can based on the max context length and the length of the query (roughly the retrieval flow sketched below). Even after explicitly selecting all my documents, it only picks two of them.
    What I was hoping for was some kind of deep integration with the LLM itself, but I don't suppose that's really possible at this time. While this feature is probably useful for some things, it doesn't really help solve my problem, which is trying to distill a lot of long conversations into conclusions. I'm still waiting for an LLM that can actually handle larger context. It sounds like they're on the horizon with something like LongLlama? But it doesn't look like they're here yet. In the meantime, this is better than nothing, I suppose. But the real killer feature would be a very large context, enabling it to read and ingest larger volumes of content and then answer queries. Maybe I'm too impatient. 😅
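    For context, the behavior described above is roughly how retrieval-augmented setups generally work: the documents are embedded into a separate vector index, the few chunks that best match the query are retrieved, and only those are stuffed into the prompt, bounded by the model's context window. A minimal, generic sketch (not h2oGPT's actual code), assuming sentence-transformers and numpy are installed:

    ```python
    # Generic retrieval-augmented sketch: embed chunks, pick the top-k by cosine similarity,
    # and build a prompt from only those chunks. Anything not retrieved never reaches the LLM.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    chunks = ["notes from meeting 1 ...", "notes from meeting 2 ...", "project summary ..."]  # illustrative
    encoder = SentenceTransformer("all-MiniLM-L6-v2")

    chunk_vecs = encoder.encode(chunks, normalize_embeddings=True)
    query = "Summarize every conclusion we reached"
    query_vec = encoder.encode([query], normalize_embeddings=True)[0]

    scores = chunk_vecs @ query_vec                    # cosine similarity (vectors are normalized)
    top_k = np.argsort(scores)[::-1][:2]               # only the best 2 chunks make it into the prompt

    prompt = "Answer using only these excerpts:\n\n"
    prompt += "\n---\n".join(chunks[i] for i in top_k)
    prompt += f"\n\nQuestion: {query}"
    print(prompt)                                      # this is all the model ever sees
    ```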

  • @nicholsonjay4724
    @nicholsonjay4724 1 year ago

    It has potential. I hate all the subscription models coming at us all the time. Hopefully more of this will come about.

  • @wildyato37
    @wildyato37 1 year ago +1

    11:00 I have a question about this model: do its type and size determine how accurate the information will be? And do we need to update it?

  • @larabassabah202
    @larabassabah202 10 months ago

    Keep up the good work. I'm very interested.

  • @kgoblin5084
    @kgoblin5084 1 year ago

    On the "why would you want this" question: I actually think one of the most compelling answers is "because it works without the internet". Most of the interesting potential applications I can think of for an LLM are not copacetic with mandated internet access, e.g. using it for dynamic NPC dialogue in a video game; people don't like always-on connections, for very good reasons.

  • @major-skull-gaming
    @major-skull-gaming 1 year ago

    Thanks for the information, Rob. I just subscribed to you too.

  • @Listen_bros
    @Listen_bros 1 year ago

    Man, you're just awesome ❤

  • @jjam3774
    @jjam3774 11 months ago

    I agree with you on the LLMs… eventually you will probably even have certifications, like AI solutions engineer… different flavors of LLMs, just like the different flavors of Linux… Every small-to-medium company will want its own private AI setup when they see the benefits.

  • @aketo8082
    @aketo8082 1 year ago +6

    Thank you! Nice. What is the difference from GPT4All? Where are the strengths and weaknesses? Can h2oGPT understand associations? Or can it be trained accordingly?
    Will there be more videos with how-tos for the basics, using your own files, training, and corrections?

  • @vsmcreatives3625
    @vsmcreatives3625 1 year ago

    Great content, Rob. I like this one more. Can you please make a video on how to customize this chatbot's look and features?

  • @jasonshub
    @jasonshub 1 year ago +2

    Ran through this, then when trying to generate I get an error that the module "fire" does not exist. I'm in the conda environment we made; not sure what I missed.

  • @stevendonaldson1216
    @stevendonaldson1216 1 year ago

    Extremely excellent explanations

  • @harrisonleong4283
    @harrisonleong4283 1 year ago

    Excellent stuff, thank you. I just followed your instructions, plus the README in the Git repo, and spun up an instance on a cloud VM, as my notebook has no GPU. It is fun... and I wish you could further teach us how to do the LangChain things when we have a lot of documents that we want to feed in.
    Thank you once again.

  • @timeless8285
    @timeless8285 1 year ago +4

    You mentioned your machine needed a bigger GPU for a bigger model; I think it would be great to mention what GPU you are using so we can have some sort of reference.

    • @robmulla
      @robmulla  1 year ago +1

      Good point. It's a 3080 Ti.

  • @onlyvitalnetworks2994
    @onlyvitalnetworks2994 1 year ago

    Amazing work!

    • @robmulla
      @robmulla  1 year ago

      Thank you! Cheers!

  • @giovannisardisco4541
    @giovannisardisco4541 1 year ago

    Do you know the Cheshire-Cat (Stregatto) project? It provides a straightforward method for implementing models and interacting with them.
    The project is owned by Piero Savastano; I'm not sure if you're already familiar with him.

  • @BrainSlugs83
    @BrainSlugs83 1 year ago +5

    This is a great breakthrough, but can you please show us how to do it on Windows? It would be nice if there were just an MSI like normal software. Having a convoluted 25-step command-line-based install process makes it really hard to ship software that uses these models locally. It just makes this stuff really inaccessible and unappealing for Windows developers.

    • @ernesttan8090
      @ernesttan8090 1 year ago +1

      Same here. I gave up installing with that Windows installer; now trying via Docker.

  • @BanibrataDutta
    @BanibrataDutta 1 year ago +4

    Thanks for the really informative and well-paced video. The video seems to glance over a page showing the timeline vs. linkage/evolution of models at about 5:36. What is the source of that? Would be nice to have a link to it. Definitely interested in the video on how to use a cloud GPU to run the larger models. In fact (not surprisingly, as a newbie to generative AI and AI in general), I was under the impression that you didn't need a GPU for producing generative AI content after the foundational model or specialized model was ready. Would be nice if you could cover what could be done on systems without a GPU (or say with only an iGPU) and what needs H100/A100-type GPUs (and how many).

    • @methodermis
      @methodermis 1 year ago

      You need a huge GPU to train the model and a big GPU to run it. You can train a smaller model from a bigger model, like Alpaca, but it is not as smart. The GPU has to host the entire brain of the AI, so the smaller the GPU, the worse it gets (rough numbers below).
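      As a very rough rule of thumb (an assumption about typical precision, not a figure from the video), you can estimate the memory needed just to hold a model's weights:

      ```python
      # Back-of-the-envelope estimate of the memory needed to hold a model's weights.
      def weights_gb(params_billion: float, bytes_per_param: float = 2.0) -> float:
          """bytes_per_param: ~2 for fp16/bf16, ~1 for 8-bit, ~0.5 for 4-bit quantization."""
          return params_billion * 1e9 * bytes_per_param / 1024**3

      print(f"7B model in fp16:   ~{weights_gb(7):.1f} GB")       # ~13 GB, before KV cache/overhead
      print(f"7B model in 4-bit:  ~{weights_gb(7, 0.5):.1f} GB")  # ~3.3 GB
      print(f"40B model in fp16:  ~{weights_gb(40):.1f} GB")      # ~74.5 GB, beyond a single consumer GPU
      ```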

  • @mr-nobodyz-g7q
    @mr-nobodyz-g7q 1 year ago

    This is awesome. I installed it on my Windows 10 machine; I just need to figure out how to optimize it like you did for your Linux box.

  • @juanbrekesgregoris4405
    @juanbrekesgregoris4405 1 year ago +186

    PLEASE show us how to install this on a cloud provider.

    • @robmulla
      @robmulla  1 year ago +34

      Thanks for the feedback. Will do! What cloud provider do you prefer? AWS? GCP?

    • @shionnj
      @shionnj 1 year ago +23

      @@robmulla GCP please 😘

    • @KimchiOishi1
      @KimchiOishi1 1 year ago +25

      @@robmulla Would you mind also doing AWS afterwards? Also, possibly a really dumb question: is it possible to run this at all w/o GPU/CUDA?

    • @robmulla
      @robmulla  1 year ago +8

      @@KimchiOishi1 h2oGPT does support some models that run on CPU. You just need to follow the instructions for GPT4All or llama.cpp. They are much less powerful and slower than the Falcon model running on GPU, but they still work (a rough CPU-only sketch follows).
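      A minimal CPU-only sketch using the gpt4all Python bindings (the model file name is illustrative; any of the CPU-friendly models the project lists should work):

      ```python
      # Hypothetical CPU-only example via the gpt4all Python bindings; no GPU or CUDA required.
      from gpt4all import GPT4All

      model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")   # illustrative quantized model file
      with model.chat_session():
          reply = model.generate("Why would someone run an LLM fully offline?", max_tokens=200)
          print(reply)
      ```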

    • @jai_v
      @jai_v 1 year ago

      @@robmulla put a railway template already

  • @neuflostudios
    @neuflostudios 1 year ago

    Yes Rob, would love to see your implementation in the cloud space for these models.

  • @pointandshootvideo
    @pointandshootvideo 1 year ago +3

    Great video! Thank you! I do agree that it's better to have your own local model running open-source software if your machine can run it. What GPU do you need??!!! lol. The biggest issue I have with ChatGPT, open source or otherwise, is incorrect responses. That makes it next to worthless because you can't trust the responses 100% of the time. Can it also respond incorrectly if you train it with your own data?? And how much of your own data do you need to train it? So if I try to train it on all the PDFs on Raman microscopy, what's the percent likelihood that a response is going to be incorrect? Thanks in advance. Cheers!

  • @0AThijs
    @0AThijs 1 year ago

    This works great in a Colab notebook!

  • @BradleyKieser
    @BradleyKieser 1 year ago

    Absolutely brilliant. Thank you.

  • @peterhu3362
    @peterhu3362 1 year ago

    I like this! You just earned yourself a new sub!! Would be interesting to know how to fine-tune or train the installed LLMs.

  • @deeplearning7097
    @deeplearning7097 1 year ago

    Thank you very much. It would be great to see something on spinning up higher memory remote GPUs.