NVIDIA'S NEW OFFLINE GPT! Chat with RTX | Crash Course Guide

  • Published Nov 2, 2024

Comments • 275

  • @Tarangot
    @Tarangot 8 months ago +265

    Just used Chat with RTX to summarize your video in about a minute worth of reading. What a crazy time to be alive. I'll leave your video running in a tab so you're credited for the view and watch time.

    • @BabySisZ_VR
      @BabySisZ_VR 8 months ago +4

      lol

    • @GumboRyan
      @GumboRyan 8 months ago +18

      Efficient AND considerate.

    • @looseman
      @looseman 8 months ago +1

      It is reading from the subtitles, not from the video.

    • @KIaKlaa
      @KIaKlaa 8 months ago +3

      just used chat with rtx to create a thingmabob to make yo wife bald and yo dog fat, watch out m blud

    • @ekot0419
      @ekot0419 7 months ago

      I have been doing that using Chatgpt for a long time already.

  • @MrErick1160
    @MrErick1160 8 months ago +79

    Wow this is AMAZING. A non-cloud chat that we can use with our local documents!!! Freaking cool and very useful product, NVIDIA def knows what people need

    • @DrakeStardragon
      @DrakeStardragon 8 months ago +3

      Uhh, they are not the first, but ok.

    • @merlinwarage
      @merlinwarage 8 months ago

      LMStudio has been out for almost 8 months; it does the same and 10x more.

    • @KillFrenzy96
      @KillFrenzy96 8 months ago

      Well we already have many solutions for this. It's running Mistral 7B which has been available for many months now. It's nowhere near ChatGPT quality though.
      However if you have a 24GB GPU, I would suggest running the more powerful Mixtral 8x7B model using EXL2 3.5 bpw quantization. I use the oobabooga WebUI for this. It's about as powerful as ChatGPT free, but is much less restrictive.

    • @adrianzockt5347
      @adrianzockt5347 8 months ago +1

      GPT4ALL also exists and supports multiple chats, like chatgpt does. However it crashes when reading large documents and doesn't have the youtube feature.

    • @chromefuture5561
      @chromefuture5561 8 months ago +2

      And it finally adds another real reason for the 40-series RTX cards

  • @sb-knight
    @sb-knight 8 months ago +9

    This is really cool and one of the biggest missing pieces in the whole equation. Being able to run these models locally and be able to highly curate your own will be very valuable. GPT4All is really neat, does a decent job with this as well, so I am glad to see something similar from Nvidia who makes the GPUs. Crazy times!

  • @19mitch54
    @19mitch54 8 months ago +27

    After exhausting the free trials of DALL-E and Midjourney, I bought my new computer with the RTX3070 to run Stable Diffusion. I love this AI stuff. Chat with RTX was a LONG download and it downloaded more dependencies during install but was worth it. I didn’t bother exploring the included dataset and started with my own documents. This works great! I want to build a big library of references and put this thing to work.

    • @jimmydesouza4375
      @jimmydesouza4375 8 months ago +1

      How good is it for automatically generating things? For example, if you stick in a bunch of PDFs for a roleplaying game ruleset and setting and then ask it to generate DM prompts from that, can it do it?

    • @19mitch54
      @19mitch54 8 months ago +9

      I don’t know much about role playing games. The program is good at answering questions. I pointed it to some manuals including my car’s owners’ manual and it was able to answer technical questions like “how do I reset the service interval?” I want to test it with some microcontroller programming manuals next.

    • @Vysair
      @Vysair 8 months ago

      @@19mitch54 This is wicked. Your usage is hella perfect for programmers and the like

    • @AvtarSingh1122
      @AvtarSingh1122 7 months ago

      Nice👌🏻

    • @amumuisalivedatcom8567
      @amumuisalivedatcom8567 5 months ago

      @@jimmydesouza4375 i'm late but yup, consider using RAG (Retrieval Augmented Generation) to pass docs to the LLM.
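The RAG idea mentioned in the reply above can be sketched in a few lines. This is a minimal, hypothetical illustration: the "embedding" here is a toy bag-of-words vector, not the neural embedding model a real RAG pipeline (or Chat with RTX) would use, and the document snippets are made up.

```python
from collections import Counter
import math

def embed(text):
    # Toy embedding: bag-of-words counts (real RAG uses a neural embedding model)
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Rank document chunks by similarity to the query, keep the top k
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, docs):
    # Prepend the retrieved context so the LLM answers from your documents
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "To reset the service interval, hold the trip button while turning the ignition on.",
    "The fuel tank holds 55 litres of unleaded petrol.",
]
print(build_prompt("how do I reset the service interval?", docs))
```

The retrieval step is what lets a small local model answer questions about a car manual it was never trained on.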

  • @no_the_other_ariksquad
    @no_the_other_ariksquad 8 months ago +11

    It's really useful when you have a folder full of documentation for different APIs and all that; very good for that.

  • @invisiiso
    @invisiiso 8 months ago +16

    I’m curious… If you’re comparing between models with the same amount of VRAM (e.g. 3050, 3060 8GB, 4060) will the quality of the outputs improve if the card is better or will it only just have a faster/slower response time?

    • @ahmetemin08
      @ahmetemin08 8 months ago +1

      no, only the interference speed will differ.

    • @Unknown-xm8ll
      @Unknown-xm8ll 8 months ago +4

      See, the weights in a neural network are preset by Nvidia, so there's no change in the responses; the model ships with the most optimal weights, which determine the accuracy and precision of the model. A better, faster GPU like a 4070, 4080 or 4090 can improve the speed of the results, but the jump up to the 4080 is not significant. Only the 4090 performs noticeably faster compared to the other GPUs. And fun fact: you can run Chat with RTX on an AMD GPU 😂 with slight tweaks, or just copy the model data and paste it into the llama interface.

    • @PrintScreen.
      @PrintScreen. 8 months ago +1

      @@ahmetemin08 isn't it "inference" ?

    • @ahmetemin08
      @ahmetemin08 8 months ago

      @@PrintScreen. you are correct

  • @tbarczyk1
    @tbarczyk1 6 months ago

    Awesome tutorial! This is the first one of yours that I've watched, but between this one and few others I've looked at since, your tutorials are the best I've seen anywhere. Thanks for getting into all the interesting details and dumbing it down like your viewers are idiots.

  • @IzanamiNoMikotoo
    @IzanamiNoMikotoo 8 months ago +11

    The reason Llama 2 doesn't show is that it "requires" 16GB of VRAM. It will only let you install it if your card has at least 16GB... Unless you change the setting in the llama13b.nvi file. If you set the value to, say, 10GB then you can run it on a 3080 10GB. Idk if it will work perfectly but you can try.

    • @codeblue6925
      @codeblue6925 8 months ago

      where is that file located?

    • @codeblue6925
      @codeblue6925 8 months ago

      nvm i found it

    • @crobinso2010
      @crobinso2010 8 months ago

      @@codeblue6925 Did it work? I have a 12GB 3060

    • @rockcrystal3277
      @rockcrystal3277 7 months ago

      how do you change the setting in the llama13b.nvi file to 10gb for it to work?

    • @IzanamiNoMikotoo
      @IzanamiNoMikotoo 7 months ago

      @@rockcrystal3277 Go to the llama13b.nvi file located in the installation directory
      “\NVIDIA_ChatWithRTX_Demo\ChatWithRTX_Offline_2_11_mistral_Llama\RAG”. Then change the "MinSupportedVRAMSize" value to however many GB of VRAM your card has.
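As a rough sketch of the edit described above: the exact attribute layout of the .nvi file is an assumption here (check your own copy before changing anything), but a small script could patch whatever integer follows `MinSupportedVRAMSize`:

```python
import re

def patch_min_vram(nvi_text, new_gb):
    # Rewrite the first integer that follows "MinSupportedVRAMSize".
    # NOTE: the sample attribute layout below is an assumption about the
    # .nvi format, not a confirmed NVIDIA schema.
    return re.sub(
        r'(MinSupportedVRAMSize\D*?)(\d+)',
        lambda m: m.group(1) + str(new_gb),
        nvi_text,
    )

sample = '<string name="MinSupportedVRAMSize" value="15"/>'  # hypothetical line
print(patch_min_vram(sample, 10))
```

As the commenter notes, lowering the threshold only skips the installer's check; a 13B model may still run out of memory on a smaller card.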

  • @ashw1nsharma
    @ashw1nsharma 8 months ago

    Thanks for this new discovery! Hope you're having a nice day! 🌻

  • @elpideus
    @elpideus 8 months ago +1

    Definitely much easier to set up compared to your average text-generation-webui; however, it still has a long way to go when it comes to features and control.

  • @tonymerasty
    @tonymerasty 6 months ago

    yup, a solid demo for an intro to running a local AI model on your PC

  • @Tore_Lund
    @Tore_Lund 8 months ago +5

    Are the listed system requirements minimums? Is Win11 needed, or does Win10 work?

    • @Vysair
      @Vysair 8 months ago

      Isn't Win11 just Win10 under the hood? Why wouldn't it work

  • @johncollins9263
    @johncollins9263 5 months ago +2

    I'm having an issue installing this: it comes up with "Chat with RTX failed to install". Hardware is not the issue, as everything I have is new, but it still refuses to work.

  • @AlbertoPirrotta
    @AlbertoPirrotta 3 months ago

    Thanks for your video tutorial !

  • @RedVRCC
    @RedVRCC 5 months ago

    Thanks! I just downloaded and installed it but I'm not too sure how to get it running. Working with these complex LLMs is still new to me but I really want my own AI so your video really helps. I hope this runs well enough on my entry level af 3060. This seems simple enough. Will it at least remember everything it learned so I can keep training it more and more?

  • @beetrootlife
    @beetrootlife 1 month ago +1

    you guys better pray im not your future doctor after i feed all my lecture slides into the chatbot

  • @handsonlabssoftwareacademy594
    @handsonlabssoftwareacademy594 4 months ago

    Man, I really like your analysis; great work. So can ChatRTX be used with any CPU and graphics card, including Intel HD Graphics, as long as there's sufficient RAM, like 16GB?

    • @christerjohanzzon
      @christerjohanzzon 4 months ago +1

      No, you need an RTX card from at least the 3000-series. It's the tensor cores that are important. Luckily these cards aren't expensive.

  • @jonmichaelgalindo
    @jonmichaelgalindo 8 months ago +10

    Thanks for the video. Very informative. GPT4All and LMStudio are probably easier for most users though, and they support more models, more OSs, and more features. I wonder what NVidia thought was so special about this...

    • @NippieMan
      @NippieMan 8 months ago +4

      Offline AIs can be useful since companies such as OpenAI impose very restrictive rules. While there are already programs that can do what NVIDIA is offering, most consumers are too stupid to set them up themselves

    • @AntonChekhoff
      @AntonChekhoff 8 months ago

      Which GPU-accelerated model would you recommend? For translation for instance?

    • @bigglyguy8429
      @bigglyguy8429 8 months ago +2

      Well, I love Faraday and LM Studio, but getting them to understand my own docs is hard.

    • @jonmichaelgalindo
      @jonmichaelgalindo 8 months ago +1

      @@AntonChekhoff I haven't done any translation. I use Mistral raw for my D&D solver system, and for creative writing (mostly for generating large lists, like a thesaurus but for abstract topics).

    • @crobinso2010
      @crobinso2010 8 months ago

      I'm hoping for that too -- a comparison between LM Studio and Chat with RTX, which do the same things.

  • @shadowcaster111
    @shadowcaster111 8 months ago +10

    is the non C drive install fixed yet ?
    I tried it on my P drive and it failed to install

    • @Green_Toast
      @Green_Toast 8 months ago

      no, sadly not; they talked about it on the Nvidia forum

    • @jackflash6377
      @jackflash6377 8 months ago +1

      I just installed it to my F: drive under a folder named RTXChat and it's working as normal.

  • @hairy7653
    @hairy7653 7 months ago +2

    the YouTube option isn't showing up on my ChatRTX

  • @IndieAuthorX
    @IndieAuthorX 8 months ago +11

    I was excited to use this, but I got it up and running and things did not work so well. I realized that it wasn't technically made to run on Windows 10, according to the requirements page, and I think that might be why. I think that this kind of thing has potential, but I want a chatbot that is completely released for commercial use before getting too comfy with it.

    • @acllhes
      @acllhes 8 months ago +2

      Windows 11 is one of the requirements listed

    • @IndieAuthorX
      @IndieAuthorX 8 months ago

      @@acllhes yeah, I saw that after. I could have sworn I'd seen both systems. I might have read a non Nvidia page first and then just installed.

    • @fontenbleau
      @fontenbleau 8 months ago

      i'm not sure what you mean by "commercial"; none of these allow that by license. It's only allowed for research use by the original llama license (except if it's based on Llama 2, where some use is allowed but limited by installations). If you just want a chatbot right away, the easiest way is LLAMAFILE by Mozilla: just click and it works. Their small model container is around 1.5 GB but can analyse images

  • @violentvincentplus
    @violentvincentplus 8 months ago +21

    35GB goes crazy

    • @Flashback_Jack
      @Flashback_Jack 8 months ago +9

      About the same size as a triple A game.

    • @pedro.alcatra
      @pedro.alcatra 8 months ago +3

      Exactly. The size is absolutely fine. The problem is having to download it thru the browser instead of a download manager

    • @arsalanganjeh198
      @arsalanganjeh198 8 months ago +4

      Lighter than cities skylines 2😂

    • @gamingballsgaming
      @gamingballsgaming 8 months ago +2

      ​@pedro.alcatra im fine with that for archival purposes. If i want to install it in the future, i can as long as i have the exe, even if the nvidia servers shut down

    • @Javier64691
      @Javier64691 8 months ago +5

      @@Flashback_Jack an old triple A; most nowadays are 60GB plus

  • @cmdr.o7
    @cmdr.o7 8 months ago +15

    I hope this software doesn't just snoop around your file system and documents, scraping it all back to nvidia with telemetry
    wouldn't be surprised at all if it did, people have little respect left for privacy
    if it turns out it does, well, just hope video author has done research and not just blindly enabling nvidia
    that said, we are each responsible for our own security and fighting back against invasive big tech, malware root kits etc

    • @Jet_Set_Go
      @Jet_Set_Go 8 months ago +8

      They have Nvidia Experience for that already

    • @jordanturner7821
      @jordanturner7821 8 months ago

      @@jeffmccloud905 They already do that with telemetry data. He absolutely does know what he is talking about.

    • @cmdr.o7
      @cmdr.o7 8 months ago +3

      @@jeffmccloud905 that's right, that is the troubling part
      clearly you don't know either or you would have enlightened us - but you are a man of few words
      scraping user data is not a big mystery, it happens everywhere, i think most people have a pretty good idea about that
      and i do actually know quite a lot about ai systems - and nvidia xD

    • @AndrewTSq
      @AndrewTSq 8 months ago +2

      I think Microsoft's AI already does that in Win11

    • @goldmund22
      @goldmund22 7 months ago

      I'm glad I finally found someone commenting on the privacy aspect of this. Since you mentioned you are experienced with AI and Nvidia, do you think there is a good chance this is happening, even though it is "local"?
      I am considering using it for analyzing specific folders and PDFs related to my work. I guess the only way to be sure it doesn't also have access to everything else is to literally use this on a different PC and on a different network. I don't know. Then I think about Microsoft OneDrive, and well it already is connected most of everything we have on our PCs by default. Just insane.

  • @minty87
    @minty87 8 months ago

    would love to see a photo generator on it; I'd definitely get on it in that case. nice video

  • @IIHydraII
    @IIHydraII 8 months ago

    Can you make a video about different presentation modes and how to set them? I’m trying to get my games to run in Hardware Composed: Independent flip, but I’ve only been successful when running games in non native resolutions and also forcing windows to use that resolution. If I try to run native, I end up with Hardware: Independent Flip. I’m aware the only difference between HWCF and HWI is that the former uses DirectFlip optimisations, but I can’t figure out why they’re not working at native resolution. Kinda stumped here. 😅

  • @arooman3194
    @arooman3194 8 months ago +2

    At 6:56, I can't understand the tools you suggest; would you mind posting a link to those tools?

    • @carlossalgado9075
      @carlossalgado9075 8 months ago

      same issue

    • @sky37blue
      @sky37blue 6 months ago +1

      It is in the video description
      [CHAT] Oobabooga Desktop:
      • NEW POWERFUL Local ChatGPT 🤯 Mindblow...

  • @siddharthmishra8283
    @siddharthmishra8283 8 months ago

    Waiting for your 12gb SUPIR version installation guide for A1111 Sdxl 😊

  • @Jascensionvoid
    @Jascensionvoid 8 months ago +1

    I keep getting this error when trying to upload some PDFs into my Dataset:
    [02/23/2024-19:42:28] could not convert string to float: '98.-85' : Float Object (b'98.-85') invalid; use 0.0 instead

    • @MTX1699
      @MTX1699 7 months ago

      So, is there a solution to this?
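The error quoted above comes from a malformed numeric token ('98.-85') in the extracted PDF text. One workaround, purely a hypothetical pre-processing step rather than an official fix, is to sanitize such tokens before they reach the float parser:

```python
import re

def safe_float(token, default=0.0):
    # PDF text extraction can yield mangled numbers like '98.-85'.
    # Try as-is first; then drop stray sign characters that appear
    # after the first character; finally fall back to a default,
    # mirroring the "use 0.0 instead" behavior in the error message.
    try:
        return float(token)
    except ValueError:
        cleaned = re.sub(r'(?<=.)[-+]', '', token)  # remove interior +/- signs
        try:
            return float(cleaned)
        except ValueError:
            return default

print(safe_float('98.-85'))  # the token from the error above
```

In practice this kind of cleanup would sit between the PDF extractor and the ingestion step, so one bad token doesn't abort the whole upload.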

  • @yuro1337
    @yuro1337 8 months ago +3

    it looks like Whisper AI with chat and some additional models

  • @LaminarRainbow
    @LaminarRainbow 8 months ago

    Originally I thought it didn't work, but turns out I just have to wait.. :P

  • @Jcorella
    @Jcorella 8 months ago

    6:57 What was that model? Couldn't understand you

    • @zslayerlpsfmandminecraftan367
      @zslayerlpsfmandminecraftan367 8 months ago

      oobabooga desktop, which is itself a GUI similar to this, but it lets you use custom models. It's more complicated to set up, though, with Python 3.10.9

  • @erkinox1391
    @erkinox1391 8 months ago +2

    I really don't get it; I have all of the requirements (VRAM, RAM, OS, latest driver, plenty of storage), but whenever I launch the installation, it stops and says "Chat with RTX Failed" and "Mistral Not Installed"

    • @jaderey467
      @jaderey467 8 months ago

      Are you on Windows 11? It doesn't work on 10

    • @ben9262
      @ben9262 8 months ago

      I'm getting the same thing

    • @AlecksSubtil
      @AlecksSubtil 8 months ago

      Completely disable your antivirus, and check the tray icon to disable it from there too. Avast, for example, has to be disabled via the tray icon; the GUI alone is not enough. Also install it to the default folder. It may be necessary to run it with admin privileges. It is safe to install btw

  • @Subarashi77
    @Subarashi77 6 months ago +2

    they removed the YouTube URL option

  • @moonduckmaximus6404
    @moonduckmaximus6404 7 months ago +1

    THE YOUTUBE OPTION DOES NOT EXIST IN THE DROP-DOWN MENU

  • @TheMangese
    @TheMangese 6 months ago

    I'm interested in having an interactive AI chatbot in my chat channel on Twitch. Can this do that?

  • @_B.C_
    @_B.C_ 8 months ago +1

    Will it do this for YouTube videos in another language?

  • @thanksfernuthin
    @thanksfernuthin 8 months ago +4

    You finally made another video I'm interested in! 😃I was just on the verge of letting you go. My main interest is the AI stuff.

    • @TroubleChute
      @TroubleChute  8 months ago +6

      Always happy to cover new stuff when I hear about it ~ A friend let me know of this. I also saw the new OpenAI video stuff... but nobody has access to that yet...

    • @RentaEric
      @RentaEric 8 months ago +3

      You do know a subscription is free. If you leave, 10 others will replace you 😅

    • @thanksfernuthin
      @thanksfernuthin 8 months ago +2

      @@RentaEric Unless he doesn't create content they want. You understand how consensual interactions work, right? Or do you have ten thousand subscriptions and you can't pick out what you want to see from all the crap?

    • @RentaEric
      @RentaEric 8 months ago +5

      @@thanksfernuthin You act like you support him financially, or even by liking every video and commenting. Do you? If not, your opinion is irrelevant: you're talking about leaving if he doesn't give you what you want, but have you given him anything besides taking his free content?

    • @thanksfernuthin
      @thanksfernuthin 8 months ago +3

      @@RentaEric So it's a bad thing to give feedback in your mind? You think he doesn't want to know when people like what he does or doesn't like what he does? Have you ever produced something of value for another human being in your life?

  • @girinathprthi
    @girinathprthi 8 months ago

    interesting
    started downloading this app

  • @LaminarRainbow
    @LaminarRainbow 8 months ago

    Thank you!!

  • @elgodric
    @elgodric 8 months ago +3

    How many pages of the document can Mistral 7B handle?

    • @ScottMyers-l1z
      @ScottMyers-l1z 7 months ago

      Until you run out of RAM and VRAM.

  • @monkshee
    @monkshee 8 months ago +4

    hey man, i don't see the llama option when installing. I already have an install; how would i add it to the list of models?

    • @haseef
      @haseef 8 months ago +1

      same issue here even though I ticked clean install

    • @N1h1L3
      @N1h1L3 8 months ago

      win 10 ?

    • @zslayerlpsfmandminecraftan367
      @zslayerlpsfmandminecraftan367 8 months ago

      llama 2 needs 16GB VRAM unquantized, so if you have 8GB it doesn't install it

  • @MarkAnthonyFernandez-q7y
    @MarkAnthonyFernandez-q7y 6 months ago +1

    I don't have the YouTube URL option

  • @Vimal_S_Thomas
    @Vimal_S_Thomas 7 months ago

    will it work on my laptop with an RTX 2050?

  • @arsalanganjeh198
    @arsalanganjeh198 8 months ago +2

    Nice

  • @ubaidfayaz1989
    @ubaidfayaz1989 6 months ago

    Sir how can we bypass the nvidia check that occurs prior to installation?

  • @JoyKazuhira
    @JoyKazuhira 8 months ago

    wow, maybe in the future this will be added to a game. will definitely use it instead of turning on ray tracing.

  • @boro057
    @boro057 8 months ago +4

    Pretty cool that the setup is so simple. I wonder if there’s any telemetry going on in the background. GeForce experience has loads which is why I avoid it.

  • @leeishere7448
    @leeishere7448 8 months ago

    How can I get the llama 13b model? I don't have it.

  • @banabana4691
    @banabana4691 8 months ago

    i think it makes nvidia graphics cards more valuable

  • @vulcan4d
    @vulcan4d 8 months ago +7

    This is a demo which clearly means Nvidia wants to see how many people will use it so they can release a subscription based service later for your AI offline needs.

    • @ozz3549
      @ozz3549 8 months ago +1

      That's only a UI for the llama 2 model; you can find another UI and it will work the same

    • @gavinderulo12
      @gavinderulo12 8 months ago

      @@ozz3549 it's also something you can build in a week.

  • @KenZync.
    @KenZync. 7 months ago

    i just downloaded this and it can't be run. can you try removing and redownloading it? i think nvidia messed something up

  • @abdiel_hd
    @abdiel_hd 6 months ago

    Mine didn't come with YouTube as a dataset/source... can someone help me? I have a laptop with a 3070

  • @TonTheCreator
    @TonTheCreator 6 months ago

    I installed and used it, but after I closed it I can't use/open it again. I mean, i don't know how to

  • @TazzSmk
    @TazzSmk 8 months ago

    is RTX A4000 supported? should be Ampere generation card I believe

    • @skym1nt
      @skym1nt 8 months ago +1

      yes, it can.

  • @mayorc
    @mayorc 8 months ago

    Does it support custom models like using OpenAI api endpoint local servers?

    • @JA_BRE
      @JA_BRE 8 months ago +2

      It's only a demo; no way it supports that yet...

  • @kathiravan_vj
    @kathiravan_vj 8 months ago

    Does the RTX 2060 Super support this with 16GB RAM?

    • @xXXEnderCraftXXx
      @xXXEnderCraftXXx 8 months ago +1

      Well, no. At least not without some bypass programs.

  • @OpenSourceGuyYT
    @OpenSourceGuyYT 8 months ago +2

    Yea. With Ollama, you don't need to have an RTX GPU. And it's offline too.

  • @glucapav
    @glucapav 8 months ago

    It is saying I don't have 8 GB of GPU memory. Is it checking my integrated GPU instead of my Nvidia card? How do I fix this? I'm using an Asus Pro Duo, so the BIOS isn't letting me change it.

    • @queless
      @queless 8 months ago

      What card do you have?

  • @juanb0609
    @juanb0609 7 months ago +2

    I don't have the option for YouTube videos

    • @hairy7653
      @hairy7653 7 months ago

      same here

  • @blitzguitar
    @blitzguitar 8 months ago

    Can I use it to overclock my 3070

  • @dioghane231
    @dioghane231 5 months ago

    I have an RTX 3050 and it won't let me install it? Why?

  • @faa-
    @faa- 8 months ago

    this is so cool

  • @MiNombreEsEscanor
    @MiNombreEsEscanor 8 months ago +1

    I downloaded this, it works pretty good locally, but I want to create a web application and use this chatbot in my application. Currently chat with rtx doesn't offer api to send questions and retrieve answers. Is there any way to achieve this? Or maybe they will add api feature in the future? What do you guys think?

    • @Hypersniper05
      @Hypersniper05 8 months ago +1

      Text generation webui

    • @voidsh4man
      @voidsh4man 8 months ago +1

      at scale it would cost you more to run an ai chatbot on your own hardware than using openai's api

    • @anispinner
      @anispinner 8 months ago +1

      Considering it runs a local node I suppose one of the folders should contain plain .js files, otherwise it might be packed as an electron which you can unpack and inject your API into.

    • @fontenbleau
      @fontenbleau 8 months ago

      nvidia has never made any great software; they're hardware only. Don't count on it. Why do you think we use Afterburner made by MSI? (Why Nvidia can't make such a tool is a puzzle.) Even this they could have made a year ago by hiring any student from an AI faculty

    • @anispinner
      @anispinner 8 หลายเดือนก่อน

      Puzzle? Why would you make an overclocking soft that goes against your business model? Your goal (as a business) should be to sell the product, not to extend its lifespan.

  • @KrishnVallabhDas
    @KrishnVallabhDas 8 months ago

    i am getting this error
    ModuleNotFoundError: No module named 'torch'
    how to fix this??

    • @CindyHuskyGirl
      @CindyHuskyGirl 8 months ago +1

      pip install torch (put this into your terminal)

    • @OpenAITutor
      @OpenAITutor 7 months ago

      You should go through the installer; it has all the stuff built in. It also creates its own virtual Python environment in a folder called env_vnd_rag

  • @lolxgaming7993
    @lolxgaming7993 5 months ago

    I tried downloading it but the download is really slow; is this normal?

  • @rionix88
    @rionix88 8 months ago +1

    gemini will use this technology. you can chat with a 1-hour video

  • @ahmetrefikeryilmaz4432
    @ahmetrefikeryilmaz4432 8 months ago

    One question: is that HHKB I have been hearing?

    • @fsocy1
      @fsocy1 8 months ago

      wdym HHKB?

  • @kingofsimulators3242
    @kingofsimulators3242 8 months ago +3

    zip corrupted?

    • @0AThijs
      @0AThijs 8 months ago +2

      It seems... 😢 35GB!

    • @aalejanddro2328
      @aalejanddro2328 8 months ago +1

      is there a fix?

    • @kingofsimulators3242
      @kingofsimulators3242 8 months ago

      Is it because I have windows 10?

    • @0AThijs
      @0AThijs 8 months ago

      @@kingofsimulators3242 no, should be fixed, I haven't tried it, redownload 🥲

  • @SpudHead42
    @SpudHead42 8 months ago

    Does it support other models, like Mixtral?

    • @zslayerlpsfmandminecraftan367
      @zslayerlpsfmandminecraftan367 8 months ago

      at the current time, no... for that you need a GUI like oobabooga or KoboldCPP, which support custom models

  • @MaiderGoku
    @MaiderGoku 8 months ago +1

    Answer this properly: what's the download size, and how much space does it take on your hard drive?

    • @IMABADKITTY
      @IMABADKITTY 8 months ago +1

      35gb download size

    • @MaiderGoku
      @MaiderGoku 8 months ago +1

      @@IMABADKITTY how much for rtx remix?

  • @jomymatthews
    @jomymatthews 8 months ago

    What is Ub boo boogie desktop ?

  • @Lp-ze1tg
    @Lp-ze1tg 8 months ago

    How slow will it be if I run it with 4GB or even 2GB VRAM?
    Will it even run with less than 8GB VRAM?

    • @Baconator119
      @Baconator119 8 months ago +1

      It requires a 30 or 40 Series GPU, the weakest of which iirc is a 3050 with 6GB of VRAM. So, will it run with less than 8? Yeah. It might be slow, though.

    • @MARProduction24434
      @MARProduction24434 8 months ago

      Tried it. The installer just blocks it if the requirements aren't met ;(

  • @rockcrystal3277
    @rockcrystal3277 8 months ago

    I noticed llama didn't install for you either; found any way to install it?

    • @queless
      @queless 8 months ago

      It requires an RTX card with 16gb vram or more

    • @rockcrystal3277
      @rockcrystal3277 7 months ago

      @@queless how do you change the setting in the llama13b.nvi file to 10gb for it to work?

    • @queless
      @queless 7 months ago

      @@rockcrystal3277 don't know; I have a 4070 Ti Super OC 16GB, and it worked for me without anything extra. Uninstalled it an hour later because the AI is super basic, like chatgpt 1 but dumber

  • @arsalanganjeh198
    @arsalanganjeh198 8 months ago +1

    Is there any chance to use this with a 4GB graphics card?

    • @VGHOST008
      @VGHOST008 8 months ago +1

      You can install oobabooga locally and use a relatively small model like Tiny-Llama 1B or some other 3B~ model. NVidia uses a 7B model (requires exactly 8Gb of VRAM at medium~ accuracy settings) as a low end solution so there is no way you'd be able to run it with decent performance on 4Gb of VRAM.

    • @galaxymariosuper
      @galaxymariosuper 8 months ago +1

      a much better option is LM studio. there you can offload layers from the NN to the GPU as you wish. and the installation and usage is even easier than this RTX stuff

    • @VGHOST008
      @VGHOST008 8 months ago

      @@galaxymariosuper Yeah, stability is also an issue with LM Studio. It often crashes and the results it produces are very shallow. Same with GPT4ALL and any other relatively small client (kobold UI would be the only exception, it just crashes often).

    • @fontenbleau
      @fontenbleau 8 months ago

      An even easier solution is the Llamafile container by Mozilla; it runs on Win 8 on very old hardware. I personally use oobabooga, but it's annoying how every new update breaks a previous function and it isn't fixed for months; always back up before updates

    • @mayday2011
      @mayday2011 8 months ago +1

      I have a 6GB VRAM 3060
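The VRAM limits discussed in this thread follow from simple arithmetic: weight storage is roughly parameters × bits per weight / 8 bytes. The sketch below shows why a 7B model like the one Chat with RTX ships is out of reach for 4GB cards (weights only; activations and the KV cache need additional headroom on top):

```python
def model_vram_gb(params_billions, bits_per_weight):
    # Weight storage only: parameters * bits / 8 = bytes, reported in GB (1e9 bytes)
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# Mistral 7B at fp16 vs a 4-bit quantization
fp16 = model_vram_gb(7, 16)   # 14.0 GB: far beyond a 4 GB card
q4 = model_vram_gb(7, 4)      #  3.5 GB: weights barely fit, little headroom
print(fp16, q4)
```

This also explains the suggestions above to partially offload layers: whatever fraction of the weights doesn't fit in VRAM can stay in system RAM, at the cost of speed.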

  • @mr.bekfast9744
    @mr.bekfast9744 8 months ago +1

    Am I the only one that is downloading this and Setup.exe is not in the Zip file?

    • @victornpb
      @victornpb 8 months ago

      same problem, zip seems corrupted

    • @pillowism
      @pillowism 8 months ago

      Same issue here

    • @0AThijs
      @0AThijs 8 months ago

      For many 😔

    • @mr.bekfast9744
      @mr.bekfast9744 8 months ago

      @@victornpb Okay, good to know that I'm not the only one. Is there any way for us to report it, or to get an older version where the zip isn't messed up?

  • @MousePotato
    @MousePotato 6 months ago

    AI voice. Us Brits never say anyway with a plural.

  • @GKGames2018
    @GKGames2018 7 months ago

    mine does not have youtube

  • @heyguyslolGAMING
    @heyguyslolGAMING 8 months ago +2

    What is the fastest animal on the planet?

    • @DeepThinker193
      @DeepThinker193 8 months ago +2

      The slug.

    • @Spectrulight
      @Spectrulight 8 months ago +1

      Idk probably a falcon

    • @N1h1L3
      @N1h1L3 8 months ago +1

      @@Spectrulight The peregrine falcon is the fastest bird, and the fastest member of the animal kingdom, with a diving speed of over 300 km/h (190 mph).

    • @TenOfClub
      @TenOfClub 8 months ago

      airborne Microbes👌👌

    • @bgill7475
      @bgill7475 8 months ago

      Me when I need to pee

  • @bensoos
    @bensoos 8 months ago

    Now real intelligent bots in games.

  • @083-cse-sameerkhan3
    @083-cse-sameerkhan3 8 months ago

    will it work on a GTX 1650?

  • @im_Dafox
    @im_Dafox 8 months ago

    everything was fine until "windows 11" 😄
    Shame, looks really cool and useful

  • @Waldherz
    @Waldherz 8 months ago

    Downloading dependencies for hours and hours and hours.
    Zero network activity. Anti virus checked, admin mode checked, network checked. No user error.

  • @Xandercorp
    @Xandercorp 8 months ago

    So how private is it?

    • @notram249
      @notram249 8 months ago

      Very
      Since it runs on your pc

  • @spicymaggi1853
    @spicymaggi1853 8 months ago +1

    I only have 4GB VRAM (dedicated); is there any workaround for this?

    • @mascot4950
      @mascot4950 8 months ago

      If you are not aware of LM Studio, you might want to check it out, as it doesn't require a GPU (but it does support using one, and you can partially offload as many layers as the GPU has VRAM to hold). Assuming sufficient RAM+VRAM, you can download and use the same model. But there's no ability to ingest local files, as far as I am aware.

  • @muruganmurugan507
    @muruganmurugan507 8 months ago

    It's cool. Does it support a single 2GB PDF with 4000 pages 😂

  • @blueyf22
    @blueyf22 6 months ago

    my teachers will never know what hit em

  • @bensoos
    @bensoos 8 months ago

    Finally, my own virtual AI girlfriend.

  • @andyone7616
    @andyone7616 8 months ago +1

    Can you make a video on how to uninstall chat with rtx?

  • @flurit
    @flurit 8 months ago

    Nvidia's really making me regret getting an AMD card

  • @mhvdm
    @mhvdm 8 months ago

    Very buggy. Tested it myself, and I must say I'm impressed, but darn, they need to fix the bugs. It was very bad at responding to stuff in general.

  • @itxaddict7503
    @itxaddict7503 8 months ago

    C'mon Skynet. You need us to hand you the world on a silver platter?

  • @Spengas
    @Spengas 8 months ago

    That sucks that it's Windows 11 only... never upgrading from 10

  • @CrudelyMade
    @CrudelyMade 8 months ago

    6:57 using WHAT kind of desktop? lol

  • @Grim_Cyanide
    @Grim_Cyanide 8 months ago +6

    was pretty excited to try it, then saw the Windows 11 requirement. shame :(

    • @Otakugima
      @Otakugima 8 months ago

      You can use windows 10!

    • @AndrewTSq
      @AndrewTSq 8 months ago

      @@Otakugima are you sure? other comments say win11 only also. I want to try it, but I will not install windows 11

  • @XiangWeiHuang
    @XiangWeiHuang 8 months ago

    can we make an erotic roleplay chatbot with this? I use the openai API solely for those.

  • @_vr
    @_vr 8 months ago

    Llama is Facebook's chat model

  • @sriaakashsrikanth8622
    @sriaakashsrikanth8622 7 months ago

    Can an Nvidia GeForce GTX 1650 be used?

  • @luizmourabr
    @luizmourabr 8 months ago

    Me with a 2060: 💀

  • @nosinfantasia
    @nosinfantasia 8 months ago

    anyone else getting "installer failed" with no reason given...?

    • @OpenAITutor
      @OpenAITutor 7 months ago

      This only works for RTX 4000 series min with 8GB of VRAM.

  • @Vvilvid
    @Vvilvid 7 months ago

    I have 4 pcs and none of them can run it 😭😭
    Custom Pc1:amd
    Custom Pc2:amd
    Laptop1:rtx3050ti(4gb)
    Laptop2:amd