DeepSeek R1 Distill On NVIDIA Jetson Nano (WebUI Test and Tutorial)

  • Published on Jan 22, 2025

Comments • 46

  • @OminousIndustries  13 hours ago +2

    For those without WebUI/Ollama set up, see instructions at 18:50 and 26:25 in this video: th-cam.com/video/-PjMC0gyH9s/w-d-xo.html
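
A minimal sketch of what a front end like Open WebUI does under the hood, assuming Ollama is already running on its default port and the distill has been pulled under the (assumed) tag deepseek-r1:1.5b:

```python
# Minimal sketch: chat with a local Ollama server over its HTTP API,
# assuming Ollama is running on the default port 11434 and the model
# tag below has already been pulled ("ollama pull deepseek-r1:1.5b").
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"
MODEL = "deepseek-r1:1.5b"  # assumed tag for the 1.5B distill

payload = {
    "model": MODEL,
    "messages": [
        {"role": "user", "content": "In two sentences, what is a distilled model?"}
    ],
    "stream": False,  # return a single JSON object instead of a token stream
}

resp = requests.post(OLLAMA_URL, json=payload, timeout=300)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```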

  • @Hwy929  13 hours ago +7

    Keep up the Jetson Nano videos. I'll archive them for the day the Nano actually becomes available again, lol.
    My use case will be integration with Home Assistant and Whisper as a replacement for Alexa/Echo.

    • @OminousIndustries  5 hours ago

      Good thought, I will indeed! That will be a great use case. Check out Silero VAD for voice detection too, as it adds a cool layer.
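
A minimal sketch of how Silero VAD is commonly used for offline voice activity detection, assuming torch and torchaudio are installed and "speech.wav" is a placeholder 16 kHz recording:

```python
# Minimal sketch: find speech segments in a WAV file with Silero VAD.
# The model and its helper utilities are fetched via torch.hub.
import torch

model, utils = torch.hub.load("snakers4/silero-vad", "silero_vad")
get_speech_timestamps, save_audio, read_audio, VADIterator, collect_chunks = utils

# Silero VAD expects 16 kHz (or 8 kHz) mono audio
wav = read_audio("speech.wav", sampling_rate=16000)

# Returns a list of {"start": ..., "end": ...} sample offsets for detected speech
speech_timestamps = get_speech_timestamps(wav, model, sampling_rate=16000)
print(speech_timestamps)
```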

    • @Hwy929  34 minutes ago

      @OminousIndustries Thanks, Silero VAD looks interesting. Might be overkill though. The main kit I'm looking at for HA is the Satellite 1 from @FutureProofHomes.

  • @fabiano8888  3 hours ago

    New subscriber here. Thanks for the awesome content you're putting out there!

  • @MARKXHWANG  2 hours ago +1

    Very cool man!

  • @newDell-q4w  an hour ago

    Thanks for highlighting this lightweight reasoning model! If you want a laugh, give the 1.5B model a simple logic puzzle (e.g., a farmer-crosses-the-river-type puzzle) and watch it enter a near-infinite thought loop, arriving at a nonsensical answer after 10 minutes XD

  • @chriswatts3697  12 hours ago +2

    Well, for a 1.5B model I think the result is fascinating. The reasoning shows the quality of this "train of thought" - two years ago, when I used the first local model, Dalai Llama, I did not get many usable results. Let's see what we get with more distilled models in the future. I guess we will see 1B or 0.5B models that can hold a good conversation within a year.

    • @OminousIndustries  5 hours ago +1

      Yes, the progress of the small models is very interesting, especially as I think the future may bring far more powerful and efficient small models as training methodologies evolve.

  • @MARKXHWANG  2 hours ago +1

    Super cool stuff

  • @elchippe  2 hours ago

    Interesting, I did manage to get this 1.5B model to make a calculator in Python in LM Studio; it's impressive.

  • @RedVelocityTV  5 hours ago

    Can you not install the Docker containers on the main machine for the browser and web UI stuff, and run the 8B model just off the Jetson?
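
A minimal sketch of the split described here - a client on the main machine talking to Ollama serving the model on the Jetson over the LAN - assuming the Jetson's Ollama instance is bound to the LAN interface and reachable at the placeholder address 192.168.1.50:11434:

```python
# Minimal sketch: query an Ollama server running on the Jetson from another
# machine on the network. The IP below is a placeholder, and the 8B tag is
# an assumption; Ollama must be configured to listen beyond localhost.
import requests

JETSON_OLLAMA = "http://192.168.1.50:11434"  # hypothetical Jetson address

payload = {
    "model": "deepseek-r1:8b",  # assumed tag for the 8B distill
    "prompt": "Say hello from the Jetson.",
    "stream": False,
}

resp = requests.post(f"{JETSON_OLLAMA}/api/generate", json=payload, timeout=600)
resp.raise_for_status()
print(resp.json()["response"])
```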

  • @johnsaxxon  11 hours ago

    Be a Jetson focused channel 😉👏

    • @OminousIndustries  5 hours ago +1

      I will use it in the demos where I can, at the minimum haha

  • @BB-tt4hm  8 hours ago

    Buddy, your channel is awesome!!

    • @OminousIndustries  5 hours ago

      Thanks very much for the kind words!

  • @Mishanw  10 hours ago

    I ordered a Nano, but now I'm wondering if I should cancel the order and get the next one up, the 32 version, so it's got more resources. Thoughts?

    • @TheSmiths-0121  8 hours ago

      Yes, the 8GB is so limiting.

    • @OminousIndustries  5 hours ago

      The 8GB is good for exploration and learning, and possible to outgrow quickly, but the value you will get from building with it is worth the $250 IMO.

    • @Mishanw  5 hours ago

      @TheSmiths-0121 They have a 16GB version, I think, for another 200-ish more. I gotta look into that. Wonder if that's a better deal.

  • @WINTERMUTE_AI  8 hours ago +1

    It won't run 8B?

    • @OminousIndustries  5 hours ago

      It should, but it would have to be through the Ollama CLI without the overhead of having the browser and GUI stuff open.

  • @brabbbus  13 hours ago +6

    Sorry dude, but for me you are the "Jetson Nano".

    • @OminousIndustries  13 hours ago +3

      😡😡😡😡😡😡😡😡😡😡😡😡😡😡

    • @MARKXHWANG  2 hours ago

      Do you have any brain?

  • @MARKXHWANG  2 hours ago

    Unreal that a Jetson Nano can run a reasoning model. Pupoooo

  • @fdsphone6854  17 minutes ago

    CPU: AMD Ryzen 7 8845HS w/ Radeon 780M
    GPU: AMD ATI 66:00.0 Device 1900
    Memory: 25488MiB / 61083MiB
    I tried R1:14b vs phi4:14b and tried to generate some very simple scripts:
    resize windows, bin2base64, etc. R1 fails even with example code; the example code was provided after 4 or 5 back-and-forths with R1. Phi4:14b got it in one request. I don't get the hype. I find it interesting, but it doesn't matter when it can't figure out a bin2base64 that is under 12 lines with error checking.
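
For scale, the task described above really is tiny. A minimal sketch of a bin-to-base64 script with basic error checking, roughly the size mentioned (the command-line arguments are placeholders):

```python
# Minimal sketch: read a binary file and write its base64 encoding,
# with basic argument and I/O error checking.
import base64
import sys

def main() -> int:
    if len(sys.argv) != 3:
        print(f"usage: {sys.argv[0]} <input.bin> <output.b64>", file=sys.stderr)
        return 1
    try:
        with open(sys.argv[1], "rb") as src, open(sys.argv[2], "wb") as dst:
            dst.write(base64.b64encode(src.read()))
    except OSError as exc:
        print(f"error: {exc}", file=sys.stderr)
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```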

  • @ematta  5 hours ago

    My AI stack for LLM chats is Chatbox and Ollama. My M3 MBP (18GB) easily handles this. I even sent an image and asked it to recreate it, and it did (with some errors, of course).

    • @OminousIndustries  5 hours ago

      Oh yeah, Apple silicon is a cheat code for local LLMs, especially when you factor in the low power draw of the machines.

  • @pstwr  13 hours ago

    that was fast!

    • @OminousIndustries  13 hours ago

      I was excited to try them hahaha

  • @blengi  3 hours ago

    lol, asked DeepSeek to "draw an ascii octopus" to see if its drawing skills were up to GPT-4 unicorn levels, and it had some sort of meltdown outputting 1024 backslashes.
    DeepSeek ASCII octopus meltdown aside, I suppose DeepSeek R1 is a kind of victory for western culture over illiberal orwellian chinese cultural norms in the meta sense.
    I mean, how much does DeepSeek R1's chain of thought think in English and western norms to be competitive in a general AI sense?
    That is, are its latent-space-parameterized western corpus components largely doing the heavy lifting in terms of innovative reasoning, given the lingua franca of modernity is English in many domains?
    *me asking deepseek:* _"given english is the lingua franca of many domains then surely you bias toward english chain of thought?"_
    *deepseek's reply:* _"You're absolutely right to point out that English, as a global lingua franca, plays a significant role in shaping my training data and, by extension, my responses. This does introduce a bias toward English-based patterns of thought, especially in domains where English dominates, such as science, technology, and international discourse."_

  • @DerLuukee  11 hours ago

    Jetson Industries or Ominous Jetsons?

    • @OminousIndustries  5 hours ago

      😠😠😠😠😠😠😠😠

    • @DerLuukee  4 hours ago

      @OminousIndustries haha 😂 no worries man, any kind of content by you is awesome! Love the different AI stuff you do!

  • @Someone7R7  9 hours ago +1

    You don't need a Jetson to run it; I ran it on my phone 'cause 1.5B is tiny, and it worked easily.

    • @OminousIndustries  5 hours ago

      Yeah, I just wanted to show it with the Jetson. WebGPU is awesome and allows a lot of cool phone stuff!

  • @ESGamingCentral  12 hours ago +1

    I wish I were able to get one. These days, if you are not a YouTuber or sponsored, you can't get a Nano from NVIDIA.

    • @OminousIndustries  5 hours ago

      Hopefully it gets back in stock soon!

  • @hojonglin1261  6 hours ago

    Run DeepSeek R1 70B with WebUI/Ollama, throw a logic question at it, and it just spits out page after page of reasoning, deduction, etc. Finally, it stops without a conclusion.
    Not even comparable with Llama 3.3.

    • @OminousIndustries  5 hours ago

      Very interesting, I noticed sometimes it would go into a sort of LLM psychosis as well. Regardless, an interesting model!

  • @miketrago4561  6 hours ago +1

    You run gigantic billion-parameter models on your AI computer.
    I run 200k-parameter LLMs on my ESP32.
    We are not the same.

  • @AmanKumar-jk1qu  13 hours ago

    Sir, I have primarily two questions:
    Question 1: What do these parameter counts, like 1.5 billion or 7 billion, actually tell you about a model?
    Question 2: Do you know of any good text-to-speech technology stack that is close to OpenAI's natural voice?
    Great demonstration, loved it!
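
On the first question: the parameter count is the number of learned weights in the network, and it largely determines how much memory and compute the model needs. A minimal back-of-the-envelope sketch of the weight memory alone at a few common precisions (actual usage is higher once the KV cache and runtime overhead are included):

```python
# Minimal sketch: approximate size of a model's weights at common precisions.
# Treat these as lower bounds; real memory use also includes the KV cache,
# activations, and runtime overhead.
BYTES_PER_PARAM = {"fp16": 2.0, "~8-bit quant": 1.0, "~4-bit quant": 0.5}

def weight_gb(num_params: float, bytes_per_param: float) -> float:
    """Approximate weight size in gigabytes."""
    return num_params * bytes_per_param / 1e9

for name, params in [("1.5B", 1.5e9), ("7B", 7e9), ("8B", 8e9)]:
    line = ", ".join(
        f"{prec}: ~{weight_gb(params, b):.1f} GB" for prec, b in BYTES_PER_PARAM.items()
    )
    print(f"{name} parameters -> {line}")
```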

  • @HextorBane  7 hours ago

    Add a beard and you will be the next Jack Black

    • @OminousIndustries  5 hours ago

      *Immediately shaves head and loses 50lbs to avoid this reality*