FREE Local LLMs on Apple Silicon | FAST!

  • Published on May 9, 2024
  • Step by step setup guide for a totally local LLM with a ChatGPT-like UI, backend and frontend, and a Docker option.
    Temperature/fan on your Mac: www.tunabellysoftware.com/tgp... (affiliate link)
    Run Windows on a Mac: prf.hn/click/camref:1100libNI (affiliate)
    Use COUPON: ZISKIND10
    🛒 Gear Links 🛒
    * 🍏💥 New MacBook Air M1 Deal: amzn.to/3S59ID8
    * 💻🔄 Renewed MacBook Air M1 Deal: amzn.to/45K1Gmk
    * 🎧⚡ Great 40Gbps T4 enclosure: amzn.to/3JNwBGW
    * 🛠️🚀 My nvme ssd: amzn.to/3YLEySo
    * 📦🎮 My gear: www.amazon.com/shop/alexziskind
    🎥 Related Videos 🎥
    * 🌗 RAM torture test on Mac - • TRUTH about RAM vs SSD...
    * 🛠️ Host the PERFECT Prompt - • Hosting the PERFECT Pr...
    * 🛠️ Set up Conda on Mac - • python environment set...
    * 🛠️ Set up Node on Mac - • Install Node and NVM o...
    * 🤖 INSANE Machine Learning on Neural Engine - • INSANE Machine Learnin...
    * 💰 This is what spending more on a MacBook Pro gets you - • Spend MORE on a MacBoo...
    * 🛠️ Developer productivity Playlist - • Developer Productivity
    🔗 AI for Coding Playlist: 📚 - • AI
    Repo
    github.com/open-webui/open-webui
    Docs
    docs.openwebui.com/
    Docker Single Command
    docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
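    If host networking isn't available in your Docker Desktop version, the Open WebUI docs describe a port-mapped variant along these lines (treat the exact flags as an assumption and check the repo README for the current command):
    docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
    Then browse to http://localhost:3000 instead of the host-network port.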
    - - - - - - - - -
    ❤️ SUBSCRIBE TO MY YouTube CHANNEL 📺
    Click here to subscribe: / @azisk
    - - - - - - - - -
    Join this channel to get access to perks:
    / @azisk
    - - - - - - - - -
    📱 ALEX ON X: / digitalix
    #machinelearning #llm #softwaredevelopment
  • Science & Technology

Comments • 259

  • @AC-cg6mf
    @AC-cg6mf 19 days ago +18

    I really like that you showed the non-docker install first. I think too many rely on docker black-boxes. I prefer this. Thanks!

    • @philipo1541
      @philipo1541 15 days ago +4

      Docker containers are not black boxes. You can get inside them and change stuff!!!

  • @camsand6109
    @camsand6109 21 days ago +17

    This channel is the gift that keeps on giving.

  • @JosepCrespoSantacreu
    @JosepCrespoSantacreu 21 days ago +2

    Another great video Alex, I really enjoy your videos. And I really appreciate your perfect diction in English, which makes it easy to follow your explanations even for those who do not have English as their first language.

  • @asnifuashifj91274
    @asnifuashifj91274 21 days ago +11

    Great video Alex! yes please make videos on image generation!

  • @ReginaldoKono
    @ReginaldoKono 13 days ago

    Yes Alex, it would help us even more if we could learn from you how to add an image generator as well. Thank you for your time and collaboration. Your channel is a must-have subscription nowadays.

  • @kaorunguyen7782
    @kaorunguyen7782 15 days ago

    Alex, I love this video very much. Thank you!

  • @7764803
    @7764803 21 days ago +2

    Thanks Alex for videos like this 👍
    I would like to see an image generation follow-up video 😍

  • @ChrisHaupt
    @ChrisHaupt 21 days ago +1

    Very interesting, will definitely be trying this when I get a little downtime!

  • @Ginto_O
    @Ginto_O 19 days ago +1

    Thank you, got it to work without docker

  • @gustavohalperin
    @gustavohalperin 17 days ago +2

    Great video!! And yes, please add a video explaining how to add the images generator.

  • @aldousroy
    @aldousroy 21 days ago +1

    Awesome stuff, waiting for more videos on the way

  • @brunosanmartin1065
    @brunosanmartin1065 21 days ago +22

    These videos are so exciting for me; this channel is the number one on YouTube. That's why I subscribe and gladly pay for YouTube Premium. A hug, Alex!

    • @AZisk
      @AZisk  21 days ago +3

      thanks for saying! means a lot

    • @RealtyWebDesigners
      @RealtyWebDesigners 21 days ago +3

      Now we need 1TB MEMORY DRIVES (Like the Amiga used to have 'fast ram' )

    • @MrMrvotie
      @MrMrvotie 17 days ago

      @AZisk Is there any chance you could include a PC GPU relative-performance equivalent for each new Apple Silicon chip that you review?

  • @iv4sik
    @iv4sik 17 days ago +1

    If you're trying Docker, make sure it is version 4.29+, as the host network driver (for Mac) arrived there as a beta feature.

  • @sungm2n
    @sungm2n 19 days ago

    Amazing stuff. Thank you

  • @mrdave5500
    @mrdave5500 21 days ago

    Woot woot! great stuff. Nice easy tutorial and I now have a 'smarter' Mac. Thanks :)

  • @AaronHiltonSPD
    @AaronHiltonSPD 21 days ago +5

    Amazing tutorial. Great stuff!

    • @AZisk
      @AZisk  21 days ago +2

      Thank you! Cheers!

  • @loveenjain
    @loveenjain 18 days ago

    Excellent video. Giving it a try tonight on my M3 Max 14-inch model to see the results; will probably share...

  • @erenyeager655
    @erenyeager655 21 days ago +1

    One thing for sure... I'll be implementing this on my menu bar for easy access :D

  • @sikarinkaewjutaniti4920
    @sikarinkaewjutaniti4920 18 days ago

    Thanks for sharing good stuff with us. Nice one.

  • @johnsummers7389
    @johnsummers7389 21 days ago +1

    Great Video Alex. Thanks.

    • @AZisk
      @AZisk  21 days ago

      Glad you liked it!

  • @OrionM42
    @OrionM42 19 days ago

    Thanks for the video.😊😊

  • @DaveEtchells
    @DaveEtchells 21 days ago

    I was gonna spring for a maxed M3 Max MBP, but saw rumors that the M4 Max will have more AI-related chops, so just picked up a maxed M1 Max to tide me over 😁
    Really excited about setting all this up, finding this vid was very timely, thanks!

  • @ilkayayas
    @ilkayayas 15 days ago

    Nice. Image generation and integrating the new ChatGPT into this would be great.

  • @BenjaminEggerstedt
    @BenjaminEggerstedt 20 days ago

    This was interesting, thanks

  • @gligoran
    @gligoran 20 days ago

    Amazing video! I'd just recommend Volta over nvm.

  • @guyguy467
    @guyguy467 21 days ago +3

    Thanks! Very nice video

    • @AZisk
      @AZisk  21 days ago

      Wow! Thank you!

  • @moranmono
    @moranmono 21 days ago +1

    Great video. Awesome 👏

  • @bvlmari6989
    @bvlmari6989 21 days ago +1

    Amazing video omg, incredible tutorial man

    • @AZisk
      @AZisk  21 days ago

      Glad you liked it!

  • @soulofangel1990
    @soulofangel1990 21 days ago

    Yes, we do.

  • @vadim487
    @vadim487 21 days ago

    Alex, you are awesome!

  • @mendodsoregonbackroads6632
    @mendodsoregonbackroads6632 14 days ago

    Yes I’m interested in an image generation video. I’m running llama3 in Bash, haven’t had time to set up a front end yet. Cool video.

  • @WokeSoros
    @WokeSoros 11 days ago

    I was able to, by tracking down your Conda video, get this running.
    I have some web dev and Linux experience, so it wasn’t a huge chore but certainly not easy going in relatively blind.
    Great tutorial though. Much thanks.

  • @jorgeluengo9774
    @jorgeluengo9774 20 days ago +1

    by the way, I just joined your channel, I really enjoyed these videos, very helpful, thanks!

    • @AZisk
      @AZisk  20 days ago

      awesome. welcome!

  • @cjchand
    @cjchand 6 days ago +1

    Just some food for thought for future vids: Anaconda's licensing terms changed to require any org > 200 employees to license it. For this reason, many Enterprises are steering their devs away from Anaconda. Would be helpful if the tutorials used "vanilla" Python (e.g.: venv) unless Conda were truly necessary. Thanks for the vids and keep up the great work!
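
    For anyone who wants to try it, a minimal venv-based setup in the spirit of this suggestion might look like the following (the requirements path is an assumption based on the Open WebUI repo layout; adjust to your checkout):
    python3 -m venv .venv                      # create an isolated environment with stock Python
    source .venv/bin/activate                  # activate it in the current shell
    pip install -r backend/requirements.txt    # install the backend dependencies into the venv, not globally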

    • @AZisk
      @AZisk  6 days ago

      good to know. thanks

  • @davidgoncalvesalvarez
    @davidgoncalvesalvarez 21 days ago +121

    My M1 Mac 16GB be real frightened on the side rn.

    • @blackandcold
      @blackandcold 21 days ago +12

      I ran 7b variants no problem on my now sold m1 air 16g

    • @ivomeadows
      @ivomeadows 21 days ago +5

      got macbook with the same specs. tried to run 15b starcoder2 quantized k5m in LM studio on it, max GPU layers, getting me around 12-13 tokens per sec, not good but manageable

    • @RobertMcGovernTarasis
      @RobertMcGovernTarasis 21 days ago +9

      Don't be, unless you are using other things that are super heavy as well. Llama3 8B(?) takes up about 4.7GB of RAM, and with Apple Silicon's use of the NVMe and swap you'll be fine. (I prefer using LM Studio now to Ollama as it has a CLI and web UI built in, no need for Docker/OrbStack, but Ollama on its own without a WebUI works too.)

    • @martinseal1987
      @martinseal1987 21 days ago

      😂

    • @DanielHarrisCodes
      @DanielHarrisCodes 20 days ago

      Great video. What format are LLM models downloaded in? Looking into how I can use models downloaded with Ollama with other technologies like .NET.

  • @dibyajit9429
    @dibyajit9429 21 days ago +1

    I've just started my career as a Data Scientist, and I found this video to be awesome! 🤩🥳Could you please consider making a video on image generation (in LLama 3) in a private PC environment?🥺🥺

  • @willmartin4715
    @willmartin4715 21 days ago

    I believe my laptop has 80 Tensor cores, for starters. This looks like a really good shift for a Friday night! Thanks.

  • @shapelessed
    @shapelessed 21 days ago +10

    YO! Finally hearing of a big Svelte project!
    Like really, it's so much quicker and easier to ship with Svelte than others, why am I only seeing this now?

    • @AZisk
      @AZisk  21 days ago +4

      Svelte for the win!

    • @precisionchoker
      @precisionchoker 21 days ago +1

      Well... Apple, Brave, The New York Times, and IKEA, among other big names, all use Svelte.

    • @shapelessed
      @shapelessed 21 days ago

      ​@@precisionchoker But they do not acknowledge that too much..

  • @AzrealNimer
    @AzrealNimer 20 days ago +1

    I would love to see the image generation tutorial 😁

  • @Dominickleiner
    @Dominickleiner 14 days ago +1

    instant sub, great content thank you!

    • @AZisk
      @AZisk  14 days ago

      Welcome aboard!

  • @LucaCilfoneLC
    @LucaCilfoneLC 19 days ago

    Yes! Image generation, please!

  • @Meet7
    @Meet7 17 days ago

    thanks alex

  • @RealtyWebDesigners
    @RealtyWebDesigners 21 days ago +5

    BTW - One of the BEST programmer channels!

  • @erwintan9848
    @erwintan9848 20 days ago +1

    Is it fast on a Mac M1 Pro too?
    How much storage is used for the whole installation, sir?
    Your video is awesome!

  • @jehad4455
    @jehad4455 21 days ago

    Mr. Alex Ziskind
    Could you clarify whether training deep learning models on a GPU for the Apple Silicon M3 Pro might reduce its lifespan?
    Thank you.

  • @keithdow8327
    @keithdow8327 21 days ago +4

    Thanks!

    • @AZisk
      @AZisk  21 days ago

      Wow 🤩 thanks so much!

  • @thetabletopskirmisher
    @thetabletopskirmisher 18 days ago

    What advantage does this have over using LM Studio that you can install directly as an app instead of using the Terminal? (Genuine question)

  • @gayanperera7273
    @gayanperera7273 20 days ago

    Thanks @Alex. By the way, is there a reason it can only use the GPU? Any reason it's not taking advantage of the NPU?

  • @akhimohamed
    @akhimohamed 20 days ago +1

    As a game dev, this is so good to have. Btw, I'm gonna try this on Parallels for my M1 Pro.

    • @Lucas-fl8ug
      @Lucas-fl8ug 16 days ago

      You mean in Windows through Parallels? Why would that be useful?

  • @99cya
    @99cya 21 days ago +1

    Hey Alex, would you say Apple is in a very good position when it comes to AI and the required hardware? So far Apple has been really quiet and lots of people don't think Apple can have an edge here. What's your thought in general here?

  • @agnemedia624
    @agnemedia624 21 days ago

    Thanks 👍🏻

  • @IsaacFromHK
    @IsaacFromHK 21 days ago

    Can someone tell me how this is different from LM Studio, AnythingLLM, or using llamafile? I get a bit confused with all these. Also, can I make this run with RAG?

  • @alexanderekeberg4343
    @alexanderekeberg4343 21 days ago

    Should I upgrade from a MacBook Pro 2020 (Intel Core i5 8th gen quad-core 1.4GHz) to a 15-inch MacBook Air M3 for coding?

  • @XinYue-ki3uw
    @XinYue-ki3uw 19 days ago

    I like this tutorial, it is computer-dummy friendly~

  • @cookiebinary
    @cookiebinary 18 days ago +2

    Tried llama3 on 8GB ram M1 :D ... I guess I was too optimistic

  • @ykimleong
    @ykimleong 20 days ago

    Hi, please, please show how to generate images through the Ollama web UI if possible.

  • @ontime8109
    @ontime8109 18 days ago

    thanks!

  • @Raptor235
    @Raptor235 18 days ago

    Great video Alex. Is there any way to have an LLM execute local shell scripts to perform tasks?

  • @historiasinh9614
    @historiasinh9614 15 days ago

    Which model is good for programming in JavaScript on an Apple Silicon Mac with 16GB?

  • @haralc
    @haralc 19 days ago

    Oh you got distracted! You're a true developer!

  • @shalomrutere2649
    @shalomrutere2649 18 days ago

    I've run the phi3 model on my Windows laptop and it is running on the CPU. How do I switch it to run on the GPU?

  • @bekagelashvili2904
    @bekagelashvili2904 9 days ago

    Easy question: if I am not a developer, what benefit do I get from installing an LLM on my Apple Silicon Mac? And what's the difference between the free and paid versions of AI models?

  • @ashesofasker
    @ashesofasker 14 days ago

    Great video! So are you saying that we can get ChatGPT-like quality, just faster, more private, and for free, by running local LLMs on our personal machines? Like, do you feel that this replaces ChatGPT?

  • @Megabeboo
    @Megabeboo 20 days ago

    How do I find out about the hardware requirements like RAM, disk space, GPU?

  • @abdorizak
    @abdorizak 21 days ago

    Alex, why does the M1 Mac get hot after being used for like 10 minutes?

  • @filipjofce
    @filipjofce 16 days ago

    So cool, and it's free (if we don't count the 4 grand spent on the machine). I'd love to see the image generation.

  • @aaronsayeb6566
    @aaronsayeb6566 15 days ago

    Do you know if any LLM would run on a base model M1 MacBook Air (8GB memory)?

  • @mediocreape
    @mediocreape 20 days ago

    thanks for the video man

  • @yianghan751
    @yianghan751 19 days ago +1

    Alex, excellent video!
    Can my MacBook Air M2 with 16GB RAM host these AI engines smoothly?

  • @AlexLaslau
    @AlexLaslau 17 days ago +1

    Would an MBP M1 Pro with 16GB of RAM be enough to run this?

  • @matteobottazzi6847
    @matteobottazzi6847 21 days ago +3

    A video on how you could incorporate these LLMs into your applications would be super interesting! Let's say that in your application you have a set of PDFs or HTML files that provide documentation on your product. If you let these LLMs analyse that documentation, then the user could get very useful information just by asking, instead of searching through all of the documentation files!

    • @FelipeViaud
      @FelipeViaud 20 days ago +2

      +1

    • @neoqe6lb
      @neoqe6lb 6 hours ago +1

      Ollama has API endpoints that you can integrate into your apps. Check their documentation.
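
      For example, a minimal call to Ollama's local REST API (assuming the default port 11434 and that the llama3 model has already been pulled):
      curl http://localhost:11434/api/generate -d '{"model": "llama3", "prompt": "Why is the sky blue?", "stream": false}'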

  • @thevirtualdenis3502
    @thevirtualdenis3502 15 days ago

    Thanks! Is a MacBook Air enough for that?

  • @alexbanh
    @alexbanh 20 days ago

    How does MBP performance compare to Intel + Nvidia when running these local LLMs?

  • @113bast
    @113bast 21 days ago +4

    Please show image generation

  • @youssefragab2109
    @youssefragab2109 21 days ago +1

    This is really cool, love the channel and the videos Alex! Just curious, how is this different to an app like LM Studio? Keep up the good work!

    • @yuanyuanintaiwan
      @yuanyuanintaiwan 4 days ago

      My guess is that this web UI has more capabilities such as image generation which LM Studio doesn’t have. If the goal is simply to have text interaction, then I agree that this may not be necessary

  • @rafaelcordoba13
    @rafaelcordoba13 21 days ago +2

    Can you train these local LLMs with your own code files? For example adding all files from a project as context so the AI suggests things based on your current code structure and classes.

    • @dmitrykomarov6152
      @dmitrykomarov6152 14 days ago

      Yeap, you can then make a RAG with the LLMs you prefer. Will be making my own RAG with llama3 this weekend.

  • @uwegenosdude
    @uwegenosdude 22 hours ago

    Thanks for the video. I tried to download the code companion. Do you know why, while this LLM is downloading, there is also an upload of a couple of gigabytes?

  • @pixelplay1098
    @pixelplay1098 21 days ago

    Amazing stuff as usual. Now make a tutorial on AUTOMATIC1111.

  • @ActdeskSG
    @ActdeskSG 7 days ago

    How do we get the models updated regularly?

  • @toddbristol707
    @toddbristol707 20 days ago +1

    Great channel! I just built something similar with LM Studio and a Flask-based web UI. I'm going to try this method now. Btw, what was the 'code .' command you ran? Are you using Visual Studio Code? Thanks again!

    • @AZisk
      @AZisk  13 days ago

      Thanks! and thanks for joining. I did the flask thing a few videos ago, but it's just another thing to maintain. I find this webui a lot more feature rich and better looking. And yes, the 'code .' command just opens the current folder in VSCode

  •  21 days ago

    Is MPS available in Docker for Apple Silicon already?

  • @OlegShulyakov
    @OlegShulyakov 16 days ago

    When will there be a video on running an LLM on an iPhone or iPad? Like using LLMFarm.

  • @jakubpeciak429
    @jakubpeciak429 18 days ago

    Hi Alex, I would like to see the image generation video

  • @innocent7048
    @innocent7048 21 days ago +19

    Here you have a super like - and a cup of coffee 🙂

    • @AZisk
      @AZisk  21 days ago +6

      Yay, thank you! I haven't been to Denmark in a while - beautiful country.

  • @Mikoaj-ie6gt
    @Mikoaj-ie6gt 19 days ago

    Very interesting

  • @AdityaSinghEEE
    @AdityaSinghEEE 20 days ago

    Can't believe I found this video today; I just started searching for local LLMs yesterday, and today I found the complete guide. Great video Alex :)

    • @scorn7931
      @scorn7931 15 days ago

      You live in Matrix. Wake up

  • @justintie
    @justintie 19 days ago +1

    The question is: are open-source LLMs just as good as, say, ChatGPT or Gemini?

  • @TheBiffsterLife
    @TheBiffsterLife 19 days ago

    Will the m4 chips be many times faster still?

  • @travelchoice89
    @travelchoice89 19 days ago +2

    🚀 Excited to dive into this video! Local LLMs on Apple Silicon for FREE? That's a game-changer! Let's see just how FAST these optimizations are! 🍏💻

  • @MohammedAraby
    @MohammedAraby 13 days ago

    Would be happy to see a tutorial for AUTOMATIC1111 ❤

  • @abhishekjamdade6701
    @abhishekjamdade6701 13 days ago

    I have 8GB of RAM; can I run this? I know this uses the GPU, but do I still need 16 or 32GB of RAM?

  • @zorawarsingh11
    @zorawarsingh11 18 days ago

    Yes do images please 🙏🏻

  • @jorgeluengo9774
    @jorgeluengo9774 20 days ago

    Thank you Alex, amazing video. I followed all the steps and I enjoyed the process and the results with my M3 Max. I wonder if there is a GPT that we can use from the laptop that can also search online, since the knowledge cutoff date of these models seems to be over a year ago or more. For example, when I ask what the Terraform provider version is for AWS or another platform, the answer is old and there is a potential for deprecated code in the responses. What do you recommend in this case? Not sure if you already have a video for that lol.

    • @AZisk
      @AZisk  20 days ago +1

      that’s a great question. you’ll need to use a framework like flowise or langchain to accomplish this I believe, but i don’t know much about them - it’s on my list of things to learn

    • @jorgeluengo9774
      @jorgeluengo9774 20 days ago

      @AZisk Makes sense. I will do some research about it and see what I can find out to test, but I will look forward to when you share a video on this type of model orchestration; it will be fantastic.

  • @TheMrApocalips
    @TheMrApocalips 20 days ago

    Can you make a stock trading "AI" using these tools on Apple or Snapdragon/similar?

  • @swapwarick
    @swapwarick 21 days ago +24

    I am running Llama and CodeGemma on my laptop for local file intelligence. It's slow, but damn, it reads all my PDFs and gives a perfect overview.

    • @devinou-programmationtechn9979
      @devinou-programmationtechn9979 21 days ago +10

      Do you do it through Ollama and Open WebUI? I'm curious as to how you can send files to be processed by LLMs.

    • @ShakeAndBakeGuy
      @ShakeAndBakeGuy 21 days ago

      @devinou-programmationtechn9979 GPT4All works fairly well with attachments. But I personally use Obsidian as a RAG to process markdown files and PDFs. There are tons of plugins like Text Generator and Smart Connections that can work with Ollama, LM Studio, etc.

    • @TheXabl0
      @TheXabl0 21 days ago

      Can you describe this "perfect overview"? Just curious what you mean by that.

    • @swapwarick
      @swapwarick 21 days ago

      Yes, running Open WebUI for the Llama and CodeGemma LLMs on a Windows machine. Running Open WebUI on localhost gives a text area where you can upload the file. The upload takes time. Once it is done, you can ask questions like "give me an overview of this document," "tell me all the important points of this document," etc.

    • @TheChindoboi
      @TheChindoboi 15 days ago

      Gemma doesn’t seem to work well on Apple silicon

  • @tyron2854
    @tyron2854 21 days ago +1

    What about a new M4 iPad Pro video?

  • @RAHUL-lk6vx
    @RAHUL-lk6vx 21 days ago

    Bro, I can't open the code using the 'code .' command. What should I do?

  • @ScottSquires
    @ScottSquires 21 days ago

    Curious: with Xcode, all the models, Docker, video editing, etc., how much disk space does your system have? Are you using external drives? Trying to balance the ease of having plenty of drive space vs Apple's drive costs.

    • @ghost-user559
      @ghost-user559 21 days ago +1

      An external OWC Thunderbolt enclosure and an nvme 1-2 TB is what I went with. I use it as my boot drive and use the internal as a backup now. I run everything off of it, and I have tons of room to spare. Has to be a true thunderbolt enclosure to boot from however.

    • @ScottSquires
      @ScottSquires 20 days ago

      @ghost-user559 Thanks. I'll probably still boot from the Mac and have multiple fast drives for large data (videos, photos, models, etc.)

    • @ghost-user559
      @ghost-user559 20 days ago +1

      @ScottSquires Yeah, I did that for almost a year myself. But ultimately I realized that many of my apps, like AI apps and music libraries for Logic Pro, take up hundreds of GB, and I was having issues with permissions on externally stored files. For normal data this isn't an issue, but many apps I use only store data in the local directory, so the only way to run them is from the boot drive. As long as you only need storage in general, your plan works really well. But if you want to work on files regularly on that drive, it's easier to just install macOS directly onto the external.

  • @faysal1991
    @faysal1991 12 days ago

    Let's do some image generation please, it would be super helpful.

  • @MW-mn1el
    @MW-mn1el 21 days ago +3

    I use Ollama with the Continue plugin in VS Code, and the Chatbox GUI when it's not code related. Works well on both Mac and Linux with a Ryzen 7000 CPU. On Linux it's running in a Podman (Docker) container. But the best experience is with the MacBook Pro; Apple Silicon and unified memory make it speedy.