AI on Mac Made Easy: How to run LLMs locally with OLLAMA in Swift/SwiftUI

  • Published on Jan 8, 2025

Comments • 21

  • @khermawan · 5 months ago +9

    Ollamac and OllamaKit creator here! 👋🏼 Great video, Karin!! ❤

  • @Juansevvc · months ago

    Thanks Karin, this is what I was looking for!

  • @Another0neTime · 6 months ago +1

    Thank you for the video, and for sharing your knowledge.

  • @LinuxH2O · 4 months ago

    Really informative, something I was kind of in need of. Thanks for showing things off.

  • @andrelabbe5050 · 5 months ago

    I enjoyed the video. Easy to understand and, most importantly, it shows what you can do without too much hassle on a not-too-powerful MacBook. From the video, I believe I have the same model as the one you used. I do like the idea of setting presets for the 'engine'. I also use the Copilot apps, so I can check how both perform on the same question. I have just tested deepseek-coder-v2 with the same questions as you... Funny thing, it is not exactly the same answer. Also, on my 16GB Mac, the memory activity gets a nice yellow colour. Sadly, contrary to the Mac in the video, I have more stuff running in the background, like Dropbox, etc., which I cannot really kill just for the sake of it.

  • @tsalVlog · 6 months ago

    Great video!

  • @KD-SRE · 5 months ago +1

    I use '/bye' to exit the Ollama CLI.

  • @juliocuesta · 6 months ago

    If I understood correctly, the idea could be to create a macOS app that includes some feature that requires an LLM. The app is distributed without the LLM, and the user is notified that the feature will only be available if they download the model. This message could be implemented in a View containing a button that downloads the file and configures the macOS app to start using it.
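
    A minimal sketch of that download-on-demand idea, assuming a local Ollama server on its default port (11434) and its documented POST /api/pull endpoint; the view name, the "llama3" model, and the status handling are illustrative assumptions, not code from the video:

    import SwiftUI

    // Illustrative view: gates an LLM-backed feature behind a one-time
    // model download, pulled through Ollama's local HTTP API.
    struct ModelDownloadView: View {
        @State private var isDownloading = false
        @State private var status = "This feature needs a local model to work."

        var body: some View {
            VStack(spacing: 12) {
                Text(status)
                Button("Download model") {
                    Task { await pullModel(named: "llama3") } // placeholder model name
                }
                .disabled(isDownloading)
            }
            .padding()
        }

        // POST /api/pull streams one JSON status object per line until "success".
        @MainActor
        func pullModel(named name: String) async {
            isDownloading = true
            defer { isDownloading = false }
            var request = URLRequest(url: URL(string: "http://localhost:11434/api/pull")!)
            request.httpMethod = "POST"
            request.setValue("application/json", forHTTPHeaderField: "Content-Type")
            request.httpBody = try? JSONSerialization.data(withJSONObject: ["name": name])
            do {
                let (bytes, _) = try await URLSession.shared.bytes(for: request)
                for try await line in bytes.lines {
                    status = line // e.g. {"status":"pulling manifest"}
                }
                status = "Model ready."
            } catch {
                status = "Download failed: \(error.localizedDescription)"
            }
        }
    }

    In a real app you would decode each status line to show download progress, but the raw stream is enough to illustrate the flow.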

  • @kamertonaudiophileplayer847 · 6 months ago

    Awesome video!

  • @ericwilliams4554 · 6 months ago

    Great video. Thank you. I am interested to know if any developers are using this in their iOS apps.

    • @SwiftyPlace · 6 months ago +2

      This does not work on iOS. If you want to run an LLM on an iPhone, you will need to use a smaller model, and those usually don't perform as well. Most iPhones have less than 8GB of RAM. That is also why Apple Intelligence processes more advanced, complex tasks in the cloud.

  • @guitaripod · 6 months ago +1

    Wondering what it'd take to get something running on iOS. Even a 2B model might prove useful.

  • @Algorithmswithsubham · 2 months ago

    more on these please

  • @botgang5092 · 3 months ago

    Nice! 👍

  • @mindrivers · 5 months ago

    Dear Karin, could you please advise on how to put my entire Xcode project into a context window and ask the model about my entire codebase?
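
    A rough sketch of one possible approach, assuming a local Ollama server and using its POST /api/generate endpoint; the helper and the model name are assumptions for illustration. A whole Xcode project can easily exceed a model's context window, so in practice you would select or chunk the files:

    import Foundation

    // Illustrative helper: concatenates the .swift files under a project
    // folder into one prompt and asks a local Ollama model about them.
    func askAboutProject(at root: URL, question: String) async throws -> String {
        var context = ""
        let enumerator = FileManager.default.enumerator(at: root, includingPropertiesForKeys: nil)
        while let file = enumerator?.nextObject() as? URL {
            guard file.pathExtension == "swift" else { continue }
            let source = (try? String(contentsOf: file, encoding: .utf8)) ?? ""
            context += "// File: \(file.lastPathComponent)\n\(source)\n\n"
        }

        // With "stream": false the endpoint returns a single JSON object
        // whose "response" field holds the model's full answer.
        var request = URLRequest(url: URL(string: "http://localhost:11434/api/generate")!)
        request.httpMethod = "POST"
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        let payload: [String: Any] = [
            "model": "deepseek-coder-v2",   // placeholder; any pulled model works
            "prompt": context + "\nQuestion: \(question)",
            "stream": false
        ]
        request.httpBody = try JSONSerialization.data(withJSONObject: payload)

        let (data, _) = try await URLSession.shared.data(for: request)
        let json = try JSONSerialization.jsonObject(with: data) as? [String: Any]
        return json?["response"] as? String ?? ""
    }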

  • @officialcreatisoft · 6 months ago

    I've tried using LLMs locally, but I only have 8GB of RAM. Great video!

    • @SwiftyPlace · 6 months ago +1

      Unfortunately, Apple made the base models with 8GB. A lot of people have the same problem as you.

    • @jayadky5983 · 6 months ago +1

      I feel like you can still run the Phi3 model on your device.

  • @midnightcoder · 6 months ago +2

    Any way of running it on iOS?

      • @EsquireR · 5 months ago

        Only watchOS, sorry.

  • @bobgodwinx · 5 months ago

    LLMs have a long way to go. 4GB to run a simple question is a no-go. They have to reduce it to 20MB, and then people will start paying attention.