Here is How You Create AI Operating System from Scratch

  • Published on 30 Sep 2024

Comments • 51

  • @MeinDeutschkurs · 3 months ago

    I rewatched this. Now I think the term "from scratch" is misleading; "from scratch" should start with "open a new Python file".

  • @benh8199 · 4 months ago +2

    Out of all the AI tools and frameworks you’ve used, which one(s) do you find to be the most useful and have the most promise moving forward?

  • @moses5407 · 4 months ago +4

    Really the absolute BEST AI presentations and development around! Thanks!

  • @christopheboucher127 · 4 months ago +6

    Hi Mervin, what about the Postgres memory? Is it long-term memory, something like AutoGen's teachable agent or MemGPT? Thanks again for your amazing content!

    • @phidata · 4 months ago +3

      Postgres right now is storing chat history -- but ChatGPT-like personalized memory is in the works :)
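
For anyone wondering what SQL-backed chat history looks like in general terms, here is a minimal sketch using sqlite3 as a stand-in for Postgres. The table and column names are illustrative assumptions, not phidata's actual schema.

```python
import sqlite3

# Minimal SQL-backed chat history (sqlite3 standing in for Postgres).
# Schema is illustrative only -- not phidata's actual tables.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE chat_history (
           session_id TEXT,
           role       TEXT,   -- 'user' or 'assistant'
           content    TEXT
       )"""
)

def save_message(session_id: str, role: str, content: str) -> None:
    """Append one message to the session's history."""
    conn.execute(
        "INSERT INTO chat_history (session_id, role, content) VALUES (?, ?, ?)",
        (session_id, role, content),
    )

def load_history(session_id: str) -> list[dict]:
    """Return messages in insertion order, shaped like OpenAI chat messages."""
    rows = conn.execute(
        "SELECT role, content FROM chat_history WHERE session_id = ? ORDER BY rowid",
        (session_id,),
    )
    return [{"role": r, "content": c} for r, c in rows]

save_message("s1", "user", "Hello")
save_message("s1", "assistant", "Hi! How can I help?")
```

Prepending the loaded history to each new request is all "memory" means at the chat-history level; personalized long-term memory needs summarization/retrieval layered on top.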

    • @christopheboucher127 · 4 months ago +3

      @@phidata great! Amazing! I don't know if it will be possible, but I dream of a long-term memory system in a SQL-like database, with auto-created tables for topics that the agent fills in when it finds relevant info worth keeping (for example preferences, the user's backstory, the company, personal data, ideas and thoughts, etc.). That kind of memory would be very helpful for all kinds of assistants, from office work to psychotherapist, coach, etc., and maybe a runtime to reorganise the whole database when needed... And if all of that can be managed per user session, it would be the perfect framework for a new kind of agentic system, if you see what I mean... Unfortunately I don't have enough coding skills to build that, or to help build it. Thanks a lot @phidata for... phidata ;) Very great job, and a very great gift to the world!

    • @phidata · 4 months ago +4

      @@christopheboucher127 this is truly amazing! I'm coding the personalized memory piece right now, and your message was like the AI gods speaking to me, showing me what to build. Thank you!
      Cannot express how much I appreciate this guidance, thank you

  • @moses5407 · 4 months ago +1

    Would it be possible to add approved chat comments to the local knowledge base? Or is that automatic via the Postgres storage? Also, could PraisonAI's automatic multi-agent creation be added, vs. manually/programmatically defining all of the agents/tools in LLM OS?

  • @michaelmarkoulides7068 · 4 months ago +1

    Holy shit Mervin, I'm impressed. You've just shown how, with a few Python libraries and some code, you can hook into the OS file system and create agents with specialised knowledge. If I'm not mistaken, this could run on any embedded hardware with a Linux OS and a 5G or WiFi connection, because you're using an OpenAI API call. Which means you could extend AI to edge computing right now, to all the millions of connected edge devices out there.
    Kudos man 👏👏👏
    I'm going to try this on some hardware I have at work

  • @2771237 · 4 months ago +1

    Awesome! Thanks for sharing. One question: can we use Llama 3 instead of GPT-4?

  • @ninolindenberg4444 · 4 months ago

    Here is how you create an AI OS from SCRATCH => Python... 😂😂😂😂😂😂
    What's next? Building AI rockets from SCRATCH with Python? 😂😂😂😂😂😂
    Please stop the nonsense and start educating people with real stuff. Yes, Python is installed by default in most OSes, but it's not what the OS itself is programmed in.

  • @Ahmed-Sabrie · 4 months ago +2

    If I were to give this spectacular video a score on a scale from 1 to 10... I would give you 20! Well done, and many thanks

  • @timothywcrane · 4 months ago

    I'd love to make something similar, but I'd want the foundational LLM to actually be a local SLM that calls larger models only when needed, and I'd want it local-GPU-agnostic/optional: modular GPUs could be added on-prem or called from providers. My idea isn't in any way superior on the face of it... just an iteration/extrapolation of a similar idea. Thanks.

  • @HyperUpscale · 4 months ago +3

    You are from another planet, Mervin... Always a few steps ahead of the future 🤯🤯🤯...

    • @asqu · 4 months ago

      This is the Phidata team's project: th-cam.com/video/YMZm7LdGQp8/w-d-xo.html

  • @Techonsapevole · 4 months ago +1

    super cool, I hope the next CPUs will run Llama 3 70B fast

  • @JohnSmith762A11B · 4 months ago

    It's funny: in the film Her, the fictional OS1 appears to take up the whole screen. It does later show documents and other UI elements, so I wonder if it's fair to call it an OS. I will say Apple and Microsoft need to get ahead of this and start allowing LLMs to control their desktops; my guess is both are working feverishly on it. In a few years you won't need to know how an email app like Outlook even works to send and receive email, or to create and update spreadsheets: your AI OS will handle that for you, just tell it what you want.

  • @АлексГладун-э5с · 4 months ago +8

    You could have shared a link to the original video: th-cam.com/video/6g2KLvwHZlU/w-d-xo.html
    instead of recording a clone yourself.

    • @tomasbusse2410 · 4 months ago

      Hm 😢

    • @phidata · 4 months ago +3

      tbh I think Mervin explained it better than me :)

    • @helix8847 · 4 months ago +2

      If you haven't noticed, he does that a lot in most of his videos.

    • @phidata · 4 months ago +1

      @@helix8847 I'm actually a big fan of that, because he explains it much better than me :) Hope he continues to do it. Mervin has a way of communicating complex information, and I learn a lot about my own work when he makes a video.

  • @moses5407 · 4 months ago +1

    Would love to see a remotely accessible server added to this setup, to act something like Open Interpreter and the 01 Light.

  • @mehditaslimifar2521 · 3 months ago

    Nice, thank you :-) Can you please make a video on running LLM OS on AWS?

  • @spencerfunk6697 · 4 months ago

    I'm having problems using LM Studio or Groq with this

  • @josephtilly258 · 4 months ago

    Hello, exporting my OpenAI API key isn't working in the terminal. Any tips on how to set it?
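
A frequent cause is running `export OPENAI_API_KEY=...` in a different shell than the one launching the app, or using Windows `set` syntax in bash. One way to debug is to check from Python whether the variable actually reached the process, and fall back to setting it for the current process only; the placeholder key below is purely illustrative.

```python
import os

# Check whether OPENAI_API_KEY made it into this process's environment.
key = os.environ.get("OPENAI_API_KEY")

if not key:
    # Fallback: set it for this Python process only (child processes inherit
    # it). Placeholder value for illustration -- substitute your real key.
    os.environ["OPENAI_API_KEY"] = "sk-your-key-here"
    key = os.environ["OPENAI_API_KEY"]

print("key is set:", bool(key))
```

If the check prints `False` before the fallback, the `export` never reached the process: re-run it in the same terminal session that starts the app, or add it to your shell profile.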

  • @mehditaslimifar2521 · 3 months ago

    Can LLM OS be built with VectorShift?

  • @kamalkamals · 4 months ago

    Is there a technical reason to choose phidata instead of LangChain?

  • @mehditaslimifar2521 · 4 months ago +1

    very nice video, thanks Mervin

  • @luxipo9934 · 4 months ago +1

    This is impressive

  • @SDN-Acad · 4 months ago +1

    Amazing video. Thank you!

  • @RICHARDSON143 · 4 months ago +2

    ❤❤❤

  • @enesgul2970 · 4 months ago

    Your own camera view is too big. We can't see the code.

  • @nrpragmatic · 4 months ago

    Can it run Doom tho?

  • @optalgin2371 · 4 months ago

    Legit question: why not just use embedding models like Nomic, for example? Chatting with my LLM, I learned these vector "memories" create connections/nodes or whatever, and it links back to those vector memories, meaning its knowledge and memories sort of expand...

  • @onlineinformation5320 · 4 months ago

    Is there any way I can use these assistants created with phidata in multi-agent frameworks like CrewAI or AutoGen?

  • @redbaron3555 · 4 months ago

    Let's rename an agent as an OS and pretend it's novel... 🤷🏻‍♂️🙈

  • @henrychien9177 · 4 months ago

    Is OpenAI the only API? Or can I use a local LLM that mimics the OpenAI API? Will that work?
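
On the local-LLM question: servers such as Ollama and LM Studio expose an OpenAI-compatible endpoint, so pointing a client at a local base URL is usually all it takes. A stdlib-only sketch follows; the URL, port, and model name are assumptions about a typical Ollama setup, not something shown in the video.

```python
import json
import urllib.request

def build_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style /chat/completions request (no network yet)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def chat(base_url: str, model: str, prompt: str) -> str:
    """Send the request and return the assistant's reply text."""
    with urllib.request.urlopen(build_request(base_url, model, prompt)) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires a local OpenAI-compatible server, e.g. Ollama with llama3):
# print(chat("http://localhost:11434/v1", "llama3", "Hello!"))
```

Because the request shape is the same, swapping between OpenAI and a local model is just a matter of changing the base URL, the model name, and (for the real OpenAI endpoint) adding an `Authorization` header.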

  • @MeinDeutschkurs · 4 months ago

    🎉🎉👏👏👏

  • @benh8199 · 4 months ago

    Phidata > Praison AI?

  • @AI-Wire · 4 months ago +1

    How does phidata relate to CrewAI and PraisonAI? Would we use them separately and independently, or do they work together somehow? If they're independent, which do you recommend and why?

    • @denisblack9897 · 4 months ago

      1. Start with CrewAI
      2. Find it was a waste of time
      3. Move on with your life 😅
      No need to blow your mind with more complex stuff just to see that it provides zero value.
      CrewAI is perfect, and a little bit illegal with how simple it is.

    • @AI-Wire · 4 months ago

      @@denisblack9897 What do you mean by "CrewAI is perfect and a little bit illegal with how simple it is"?

    • @JohnSmith762A11B · 4 months ago

      @@denisblack9897 I'm confused... are you saying CrewAI is too simple to do real work? Or are you saying it's amazing?

  • @Badg0r · 4 months ago +1

    This is never going to work, since the LLM has to run on an OS as well. This is an OS on top of an OS. And it always needs lots of power.

    • @fieldcommandermarshall · 4 months ago

      depends on what you mean by ‘this’ 🤓

    • @nathanpallavicini6687 · 4 months ago +1

      Must have never heard of virtualisation

    • @michaelmarkoulides7068 · 4 months ago

      Yes, it's true LLMs have an OS under them, in the same way an ATM or a parking meter has an app running on top of an operating system like Windows Embedded or embedded Linux, but I think you're missing the point. LLMs are black boxes; they have no contact with things outside their domain. LLM OS is a concept, and Mervin has demonstrated how you can implement that concept, giving it hooks into your OS to access files and specialised agents. In this video Mervin uses an OpenAI API key, which means you're making an API call to OpenAI's servers. So you could run this as-is on a barebones Linux system with a WiFi or 5G connection, on a Raspberry Pi or a higher-end BeagleBone Black.
      If you were going to replace the OpenAI LLM with an open-source LLM like Grok or Llama, then you're correct: you would need a lot more memory and compute power, not to mention a GPU.