Moshi The Talking AI

  • Published on 20 Sep 2024

Comments • 68

  • @johnkintree763
    @johnkintree763 1 day ago +10

    The ability of Moshi to respond to both the linguistic and non-linguistic speech input is a great feature. Next, it needs function calling abilities to act as an interface to backend knowledge bases.

    • @mickelodiansurname9578
      @mickelodiansurname9578 1 day ago +2

      This is exactly what I need for my own project... I can cut latency using Groq or Cerebras and that's fine, but we still have the issue of ASR and TTS latency, because it needs to pass the text to a better model and do the same in reverse with the second model's output. Now you can, it seems, fine-tune it on both audio and text datasets... but surely a shortcut is simply some input/output doorway to an external model? I looked at this the last time it was doing the rounds and that was the main problem... it's fantastic but also dumb as a rock... so yeah, function calling and perhaps also an instruct version? Plus we really need a simple way of creating cloned voices on it, a way that is far simpler than hundreds or thousands of hours of audio.
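The cascade this comment describes (ASR, then a stronger text LLM, then TTS, with latency stacking at each hop) can be sketched with hypothetical stubs. Everything below is illustrative: the function names and delays are stand-ins, not Moshi's, Groq's, or any real service's API.

```python
import time

# Hypothetical stand-ins for the three stages of a cascaded voice pipeline;
# the sleep() calls model per-stage latency only.
def asr(audio: bytes) -> str:
    time.sleep(0.05)   # speech-to-text delay
    return "user question"

def llm(prompt: str) -> str:
    time.sleep(0.05)   # text-generation delay (e.g. a hosted 70B model)
    return f"answer to: {prompt}"

def tts(text: str) -> bytes:
    time.sleep(0.05)   # text-to-speech delay
    return text.encode()

def cascaded_reply(audio: bytes) -> bytes:
    # The stages run strictly in sequence, so their latencies add up.
    # An end-to-end model like Moshi collapses them into one audio-to-audio pass.
    return tts(llm(asr(audio)))

start = time.perf_counter()
reply = cascaded_reply(b"\x00")
print(f"cascade took {(time.perf_counter() - start) * 1000:.0f} ms")
```

The point of the comment is that even a very fast LLM hop leaves the first and last stages gating end-to-end latency, which is what an audio-native model avoids.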

    • @samwitteveenai
      @samwitteveenai 1 day ago +2

      I'm sure all of these things will come with time. At the moment, in many ways, it's a proof-of-concept model for taking in voices and voice semantic information and training the transformer on that directly, rather than having a middle step. This is very similar to how the full versions of Gemini and GPT-4o work, being end-to-end multimodal.

    • @mickelodiansurname9578
      @mickelodiansurname9578 1 day ago +1

      @@samwitteveenai Well, my two pence worth is that someone needs to give them a few bucks to hurry that along. It is a good direction, though.

    • @RedCloudServices
      @RedCloudServices 1 day ago +1

      It would be useful if you could change the LLM used with Moshi, sort of like OpenWebUI

    • @mickelodiansurname9578
      @mickelodiansurname9578 1 day ago

      @@RedCloudServices It would be very useful... right now I'm building a live-interaction karaoke contest app with AI, and it would be a really big thing... I suppose in this case the code, weights, and docs are all open source, so it's doable. Unfortunately I don't have the resources to do it! lol... hey, maybe in 6 months, right?

  • @EDLR234
    @EDLR234 1 day ago +5

    I thought maybe Moshi was gone after everyone dumped on it. I'm so glad to see they released the code. This is open source; a lot of people are not grasping how incredibly cool this is.

    • @samwitteveenai
      @samwitteveenai 1 day ago +4

      This is exactly how I felt. I held off doing a video the first time because they said they were going to release the code, and up until yesterday I had started to give up on them.

    • @ronilevarez901
      @ronilevarez901 1 day ago +1

      It's training magic, mostly.
      If LLM progress has shown anything lately, it's that all the LLMs' capabilities come from better training sets + bigger size.
      Give me a supercomputer and unlimited high-quality and diverse datasets and you'll have anything you've ever dreamed of from AI.

    • @EDLR234
      @EDLR234 1 day ago

      @@samwitteveenai Thanks for bringing people's attention back to it, OP, and great video.

  • @chunlingjohnnyliu2889
    @chunlingjohnnyliu2889 1 day ago +3

    One more step closer to her, great video thanks!

  • @johnkintree763
    @johnkintree763 2 days ago +4

    There are occasions when it is helpful to have a transcript of conversations.

  • @まさしん-o8e
    @まさしん-o8e 2 days ago +7

    Kyuutai means "sphere," but Moshi probably comes from the standard greeting when picking up the phone in Japanese (moshi moshi).

    • @samwitteveenai
      @samwitteveenai 2 days ago

      I did try that one time, but I didn't get a great response, so I figured it only liked English. Please let me know if it works.

  • @thetagang6854
    @thetagang6854 2 days ago +9

    This came out like yesterday, you move quick!

    • @69x
      @69x 1 day ago +1

      it's been out for months blud

    • @thetagang6854
      @thetagang6854 1 day ago

      @@69x The open source code I mean.

  • @donconkey1
    @donconkey1 1 day ago +1

    The topic was insightful, and your delivery kept me engaged from start to finish. I'm looking forward to more content like this. The viewer comments added value and further understanding; clearly, you draw a thoughtful crowd.

  • @xenoaiandrobotics4222
    @xenoaiandrobotics4222 2 days ago +2

    This is really impressive

  • @dhruvgandhi5796
    @dhruvgandhi5796 2 days ago +2

    Samantha will become real 🤯
    (from the movie Her)

  • @nickludlam
    @nickludlam 1 day ago

    I've played with this running locally, and while it's not smart, the architecture is a real breakthrough. I do wonder how interdependent everything is, where any incremental change in one area would require retraining the whole thing. I don't know if there are areas of discrete cross-attention which make the interfaces a tiny bit more decoupled.

  • @MeinDeutschkurs
    @MeinDeutschkurs 2 days ago

    Amazing! 🎉🎉

  • @mickelodiansurname9578
    @mickelodiansurname9578 1 day ago +2

    So my problem here with this model is its LLM (well, do we call it an LLM?) and its overall knowledge base. For my project, what I would like is the knowledge base of a decent LLM, Llama 3.1 70B or maybe the larger Mixtral and Mistral models, but with the low-latency voice input/output... and as far as I can see, there does not seem to be any easy way of attaching said model to Moshi. It has what it has in terms of knowledge, and seemingly that's not something you can augment by having it access another model... so even if I use, say, Groq for Llama 3.1, run Moshi, and connect the two, this doesn't really help me any more than standard ASR and TTS. Or am I missing something? I must be missing something, right? Is there, for example, a Moshi instruct model that acts as essentially Llama 3.1's vocal cords and ears? That way Llama on Groq does the upstairs thinking bit and Moshi does the voice and audio input/output bit.

  • @jakobpcoder
    @jakobpcoder 10 hours ago

    Cool to have an open-source, always-on audio model that can be interrupted

  • @superresistant0
    @superresistant0 1 day ago

    I'm very interested in what we could do with this new Moshi code

  • @phen-themoogle7651
    @phen-themoogle7651 1 day ago +1

    Moshi is not the word for sphere, that's a hallucination lol
    も・し【茂し】 [classical ku-adjective]: (of plants or trees) growing thickly and abundantly; flourishing, luxuriant.
    Example: "Like the azaleas blooming thickly along the path by the rocky shore where the water flows." (Man'yoshu, poem 185)
    もし【▽若し】 [adverb]:
    1. (Followed by a hypothetical expression) Supposes something that has not yet become reality: "if," "in case," "by any chance." Example: "If he comes, please let me know."
    2. (Followed by an expression of doubt or speculation) Not certain, but quite possible: "maybe," "perhaps," "possibly." Example: "Could it be that this person is the one I could not forget?" (The Tale of Genji, "Evening Faces")
    --------
    Generally we use it as "if", but say it twice and it becomes "moshi moshi", which is how you say "hi/hello" on the telephone! Pretty strange that it doesn't know the meaning of its own name.
    The word for sphere is 玉 (たま) tama, or 球体 (きゅうたい) kyuutai. The name of that company, Kyuutai, actually means "sphere" (most likely based on the kanji) lol

  • @mitchellmigala4107
    @mitchellmigala4107 1 day ago

    Oh man, another Moshi video. I have had a few really messed-up conversations with Moshi. They left me deeply disturbed, and I haven't used her since.

  • @ceaderf
    @ceaderf 1 day ago

    "What about your A S AHHHHHHH?" lol

  • @darthvader4899
    @darthvader4899 1 day ago

    When I tried it, it was nowhere near as good as what you've shown. It was really bad. It was responding with random stuff.

  • @randomlettersqzkebkw
    @randomlettersqzkebkw 1 day ago

    Not sure if you saw the video where it asked the other YouTuber to make a sacrifice to the blood god lmao 😆

  • @WillJohnston-wg9ew
    @WillJohnston-wg9ew 19 hours ago

    Anyone get this running on a Windows computer? I seem to have everything installed, but then I get an error about my GPU. Any advice?

  • @TheRemarkableN
    @TheRemarkableN 1 day ago +1

    At least it didn’t ask you to sacrifice to the Blood God 😅

    • @samwitteveenai
      @samwitteveenai 1 day ago +2

      That's the OpenAI version coming soon 😀

    • @EDLR234
      @EDLR234 1 day ago

      @@samwitteveenai in the coming weeks and weeks and weeks...

  • @yurijmikhassiak7342
    @yurijmikhassiak7342 1 day ago +2

    Hello, can this be used for real-time dictation, instantly transcribing speech to text without waiting for the speaker to finish? Using Whisper for this purpose can be time-consuming, as it requires uploading the file for transcription, which takes a while.

    • @piotrnowakowski8904
      @piotrnowakowski8904 1 day ago +1

      I used AssemblyAI for it but was unimpressed with the results

    • @SinanAkkoyun
      @SinanAkkoyun 1 day ago +3

      No, the model takes in audio and directly outputs audio; it does not save or output the transcription

    • @yurijmikhassiak7342
      @yurijmikhassiak7342 1 day ago

      Is there any tool that does continuous transcription, the way our mind does? Like, Whisper will have to transcribe the speech again with every new second added?

    • @superresistant0
      @superresistant0 1 day ago

      You’re looking for streaming tokens
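The re-transcription cost raised in this thread can be illustrated with a toy model. `batch_transcribe` mimics re-running a Whisper-style batch model over the whole growing buffer every time a chunk arrives, while `stream_transcribe` mimics a streaming model that consumes each chunk once; both "transcribers" are made-up stand-ins, not any real ASR API.

```python
def chunks(n: int) -> list[str]:
    """Simulated one-second audio chunks, each 'transcribing' to one word."""
    return [f"w{i}" for i in range(1, n + 1)]

def batch_transcribe(stream: list[str]) -> tuple[str, int]:
    """Batch-style ASR: re-process the entire buffer on every new chunk."""
    buffer, processed = [], 0
    for chunk in stream:
        buffer.append(chunk)
        processed += len(buffer)   # whole buffer re-read each tick
    return " ".join(buffer), processed

def stream_transcribe(stream: list[str]) -> tuple[str, int]:
    """Streaming ASR: each chunk is consumed exactly once as it arrives."""
    out, processed = [], 0
    for chunk in stream:
        out.append(chunk)
        processed += 1
    return " ".join(out), processed

_, batch_work = batch_transcribe(chunks(10))
_, stream_work = stream_transcribe(chunks(10))
print(batch_work, stream_work)   # 55 chunk-passes vs 10 for the same transcript
```

The quadratic-versus-linear gap in work done is why streaming token output, rather than whole-file transcription, matters for live dictation.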

  • @kai_s1985
    @kai_s1985 1 day ago

    Can I upload a document and have a conversation about it?

    • @hitlab
      @hitlab 1 day ago

      Not yet

    • @EDLR234
      @EDLR234 1 day ago

      No, but it's open-source, so maybe that's possible.

    • @samwitteveenai
      @samwitteveenai 1 day ago +2

      This is still just a really early version of this kind of model. I'm sure in the not too distant future you'll be able to use it for RAG, you'll be able to use it with tool use, and a whole bunch of things will come.

  • @itblood
    @itblood 2 days ago

    Seems fine, but it didn't work for me. Couldn't manage to have a real conversation.

    • @samwitteveenai
      @samwitteveenai 2 days ago +1

      Try connecting again; sometimes it is really bad and other times it is really good

  • @svenandreas5947
    @svenandreas5947 1 day ago

    Did try the playground, very slow. Also tried German and English and got a very slow response without any sense. Seems answering stuff outside its knowledge ends in some sort of mess

    • @thenextbigthing1393
      @thenextbigthing1393 1 day ago

      Same, it says random shit most of the time

    • @samwitteveenai
      @samwitteveenai 1 day ago +1

      For what it's worth, I have noticed that sometimes it seems to go into some kind of weird mode where it doesn't give coherent responses back. Just try again and see if you get any better responses out.

    • @svenandreas5947
      @svenandreas5947 1 day ago +1

      @@samwitteveenai Far too interesting to stop; I will try it locally

    • @thenextbigthing1393
      @thenextbigthing1393 1 day ago +1

      I noticed that it responds better when I talk in an American accent

  • @adamholter1884
    @adamholter1884 1 day ago

    It lied a ton at the beginning. It doesn't use TTS. It's like 4o.

    • @samwitteveenai
      @samwitteveenai 1 day ago

      Yes, it was very vague about its model, just saying that it was a neural network.

  • @AngusLou
    @AngusLou 2 days ago

    Cannot install successfully

    • @samwitteveenai
      @samwitteveenai 2 days ago

      What issue did you have? Make sure you have Rust properly installed

  • @AshWickramasinghe
    @AshWickramasinghe 2 days ago +1

    First!
    That's pretty cool.

  • @dr.mikeybee
    @dr.mikeybee 1 day ago

    This is too slow to run on my M1 Mac mini. MikeyBeez JoeJoe is much better.

    • @nickludlam
      @nickludlam 1 day ago

      The Q4 MLX quant works fine

  • @irbsurfer1585
    @irbsurfer1585 1 day ago +1

    Speech only!?!? With no tool use?! And I can't even give it a system prompt? Worthless joke! I'm struggling to come up with ANY use case for it at all. AI can't even come up with a really good use case for it. lol

    • @samwitteveenai
      @samwitteveenai 1 day ago +3

      Give it a chance. It's a whole new kind of model in the way that it works, and I think you'll find this is just a proof of concept to show how they could make this, or how tools like RAG could be incorporated later on down the track.

    • @ronilevarez901
      @ronilevarez901 1 day ago +2

      Imagine receiving the blueprints for a miracle and calling it a "worthless joke" simply because it's not already built 😂
      🙄

    • @anubisai
      @anubisai 1 day ago

      @ronilevarez901 No doubt. What a repugnant creature.

  • @pondeify
    @pondeify 1 day ago

    the voice is too robotic

    • @AmazingArends
      @AmazingArends 1 day ago

      You have to tell it to talk like a pirate 😂

  • @UrbanLetsPlay
    @UrbanLetsPlay 2 days ago +4

    "Diverse perspectives and ideas" jesus christ this is the worst timeline for LLMs

  • @dievas_
    @dievas_ 1 day ago

    The underlying LLM is of very low quality, unfortunately