Build A Human-Like AI Agent That Feels Shockingly Real with Gemini 2.0 Flash API

  • Published Jan 25, 2025

Comments •

  • @SanatanaWisdomDaily 22 hours ago

    Great. How can I interrupt it and not wait for it to speak the whole response?
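One common pattern for barge-in (a rough sketch, not the video's code): buffer the model's audio chunks in a queue and play them back incrementally, and when the server signals an interruption (the Live API sets an `interrupted` flag on server content when it detects the user speaking), drop whatever is queued but not yet played so output stops immediately.

```python
import queue

class PlaybackBuffer:
    """Holds model audio chunks awaiting playback; supports barge-in."""

    def __init__(self):
        self._q = queue.Queue()

    def push(self, chunk: bytes):
        # Called as audio chunks stream in from the model.
        self._q.put(chunk)

    def next_chunk(self):
        # Called by the playback loop; returns None when nothing is pending.
        try:
            return self._q.get_nowait()
        except queue.Empty:
            return None

    def interrupt(self):
        # On an interruption signal, discard everything queued but unplayed.
        while True:
            try:
                self._q.get_nowait()
            except queue.Empty:
                break
```

The playback loop then falls silent as soon as `interrupt()` runs, instead of finishing the whole response.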

  • @ImranKhan-wr2il months ago +1

    How can we use it with LiveKit, especially for the TTS part?

  • @Ari-pq4db months ago +1

    Awesome video ❤🎉

  • @nathanchilds3952 months ago +3

    Love your videos, but man, you're killing me with writing code before doing the imports lmao... Half the time my auto-import fails, so it's a pain, plus in this example PyAudio isn't working right. I wish you'd shown installing and importing it.
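For what it's worth, a plausible setup for the demo's audio dependencies (not shown in the video; package names assumed): PyAudio wraps the PortAudio C library, so the system library has to be installed before the pip package will build.

```shell
# macOS:
brew install portaudio
# Debian/Ubuntu alternative:
# sudo apt-get install -y portaudio19-dev

# Then the Python packages the demo appears to use:
pip install pyaudio websockets google-genai
```

If `pip install pyaudio` fails with a compiler error about `portaudio.h`, the system library step above is usually what's missing.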

  • @apple5206 6 days ago

    I'm not a coder, but I know a little about it. Is there a way for me to just download your .py file, since you already did all the coding work?

  • @frizzfrizz3550 25 days ago

    How much will Gemini's voice cost once the experimental phase is over?

  • @animatedzombie64 months ago

    Is this video made with AI? Sounds like an agent did all this and uploaded it to YouTube.

  • @stormcrusher1165 17 days ago

    received 1007

  • @ChronicKPOP months ago

    AI needs decades of development to become useful for the mainstream.

    • @goldnarms435 months ago +3

      Perhaps the most insane, uninformed, and ignorant comment in the history of the Internet. Congrats!

    • @ChronicKPOP months ago

      @@goldnarms435 clearly you drank the Kool-Aid and got overly drunk 😄 Despite recent advancements, AI remains unreliable for mainstream adoption due to its frequent errors, lack of deep reasoning, and dependence on human oversight. Current AI systems, like ChatGPT and DALL·E, often produce inaccurate information, struggle with context, and generate flawed outputs, limiting their practicality in critical tasks. Furthermore, ethical concerns, data biases, and security risks raise questions about trust and accountability. Until AI can consistently demonstrate accuracy, adaptability, and autonomy without errors, it will remain more of a supplementary tool than a dependable, mainstream solution, requiring years, if not decades, of further development and refinement.

    • @IceMetalPunk months ago

      I'd say years, not decades. Yes, current models are too unreliable, but considering the progress from just about 4 years ago, I think "decades" is too long an estimate.
      But also... "until it can do these things without errors" is way too strict a threshold. Even humans make mistakes, so requiring no errors is the same as saying AI needs to be more perfect than humans in order to be useful to humans.

    • @ChronicKPOP months ago

      @@IceMetalPunk That's a fair point, but it depends on how AI is used.
      For casual tasks like drafting emails, generating ideas, or automating repetitive processes, AI's current capabilities are often "good enough" despite occasional errors. Progress in just a few years has been impressive, and it's reasonable to expect AI will continue improving at a rapid pace.
      However, for critical applications, like medicine, finance, or autonomous driving, even a small error rate can lead to serious consequences. In these areas, AI needs higher reliability than humans, especially since it's often trusted as an authority rather than a tool for suggestions.
      So while AI might become mainstream for casual use within a few years, full reliability in high-stakes areas could still take decades of refinement. The timeline really depends on the level of risk society is willing to accept.

    • @IceMetalPunk months ago +1

      @@ChronicKPOP I still don't agree. Why do we need "higher reliability than humans" for those tasks? If we accept the rate of human error in driving, medicine, etc., then why should we not accept that same rate when a computer does it?
      For instance, if we're okay when, say, there are 6 million car accidents a year with humans driving, then why should we not also be okay with 6 million car accidents a year with AI driving?
      Obviously, fewer errors is always better, but why should it be unacceptable for AI to mess up as much as humans when it's not unacceptable for humans to do so?

  • @justindevasia8754 months ago

    I am getting websocket error 1007. Anyone know why?

    • @TechSpective-p2f months ago

      Same error, dude... did you find a solution?

    • @justindevasia8754 months ago

      @TechSpective-p2f I found the solution: try a single response type, either Text or Audio, not both. Multiple response types are only available to some selected customers.
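The fix above amounts to requesting exactly one response modality in the websocket setup message. A minimal sketch (the model name and message shape are assumptions based on the Live API's bidirectional setup format; requesting both TEXT and AUDIO is what triggers the 1007 close for most accounts):

```python
import json

def make_setup_message(modality: str) -> str:
    """Build the initial setup frame for the Live API websocket.

    Pass exactly one of "TEXT" or "AUDIO"; sending both modalities
    in response_modalities causes a 1007 close on most accounts.
    """
    assert modality in ("TEXT", "AUDIO")
    setup = {
        "setup": {
            "model": "models/gemini-2.0-flash-exp",
            "generation_config": {
                # A single-element list: the one modality requested.
                "response_modalities": [modality],
            },
        }
    }
    return json.dumps(setup)
```

Close code 1007 generically means "invalid payload data", which is consistent with the server rejecting an unsupported modality combination in the first frame.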
