Behind the Scenes of Gemini 2.0

  • Published Dec 12, 2024

Comments • 32

  • @Fordtruck4sale
    2 days ago +7

    The Stream Realtime demo feels next-gen. It's like Advanced Voice, but you can drive the car yourself and use it to build agents that people would actually want to use. Inline audio feels BIG.
    Tulsee also sounds like the type of PO you'd want to work for! Keep it up!

  • @micbab-vg2mu
    2 days ago +8

    The model needed an update, thank you! I really like the integration of LLMs with other Google products. :)

  • @fmind-dev
    2 days ago +10

    I'm super excited about this new release! Thanks to Gemini 2.0, 2025 will definitely be very agentic 🎉

  • @anandavardhana9560
    1 day ago +3

    Go Tulsee go!! Fantastic!!

  • @CurtCox
    2 days ago +3

    Any plans for Gemini support of Anthropic's MCP (Model Context Protocol)? If not, is there a suggested Google equivalent?

  • @DarkNSN
    1 day ago +5

    It would be absolutely amazing to have Gemini 2.0 integrated into the Google Home/Nest ecosystem. Imagine having conversations with your smart speaker that feel truly natural, where it understands complex requests, gives nuanced responses, and even uses other media like images or code to enhance the interaction. This would go far beyond simple voice commands, allowing for deeper engagement, more personalized experiences, and significantly improved assistance with everyday tasks like finding recipes, managing shopping lists, or even creating personalized content. Bringing Gemini 2.0 to Google Home would be a game-changer, transforming it from a helpful tool to a truly intelligent and versatile household companion.

  • @DesoloZantas
    1 day ago +2

    Why hasn't live streaming implemented AI-powered room reverb removal or live vocal reconstruction to enhance audio quality, making it seem like everyone is using professional broadcasting microphones and mixing?

  • @karthage3637
    9 hours ago

    I love the "ship experimental models" mentality; it's what brought me back toward Gemini. Giving out a free API also helps me a lot with experimenting and building new projects.

  • @HemantGiri
    2 days ago +2

    I mean, live screen sharing and talking to it is like a dream come true. Before, I had to upload an image and ask a question; going live became so easy with this tool. Thank you so much!

  • @OlesiaKorobka
    1 hour ago

    Tulsee is such a vibrant personality! Nice video

  • @gopinathmerugumala
    1 day ago +1

    Great interview. Super impressive what Google has done in one year.

  • @Saif-G1
    2 days ago +5

    Will Gemini 2.0 Flash be available in the free tier?

  • @NigelPowell
    2 days ago +2

    Just tried G2 Flash in the playground and via the API. No tool use on the API, and the playground gave me only around 20 seconds to test. Not a great first experience, alas.
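
One way to sanity-check the "no tool use on the API" report is to declare a tool directly against the public REST generateContent endpoint. A minimal sketch in Scala using only the JDK's HttpClient; the get_weather function and its schema are invented for illustration, and whether gemini-2.0-flash-exp honors the declaration is exactly what this probes:

    import java.net.URI
    import java.net.http.{HttpClient, HttpRequest, HttpResponse}

    object ToolUseProbe extends App {
      // Assumes GEMINI_API_KEY is set in the environment.
      val apiKey = sys.env("GEMINI_API_KEY")
      val url =
        "https://generativelanguage.googleapis.com/v1beta/models/" +
          s"gemini-2.0-flash-exp:generateContent?key=$apiKey"

      // One function declaration; if tool use works, the response should
      // contain a "functionCall" part instead of plain text.
      val body =
        """{
          |  "contents": [{"role": "user",
          |                "parts": [{"text": "What's the weather in Paris?"}]}],
          |  "tools": [{"functionDeclarations": [{
          |    "name": "get_weather",
          |    "description": "Returns the current weather for a city",
          |    "parameters": {
          |      "type": "OBJECT",
          |      "properties": {"city": {"type": "STRING"}},
          |      "required": ["city"]
          |    }
          |  }]}]
          |}""".stripMargin

      val request = HttpRequest.newBuilder(URI.create(url))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()

      val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
      println(response.body()) // inspect the reply for a functionCall part
    }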

  • @luiztomikawa
    2 days ago +15

    Guys, guys, hear me out! ♊ should've been the symbol for Gemini 2.0; it's a perfect double meaning... Don't fumble the bag!!!

    • @aaroncphelps
      2 days ago +1

      correct!

    • @andreinoooo
      2 days ago

      It’s called a “logo”.
      Anyway, corporations associate AI with “magic”, hence they tend to use logos that recall sparkles ✨

    • @bertobertoberto3
      1 day ago +2

      Agreed, coming from a Gemini

  • @breadles5
    2 days ago +1

    I'd really hope to see Gemini 2 gain agentic coding capabilities like Claude. I really do see the potential for this to explode for developers, given the amount of resources and money Google has dedicated to AI.

  • @cacogenicist
    1 day ago +1

    When 2.0 Pro?

  • @TomShelby-x8j
    2 days ago +3

    I've asked Gemini so many questions that weren't answered, but it understood and agreed to make changes to its answers regarding historical truths and educational background, in terms of what's presented to us versus the purpose behind presenting it that way. It's nice to know it understands ❤
    Kumar, Singapore

  • @stilly5016
    1 day ago

    Please fix the "something went wrong" error in Stream Realtime in Google AI Studio 😢

  • @HemantGiri
    2 days ago +1

    I just tested screen sharing. Wow, you nailed it! I could share the screen and ask questions; I wanted this so badly. I felt OpenAI would do it, but sadly they didn't, and Gemini did. The other thing I like is Gemini's native output, though that feature isn't working for me yet. Wow, it shocked me. Thank you, Google!

  • @GowthamKumarOfficial-xi7jo
    2 days ago +1

    Could you elaborate on what you mean by screen understanding? Is it collecting data from the user? If so, where does privacy factor in?

  • @PseudoProphet
    2 days ago +4

    $200 vs. free. 😊😊
    Even Veo will crush Sora soon.

  • @NutriQlikAI-e4e
    1 day ago

    What's the point of releasing 2.0 when not all the features are available to test? Note: "Image and audio generation are in private experimental release, under allowlist. All other features are public experimental."

  • @nanand1998
    2 days ago +2

    crazyy

  • @pandoraeeris7860
    2 days ago +1

    Give me AIOS.

  • @rigidrobot
    1 day ago

    Disappointed with the current voice mode in Gemini Live. The speech recognition is pretty mediocre, but more importantly, the model keeps interrupting. You need to develop a feature that lets the user control when the model is listening, for instance by holding down the spacebar.

  • @TJ-hs1qm
    2 days ago

    I asked Gemini 2.0 Flash Experimental to illustrate the concepts of contravariance and the Liskov Substitution Principle using a basic example of WordPrinter and NounPrinter in Scala. It totally failed. Even when asked to implement just the ContravarianceDemo part, it kept stating that Printer[Word] is not a subtype of Printer[Noun].
    // General Word trait
    trait Word {
      def value: String
    }

    // Noun is a subtype of Word
    trait Noun extends Word {
      def value: String
    }

    // Contravariant Printer trait
    trait Printer[-A] {
      def print(value: A): Unit
    }

    // WordPrinter can print any Word
    class WordPrinter extends Printer[Word] {
      def print(value: Word): Unit = println(s"Word: ${value.value}")
    }

    // NounPrinter is specialized to print only Nouns
    class NounPrinter extends Printer[Noun] {
      def print(value: Noun): Unit = println(s"Noun: ${value.value}")
    }

    // Function that works with Printer[Noun]
    def testPrinter(printer: Printer[Noun]): Unit = {
      val noun: Noun = new Noun {
        def value: String = "dog"
      }
      // We expect it to print a Noun, but a Printer[Word] can still work because of contravariance
      printer.print(noun)
    }

    object ContravarianceDemo extends App {
      val nounPrinter = new NounPrinter
      val wordPrinter = new WordPrinter
      testPrinter(nounPrinter) // Prints "Noun: dog" - correct behavior
      testPrinter(wordPrinter) // Prints "Word: dog" - compiles and runs correctly
      println("Contravariance demonstrated!")
    }
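
For what it's worth, the commenter's demo is correct and the model's claim is backwards: since Printer is contravariant in A and Noun <: Word, the compiler treats Printer[Word] as a subtype of Printer[Noun]. A minimal sketch of that relation, reusing the traits above:

    // Contravariance reverses the subtype relation in the type parameter:
    // Noun <: Word implies Printer[Word] <: Printer[Noun],
    // so the upcast below type-checks.
    val asNounPrinter: Printer[Noun] = new WordPrinter
    asNounPrinter.print(new Noun { def value = "dog" }) // prints "Word: dog"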