CS 194/294-196 (LLM Agents) - Lecture 1, Denny Zhou

  • Published Jan 15, 2025

Comments • 58

  • @vuxminhan
    @vuxminhan 4 months ago +91

    Please improve the audio quality next time! Otherwise great lecture. Thanks Professors!

  • @prakashpvss
    @prakashpvss 4 months ago +77

    Lecture starts at 14:31

    • @elizabnet3245
      @elizabnet3245 4 months ago

      Thanks for letting us know! I was kind of confused

  • @cyoung-s2m
    @cyoung-s2m 2 months ago +1

    Excellent lecture! It builds a groundbreaking approach rooted in solid fundamentals and first principles. It's not only about LLM agents but also profound wisdom for life.

  • @VishalSachdev
    @VishalSachdev 4 months ago +78

    Need better audio capture setup for next lecture

    • @sheldonlai1650
      @sheldonlai1650 4 months ago

      Agree with your opinion

    • @jeffreyhao1343
      @jeffreyhao1343 3 months ago +2

      The audio quality is good enough, mate, but this is Chinglish; it requires better listening skills.

    • @OlivierNayraguet
      @OlivierNayraguet 3 months ago

      @@jeffreyhao1343 You mean you need good reading skills. Otherwise, by the time I manage the form, I lose the content.

  • @arnabbiswas1
    @arnabbiswas1 4 months ago +7

    Listening to this lecture after OpenAI's o1 release. The lecture is helping me to understand what is possibly happening under the hood of o1. Thanks for the course.

  • @deeplearning7097
    @deeplearning7097 4 months ago +29

    It's worth repeating: the audio is terrible. You really need some determination to stick through this. A shame, really; these presenters deserve better, and so do the people who signed up for this. Thanks though.

    • @sahil0094
      @sahil0094 4 months ago +2

      It’s definitely some Indian commenting this

  • @yaswanthkumargothireddy6591
    @yaswanthkumargothireddy6591 2 months ago +4

    What I did to reduce the echo was to download the mp3 of this lecture, open it with Microsoft Clipchamp (luckily I have it), and apply a noise reduction filter (media players like VLC have one too if you don't have Clipchamp). Finally, I synced and played the video and audio separately. :)
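
For anyone who would rather script that clean-up than click through Clipchamp or VLC, here is a minimal sketch in Python, assuming the third-party soundfile and noisereduce packages (pip install soundfile noisereduce) and a WAV export of the lecture audio; the filenames are hypothetical.

```python
import soundfile as sf     # audio file I/O
import noisereduce as nr   # spectral-gating noise reduction

audio, rate = sf.read("lecture1.wav")        # hypothetical export of the lecture
if audio.ndim > 1:                           # fold stereo to mono for simplicity
    audio = audio.mean(axis=1)
cleaned = nr.reduce_noise(y=audio, sr=rate)  # estimate the noise profile and gate it out
sf.write("lecture1_denoised.wav", cleaned, rate)
```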

  • @7of934
    @7of934 4 months ago +20

    Please make the captions match the speaker's timing (currently they are about 2-3 seconds late).

    • @claymorton6401
      @claymorton6401 3 months ago +1

      Use the YouTube embedded captions.

  • @TheDjpenza
    @TheDjpenza 4 months ago +27

    I'm not sure this needs to be said, but I am going to say it because the language used in this presentation concerns me. LLMs are not using reason. They operate in the domain of mimicking language. Reason happens outside of the domain of language. For example, if you have blocks sorted by color and hand a child a block, they will be able to put it into the correct pile even before they have language skills.
    What you are demonstrating is a longstanding principle of all machine learning problems: the more you constrain your search space, the more predictable your outcome. In the first moves of a chess game the model is less certain of which move leads to a win than it will be later in the game. This is not because it is reasoning throughout the game; it is because the search space has collapsed.
    You have found clever ways to collapse an LLM's search space such that it will find output that mimics reasoning. You have not created a way to do reasoning with LLMs.

    • @user-pt1kj5uw3b
      @user-pt1kj5uw3b 4 months ago

      Wow you really figured it all out. I doubt anyone has thought of this before.

    • @JTan-fq6vy
      @JTan-fq6vy 4 months ago +2

      What is your definition of reasoning? And how does it fit into the paradigm of machine learning (learning from data)?

    • @romarsit1795
      @romarsit1795 4 months ago

      Underrated comment

    • @datatalkswithchandranshu2028
      @datatalkswithchandranshu2028 3 months ago +1

      What you refer to can be done via vision models: color identification via vision, sorting via a basic model. Reasoning means adding logic to the model's steps rather than a direct answer. The answer is in the statement about maximization: P(response | question) = Σ_paths P(response, path | question)

    • @Andre-mi6fk
      @Andre-mi6fk 2 months ago +1

      This is not quite true. If you anchor reasoning to what your acceptable level of reasoning is, then you might have a point. However, reasoning and reason are distinct and should be called out. An LLM can tell you exactly why it chose the answer or path it did, sometimes wrongly, yes, but it gave you its thought process. That is LEARNED from the patterns in the training data.
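
To unpack the identity quoted a few replies up: the probability of a final answer marginalizes over latent reasoning paths. A toy illustration with invented numbers, no model involved:

```python
# P(answer | question) = sum over paths of P(answer, path | question)
# The joint probabilities below are made up purely for illustration.
joint = {
    ("path A", "8"): 0.35,
    ("path B", "8"): 0.25,
    ("path C", "6"): 0.40,   # the single most likely path on its own
}

marginal = {}
for (path, answer), p in joint.items():
    marginal[answer] = marginal.get(answer, 0.0) + p

print(marginal)                         # {'8': 0.6, '6': 0.4}
print(max(marginal, key=marginal.get))  # '8': summing over paths beats picking one path
```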

  • @haodeng9639
    @haodeng9639 1 month ago

    Thank you, professor! The best course for beginners.

  • @Shiv19790416
    @Shiv19790416 2 months ago

    Excellent lecture. Thanks for first-principles approach to learning agents.

  • @jteichma
    @jteichma 4 months ago +3

    Agree, especially for the second speaker. The sound quality is muffled. Thanks 🙏

  • @garibaldiarnold
    @garibaldiarnold 3 months ago +4

    I don't get it... at 49:50: What's the difference between "LLM generate multiple responses" vs "sampling multiple times"?

    • @aryanpandey7835
      @aryanpandey7835 3 months ago +2

      Generating multiple responses can lead to better consistency and quality by allowing a self-selection process among diverse outputs, while sampling multiple times may be a more straightforward but less nuanced approach.

    • @faiqkhan7545
      @faiqkhan7545 2 months ago +2

      @@aryanpandey7835 I think you have shuffled the concepts here.
      Sampling multiple times can enhance self-consistency within LLMs; generating multiple responses is just generating different pathways, and some might be wrong. It doesn't lead to better consistency by itself.

    • @FanxuMin
      @FanxuMin 2 months ago +1

      @@faiqkhan7545 I agree with this; the reasoning path is an irrelevant variable for the training of LLMs.
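
To make the distinction in this thread concrete: in the self-consistency recipe, "sampling multiple times" means drawing several reasoning paths at temperature > 0 and majority-voting only the final answers. A minimal sketch, assuming the openai package, an OPENAI_API_KEY in the environment, and an illustrative model name and answer format that are ours, not the lecture's:

```python
import re
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = ("I have 3 apples, my dad has 2 more apples than me. "
          "How many apples do we have in total? "
          "Think step by step, then end with 'Answer: <number>'.")

resp = client.chat.completions.create(
    model="gpt-4o-mini",                # assumed model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0.7,                    # > 0 so the sampled paths actually differ
    n=10,                               # ten independent reasoning paths
)

# Keep only the final answers; the reasoning paths are marginalized out.
answers = []
for choice in resp.choices:
    m = re.search(r"Answer:\s*(\d+)", choice.message.content)
    if m:
        answers.append(m.group(1))

print(Counter(answers).most_common(1))  # the most frequent final answer wins
```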

  • @Pingu_astrocat21
    @Pingu_astrocat21 4 months ago +2

    thank you for uploading :)

  • @ZHANGChenhao-x7v
    @ZHANGChenhao-x7v 3 months ago +1

    awesome lecture!

  • @akirasakai-ws4eu
    @akirasakai-ws4eu 3 months ago

    Thanks for sharing ❤❤ Love this course

  • @ppujari
    @ppujari 17 days ago

    Sir, how do you add reasoning to an LLM? Does it learn it automatically from data, or is some algorithm inserted?

  • @faizrazadec
    @faizrazadec 4 months ago +4

    Kindly improve the audio; it's barely audible!

  • @sanjaylalwani1
    @sanjaylalwani1 4 months ago +1

    Great lecture. The audio could be improved in the next lecture.

  • @arjunraghunandanan
    @arjunraghunandanan 3 months ago

    This was very useful.

  • @lucasxu2087
    @lucasxu2087 3 months ago

    Great lecture. One question on the example mentioned:
    Q: “Elon Musk”
    A: the last letter of "Elon" is "n". the last letter of "Musk" is "k". Concatenating "n", "k"
    leads to "nk". so the output is "nk".
    Q: “Bill Gates”
    A: the last letter of "Bill" is "l". the last letter of "Gates" is "s". Concatenating "l", "s" leads
    to "ls". so the output is "ls".
    Q: “Barack Obama"
    A:
    Since an LLM works by predicting the next token with the highest probability, how can an LLM with reasoning ability predict "ka", which might not even be a valid token in the training corpus, and how can it have the highest probability given the prompt?

    • @IaZu-o5t
      @IaZu-o5t 3 months ago

      You can learn about attention. Search for "Attention Is All You Need"; you can find popular-science videos about this paper.

    • @datatalkswithchandranshu2028
      @datatalkswithchandranshu2028 3 months ago

      Due to the 2 examples, the LLM picks up the steps to follow to get the answer, rather than just stating the answers "nk" and "ls". So it increases P(correct answer | question).
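
On the "ka" worry specifically: an output string does not need to exist as a single token. A small check, assuming the tiktoken package (a tokenizer used by OpenAI models, not necessarily the one behind the lecture's examples):

```python
# Any string decomposes into known sub-word tokens, so the model can emit
# "ka" piece by piece; the spelled-out reasoning steps in the prompt raise
# the conditional probability of each piece in turn.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
for s in ["nk", "ls", "ka"]:
    ids = enc.encode(s)
    pieces = [enc.decode([i]) for i in ids]
    print(f"{s!r} -> token ids {ids} -> pieces {pieces}")
```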

  • @arjunraghunandanan
    @arjunraghunandanan 3 months ago

    09:40 What do you expect for AI?
    I hope that going forward, AI can help reduce or remove the workload of menial tasks such as data entry, idea prototyping, onboarding, scheduling, calculations, and knowledge localization and transformation, so that we humans can focus on better tasks such as tackling climate change, exploring space, faster and safer transportation, and preventing poverty and disease (AI can help us with those too). Offloading operational overhead to an AI feels like the best thing that could happen. But the digital divide and the lack of uniform access to the latest tech across different parts of the world is the biggest problem I see here.

  • @wzyjoseph7317
    @wzyjoseph7317 3 months ago +2

    lidangzzz sent me here; I will finish this amazing lecture

    • @user-cy1ot8ge4n
      @user-cy1ot8ge4n 3 months ago +1

      haha~, me too. Such an awesome lecture!

    • @jianyangdeng1341
      @jianyangdeng1341 3 months ago +1

      same bro

    • @JKlupi
      @JKlupi 3 months ago

      😂same

    • @wzyjoseph7317
      @wzyjoseph7317 3 months ago

      @@JKlupi good luck! bro

    • @wzyjoseph7317
      @wzyjoseph7317 3 months ago

      @@user-cy1ot8ge4n good luck bro

  • @victorespiritusantiago8664
    @victorespiritusantiago8664 4 months ago

    Thank you for sharing the slides!

  • @achris7
    @achris7 2 months ago

    The audio quality should be improved; it's very difficult to understand.

  • @yevonli-s5c
    @yevonli-s5c 3 months ago

    Please improve the audio quality, great lecture tho!

  • @rohithakash8093
    @rohithakash8093 1 month ago +1

    Terrible audio quality. I'm not sure I would expect this from Berkeley, but I'll let it pass since it's the first lecture and give it the benefit of the doubt.

  • @MUHAMMADAMINNADIM-q4u
    @MUHAMMADAMINNADIM-q4u 2 months ago

    Great sessions

  • @2dapoint424
    @2dapoint424 13 days ago +2

    Horrible audio 😔

  • @MrVoronoi
    @MrVoronoi 2 months ago +4

    Great content, but the accent and audio are tedious. Please make an effort to improve that. Look at the great Andrew Ng: being a Chinese speaker is not an excuse for being incomprehensible. He's clear and articulate and delivers some of the most useful content on AI.

    • @AdrianTorrie
      @AdrianTorrie 1 month ago

      💯 agree. Fantastic content, poor delivery on all fronts, making it harder to take in the actual content.

  • @sahil0094
    @sahil0094 3 months ago +2

    Even the subtitles are all wrong; the AI can't recognize this person's English hahahaha

  • @tianyushi2787
    @tianyushi2787 4 months ago

    32:35

  • @haweiiliang3311
    @haweiiliang3311 2 months ago

    Sorry, but the accent of the lady at the beginning drives me crazy. 😅 Typical Chinglish style.

  • @sahil0094
    @sahil0094 3 months ago

    waste of time

  • @五香还是原味瓜子
    @五香还是原味瓜子 1 month ago

    I am confused by the apple example at 39:41. What does "token" mean in this example? Where do the "top-1: 5", "top-2: I", ... words come from?
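
Those "top-1: 5", "top-2: I", ... labels are the highest-ranked candidates for the first generated token; each one starts a different decoding path, which is the idea behind the chain-of-thought decoding slide. A minimal sketch of how to inspect them, assuming the transformers and torch packages and the small open "gpt2" checkpoint rather than the model used in the lecture:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = ("I have 3 apples, my dad has 2 more apples than me, "
          "how many apples do we have in total?")
inputs = tok(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # shape (1, seq_len, vocab_size)
probs = logits[0, -1].softmax(dim=-1)        # distribution over the next token

top = probs.topk(5)                          # the five most likely first tokens
for rank, (p, i) in enumerate(zip(top.values, top.indices), start=1):
    print(f"top-{rank}: {tok.decode([int(i)])!r}  p={float(p):.3f}")
```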