New Summarization via In-Context Learning with a New Class of Models

  • Published on 12 Jun 2024
  • In this video I discuss some of the recent changes in building LLM apps and choosing which LLMs to use. I also show how I used some of these changes to build a note-taking app that creates summaries and long-form notes.
    🕵️ Interested in building LLM Agents? Fill out the form below
    Building LLM Agents Form: drp.li/dIMes
    👨‍💻Github:
    github.com/samwit/langchain-t... (updated)
    github.com/samwit/llm-tutorials
    ⏱️Time Stamps:
    00:00 Intro
    01:01 Personalization and Curation
    01:18 Personalization
    02:06 Curation
    05:20 The State of LLMs
    09:29 Long Output Use Cases
    11:20 Claude 3: Haiku
    12:10 Why Haiku
    13:31 Haiku Challenges
    14:23 Metaprompt
    14:57 Haiku Exemplars
    17:29 Summarizations
    17:32 Types of Summarization
    18:59 Simple Stuffing
    19:21 Map Reduce
    20:06 Refining our Calls
    20:21 Map ReRank
    20:31 New Summarization System
    23:28 Sectioning
    23:49 Advantages
    24:17 Disadvantages
    25:01 Conclusion
  • Science & Technology

Comments • 49

  • @hienngo6730
    @hienngo6730 months ago +22

    Criminally underrated channel. One of the absolute best AI/LLM YouTube channels that somehow only has 55K subs?!? Thank you for all of your hard-earned insights; very useful to jumpstart our own projects.

  • @kenchang3456
    @kenchang3456 months ago +2

    Another excellent video. Thanks for pushing forward the practicalities of using the variety of models and services that could be appropriate for your project and where you are in the progress of your project.

  • @ozind12
    @ozind12 months ago +5

    I could not find the link to the code for the summarization app being talked about in the video. It would be interesting to see the flow.

  • @HowtoSmartWork
    @HowtoSmartWork months ago +2

    This kind of content is really impressive. I was working on a note-taking app and trying to build a scalable app with a team.
    I had a lot of challenges; after watching your explanation I was able to relate to this.

  • @supercurioTube
    @supercurioTube months ago +1

    I've been listening to your videos for months now, and as I'm transitioning towards building LLM apps myself, I'm really grateful for the insights you've been sharing all along.
    It's invaluable to learn from someone who's been building realistic, real-world products based on LLMs while following the research closely.

    • @samwitteveenai
      @samwitteveenai  months ago +3

      Thanks, this is exactly what I was aiming to do with the channel. I have never desired to be a "YouTuber"; I started it to show some friends cool stuff with LLMs and it took off. I try not to hype things, just show what can be done with various models.

  • @sayanosis
    @sayanosis months ago

    You single-handedly explain literally everything someone needs. Thank you so so much for what you do ❤

  • @reza2kn
    @reza2kn months ago

    Brilliant video Sam! 🤗 Great job! Learned a ton!

  • @MeinDeutschkurs
    @MeinDeutschkurs months ago

    I love your thoughts!
    In the moment each word takes flight, spoken, penned, or whispered into the night, we dream a future bright where models converse, their voices intertwine.
    An IoT daydream, woven from the threads of thought and machine's silent hum.

  • @davidtindell950
    @davidtindell950 29 days ago +2

    I have been reviewing many of your YT videos and evaluating your many code examples.
    This video is certainly different in that it makes us think about how to transition from the current state and applications of LLMs to new personalized and curated practical solutions -- especially by applying smaller, faster, lower-cost "variant" LLMs like Anthropic's Haiku...
    I agree that we can find a "middle ground" between Sam A.'s two so-called "choices"!

    • @davidtindell950
      @davidtindell950 28 days ago +1

      Now going back to review your earlier "Mastering Haiku" video!

  • @ralph5768
    @ralph5768 27 days ago

    Thanks Sam! I have been really tinkering with summarization and this helps a LOT. Subscribe + like.

  • @bennie_pie
    @bennie_pie months ago +1

    Your talk summarised my project... I too have been using Claude, albeit Opus. However, my free API access ends in a few days, so in order to build something that could go live, the idea of trading down to Haiku but with multiple iterations was just starting to dawn on me, and then boom, you're solving issues or suggesting use cases I hadn't even considered! This video has been absolute gold - thank you.

    • @jarad4621
      @jarad4621 months ago

      Yeah, when Claude came out I didn't give a crap about the fancy models; it's the use cases for the good-but-cheap models like Haiku, and now Llama 3, that excite me. Low cost but still effective = $$$$$$. My Phi-3 agent swarm with agentic self-reflection, automatic error correction, iterative improvement, and quality assurance is going to be EPIC and free. Trust me, learn agents ASAP; the time is coming, be ready and be at the front.

    • @bennie_pie
      @bennie_pie months ago

      @@jarad4621 Yeah, local agents that work while you sleep are the way. But it can be like herding chickens... time-consuming and you still get shit. It's that fine balance of an intelligent model, good prompting, and good oversight. CrewAI seems decent, or is it better to DIY?

  • @DannyGerst
    @DannyGerst 11 days ago

    Great idea. I did something with grouping sentences together by topic (the Louvain community detection algorithm), so that sentences with the same semantic meaning are grouped together. It works incredibly well for chapter-by-chapter summaries of books. The benefit is that topics (what you called sections) are grouped even if the topic is handled again in later sections. But in the end it was Map Reduce. So I am curious to see the result combined with your new system.
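
A minimal sketch of the approach the comment above describes: embed the sentences, build a similarity graph, group sentences by topic with Louvain community detection, then summarize each group and combine the results map-reduce style. The library choices (sentence-transformers, networkx) and the call_llm helper are illustrative assumptions, not the commenter's actual code.

```python
# Sketch only: group sentences by semantic similarity using Louvain communities,
# then summarize each group (map step) and combine the results (reduce step).
import networkx as nx
from networkx.algorithms.community import louvain_communities
from sentence_transformers import SentenceTransformer, util

def call_llm(prompt: str) -> str:
    """Placeholder: plug in whatever chat-completion client you use."""
    raise NotImplementedError

def summarize_by_topic(sentences: list[str]) -> str:
    model = SentenceTransformer("all-MiniLM-L6-v2")
    emb = model.encode(sentences, convert_to_tensor=True)
    sim = util.cos_sim(emb, emb)

    # Build a similarity graph, keeping only reasonably similar sentence pairs.
    graph = nx.Graph()
    graph.add_nodes_from(range(len(sentences)))
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            score = float(sim[i][j])
            if score > 0.5:  # similarity threshold, tunable
                graph.add_edge(i, j, weight=score)

    # Louvain community detection groups sentences that share a topic,
    # even if they appear far apart in the document.
    topics = louvain_communities(graph, weight="weight", seed=42)

    # Map: summarize each topic group independently.
    partials = [
        call_llm("Summarize the following passage:\n\n"
                 + " ".join(sentences[i] for i in sorted(group)))
        for group in topics
    ]

    # Reduce: combine the per-topic summaries into one.
    return call_llm("Combine these topic summaries into a single coherent summary:\n\n"
                    + "\n\n".join(partials))
```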

  • @experter_analyser
    @experter_analyser months ago

    I have always found the videos very interesting and educational, with different new thoughts. ❤

  • @micbab-vg2mu
    @micbab-vg2mu months ago

    Great video - thank you :)

  • @TheMelo1993
    @TheMelo1993 months ago

    Great content 👍! Do you have any suggestions on how to implement this? Or a repo?

  • @SergioMunozGonzalez
    @SergioMunozGonzalez 23 days ago

    Thank you so much for the video, Sam. Do you have any implementation of this new summarization method?
    Thank you in advance.

  • @SonGoku-pc7jl
    @SonGoku-pc7jl months ago

    I like all the videos so much :) thanks!!!

  • @stephaneleroi8506
    @stephaneleroi8506 26 days ago

    Excellent. Do you know where the summarization with sections, and the full document in each section, is implemented?

  • @willjohnston8216
    @willjohnston8216 months ago

    Another great video. Sam, have you found any methods for having the LLM spend more time on the analysis? The results I'm getting seem generic, like something summarized from the web. I'd like to find a way to force more thinking through the problem set.

    • @samwitteveenai
      @samwitteveenai  months ago

      This is a really good question. I think there are at least two paths to this. 1. Better alignment training, where the model can push back and clarify things better. A version of this (perhaps not the best version eventually) will probably come in the next OpenAI model on Monday. This kind of clarification in analysis is a very important one for self-recursive learning. This is something I have been running a lot of tests on, and testing some unreleased models with, but no amazing results I can talk about yet. 2. You can do something similar by prompting from multiple angles, e.g. have one prompt that writes out multiple questions or angles of analysis. This is a bit of what the summarization prompts do in the app I show.
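
A rough sketch of the second path in the reply above: one call generates several questions or angles of analysis, each angle is analysed separately, and a final call synthesizes the findings. The prompts and the call_llm helper are illustrative assumptions, not the actual prompts from the app shown in the video.

```python
# Sketch: push the model to spend more effort by analysing from multiple angles.
def call_llm(prompt: str) -> str:
    """Placeholder: plug in whatever chat-completion client you use."""
    raise NotImplementedError

def multi_angle_analysis(document: str, n_angles: int = 4) -> str:
    # Step 1: generate distinct questions/angles, one per line.
    raw = call_llm(
        f"Read the text below and list {n_angles} distinct questions or angles of "
        f"analysis that a careful analyst would examine, one per line:\n\n{document}"
    )
    angles = [line.strip("-• ").strip() for line in raw.splitlines() if line.strip()]

    # Step 2: analyse each angle independently (these calls could run in parallel).
    findings = [
        call_llm(f"Text:\n{document}\n\nAnalyse the text through this lens:\n{angle}\n\n"
                 "Be concrete and cite evidence from the text.")
        for angle in angles[:n_angles]
    ]

    # Step 3: synthesize the per-angle findings into one non-generic analysis.
    return call_llm("Synthesize these findings into a single analysis:\n\n"
                    + "\n\n".join(findings))
```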

  • @murilocurti1474
    @murilocurti1474 months ago

    Great explanation! As usual 😃 Do you think it’s possible to do the same process of sectioning using gpt3.5 turbo?

    • @samwitteveenai
      @samwitteveenai  months ago +1

      Yes, but Haiku, Llama 3, and another model coming out next week are better than 3.5 for this.

    • @murilocurti1474
      @murilocurti1474 months ago

      @@samwitteveenai Thanks!!!

  • @ralph5768
    @ralph5768 27 days ago

    Do you have a code example for this new type of summarisation?

  • @janalgos
    @janalgos months ago

    My concern with smaller models is the relatively higher hallucination. What has your experience been with Haiku when it comes to hallucination?

    • @jarad4621
      @jarad4621 months ago

      Agentic patterns: reflection/review, iterative improvement, QA agent collaboration, one master Opus overseer to manage, etc. This will solve all your concerns about quality and still be super cheap.

    • @samwitteveenai
      @samwitteveenai  months ago +1

      I don't think the hallucinations are that much more of a problem. Never use an LLM for facts; use the context for that. The advantage with the cheaper calls is you can do self-reflection etc. to double-check these.
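
As a concrete illustration of the self-reflection idea in the reply above: a second cheap call checks the draft answer against the supplied context and flags unsupported claims before a final revision pass. The call_llm helper and the "haiku" model label are placeholders, not a real API.

```python
# Sketch: cheap second-pass reflection to catch unsupported (hallucinated) claims.
def call_llm(prompt: str, model: str = "haiku") -> str:
    """Placeholder: plug in whatever chat-completion client you use."""
    raise NotImplementedError

def answer_with_reflection(question: str, context: str) -> str:
    draft = call_llm(
        f"Answer the question using ONLY the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    review = call_llm(
        f"Context:\n{context}\n\nDraft answer:\n{draft}\n\n"
        "List any claims in the draft that are not supported by the context. "
        "If everything is supported, reply with exactly: OK"
    )
    if review.strip() == "OK":
        return draft
    # One cheap revision pass using the reviewer's notes.
    return call_llm(
        f"Context:\n{context}\n\nDraft answer:\n{draft}\n\n"
        f"Reviewer notes:\n{review}\n\n"
        "Rewrite the answer, correcting or removing any unsupported claims."
    )
```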

    • @janalgos
      @janalgos months ago

      @@samwitteveenai would be neat to see a tutorial on how to use those techniques to reduce instances of hallucinations and improve overall response quality for the smaller models

  • @comfixit
    @comfixit months ago +2

    The content and commentary were top notch, thank you for this video. One area for improvement: you heavily overused the video B-roll. The first half of the video was somewhat off-putting; in the last half the B-roll was all good, as it related to the subject. For example, when you are talking about the Anthropic family of models and you show Anthropic logos, pricing charts, performance charts, etc., that is great stuff. But at the beginning you are talking and we are seeing animations of robots with a sticker that says "Hello"; that doesn't work. I would rather see a talking head in those cases if you don't have B-roll that is strongly related to the content.
    Just a personal preference, but I very much enjoyed the video content.

  • @rajesh_kisan
    @rajesh_kisan months ago

    Can you share the code, or at least the prompts? I tried implementing it but faced challenges with creating the sections.
    I'm running Llama 8B locally, and also tried Llama 70B.
    If you can share it, it would be a great help.

  • @123arskas
    @123arskas months ago

    I still can't differentiate the "New Summarization System" you talked about vs. the "Refine Method".
    Refine also tends to keep the context of each chunk.
    The entire video felt like a promotional ad for "Haiku".

    • @samwitteveenai
      @samwitteveenai  months ago

      This is quite different in that you can't run Refine in parallel; you have to queue and wait. Regarding the ad for Haiku: I do think it is in a class of its own, at least until new models get announced next week.
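
To make the contrast in the reply above concrete: Refine chains each call on the previous summary, so the calls have to run one after another, while the sectioned approach (as the comments describe it: every call sees the full document plus one section to focus on) has no dependencies between calls, so they can run concurrently. A rough sketch under those assumptions; call_llm is a placeholder and these are not the prompts from the video.

```python
# Sketch: sequential Refine vs. parallel per-section summarization.
from concurrent.futures import ThreadPoolExecutor

def call_llm(prompt: str) -> str:
    """Placeholder: plug in whatever chat-completion client you use."""
    raise NotImplementedError

def refine_summary(chunks: list[str]) -> str:
    # Refine: each call depends on the previous summary, so it must run serially.
    summary = call_llm(f"Summarize:\n{chunks[0]}")
    for chunk in chunks[1:]:
        summary = call_llm(
            f"Existing summary:\n{summary}\n\nRefine it using this new text:\n{chunk}"
        )
    return summary

def sectioned_summary(document: str, sections: list[str]) -> str:
    # Sectioned approach: each call is independent (full document + one section
    # to focus on), so the per-section calls can be fired off in parallel.
    prompts = [
        f"Full document:\n{document}\n\nUsing the rest of the document as context, "
        f"write detailed notes covering only this section:\n{section}"
        for section in sections
    ]
    with ThreadPoolExecutor(max_workers=8) as pool:
        notes = list(pool.map(call_llm, prompts))
    return "\n\n".join(notes)
```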

  • @josephroman2690
    @josephroman2690 months ago

    I would very much like to contribute to this project if possible; if not, I would at least like to be one of the testing users.

  • @mickelodiansurname9578
    @mickelodiansurname9578 months ago

    @Sam Witteveen Has anyone ever told you that you are the spitting image of the Poker player Daniel Negreanu?

  • @dhrumil5977
    @dhrumil5977 months ago

    Download 😅

    • @JacobAsmuth-jw8uc
      @JacobAsmuth-jw8uc months ago

      What?

    • @dhrumil5977
      @dhrumil5977 months ago

      @@JacobAsmuth-jw8uc the video haha

    • @explorer945
      @explorer945 months ago

      @@dhrumil5977 Ah, the possibility of it getting deleted?