Developing an LLM: Building, Training, Finetuning

  • Published Oct 17, 2024

Comments • 76

  • @tusharganguli
    @tusharganguli 4 months ago +13

    Your articles and videos have been extremely helpful in understanding how LLMs are built. Building LLM from Scratch and Q and AI are resources that I am presently reading and they provide a hands-on discourse on the conceptual understanding of LLMs. You, Andrej Karpathy and Jay Alammar are shining examples of how learning should be enabled. Thank you!

    • @SebastianRaschka
      @SebastianRaschka  4 months ago +1

      Thanks for the kind comment!

  • @adityasamalla3251
    @adityasamalla3251 2 months ago +5

    You are the best! Thanks a lot for sharing your knowledge with the world.

  • @box-mt3xv
    @box-mt3xv 4 months ago +24

    The hero of open source

    • @SebastianRaschka
      @SebastianRaschka  4 months ago +2

      Haha, thanks! I've learned so much thanks to all the amazing people in open source, and I'm very flattered by your comment to potentially be counted as one of them :)

  • @chineduezeofor2481
    @chineduezeofor2481 3 months ago +3

    Thank you Sebastian for your awesome contributions. You're a big inspiration.

  • @admercs
    @admercs 2 months ago +1

    You are a true educator. Honored to be a contributor to one of your libraries.

  • @JR-gy1lh
    @JR-gy1lh 2 months ago +2

    I know you don't do many tutorials, but personally I love them, especially from you!

    • @SebastianRaschka
      @SebastianRaschka  2 months ago

      Thanks, that's very motivating to hear!

  • @kyokushinfighter78
    @kyokushinfighter78 2 months ago +2

    One of the best 60 minutes of my time. Really thankful for this..

  • @haribhauhud8881
    @haribhauhud8881 2 months ago +1

    Thank you, Sir. Your lessons are beneficial for the community. Appreciate your hard work..!! 😊

  • @guis487
    @guis487 3 months ago +1

    I am your fan, I have most of your books, thanks for this excellent video! Another evaluation metric that I found interesting on another channel was to make the LLMs play chess against each other 10 times.

    • @SebastianRaschka
      @SebastianRaschka  3 months ago

      Hah nice, that's a fun one. How do you evaluate who's the winner, do you use a third LLM for that?

  • @bjugdbjk
    @bjugdbjk 16 days ago

    You're a legend, love your work, thanks a ton for sharing!

  • @ZavierBanerjea
    @ZavierBanerjea 3 months ago +1

    What wonderful tech minds: {Sebastian Raschka, Yann LeCun, Andrej Karpathy, ...} who share their work and beautiful ideas for mere mortals like me... Sebastian's teachings are so, so fundamental that they take the fear out of my clogged mind... 🙏
    Although I am struggling to build LLMs for specific & niche areas, I am confident of cracking them with great resources like Build a Large Language Model (From Scratch)!!!

  • @RobinSunCruiser
    @RobinSunCruiser 4 months ago +1

    Hi, nice videos! One question for my understanding: when talking about embedding dimensions such as 1280 in "gpt2-large", do you mean the size of the number vector encoding the context of a single token, or the number of input tokens? When comparing gpt2-large and Llama 2, the number is the same for the ".. embeddings with 1280 tokens".

    • @SebastianRaschka
      @SebastianRaschka  4 months ago

      Good question. The term is often used very broadly and may refer to the input embeddings or the hidden layer sizes in the MLP layer. Here, I meant the size of the vector that each token is embedded into.
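
      To make the distinction concrete, here is a minimal PyTorch sketch (my own illustration, not from the video): the 1280 is the length of the vector each token is mapped to, independent of how many tokens are in the input.

      ```python
      import torch
      import torch.nn as nn

      vocab_size, emb_dim = 50257, 1280  # GPT-2-large-style sizes
      tok_emb = nn.Embedding(vocab_size, emb_dim)

      token_ids = torch.tensor([[464, 3290, 318, 257, 3797]])  # 1 sequence of 5 token IDs
      print(tok_emb(token_ids).shape)  # torch.Size([1, 5, 1280]): one 1280-dim vector per token
      ```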

  • @tomhense6866
    @tomhense6866 4 months ago +1

    Very nice video, I liked it so much that I preordered your new book directly after watching it (to be fair I have read your blog for some time now).

    • @SebastianRaschka
      @SebastianRaschka  4 months ago

      Thanks! I hope you are going to like the book, too!

  • @moshoodolawale3591
    @moshoodolawale3591 1 month ago +1

    Thanks for the detailed videos and articles. I want to ask if it's possible to create a customized tokenizer as an extension of an existing one for a custom dataset? Also, how do decoder-only models handle other tasks like summarization and classification after fine-tuning without forgetting their pre-trained causal next-token task?

    • @SebastianRaschka
      @SebastianRaschka  1 month ago

      Good question. Yes, you can do that; tiktoken, for example, allows you to extend the vocabulary with additional tokens. However, you have to keep in mind that you'll always have to update the embedding layer and output layer with these tokens in case you want to use the updated tokenizer with an existing LLM. Regarding your second question, you could do that, but it would not be ideal because only the last token contains information about all other tokens. If you use other tokens, you'll have more information loss.
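
      For the tokenizer part, a rough sketch of what that can look like using tiktoken's documented extension pattern, plus the matching layer resize (the added token name and the layer sizes below are placeholders):

      ```python
      import tiktoken
      import torch.nn as nn

      base = tiktoken.get_encoding("gpt2")
      custom = tiktoken.Encoding(
          name="gpt2_custom",
          pat_str=base._pat_str,
          mergeable_ranks=base._mergeable_ranks,
          # extend the vocabulary with one additional special token
          special_tokens={**base._special_tokens, "<|my_domain_token|>": base.n_vocab},
      )

      # The LLM's embedding and output layers must match the new vocabulary size
      emb_dim = 768  # hypothetical model dimension
      tok_emb = nn.Embedding(custom.n_vocab, emb_dim)
      out_head = nn.Linear(emb_dim, custom.n_vocab, bias=False)
      # (In practice, copy the pretrained weights over for the original token IDs.)
      ```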

  • @nithinma8697
    @nithinma8697 2 months ago +2

    00:02 Three common ways of using large language models
    02:39 Developing an LLM involves building, pre-training, and fine-tuning.
    07:11 The LLM predicts the next token in the text
    09:30 Training an LLM involves sliding fixed-size inputs over the text data to create batches (see the sketch after this list).
    14:22 Byte pair encoding and SentencePiece variants allow LLMs to handle unknown words
    16:42 Training sets are increasing in size
    21:09 Developing an LLM involves architecture, pre-training, model evaluation, and fine-tuning.
    23:14 The Transformer block is repeated multiple times in the architecture.
    27:22 Pre-training creates the foundation model for fine-tuning
    29:28 Training LLMs is typically done for one to two epochs
    33:44 Pre-training is not usually necessary for adapting an LLM to a certain task
    35:51 Replace the output layer for efficient classification.
    39:54 Classification fine-tuning is key for practical business tasks.
    42:01 LLM instruction datasets and preference tuning
    45:58 Evaluating LLMs is crucial, with MMLU being a popular metric.
    48:07 Multiple-choice questions are not sufficient to measure an LLM's performance
    52:34 Comparing LLM models for performance evaluation
    54:32 Continued pre-training is effective for instilling new knowledge in LLMs
    58:28 Access the slides on the website for more details
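
    A minimal sketch of the sliding-window batching mentioned at 09:30 (my own illustration; the function name is made up): the target window is the input window shifted by one position, which is what next-token prediction trains on.

    ```python
    import torch

    def create_input_target_pairs(token_ids, context_length, stride):
        # Slide a fixed-size window over the token stream; the target is the
        # same window shifted one position to the right.
        inputs, targets = [], []
        for i in range(0, len(token_ids) - context_length, stride):
            inputs.append(token_ids[i:i + context_length])
            targets.append(token_ids[i + 1:i + context_length + 1])
        return torch.tensor(inputs), torch.tensor(targets)

    # Toy example: 20 token IDs, context of 4, non-overlapping windows
    x, y = create_input_target_pairs(list(range(20)), context_length=4, stride=4)
    print(x.shape, y.shape)  # torch.Size([4, 4]) torch.Size([4, 4])
    ```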

  • @andreyc.3600
    @andreyc.3600 1 month ago +1

    I strongly suspect that you speak German :). Where can one find your book in Kindle (mobi or f2b) format? Thanks & best regards.

    • @SebastianRaschka
      @SebastianRaschka  1 month ago

      Thank you very much for your interest in my book. As far as I've heard from the publisher, it will be sent to the printer this week, and then it should hopefully also be available as a Kindle version on Amazon.com/.de in a few weeks.

    • @andreyc.3600
      @andreyc.3600 1 month ago

      @@SebastianRaschka Great, thanks a lot.

  • @haqiufreedeal
    @haqiufreedeal 4 months ago +3

    Oh, my lord, my favourite machine learning author is a Liverpool fan.😎

    • @SebastianRaschka
      @SebastianRaschka  4 months ago +1

      Haha, nice that people make it that far into the video 😊

    • @ananthvankipuram4012
      @ananthvankipuram4012 4 months ago

      @@SebastianRaschka You'll never walk alone 🙂

  • @KumR
    @KumR 4 months ago +1

    Great video. Now that LLMs are so powerful, will regular machine learning & deep learning slowly vanish?

    • @SebastianRaschka
      @SebastianRaschka  4 months ago +1

      Great question. I do think that special-purpose ML solutions still have, and will continue to have, their place, the same way ML didn't make certain more traditional statistics-based models obsolete. Regarding deep learning ... I'd say an LLM is a deep learning model itself. But yeah, almost everything in deep learning nowadays is either a diffusion model, a transformer-based model (vision transformers and most LLMs), or a state space model.

  • @DataChiller
    @DataChiller 4 months ago +6

    the greatest Liverpool fan ever! ⚽

    • @SebastianRaschka
      @SebastianRaschka  4 months ago +5

      Haha nice, at least one person watched it until that part :D

  • @Xnaarkhoo
    @Xnaarkhoo 2 months ago +1

    @16:37 when you say Llama was trained on 1T tokens, do you still mean there were 32K unique tokens? Because in your blog post you have: "They also have a surprisingly large 151,642 token vocabulary (for reference, Llama 2 uses a 32k vocabulary, and Llama 3.1 uses a 128k token vocabulary); as a rule of thumb, increasing the vocab size by 2x reduces the number of input tokens by 2x so the LLM can fit more tokens into the same input. Also it especially helps with multilingual data and coding to cover words outside the standard English vocabulary."

    • @SebastianRaschka
      @SebastianRaschka  2 months ago

      Thanks for the comment! In the talk, these are the dataset sizes using the respective tokenizer that was used during model training. The vocabulary sizes that the models use are 32k for Llama 2 and 128k for Llama 3.1. So, regarding "do you still mean there were 32K unique tokens": the vocabulary was 32k unique tokens (but there could be more unique tokens in the dataset). I hope this helps. Otherwise, please let me know, happy to explain more!
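
      As a rough illustration of the rule of thumb quoted above (my own example using two tiktoken vocabularies rather than the Llama tokenizers themselves): a larger vocabulary typically encodes the same text into fewer tokens.

      ```python
      import tiktoken

      text = "Tokenization differs depending on the tokenizer's vocabulary size."

      small = tiktoken.get_encoding("gpt2")         # ~50k vocabulary
      large = tiktoken.get_encoding("cl100k_base")  # ~100k vocabulary

      print(small.n_vocab, len(small.encode(text)))
      print(large.n_vocab, len(large.encode(text)))  # usually fewer tokens
      ```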

  • @rachadlakis1
    @rachadlakis1 4 months ago +3

    Thanks for the great knowledge you are sharing!

  • @timothywcrane
    @timothywcrane 4 months ago +1

    I'm interested in SLM RAG with Knowledge graph traversal/search for RAG dataset collection and vector-JIT semantic match for hybrid search. Any repos you think I would be interested in?

    • @timothywcrane
      @timothywcrane 4 months ago

      bookmarked, clear and concise.

    • @SebastianRaschka
      @SebastianRaschka  4 months ago

      Unfortunately I don't have a good recommendation here. I have only implemented standard RAGs without knowledge graph traversal.

  • @sahilsharma3267
    @sahilsharma3267 4 months ago +4

    When is your whole book coming out? Eagerly waiting 😅

    • @SebastianRaschka
      @SebastianRaschka  4 months ago +2

      Thanks for your interest in this! It's already available for preorder (both on the publisher's website and Amazon), and if the production stage goes smoothly, it should be out by the end of August.

  • @alokranjansrivastava623
    @alokranjansrivastava623 2 months ago +1

    Nice video.
    Does LLM mean only auto-regressive models (not BERT)?

    • @SebastianRaschka
      @SebastianRaschka  2 months ago

      Yes, here LLM is basically synonymous with decoder-style autoregressive models like Llama, GPT, Gemma, etc.

    • @alokranjansrivastava623
      @alokranjansrivastava623 2 months ago

      @@SebastianRaschka BERT has a stack of transformer encoder blocks, but it is not an LLM. Am I correct here?

    • @SebastianRaschka
      @SebastianRaschka  2 months ago

      @@alokranjansrivastava623 Architecture-wise, it's kind of the same thing though, except it doesn't have the causal mask, and the pretraining task is not next-token prediction but predicting masked tokens (plus sentence order prediction).
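
      A minimal sketch of that architectural difference (my own illustration): a decoder-style LLM applies a causal mask so each position can only attend to earlier positions, whereas a BERT-style encoder attends in both directions.

      ```python
      import torch

      seq_len = 5
      scores = torch.randn(seq_len, seq_len)  # toy attention scores

      # Decoder-style (GPT, Llama, ...): hide future positions before the softmax
      causal_mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
      causal_weights = torch.softmax(scores.masked_fill(causal_mask, float("-inf")), dim=-1)

      # Encoder-style (BERT): no causal mask, every token attends to every other token
      bidirectional_weights = torch.softmax(scores, dim=-1)
      ```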

    • @alokranjansrivastava623
      @alokranjansrivastava623 2 months ago

      @@SebastianRaschka Just one question: how do you define an LLM? When can we say that a particular language model is in the LLM category?

  • @bashamsk1288
    @bashamsk1288 4 months ago +1

    In instruction fine-tuning, do we propagate the loss only on the output text tokens, or for all tokens from start to EOS?

    • @SebastianRaschka
      @SebastianRaschka  4 months ago +1

      That's a good question. You can do both. By default it's all tokens, but more commonly you'd mask out the instruction tokens so the loss is computed only on the response tokens. In my book, I include the token masking as a reader exercise (it's super easy to do). There was also a new research paper a few weeks ago that I discussed in my monthly research write-ups here: magazine.sebastianraschka.com/p/llm-research-insights-instruction
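
      A minimal sketch of that masking in PyTorch (my own illustration; the token IDs are made up): label positions belonging to the instruction are set to -100 so cross-entropy ignores them and only the response tokens contribute to the loss.

      ```python
      import torch
      import torch.nn.functional as F

      # Toy sequence: [instruction tokens | response tokens]
      token_ids = torch.tensor([[11, 12, 13, 14, 21, 22, 23]])
      num_instruction_tokens = 4

      labels = token_ids.clone()
      labels[:, :num_instruction_tokens] = -100  # ignored by the loss below

      vocab_size = 50257
      logits = torch.randn(1, token_ids.shape[1], vocab_size)  # stand-in for model outputs

      # cross_entropy skips positions labeled -100; the usual one-token shift
      # between inputs and labels is omitted here for brevity
      loss = F.cross_entropy(logits.flatten(0, 1), labels.flatten(), ignore_index=-100)
      ```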

    • @bashamsk1288
      @bashamsk1288 4 months ago

      @@SebastianRaschka
      Thanks for the reply
      I just have a general question: do we use masking? For example, was masking used during the instruction fine-tuning of LLaMA 3, Mistral, or any open-source LLMs? Also, does your book include any chapters on the parallelization of training large language models?

    • @SebastianRaschka
      @SebastianRaschka  4 months ago

      @@bashamsk1288 Masking is commonly used, yes. We implement it as the default strategy in LitGPT. In my book we do both. I can't speak about Llama 3 and Mistral regarding masking, because while these are open-weight models they are not open source. So there's no training code we can look at. My book explains DDP training in the PyTorch appendix, but it's not used in the main chapters because as a requirement all chapters should also work on a laptop to make them accessible to most readers.

  • @ArbaazBeg
    @ArbaazBeg 3 months ago

    Should we give a prompt to the LLM when fine-tuning for classification with a modified last layer, or directly pass the input to the LLM like in DeBERTa?

    • @SebastianRaschka
      @SebastianRaschka  3 months ago +1

      Thanks for the comment, could you explain a bit more what you mean by passing the input directly?

    • @ArbaazBeg
      @ArbaazBeg 3 months ago +1

      @@SebastianRaschka Hey, sorry for the bad language. I meant: should chat formats like Alpaca etc. be applied, or do we give the text as-is to the LLM for classification?

    • @SebastianRaschka
      @SebastianRaschka  3 months ago +1

      @@ArbaazBeg Oh, I see now. And yes, you can. I've been wanting to add an example and performance comparison for that to the GitHub repo (github.com/rasbt/LLMs-from-scratch) at some point. For that, I'd first want to instruction-finetune the model on a few more spam classification instructions and examples, though.

  • @tashfeenahmed3526
    @tashfeenahmed3526 4 months ago

    That's great, Dr. Hope you are doing well.
    I wish I could download your deep learning book, which was published recently. If there is an open-source link to download it, please mention it in the comments.
    Thanks and regards,
    Researcher at Texas

  • @pe6649
    @pe6649 2 months ago +1

    Thanks!

  • @muthukamalan.m6316
    @muthukamalan.m6316 4 months ago +1

    great content! love it ❤

  • @joisco4394
    @joisco4394 4 months ago

    I've heard about instruct learning, and it sounds similar to how you define preference learning. I have also heard about transfer learning. How would you compare/define those?

    • @SebastianRaschka
      @SebastianRaschka  4 months ago +1

      Transfer learning is basically involved in everything you do when you start out with a pretrained model. We don't really name it or call it out explicitly anymore because it's so common. Instruction finetuning mainly differs from preference tuning in the loss function: instruction finetuning trains the model to answer queries, and preference finetuning is more about the nuance of how those queries get answered. All preference tuning methods that are used today (DPO, RLHF+PPO, KTO, etc.) expect you to have done instruction finetuning on your model before you preference-finetune.
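
      Since DPO is mentioned here, a minimal sketch of its loss (my own illustration; the sequence-level log-probabilities would come from the policy and frozen reference models in the surrounding training code):

      ```python
      import torch
      import torch.nn.functional as F

      def dpo_loss(policy_chosen_logp, policy_rejected_logp,
                   ref_chosen_logp, ref_rejected_logp, beta=0.1):
          # Log-ratios of the policy model relative to the frozen reference model
          chosen_logratio = policy_chosen_logp - ref_chosen_logp
          rejected_logratio = policy_rejected_logp - ref_rejected_logp
          # Push the preferred response above the rejected one, scaled by beta
          return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()

      # Toy usage with made-up log-probabilities for a single preference pair
      loss = dpo_loss(torch.tensor([-12.0]), torch.tensor([-15.0]),
                      torch.tensor([-13.0]), torch.tensor([-14.0]))
      ```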

    • @joisco4394
      @joisco4394 4 months ago +1

      @@SebastianRaschka Thanks for explaining it. I need to do a lot more research :p

    • @SebastianRaschka
      @SebastianRaschka  2 months ago +1

      @@joisco4394 Btw, I recently coded the alignment (using direct preference optimization) here, which might help clarify this step: github.com/rasbt/LLMs-from-scratch/blob/main/ch07/04_preference-tuning-with-dpo/dpo-from-scratch.ipynb

    • @joisco4394
      @joisco4394 2 months ago

      @@SebastianRaschka Much appreciated

  • @alihajikaram8004
    @alihajikaram8004 4 months ago

    Would you make videos about time series and transformers?

  • @kartiksaini5847
    @kartiksaini5847 4 months ago +1

    Big fan ❤

  • @mushinart
    @mushinart 4 months ago +1

    I'm sold, I'm buying your book... Would love to chat with you sometime if possible.

    • @SebastianRaschka
      @SebastianRaschka  3 months ago +1

      Thanks, hope you are liking it! Are you going to SciPy in July by chance, or maybe Neurips end of the year?

    • @mushinart
      @mushinart 3 months ago

      @@SebastianRaschka Unfortunately not, but I'd like to have a Zoom/Google Meet chat with you if possible.

  • @MadnessAI8X
    @MadnessAI8X 4 months ago +1

    What we are seeking not only fuzzing code

  • @ramprasadchauhan7
    @ramprasadchauhan7 4 months ago

    Hello sir, please also make one with JavaScript.

  • @TheCuriousCurator-Hindi
    @TheCuriousCurator-Hindi 2 days ago

    I have been in this field for a while (a decade+) but not in touch with LLMs, and this is useless for the uninformed and even more useless for the informed. I don't know which category I am in, but I didn't learn anything. I read about transformers when the paper came out, and then I assumed RLHF is just the REINFORCE algorithm, which is probably correct to assume. Anyway, a highly repellent video.

  • @krum.00
    @krum.00 4 months ago +1

    🤌

    • @SebastianRaschka
      @SebastianRaschka  4 months ago

      I take that as a compliment!? 😅😊

    • @krum.00
      @krum.00 4 months ago +1

      @@SebastianRaschka Yes, yes! It was supposed to be a compliment only. You are doing great work with your teaching materials :).

  • @redthunder6183
    @redthunder6183 4 months ago

    Easier said than done unless you've got a GPU supercomputer lying around lol

    • @SebastianRaschka
      @SebastianRaschka  4 months ago +1

      Ha, I should mention that all chapters in my book run on laptops, too. It was a personal goal for me that everything should work even without a GPU. The instruction finetuning takes about 30 min on a CPU to get reasonable results (granted, the same code takes 1.24 min on an A100).