What a great class! Very much appreciated 🙌👏👏🙏
A video implementing MoE training with several switching LoRA layers would be great!
Woah... thanks a lot for this clean and powerful explanation of this dense topic. As a representative of the average viewer, I appreciate it very much.
Hopefully this doesn’t sound entitled, but rather expresses my gratitude for your excellent work - yesterday I did a YouTube search on this topic (MoE) and saw several videos, but decided not to watch the others and to wait for your analysis instead - and here I am today, and this video enters my feed automatically :)
Thanks for all you do for your community!
Please create a video on fine-tuning a MoE LLM using LoRA adapters.
Can one train an individual expert within a MoE such as Mixtral 8x7B?
yaya!🎉🎉🎉🎉🎉 ty so much once again
00:02 Mixture of Experts LLMs enable efficient computation and resource allocation for AI models.
02:46 Mixture of Experts LLMs use gating functions to assign tokens to specific experts.
05:24 MegaBlocks addressed limitations of the classical MoE system and optimized block-sparse computations.
08:12 Mixture of Experts selects the top-k experts based on router scores.
10:59 Mixture of Experts LLMs increase model parameters without a proportional computational expense.
13:33 Mixture of Experts LLM - MoE efficiently organizes the student-teacher distribution.
16:07 The block-sparse formulation ensures no token is left behind.
18:35 The Mixture of Experts system dynamically adjusts block sizes for more efficient matrix multiplication.
20:57 A Mixture of Experts layer consists of independent feed-forward experts with an intelligent gating function (a minimal sketch follows below).
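Since the chapter list above describes gating, top-k expert selection, and independent feed-forward experts, here is a minimal PyTorch sketch of a top-k gated MoE layer for anyone who wants to see the mechanics in code. This is a toy for illustration, not the video's or Mixtral's actual implementation; the names (MoELayer, num_experts, top_k) and the dense per-expert loop are my own assumptions.

# Minimal top-k gated Mixture-of-Experts layer (illustrative sketch only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model=64, d_ff=256, num_experts=8, top_k=2):
        super().__init__()
        # Independent feed-forward experts (20:57 in the chapter list).
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        ])
        # Gating function that scores each token against each expert (02:46).
        self.gate = nn.Linear(d_model, num_experts)
        self.top_k = top_k

    def forward(self, x):            # x: (num_tokens, d_model)
        scores = self.gate(x)        # (num_tokens, num_experts)
        # Select the top-k experts per token based on the scores (08:12).
        topk_scores, topk_idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(topk_scores, dim=-1)
        out = torch.zeros_like(x)
        # Dense loop over experts for readability; real systems replace this
        # with block-sparse kernels (e.g. MegaBlocks, 05:24) so that no
        # token is dropped and the matmuls stay efficient.
        for e, expert in enumerate(self.experts):
            for k in range(self.top_k):
                mask = topk_idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(10, 64)         # 10 tokens, d_model = 64
print(MoELayer()(tokens).shape)      # torch.Size([10, 64])

Note that only the top_k selected experts run for each token, which is how an MoE grows its parameter count without a proportional increase in compute per token (10:59).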
Very nice, thank you for a great vid.
In an autoregressive model, tokens are generated progressively. But when does the router work? Does routing happen in each forward pass, or is it decided once at the very beginning?
Is this where I raise the obvious question of "wouldn't a Grokked(tm) model be the perfect fit for an Expert-Picking mechanism?"
Can you please share a link to your presentation? I need the content to make my own abridged notes.
Which PDF reader are you using to read the research paper?
🤩🤩🤩🥳🥳🥳👍
Can you explain to me how to combine MoE with LoRA adapters?
Do you have a patreon or other paid subscription?
❤
Hello!
I wonder if I can get them to do RPA
I made them do SEX. It was tough but I managed.
Cool, but MoE is so foolish.
You're not Indian! 😁