Stanford CS25: V4 I Overview of Transformers
- Published Sep 26, 2024
- April 4, 2024
Steven Feng, Stanford University [styfeng.github...]
Div Garg, Stanford University [divyanshgarg.com/]
Emily Bunnapradist, Stanford University [ / ebunnapradist ]
Seonghee Lee, Stanford University [shljessie.gith...]
Brief intro and overview of the history of NLP, Transformers and how they work, and their impact. Discussion about recent trends, breakthroughs, applications, and remaining challenges/weaknesses. Also discussion about AI agents. Slides here: docs.google.co...
More about the course can be found here: web.stanford.e...
View the entire CS25 Transformers United playlist: • Stanford CS25 - Transf...
Can't believe it... Just today we started the part about LSTMs and transformers in my ML course, and here it comes
Thank you guys !
Awesome, thank you Stanford Online for sharing this amazing video series
Hello Everyone! Thank you very much for uploading these materials. Cheers
Great!! Finally It's time for CS25 V4🔥
I recently started to explore using transformers for time-series classification as opposed to NLP. Very excited about this content!
it's finally released! hope y'all enjoy(ed) the lecture 😁
Don't hold the mic so close bro. The lecture was really good though :)
What is a good course to learn NLP?
hi feng! will there be more cs25 v4 lectures uploaded to this channel?
@@siiilversurfffeeer yes! should be a new video out every week, approx. 2-3 weeks after each lecture :)
Thanks for sharing this course and lecture, Stanford. Congratulations! Greetings from Brazil
Be careful using anthropomorphic language when talking about LLMs. Eg: thoughts, ideas, reasoning. Transformers don’t “reason” or have “thoughts” or even “knowledge”. They extract existing patterns in the training data and use stochastic distributions to generate outputs.
That's a pretty important observation imo
Then what is “reason” “thoughts” “knowledge”?
Do individual neurons have thoughts, reason, or knowledge, or is it once again the collective that we should be assessing?
This mis-anthropomorphism problem will only grow because each end of the field/industry is being sloppy with it, so calls for sanity will just get derided as time goes on.
On the starting side we have academics title-baiting, as they did with "attention" so that papers get attention, instead of just coining a new word or phrase like 'correlation network', 'word window', or 'hyper hyper-networks', or overloading existing terms like 'backtracking' and 'backpropagation'.
And on the other end of the collective full-court press, we have corporations continuing to pass assistants (tools) off as human-like with names such as Cortana and Siri for the sake of branding and marketing.
@@junyuzheng5282 `Then what is “reason” “thoughts” “knowledge”?`
Reason, thoughts, knowledge, etc. are more than what is hallucinated in your linear algebra formulas.
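The "stochastic distributions" point in the comment above can be made concrete with a minimal sketch (illustrative only; the vocabulary and logits below are made up, not from any real model): a language model emits a score per vocabulary token, and the next token is sampled from the softmax distribution over those scores.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample one token index from softmax(logits / temperature)."""
    rng = rng or np.random.default_rng()
    z = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(z - z.max())   # subtract max for numerical stability
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

vocab = ["cat", "dog", "the", "sat"]     # hypothetical tiny vocabulary
logits = [2.0, 1.0, 0.5, -1.0]           # hypothetical model scores
idx = sample_next_token(logits, temperature=0.7)
print(vocab[idx])  # a sampled token; lower temperature -> closer to argmax
```

Lower temperatures sharpen the distribution toward the highest-scoring token; higher temperatures flatten it, which is why generation is "stochastic" rather than a fixed lookup.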
Very interesting summary
Sincere and smart students
Very cool! Thanks for posting this publicly, it's really awesome to be able to audit the course :)
I want to know more about 'filters.' Are they human or computer processes, or mathematical models? The filters are a reflection I'd like to understand more about. I hope they are not an inflection; that would be an unconscious pathway.
This is a really sweet dip into the currency of knowledge and these students are to be commended however, in the common world there is a tendency developing towards a 'tower of babel'.
Greed may have an influence that we must be wary of. I heard some warnings in the presentation that consider this tendency.
I'm impressed by these students. I hope they aren't influenced by the silo system of capitalism and that they remain at the front of the generalization and commonality needed to keep bad actors off the playing field.
In summary, Transformers mean using tons of weight matrices, leading to way better results.
Where are the professors?
future artificial intelligence
i was into talk this
probability challenge
Gemini ai talking ability rapid talk i suppose so
it's splendid
so they stand for 1 hour
Is it okay to cheat in an exam if you have the opportunity to do so?
☺️☺️☺️🥰🥰🥰
What is said at 13:47 is incorrect.
Large language models like ChatGPT or other state-of-the-art language models do not only have a decoder in their architecture. They employ the standard transformer encoder-decoder architecture. The transformer architecture used in these large language models consists of two main components:
The Encoder:
This encodes the input sequence (prompt, instructions, etc.) into vector representations.
It uses self-attention mechanisms to capture contextual information within the input sequence.
The Decoder:
This takes in the encoded representations from the encoder.
It generates the output sequence (text) in an autoregressive manner, one token at a time.
It uses self-attention over the already generated output, as well as cross-attention over the encoder's output, to predict the next token.
So both the encoder and decoder are critical components. The encoder allows understanding and representing the input, while the decoder enables powerful sequence generation capabilities by predictively modeling one token at a time while attending to the encoder representations and past output.
Having only a decoder without an encoder would mean the model can generate text but not condition on or understand any input instructions/prompts. This would severely limit its capabilities.
The transformer's encoder-decoder design, with each component's self-attention and cross-attention, is what allows large language models to understand inputs flexibly and then generate relevant, coherent, and contextual outputs. Both components are indispensable for their impressive language abilities.
ChatGPT, Gemini, etc. are decoder-only models. Read their tech reports.
You should really read the GPT-1 paper. Otherwise, please source your claim that ChatGPT's models are encoder-decoder architecture.
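To make the decoder-only point in the replies concrete, here is a minimal, illustrative NumPy sketch of causal self-attention (not any production model's code): the prompt and the generated text share one token stream, and a causal mask, rather than a separate encoder, is what keeps each position from attending to future tokens.

```python
import numpy as np

def causal_self_attention(x: np.ndarray) -> np.ndarray:
    """x: (seq_len, d_model) token embeddings; returns attended values.

    For simplicity the embeddings serve directly as Q, K, and V
    (a real model applies learned projections W_Q, W_K, W_V).
    """
    seq_len, d = x.shape
    scores = x @ x.T / np.sqrt(d)                          # (seq_len, seq_len)
    mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
    scores[mask] = -np.inf                                 # block future positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)         # row-wise softmax
    return weights @ x

x = np.random.default_rng(0).normal(size=(4, 8))
out = causal_self_attention(x)
# Position 0 can attend only to itself, so its output equals its input.
assert np.allclose(out[0], x[0])
```

Conditioning on the prompt happens through this same masked self-attention over earlier tokens, which is why decoder-only models can follow instructions without an encoder or cross-attention.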
This is not what I expected. What a completely terrible explanation. I was expecting a complete history of Transformers: the fall of the Decepticons, or how Optimus Prime came to be. A very misleading title indeed.
Stanford's struggles with microphones continue.
it is cool to see some negative comments in between lots of pos... ✌🏼✌🏼
@@jeesantony5308 I love the content, which makes me h8 the lack of thought and preparation that went into the delivery of all that knowledge even more. Just trying to reduce the loss as it were.
AMazing stuff! Thank you for publishing this valuable material!
It would be great if CS25 V4 got its own playlist on YouTube.