I’m currently reading Christopher Manning’s book Foundations of Statistical Natural Language Processing. It is awesome to hear from him about his past experiences and what he expects in the future for NLP. Thanks Andrew!
Great point about how the best way to be a good "stochastic parrot" is to know things about language and the world. Some tidbits of word vector history that tend to be overlooked: in the early-to-mid '90s, Hinrich Schütze did great work on distributional semantics and word vectors, albeit with what now seems like very little data or computing power. The "parallelogram" model of analogical reasoning with word vectors, which briefly became very popular around 2015, actually dates back to early '70s work by Rumelhart and Abrahamson.
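For anyone who hasn't seen the parallelogram idea in action, here's a minimal sketch in Python. The vectors are made-up toy numbers, not real trained embeddings, and the tiny vocabulary is just for illustration; the point is only the vector-offset arithmetic and nearest-neighbor lookup.

```python
import numpy as np

# Toy "embeddings" chosen by hand so the analogy works; real systems
# learn these from large corpora.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.5, 0.1, 0.1]),
    "woman": np.array([0.5, 0.1, 0.9]),
    "queen": np.array([0.9, 0.8, 0.9]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Parallelogram / vector-offset analogy: man : king :: woman : ?
target = vectors["king"] - vectors["man"] + vectors["woman"]
best = max((w for w in vectors if w not in {"king", "man", "woman"}),
           key=lambda w: cosine(vectors[w], target))
print(best)  # -> queen
```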
Knowing how to ask a question has never been easier. Language is tough, but today's LLMs are extraordinarily good at understanding. In fact, I find I'm getting very sloppy about how I communicate with an LLM because the LLM understands even when I do a poor job.
The effect of automatic differentiation libraries is hard to overstate. In the bad old days, I passed on testing out some models because figuring out the derivatives was too hard.
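To make that concrete, here's a tiny sketch of what an autodiff library does for you. It uses PyTorch purely as an example (any framework with reverse-mode autodiff, such as JAX or TensorFlow, works similarly), and the model and loss are arbitrary choices for illustration.

```python
import torch

# Parameters we want gradients for, plus one training example.
w = torch.tensor([1.0, -2.0], requires_grad=True)
x = torch.tensor([0.5, 3.0])
y = torch.tensor(1.0)

# A small model and loss: logistic regression with squared error.
pred = torch.sigmoid(w @ x)
loss = (pred - y) ** 2

# One call gives d(loss)/dw; no derivatives worked out by hand.
loss.backward()
print(w.grad)
```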
At 8:24, the transcription says "waiting factors" when of course Professor Manning meant "weighting factors." I find it very ironic that in a video about the state of the art of NLP, posted by an illustrious institution showcasing its top people, this glaring NLP error is published to the world as part of the final product. The field still has some growing up to do: this sort of error should be seen as too embarrassing to publish, not as an allowable slip to overlook and tolerate.
Thanks Andrew and Chris for a great talk. It helped us gain insights into current trends in NLP and also motivated a newcomer to start in AI.
Glad it was helpful!
All engineers and creative people should listen to your talks, even just a little. You are cool professors. Thank you.
Love your talk
This will be an important video
I have to finish watching and understanding it
I will focus on their accent, their words, and their tenure here.
My trainer Andrew Ng. Manoah
Sound is too low
Does anyone have any good recommendations for further reading about the transformer models that Professor Christopher Manning was talking about at 37:54?
🎉
Comments got removed. Censorship channel.
Where's the part about LLMs?