This is one of the most complex episodes I've ever listened to on AI. A ton of complicated jargon that only experts can comprehend. I'm ambitious enough to stick with it until I master this one day 👍
I always wanted to read the V-JEPA paper but could not find the time. When this episode came out, I was excited and finished listening to it. Then I read the paper and re-listened to the episode. I ended up getting much more out of the interview. I really like the layout of the interview: first, Mido explained how V-JEPA works; then, Sam asked questions to drill down into the concepts. Well done to both Sam and Mido! Can I have more episodes like this please? 😊
Of course @shihgianlee! Let me know the papers on your backlog and I'll try to recreate the magic 🪄
@twimlai I have a bit of an ambitious request, but it would literally be game-changing, for me and many others, if you could somehow make it happen.
I have tried reading the JEPA paper (not the V-JEPA one, but the general one, the "A Path to Autonomous Machine Intelligence" one), but a lot of concepts are way above my level.
Could you, somehow, get the man himself, Yann LeCun, to discuss the main ideas in this paper? He has done a lot of talks on JEPA in general, but he often dives deep into the details without making sure the core concepts are clear (and I think this is the main reason why he is so misunderstood in the AI community).
With enough time (and the right interviewer 😉), I think it could help clear up a lot of those blurry spots.
For instance:
1. What is the difference between "reasoning" and "planning" (according to him)?
2. What is the difference between just "planning" and "hierarchical planning", if any?
3. What is "persistent memory"? Isn't that something LLMs already possess (since they seem to "remember" things from their training data)?
4. In the JEPA paper, he talks about "short-term memory". Is that different from "persistent memory"?
5. The idea of JEPA supposedly "understanding the world" by focusing on the bigger picture and ignoring details makes sense to me. But is that reasoning, planning, or something else?
6. He recently started talking about DINO. Is that related to JEPA? Can DINO plan?
These are some of the questions I still struggle with, even after listening to his talks over and over. I honestly don’t think it’s even worth bringing up concepts like "regularized methods", "representation collapse", "contrastive methods" and other ridiculously abstract ideas until the core concepts are made clear.
I also love when he uses analogies or real-life examples to explain his points, so if you could have him include some after each explanation, it would be even better!
Speaking about "real life", he often points out how animals can do a lot of things that LLMs cant. To me it's obvious he isn't talking about stuff like "ability to do math" or "speak language" but it would be great if he could explain his thoughts in more details.
So my question on this subject would be: can animals reason, plan, or do both? Could he provide concrete real-life examples of animals reasoning and planning?
Of course, I am not delusional. I know asking Yann for an interview and specifically covering some of the topics I mentioned is a pretty big ask, so I will 100% understand if it’s not feasible. This interview is already very good.
Take care!
Every episode is interesting.