Great lecture! There is yet another way to do explainable ML, which involves the use of a GAN. Take image classification as an example: first, a GAN can be trained unsupervised on many facial images, so that it discovers a latent space in which many important facial features are well disentangled. If we then do supervised training on top of this latent space, we can quantify category labels (age, nose size, degree of head pitch, ethnicity, etc.) along its axes. Since these learned labels are quantified and highly general, they can serve as the basis for explaining other classification results (say, via a decision tree, etc.).
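The pipeline described above (disentangled latent space → quantified attribute → interpretable surrogate) can be sketched with a toy numpy example. Everything here is invented for illustration: the latent codes are random, the "age" probe `w_age` stands in for a learned supervised probe, and the black-box classifier is simulated; the point is only to show a decision stump on a quantified attribute reproducing a black box's decisions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are GAN latent codes for 200 face images (8-dim latent).
z = rng.normal(size=(200, 8))

# Hypothetical linear probe from supervised training on the latent space:
# it projects a latent code onto a quantified "age" attribute axis.
w_age = np.zeros(8)
w_age[0] = 1.0              # assume dimension 0 is disentangled as "age"
age = z @ w_age

# A black-box classifier we want to explain ("senior" vs "junior").
black_box = (age > 0.5).astype(int)

# Interpretable surrogate: a depth-1 decision stump on the quantified
# attribute. Pick the threshold that best reproduces the black box.
best_t, best_acc = None, -1.0
for t in np.sort(age):
    acc = np.mean((age > t).astype(int) == black_box)
    if acc > best_acc:
        best_t, best_acc = t, acc

print(f"surrogate rule: age > {best_t:.2f}  (fidelity {best_acc:.2f})")
```

Because the attribute is a single interpretable scalar, the surrogate rule is trivially human-readable, which is exactly the appeal of explaining in a disentangled latent space rather than in pixel space.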
Really impressive. I'm a data analyst, and this content was hugely inspiring to me; it answered some big problems I face at work.
"A good explanation makes people happy" -> I strongly agree with this point.
For example: when Prof. Hung-yi Lee explains why machine learning's answers need their reasons spelled out, we listen to the lecture happily, because that is a good Explanation!!
Note: the elephant should be the creature on the sign; it shouldn't be white, and it wouldn't have ears.
7:20 The subtitle should read Yann LeCun, one of the three giants of deep learning.
38:33 "align" = 對齊 (alignment).
39:31 "align" = 對齊 (alignment).
Thank you!
Lately I've been wondering whether anyone has tried, or also thought about, "Editable ML": using deep learning to generate a decision tree as the final decision model, then using explainable-ML tools to interpret this generated (complex) decision tree, so that users can rework and refine the model afterwards. I've seen very few attempts at this online; is the idea fundamentally unworkable? (My motivation is to build an AI-based sandbox game, where even people who can't code and don't know ML could train and edit the AI they want through intuitive learning.)
A Decision Tree and deep learning are not interchangeable one-for-one: a Decision Tree keeps splitting the input space with linear cuts, while deep learning fits a nonlinear function, so converting between the two is likely to introduce substantial distortion. Alternatively, you could consider editing the neural network directly: use interpretability analysis to identify certain neurons, delete or zero them out and retrain, to achieve the specific goal you have in mind.
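The "edit the network directly" idea in this reply can be sketched with a tiny fixed numpy MLP. The weights, the input, and the choice of which hidden neuron to zero out are all invented for illustration; a real workflow would pick the neuron via interpretability analysis and then retrain.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# A tiny fixed 2-4-1 MLP standing in for a trained network.
W1 = np.array([[ 1.0, -1.0,  0.5,  2.0],
               [ 0.5,  1.0, -1.0,  0.0]])
b1 = np.zeros(4)
W2 = np.ones((4, 1))

def forward(x, ablate=None):
    """Run the MLP; optionally zero out one hidden neuron (ablation)."""
    h = relu(x @ W1 + b1)
    if ablate is not None:
        h = h.copy()
        h[:, ablate] = 0.0      # surgical edit: suppress this neuron
    return h @ W2

x = np.array([[1.0, 1.0]])
full = forward(x)
edited = forward(x, ablate=3)   # suppose analysis flags neuron 3 as
                                # latching onto a spurious feature
print(full.item(), edited.item())
```

Comparing the full output against the ablated one quantifies how much that single neuron contributes to the decision; after such an edit, the remaining weights would normally be fine-tuned so the network recovers using only the intended features.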
Explained so well. Thank you, professor!
What matters is that people believe it 😉
Really looking forward to Part 2!
Professor, could you cover causal inference?
27:20 Looked it up, it's a Pokémon, ha~
Thanks for the answer XDD
Super clear and easy to understand!
When is this week's update coming?
It's a great class. Thanks to Prof. Lee.
A doctor's diagnosis is also a black box to the patient. Doctors will always sometimes misdiagnose; my professors in medical school taught us that very early on.
Could modern medical imaging and the like be viewed as interpretability research on diagnostic results?
Some practitioners of Western medicine also use the "black box" argument to attack traditional Chinese medicine. In reality, it's just that modern medicine's quantitative metrics make Western medicine's explanations look more plausible. I think Prof. Lee's point that "a good explanation is one that makes people happy" is a classic.
Clever Hans is forever a classic.
The amazing part is that it actually knows, you know? Really laughed till I cried.
😋😋😋