Unsupervised Deep Learning - Google DeepMind & Facebook Artificial Intelligence NeurIPS 2018

  • Published on 3 Feb 2025

Comments • 26

  • @zkzhao279
    @zkzhao279 6 years ago +14

    Slides: ranzato.github.io/

    • @siegmeyer995
      @siegmeyer995 6 years ago

      Really useful! Thank you

  • @mohammadkhalooei637
    @mohammadkhalooei637 6 years ago +3

    Thank you so much for your interesting presentation!

  • @bingeltube
    @bingeltube 6 years ago +1

    Highly recommended! Talks by two very renowned researchers

  • @kazz811
    @kazz811 6 years ago +6

    Great talks, but I wish Alex Graves had paced his talk better and focused on the interesting stuff instead of the more well-known ideas.

  • @Troyster94806
    @Troyster94806 6 years ago

    Maybe it's possible to use narrow AI to figure out the optimal method of unsupervised learning for us.

  • @messapatingy
    @messapatingy 5 years ago

    What is density modelling?

    • @SudhirPratapYadav
      @SudhirPratapYadav 3 years ago

      Modelling -> finding out / predicting -> basically learning a model from data.
      Density -> here it means the probability density function, i.e. the probability distribution of the data/thing being modelled.
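
      As a minimal editor's sketch of density modelling (the data and the choice of a kernel density estimator here are invented for illustration, not taken from the talk), one can learn an estimate of p(x) from unlabelled samples and then query it:

          import numpy as np
          from sklearn.neighbors import KernelDensity

          # Toy 1-D data drawn from a mixture of two Gaussians (invented example).
          rng = np.random.default_rng(0)
          data = np.concatenate([rng.normal(-2.0, 0.5, 500),
                                 rng.normal(1.5, 1.0, 500)]).reshape(-1, 1)

          # Density modelling: estimate the data's probability density
          # function p(x) directly from samples, with no labels involved.
          kde = KernelDensity(kernel="gaussian", bandwidth=0.3).fit(data)

          # score_samples returns log p(x); exponentiate to get densities.
          x = np.linspace(-4.0, 4.0, 5).reshape(-1, 1)
          print(np.exp(kde.score_samples(x)))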

  • @sofdff
    @sofdff 8 months ago

    Amazing

  • @vsiegel
    @vsiegel 3 years ago

    He does not fully understand, I think.

  • @machinistnick2859
    @machinistnick2859 3 years ago

    thank god

  • @AnimeshSharma1977
    @AnimeshSharma1977 6 years ago

    getting the metric right seems like feature engineering...

  • @reinerwilhelms-tricarico344
    @reinerwilhelms-tricarico344 4 years ago

    0.5 < P(the cat sat on the mat | google talk) < 1
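
    As a hedged aside (an editor's illustration; the per-token numbers are invented), the joke leans on the density-modelling view of text from the talk, where an autoregressive model scores a whole sentence by the chain rule P(w1..wn) = prod_t P(w_t | w_<t). The made-up conditionals below happen to land the joint probability inside the commenter's bracket:

        import math

        # Invented per-token conditionals P(w_t | w_<t) for
        # "the cat sat on the mat" under some language model.
        conditionals = [0.9, 0.8, 0.95, 0.9, 0.99, 0.97]

        p_sentence = math.prod(conditionals)             # joint probability, ~0.59
        bits = -sum(math.log2(p) for p in conditionals)  # codelength in bits
        print(p_sentence, bits)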

  • @jabowery
    @jabowery 6 years ago +1

    About 17 minutes in, I had to stop listening because I felt like I had lost about a standard deviation of IQ. Hasn't this guy ever heard of Solomonoff induction? Hasn't he ever talked to Shane Legg? The intrinsic motivation is lossless compression, and if the agent is active, decision-theoretic utility determines the explore/exploit tradeoff, as in AIXI. If passive, it just compresses whatever it's given as data.

    • @theJACKATIC
      @theJACKATIC 6 years ago +3

      That's Alex Graves... well renowned at DeepMind. He's published papers with Shane Legg.

    • @webxhut
      @webxhut 6 years ago

      Fish!

    • @jabowery
      @jabowery 6 years ago

      @theJACKATIC I listened to the rest, and he did finally bring in compression, as one would expect of someone with his background. And it does appear important. His presentation threw me off. At a meta level, he really should start with the "high level coding" of his presentation: describe the space in terms of AIXI's unification of Solomonoff Induction and Sequential Decision Theory before breaking down into his 2x2 taxonomy. That way it would be clear that "unsupervised learning" is simply lossless compression toward Solomonoff Induction's use of the KC program's "latent representations". He appears to have his head so far into the techniques of lossless compression that he elides the "top down" definition of AGI as the start of his "high level".

    • @coolmechelugwu7305
      @coolmechelugwu7305 6 years ago +2

      @jabowery Some people are not so advanced in this field, and going from the known to the unknown is a great technique for passing on knowledge.
      Great presentation🙋

    • @jabowery
      @jabowery 6 years ago +1

      @coolmechelugwu7305 Solomonoff Induction is just Ockham's Razor for the Turing Age -- so there's no real challenge in coming up with an exoteric framing. Sequential Decision Theory can be framed quite simply as well:
      If you know the outcome of every choice available to you (provided by Solomonoff Induction), decisions become trivial.
      The reason I'm hammering on this is that failing to understand lossless compression's value as the intrinsic utility function of unsupervised learning has untold opportunity costs to society:
      The enormous resources poured not only into the social sciences but into social "experiments" conducted on vast populations without any serious notion of "informed consent" should be informed by the lossless compression of a wide range of longitudinal social data. Google DeepMind should be at the forefront of this, given its background and Google's resources.
      See this question I put to Kaggle:
      www.kaggle.com/general/37155#post207935
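
      To make the compression framing above concrete, here is a minimal editor's sketch (the symbol stream and both models are invented): by Shannon's source-coding bound, losslessly encoding a symbol under a model costs -log2 p(symbol) bits, so the model that compresses the data into fewer bits is the one that has learned its distribution better.

          import math

          # Compression view of unsupervised learning: a model that assigns
          # the data higher probability yields a shorter lossless code,
          # since the optimal codelength is -log2 p(symbol) bits per symbol.
          data = "aaabaaacaaab"  # toy symbol stream (invented)

          def codelength_bits(stream, model):
              # Total bits needed to encode `stream` under distribution `model`.
              return -sum(math.log2(model[s]) for s in stream)

          uniform = {s: 1 / 3 for s in "abc"}  # a model that has learned nothing
          empirical = {s: data.count(s) / len(data) for s in set(data)}  # fit to data

          print(codelength_bits(data, uniform))    # ~19.0 bits
          print(codelength_bits(data, empirical))  # ~12.5 bits: better model, shorter code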