Variational Inference: Foundations and Modern Methods (NIPS 2016 tutorial)

  • Published on 23 Jan 2025

Comments • 7

  • @yididiyayilma900 · 2 years ago +1

    30:00 The most intuitive explanation of stochastic optimization I have ever heard.
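    The core idea from that part of the tutorial is easy to check numerically. A minimal sketch (my own illustrative example, not from the video; the quadratic objective and 1/t schedule are my choices): unbiased but noisy gradient estimates combined with Robbins-Monro step sizes still drive the iterate to the optimum.

    ```python
    import numpy as np

    # Illustrative sketch: minimize f(x) = x^2 using only noisy, unbiased
    # gradient estimates g_t = f'(x) + noise, with Robbins-Monro step sizes
    # rho_t = 1/t (sum rho_t = infinity, sum rho_t^2 < infinity).
    rng = np.random.default_rng(0)
    x = 5.0
    for t in range(1, 2001):
        noisy_grad = 2 * x + rng.normal(0.0, 1.0)  # unbiased estimate of f'(x) = 2x
        x -= (1.0 / t) * noisy_grad
    print(x)  # near the true minimizer 0
    ```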

  • @alexeygritsenko9955 · 6 years ago +4

    At 44:43 - why does the score function have expectation of zero?

    • @mataneyal · 6 years ago +3

      \begin{equation}
      \begin{aligned}
      \mathbb{E}_q [\nabla_\nu g(z;\nu)] &= \mathbb{E}_q [\nabla_\nu \log p(x,z) - \nabla_\nu \log q(z;\nu)]\\
      &= - \mathbb{E}_q [\nabla_\nu \log q(z;\nu)] ~ \textrm{($\log p(x,z)$ is not a function of $\nu$)}\\
      &= - \int q(z;\nu) \nabla_\nu \log q(z;\nu) \, dz \\
      &= - \int \nabla_\nu q(z;\nu) \, dz ~ \textrm{(log-derivative trick)}\\
      &= - \nabla_\nu \int q(z;\nu) \, dz = -\nabla_\nu 1 = 0 ~ \textrm{($q(z;\nu)$ is a probability density, so it integrates to 1)}
      \end{aligned}
      \end{equation}
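      The identity is also easy to verify numerically. A hedged sketch (my own example, not from the video or the comment): take $q(z;\nu)$ to be a Gaussian with $\nu = (\mu, \log\sigma)$ and estimate $\mathbb{E}_q[\nabla_\nu \log q(z;\nu)]$ by Monte Carlo.

      ```python
      import numpy as np

      # Monte Carlo check that the score function has expectation zero under q.
      # Here q(z; nu) = Normal(mu, sigma^2) with nu = (mu, log sigma); the
      # closed-form gradients of log q below are standard Gaussian identities.
      rng = np.random.default_rng(0)
      mu, log_sigma = 1.5, 0.3
      sigma = np.exp(log_sigma)
      z = rng.normal(mu, sigma, size=1_000_000)

      score_mu = (z - mu) / sigma**2                 # d/d(mu) of log q(z)
      score_log_sigma = (z - mu)**2 / sigma**2 - 1   # d/d(log sigma) of log q(z)

      print(score_mu.mean(), score_log_sigma.mean())  # both approximately 0
      ```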

    • @maxturgeon89 · 4 years ago +1

      Chain rule and dominated convergence theorem

  • @Filaaaix · 5 years ago +2

    At 1:27:30: I didn't really get how you derive the auxiliary variational bound. Is there a good source where it's explained more thoroughly?
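    One standard derivation (a sketch in my own notation, following the auxiliary/hierarchical variational models literature rather than the exact slides): introduce an auxiliary variable $\lambda$ with a recovery distribution $r(\lambda \mid x, z)$. Since $p(x,z)\,r(\lambda \mid x,z)$ integrates over $(z,\lambda)$ to $p(x)$, Jensen's inequality gives an ELBO on the augmented space:

    \begin{equation}
    \begin{aligned}
    \log p(x) &\ge \mathbb{E}_{q(z,\lambda)}\!\left[\log \frac{p(x,z)\, r(\lambda \mid x,z)}{q(z,\lambda)}\right] \\
    &= \mathbb{E}_{q(z)}\!\left[\log \frac{p(x,z)}{q(z)}\right] - \mathbb{E}_{q(z)}\!\left[\mathrm{KL}\big(q(\lambda \mid z)\,\|\, r(\lambda \mid x,z)\big)\right],
    \end{aligned}
    \end{equation}

    i.e. the auxiliary bound is the ordinary ELBO minus an expected KL penalty, and it is tight when $r(\lambda \mid x,z)$ matches $q(\lambda \mid z)$.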