Reproducing Kernels and Functionals (Theory of Machine Learning)

  • Published Nov 10, 2024

Comments • 20

  • @positivobro8544
    @positivobro8544 6 months ago +7

    Ayo the legend keeps on giving

  • @avigailhandel8897
    @avigailhandel8897 6 months ago +2

    I love your videos! I'm the person who posted that I will be starting grad school in the fall at the age of 55. I registered for classes at Montclair State University. Combinatorics, numerical analysis, linear algebra. And I'll be a TA. I am looking forward to being a graduate student in mathematics!

    • @JoelRosenfeld
      @JoelRosenfeld  6 months ago

      That’s awesome! It sounds like you have a fun schedule too. Congrats and let me know how it goes!

  • @ethandills4716
    @ethandills4716 6 months ago +3

    0:44 "if we have the time... and space" lol

  • @idiosinkrazijske.rutine
    @idiosinkrazijske.rutine 6 months ago +1

    The third book at @0:22 is "Meshfree Approximation Methods with MATLAB" by Gregory Fasshauer. A good resource for Radial Basis Functions and similar topics.

    • @JoelRosenfeld
      @JoelRosenfeld  6 months ago +1

      Yeah, it really is. I find Fasshauer does a great job at explaining the topic. Wendland goes into more of the theory, if you want to go deeper.
      I met Fasshauer at a conference last year. Great guy

  • @richardgroff3807
    @richardgroff3807 6 months ago +1

    I am missing something important at around 13:39 in the video. I understand the line h_i(t) = ⟨h_i, K(·, t)⟩, which uses the reproducing kernel to evaluate h_i at time t. The inner product must be the inner product for H for this to work. The next line looks like the standard property of the inner product, i.e. the complex conjugate of the inner product with entries swapped. What confuses me is the next line, which seems to expand out the definition of the inner product, but rather than the inner product for H, it looks like the inner product for L^2 and I can't figure out why that is. Was there an adjoint lurking about somewhere (rather than a property of the inner product)? How do I see it?

    • @JoelRosenfeld
      @JoelRosenfeld  6 months ago

      It’s not an inner product. Each h_i represents some functional through the inner product. That expansion is the functional that h_i represents being applied to the kernel function.
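
      For reference, the chain of equalities under discussion can be sketched as follows, writing K(·, t) for the kernel function centered at t and L_i for the functional that h_i represents via the Riesz theorem (the integral form of L_i in the comment at the end is an illustrative assumption, not a transcription of the video):

```latex
\begin{align*}
  h_i(t) &= \langle h_i, K(\cdot, t) \rangle_H
            && \text{(reproducing property)} \\
         &= \overline{\langle K(\cdot, t), h_i \rangle_H}
            && \text{(conjugate symmetry)} \\
         &= \overline{L_i\big( K(\cdot, t) \big)}
            && \text{(Riesz: } L_i(g) = \langle g, h_i \rangle_H \text{)}
\end{align*}
% If L_i happens to be an integral functional, say L_i(g) = \int g(s)\,\varphi_i(s)\,ds,
% the last line is an integral that merely resembles an L^2 pairing; no L^2 inner
% product on H is actually being used.
```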

    • @richardgroff3807
      @richardgroff3807 6 months ago +1

      @@JoelRosenfeld Thanks for your response! It finally sunk in. I didn't understand why you were swapping the entries, but it was to make the entries of the inner product match what was used with the Riesz Representation theorem a few lines above.
      In the next part of the video you do a numerical example where you generate a set of basis functions that are representations of the moment functionals, and then project the function f onto those basis functions. By my understanding, the inner product used in your normal equations is the inner product for H, associated with the RBF kernel (defined at 9:00)? Is there an intuitive relationship between the best approximation using the norm associated with that inner product compared to, say, L^2? (I haven't watched your best approximation video; perhaps that question is answered there?)

    • @JoelRosenfeld
      @JoelRosenfeld  6 months ago

      The “best” approximation depends on the selection of the inner product and Hilbert space. In the best approximation video for L^2 we started with a basis, polynomials, then we selected a space where they reside and computed the weights for the best approximation in that setting. If we change the Hilbert space, then we need to find new weights.
      However, there is a difficulty that can arise. Perhaps, it’s not so easy to actually compute the inner product for that basis in that particular Hilbert space. This approach avoids that because we take the measurements and THEN select the basis. So we end up with these h’s rather than polynomials. The advantage here is that we never actually have to compute an inner product, we just leverage the Riesz theorem to dodge around it.
      What I’m setting up here is the Representer Theorem, which we will get to down the line (maybe 5 videos from now?). There it turns out that the functions you obtain from the Riesz theorem are the best basis functions to choose for a regularized regression problem. This was a result of Wahba back in the (80s?)
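
      A minimal sketch of that measure-first workflow, assuming a Gaussian RBF kernel, moment functionals L_i(g) = ∫₀¹ s^i g(s) ds, and simple quadrature on a uniform grid (all of these are illustrative choices, not the video's exact setup):

```python
import numpy as np

# Sketch of the "choose the measurements first" workflow described above.
# Assumptions (not from the video): Gaussian RBF kernel, moment functionals
# L_i(g) = \int_0^1 s^i g(s) ds, and Riemann-sum quadrature on [0, 1].

sigma = 0.5
def K(s, t):
    """Gaussian RBF kernel for the RKHS H (illustrative choice of sigma)."""
    return np.exp(-((s - t) ** 2) / (2 * sigma ** 2))

ts = np.linspace(0.0, 1.0, 400)        # quadrature grid on [0, 1]
w = np.full_like(ts, ts[1] - ts[0])    # quadrature weights
f = np.sin(2 * np.pi * ts)             # function to approximate (example)

n = 4
def L(i, g):
    """Moment functional L_i applied to a function g sampled on ts."""
    return np.sum(ts ** i * g * w)

# Riesz representers: h_i(t) = L_i(K(., t)) -- apply the functional to the kernel.
H = np.array([[L(i, K(ts, t)) for t in ts] for i in range(n)])

# Gram matrix G_ij = <h_i, h_j>_H = L_i(h_j): the H inner product is never
# evaluated directly; the Riesz theorem lets the functionals do the work.
G = np.array([[L(i, H[j]) for j in range(n)] for i in range(n)])
b = np.array([L(i, f) for i in range(n)])   # measurements of f

c = np.linalg.solve(G, b)   # normal equations for the projection onto span{h_i}
f_hat = c @ H               # best approximation of f in that span (w.r.t. H's norm)

print("max abs difference on the grid:", np.max(np.abs(f - f_hat)))
```

      The "measure first, then pick the basis" ordering shows up in the Gram matrix line: every entry comes from applying one of the functionals to another representer, so the inner product of H never has to be computed explicitly.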

  • @robn2067
    @robn2067 6 months ago +6

    Very interesting video, but can you talk a little slower? Often it is not clear what words you are pronouncing, in particular for theorems.

    • @samueldeandrade8535
      @samueldeandrade8535 6 months ago +2

      Man, just change the playback settings of the video to watch it slower.

    • @JoelRosenfeld
      @JoelRosenfeld  6 months ago +1

      Sorry if I talk too fast. I’ll work on it

    • @samueldeandrade8535
      @samueldeandrade8535 6 months ago +1

      @@JoelRosenfeld you don't. You talk just fine.

    • @HEHEHEIAMASUPAHSTARSAGA
      @HEHEHEIAMASUPAHSTARSAGA 6 months ago

      @@JoelRosenfeld Putting real subtitles on your videos would solve the issue. It's pretty easy these days: just put a transcript in and YouTube will match up the times for you. It's probably even quicker to start with the automatic transcription and just fix the errors.

    • @JoelRosenfeld
      @JoelRosenfeld  6 months ago +1

      @@HEHEHEIAMASUPAHSTARSAGA In the past, the transcripts produced by YouTube were pretty bad. Premiere has a new AI feature that is actually pretty good at catching math terminology. I’ll give it some thought. It just takes more time

  • @jfndfiunskj5299
    @jfndfiunskj5299 2 months ago

    You really need to improve your communication skills. This is a terrible exposition.

    • @JoelRosenfeld
      @JoelRosenfeld  2 months ago

      @@jfndfiunskj5299 I’m always open to input. What could I change to improve it?