AI & Machine Learning in Finance: The Virtue of Complexity in Financial Machine Learning

  • Published Dec 25, 2024

Comments • 16

  • @Garrick645 · 8 months ago · +3

    This video is just so engaging. Couldn't understand a few things, but great.

  • @Khari99 · 1 year ago · +3

    This was a great lecture with very surprising results. I had always assumed that overfit large models would always be worse to use since the training data was finite. Never thought the opposite was the case. Great work.

    • @brhnkh · 8 months ago

      Am I missing something or is it just "ridgeless" regression with appropriate penalty (z) being really good?

    • @Khari99 · 8 months ago

      @brhnkh From what I understand of the lecture, one of the key insights is that larger models should be able to perform better out of sample. The general problem with using ML on time series data is that models are easy to overfit, given the limited training set and feature representation. With a diverse feature set, however, larger models should generalize better, which is intuitive to me. The reason many experts recommend against bigger models is that they are much easier to overfit to the training data, but that may just be because they were trained with a small feature set without much predictive content. Whenever that is the case, a model will memorize the dataset itself to achieve the reward, without finding patterns that repeat out of sample.
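
      A toy numpy sketch of the mechanism this sub-thread is circling: random-feature ridge regressions whose parameter count P grows far past the number of observations T, with the shrinkage z mentioned above. The sizes, seeds, and value of z are made up for illustration; this is not the paper's actual experiment.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      # Toy "market": T observations of d raw predictors and a noisy return.
      T, d = 120, 15
      beta = rng.standard_normal(d) / np.sqrt(d)
      X = rng.standard_normal((T, d))
      y = X @ beta + 0.5 * rng.standard_normal(T)
      X_oos = rng.standard_normal((2000, d))
      y_oos = X_oos @ beta + 0.5 * rng.standard_normal(2000)

      def relu_features(X, P, seed=1):
          # One shared random draw so train and test get the same feature map.
          W = np.random.default_rng(seed).standard_normal((X.shape[1], P))
          return np.maximum(X @ (W / np.sqrt(X.shape[1])), 0.0)

      def ridge_predict(S, y, S_new, z):
          # beta_hat = (z*I + S'S/T)^(-1) S'y/T; z -> 0 is the "ridgeless" limit.
          T_, P = S.shape
          b = np.linalg.solve(z * np.eye(P) + S.T @ S / T_, S.T @ y / T_)
          return S_new @ b

      # Grow the model well past the interpolation point (P >> T).
      for P in (10, 60, 120, 600, 3000):
          S, S_oos = relu_features(X, P), relu_features(X_oos, P)
          pred = ridge_predict(S, y, S_oos, z=1e-3)
          oos_r2 = 1.0 - np.mean((y_oos - pred) ** 2) / np.var(y_oos)
          print(f"P = {P:5d}   out-of-sample R^2 = {oos_r2:+.3f}")
      ```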

    • @brhnkh · 8 months ago · +1

      @Khari99 Right. The last sentence is what's novel, I think. The model gets better on out-of-sample data because it is appropriately penalized while being trained with a very large number of parameters. Apparently they figured this out using random matrix theory, so I guess that's where the heavy lifting lies.

    • @Khari99 · 8 months ago · +1

      @brhnkh The reward function is always the most difficult part of ML. It took me a while to figure out how to write mine for my architecture. Simply using max profit is not enough (a model could learn to buy and hold forever, for instance), and neither is accuracy (high accuracy != profitability). You have to reward and penalize it much as you would a human, based on the metrics it achieves on a trade-by-trade and portfolio basis.
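
      A hypothetical sketch of what such reward shaping might look like. The function name, the individual terms, and the weights are illustrative assumptions, not anything specified in the lecture or this thread.

      ```python
      import numpy as np

      def episode_reward(trade_returns, equity_curve, n_trades,
                         risk_w=0.5, dd_w=2.0, churn_w=0.01):
          """Hypothetical reward for one evaluation window. Raw profit or raw
          accuracy alone can be gamed (e.g. buy and hold forever), so risk
          and behaviour terms are added. All weights are illustrative."""
          trade_returns = np.asarray(trade_returns, dtype=float)
          equity_curve = np.asarray(equity_curve, dtype=float)

          pnl = trade_returns.sum()                            # total profit
          sharpe_like = trade_returns.mean() / (trade_returns.std() + 1e-9)
          peak = np.maximum.accumulate(equity_curve)
          max_drawdown = ((peak - equity_curve) / peak).max()  # worst peak-to-trough

          return (pnl
                  + risk_w * sharpe_like   # pay for risk-adjusted performance
                  - dd_w * max_drawdown    # punish deep drawdowns
                  - churn_w * n_trades)    # discourage pointless over-trading

      # e.g. episode_reward([0.02, -0.01, 0.03], [100, 102, 101, 104], n_trades=3)
      ```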

    • @maximlamoureux4129 · 8 months ago

      @brhnkh I thought a definition of overfitting was when validation error starts to increase rapidly after reaching its minimum during training, while training error continues to decrease. It is therefore not clear to me why you would want a model to overfit at all, finance or not. I'm only 4 minutes into the video; perhaps he will explain it.

  • @kilocesar · 11 months ago · +10

    Quantitative Finance is really exhausting; many different authors, books, and articles contradict one another. Complexity is usually viewed with disapproval by the industry.

  • @GamuchiraiZororoNdawana · 2 months ago · +1

    Stochastic discount factors?

  • @cwayne4 · 6 months ago · +1

    So dump your predictors into a net with a huge hidden layer and feed the final layer through a ridge regression?
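
    That recipe (a wide, untrained random layer with a ridge fit on the final layer) can be sketched in a few lines of scikit-learn. RBFSampler stands in for the huge hidden layer: the feature map is drawn once and never trained, and only the ridge on top is fit. All sizes and penalty values below are arbitrary assumptions.

    ```python
    import numpy as np
    from sklearn.kernel_approximation import RBFSampler
    from sklearn.linear_model import Ridge
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X_train = rng.standard_normal((120, 15))   # toy predictors
    y_train = rng.standard_normal(120)         # toy returns

    model = make_pipeline(
        RBFSampler(n_components=5000, random_state=0),  # wide random feature map
        Ridge(alpha=1e-3),                              # shrinkage on the final layer
    )
    model.fit(X_train, y_train)
    preds = model.predict(rng.standard_normal((10, 15)))
    ```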

  • @brendanlydon5272 · 11 months ago · +1

    My eyes wide open - if Creed knew ML

  • @traveleurope5756 · 2 months ago

    This is hardly something new. SVMs have been designed on the same principle.

    • @carlosandrescastromarin7775 · 12 days ago

      You're mistaken; this is completely different from SVMs. Therefore, an SVM would yield different results. Read their paper.

    • @traveleurope5756 · 11 days ago

      You misunderstood my point. I know it's different from an SVM; I said the principles are the same. You need to read more, especially Vapnik's work on the fundamentals of machine learning. An SVM, like a DNN, can achieve zero training error yet still achieve good out-of-sample performance; this is due to controlling the learning capacity of the machine via regularization.
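
      A small scikit-learn sketch of that capacity-control point: in an SVM, C plays the regularizing role (large C means less regularization), so a large-C fit can drive training error toward zero while the held-out score shows the trade-off. The data and C values are toy assumptions.

      ```python
      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      # Toy binary problem: the label depends on one noisy direction.
      X = rng.standard_normal((200, 10))
      y = (X[:, 0] + 0.5 * rng.standard_normal(200) > 0).astype(int)
      X_test = rng.standard_normal((1000, 10))
      y_test = (X_test[:, 0] + 0.5 * rng.standard_normal(1000) > 0).astype(int)

      # Sweep C from almost-unregularized to heavily regularized.
      for C in (1e4, 1.0, 1e-2):
          clf = SVC(kernel="rbf", C=C).fit(X, y)
          print(f"C={C:8.2g}  train acc={clf.score(X, y):.2f}  "
                f"test acc={clf.score(X_test, y_test):.2f}")
      ```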

    • @carlosandrescastromarin7775 · 8 days ago

      @traveleurope5756 Well, that is completely different from your initial statement, and actually insightful. So you are saying that the same thing is achieved here through ridgeless regression?