Simple Linear Regression Model using Matrices

  • Published Dec 14, 2024

Comments • 13

  • @existentialrap521
    @existentialrap521 1 year ago +9

    Thanks, G. We out here makin moves, matricizing* linear regression. You know what it be. Stat boiis and girls comin in hard WOOOO

  • @kamanaakinyanju915
    @kamanaakinyanju915 1 year ago +6

    the first YouTube teacher that has actually helped. Thank you

    • @Stats4Everyone
      @Stats4Everyone 1 year ago +1

      Thanks!! I am happy that this video was helpful!

  • @manuelkarner8746
    @manuelkarner8746 3 years ago +4

    thank you so much for making this playlist, it saved my semester

  • @EvanMcLaren
    @EvanMcLaren 9 months ago +2

    Have you considered putting your videos into a playlist? 😅

    • @Stats4Everyone
      @Stats4Everyone 9 months ago +1

      Yup. I have lots of playlists. This one might interest you: Simple Linear Regression: th-cam.com/play/PLJDUkOtqDm6UeH59-jG31Cma-abXLNse_.html

    • @EvanMcLaren
      @EvanMcLaren 9 months ago +1

      @@Stats4Everyone Oh weird-when I went to your playlists tab earlier they didn’t load, but I see them now. Thanks!

  • @xavierbonga88
    @xavierbonga88 2 years ago +3

    In 2022. Thank u Michelle for this video... saving lives

  • @seslocrit9365
    @seslocrit9365 2 years ago +1

    Perfect, I was confused between the two representations.

  • @burger_kinghorn
    @burger_kinghorn 29 days ago

    You might think of y = β0 + β1•x1 + β2•x2 + ... + e
    But it's better to see it as
    y = x1•β1 + x2•β2 + ... + e
    The *X* matrix is like our spreadsheet, so that order is necessary for the dimensions to line up in the matrix multiplication. It's a bunch of known constants acting as the coefficients in a system of equations.
    Similar to the matrix equation *Ax* = *b*,
    it's *Xβ* = *y*.
    *β* is the variable vector transformed by *X*. Regression is about a linear combination of the β's.
    Given *Y* = *Xβ* + *e*,
    E(*Y* | *X*) = *Xβ* + 0
    The error term averages out to 0, i.e. we regress back to the (conditional) mean of Y.
    The product of a vector with its transpose collapses into the sum of its squared elements:
    *x'x* = Σi x.i²
    Variance is the average of squared deviations:
    (*x* - μ)'(*x* - μ) = Σi (x.i - μ)²
    Divide that by n for σ², by n-1 for s².
    Similarly Cov(x1, x2)
    = Σi (x.i1 - μ1)(x.i2 - μ2) / n
    σ1,2 = (*x1* - μ1)'(*x2* - μ2) / n
    Generalize it to (*X* - *M*)'(*X* - *M*) / n.
    If the variables were mean-centered first, their means are 0. Therefore *M* = *0* and the covariance matrix is *X'X* divided by n (or n-1).
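    [The identities in the comment above are easy to check numerically. A minimal NumPy sketch, with made-up data; the variable names are illustrative:]

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # A small design matrix: rows are observations, columns are variables.
    X = rng.normal(size=(100, 2))

    # The product of a vector with its transpose is the sum of its squared elements.
    x = X[:, 0]
    assert np.isclose(x @ x, np.sum(x**2))

    # Mean-center each column; then X'X / n reproduces the covariance matrix.
    Xc = X - X.mean(axis=0)
    cov_via_matrix = (Xc.T @ Xc) / X.shape[0]
    cov_via_numpy = np.cov(X, rowvar=False, bias=True)  # bias=True divides by n
    assert np.allclose(cov_via_matrix, cov_via_numpy)
    ```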

  • @গোলামমোস্তফা-শ৮থ

    But why should we write variance and covariance in matrix form like this?
    And is the average of y equal to the average of (Xβ)?

  • @GEconomaster112
    @GEconomaster112 1 year ago +1

    Thank you!!

  • @asmaeamhiou1033
    @asmaeamhiou1033 2 years ago +1

    Thank u!