The Generalized Likelihood Ratio Test

  • Published on Nov 7, 2024

Comments • 24

  • @kaiserhaz
    @kaiserhaz 6 years ago +2

    Thank you dear Sir for this video.
    I would like to add something to the discussion.
    The test statistic L'(x) was mentioned in (Kay1998, Chapter 9, p. 342) as being a non-CFAR test. This is because the test distribution is in fact Gaussian and depends on the unknown variance. The expression itself is correct, but the threshold required for a given PFA depends on the unknown variance and is therefore not constant.
    Obtaining the MLE by least squares, as is done in the video, is the correct approach, but under H0 the MLE needs to be restricted (Kay1998, Chapter 9, Appendix A). This is easier to see when the hypothesis test is reposed as a parameter test (i.e., H0: A = 0, sigma0^2; H1: A != 0, sigma1^2). Since the unrestricted estimate of A equals 0 with probability zero, the MLE under H0 must be restricted in order to obtain the correct estimate for H0. If this is taken into account, along with some scaling factors, the test distribution indeed becomes an F distribution.
    Also, I would love to know how the performance of the tests is characterized (i.e., how the test distribution is obtained).
    Thank you.
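    A small worked sketch to make the restricted-MLE point concrete (assuming the model x = A*s + w with zero-mean white Gaussian noise w and s scaled so that s^T*s = 1; the notation here is illustrative, not taken from the video): under H0 the amplitude is fixed at A = 0, so the only free parameter is the variance and its MLE is sigma0_hat^2 = (1/N)*x^T*x. Under H1 the MLEs are A_hat = x^T*s and sigma1_hat^2 = (1/N)*(x^T*x - (x^T*s)^2). Substituting these estimates into the two likelihoods, the exponential factors both reduce to exp(-N/2), and the generalized likelihood ratio becomes (sigma0_hat^2/sigma1_hat^2)^(N/2) = [x^T*x / (x^T*x - (x^T*s)^2)]^(N/2), a monotone function of L'(x).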

  • @allsignalprocessing
    @allsignalprocessing 11 years ago

    The GLRT can be applied to the case where s is unknown - in that case you have to estimate s from the data (using the MLE) and then plug that estimate into the distribution in the likelihood ratio. Obviously, the less that is known about the signal s, the poorer the performance.
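    A small sketch of the fully unknown case (assuming a known noise variance sigma^2, purely for illustration): under H1 the MLE of an unconstrained s is simply s_hat = x, so the maximized H1 likelihood has a zero exponent and the generalized likelihood ratio reduces to exp(x^T*x / (2*sigma^2)). Taking the logarithm, the GLRT just compares the energy x^T*x to a threshold (an energy detector), which illustrates the performance loss when no structure is imposed on s.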

  • @allsignalprocessing
    @allsignalprocessing 11 years ago

    Maximum likelihood estimation is normally applied to unknowns that are deterministic, not random. However, I'm guessing this is a hypothesis testing problem in which H_0 is s = 1, and H_1 is s = -1. In that case you can use the likelihood ratio (if everything else is known) or the GLRT by applying it separately to each hypothesis.

  • @christianescobar467
    @christianescobar467 7 years ago

    Could you please help me with deriving the GLRT to obtain the test statistic for the following hypotheses:
    H0 : x[n] = w[n], n = 0, 1, . . . , N − 1
    H1 : x[n] = y[n] + w[n] = A cos(2πζ ln(n − n0) + φ) + w[n], n = 0, 1, . . . , N − 1,
    with unknown parameters ζ, φ, n0, and A. In addition, the noise samples w[n] are zero-mean additive white Gaussian with unknown variance σ^2.
    Or any similar example you have in mind? Thank you very much in advance.

  • @kiranpadakanti4998
    @kiranpadakanti4998 3 years ago

    Thank you sir, it is a great explanation. Can we get MATLAB code for this example?

  • @DataBus
    @DataBus 10 years ago +1

    Thanks Mr. Van Veen

  • @rakesh1991able
    @rakesh1991able 7 years ago

    Good explanation sir! Do you have MATLAB code to implement these algorithms?

  • @sanariaz2515
    @sanariaz2515 4 years ago

    What is the effect of a type I error on the GLRT?

  • @toanvodai3314
    @toanvodai3314 8 years ago

    Can you give an easy-to-read reference on the GLRT, please? Thank you very much.

  • @noramaliaab
    @noramaliaab 10 years ago

    What is the difference between the likelihood ratio test and the generalized likelihood ratio test?

    • @allsignalprocessing
      @allsignalprocessing 10 years ago +6

      The probability density functions (pdf) in a LRT are fully known - there are no unknown parameters. The LRT is the uniformly most powerful test of simple binary hypotheses. If there are unknown parameters in the pdfs, such as signal amplitude or noise variance, then there is no known uniformly most powerful test and we resort to sensible approaches to forming a test statistic. The GLRT is one such approach. It replaces the unknown parameters in the pdfs with their maximum likelihood estimates and then forms a likelihood ratio. So everything must be known to apply the (optimal) LRT, while the GLRT is not necessarily optimal as it uses estimated values for the unknown parameters in the pdfs.
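      A minimal numerical sketch of the GLRT for the example discussed here (a known signal shape s with unknown amplitude and unknown noise variance; the signal shape, amplitude, and noise level below are illustrative choices, not taken from the video):

      import numpy as np

      rng = np.random.default_rng(0)

      N = 64
      n = np.arange(N)
      s = np.cos(2 * np.pi * 0.1 * n)   # illustrative known signal shape
      s = s / np.linalg.norm(s)         # scale so that s^T*s = 1

      def glrt_statistic(x, s):
          # L''(x) = (x^T*s)^2 / (x^T*x - (x^T*s)^2); large values favor H1
          c = x @ s                     # MLE of the amplitude when s^T*s = 1
          return c**2 / (x @ x - c**2)

      sigma = 1.0
      x_h0 = sigma * rng.standard_normal(N)            # H0: noise only
      x_h1 = 8.0 * s + sigma * rng.standard_normal(N)  # H1: signal + noise

      print("statistic under H0:", glrt_statistic(x_h0, s))
      print("statistic under H1:", glrt_statistic(x_h1, s))

      Comparing the statistic to a threshold chosen for a desired false-alarm rate (for example from the F distribution mentioned in an earlier comment) completes the detector.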

    • @KleineInii
      @KleineInii 5 years ago

      @allsignalprocessing Hey Barry, nice video! Which literature do you use, so that I can cite it in my thesis?

  • @simritigoshal9971
    @simritigoshal9971 10 years ago

    I was trying to derive L''(x), but I am not able to get x^T*s in the numerator, whereas the denominator comes out exactly the same. I just wanted to confirm whether the final expression for L''(x) in the lecture is correct. My numerator is x^T*x.

    • @allsignalprocessing
      @allsignalprocessing 10 years ago +1

      There is a small trick involved. Rewrite the numerator as
      x^T*x = (x^T*x - (x^T*s)^2) + (x^T*s)^2.
      Then, dividing by the denominator x^T*x - (x^T*s)^2, the first term gives 1, so
      L'(x) = 1 + (x^T*s)^2 / (x^T*x - (x^T*s)^2),
      and therefore L''(x) = L'(x) - 1.
      The offset of 1 is absorbed into the threshold.
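      A sketch of where the test distribution comes from under H0 (with s scaled so that s^T*s = 1): x^T*s ~ N(0, sigma^2), so (x^T*s)^2/sigma^2 is chi-squared with 1 degree of freedom, while x^T*x - (x^T*s)^2 is the energy of the component of x orthogonal to s and equals sigma^2 times an independent chi-squared variable with N-1 degrees of freedom. Hence (N-1)*L''(x) follows an F distribution with (1, N-1) degrees of freedom; the unknown sigma^2 cancels, so the threshold can be set for a desired PFA without knowing the variance, consistent with the F-distribution remark in the first comment above.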

  • @mahdkhalid366
    @mahdkhalid366 6 years ago

    Can you please share MATLAB code to implement this? It would be very helpful for me to visually understand it. Many thanks.

  • @sdsachin24
    @sdsachin24 9 years ago

    Hi sir, I have derived the GLRT for the hypothesis mentioned in the above video. Can I know how to plot the P_d vs. P_fa curve? How do I obtain P_d and P_fa values with respect to a varying threshold?
    Thanks in advance.
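    One common way to obtain such a curve is by Monte-Carlo simulation (a sketch with illustrative signal, amplitude, and noise choices, not code from the video): generate many noise-only records and many signal-plus-noise records, compute the test statistic for each, and sweep a threshold; P_fa is the fraction of H0 statistics above the threshold and P_d is the fraction of H1 statistics above it.

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(1)

    N, trials = 64, 5000
    n = np.arange(N)
    s = np.cos(2 * np.pi * 0.1 * n)
    s = s / np.linalg.norm(s)            # scale so that s^T*s = 1
    A, sigma = 6.0, 1.0                  # illustrative amplitude and noise level

    def stat(X):
        # GLRT statistic (x^T*s)^2 / (x^T*x - (x^T*s)^2) for each row of X
        c = X @ s
        return c**2 / (np.sum(X**2, axis=1) - c**2)

    T0 = stat(sigma * rng.standard_normal((trials, N)))          # H0: noise only
    T1 = stat(A * s + sigma * rng.standard_normal((trials, N)))  # H1: signal + noise

    # Sweep the threshold over the observed H0 statistics to trace out the curve.
    thresholds = np.sort(T0)[::-1]
    P_fa = np.array([np.mean(T0 > t) for t in thresholds])
    P_d = np.array([np.mean(T1 > t) for t in thresholds])

    plt.plot(P_fa, P_d)
    plt.xlabel("P_fa")
    plt.ylabel("P_d")
    plt.title("Monte-Carlo ROC for the GLRT statistic")
    plt.show()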

  • @ilansch1
    @ilansch1 11 years ago

    I have this homework where s is unknown, but we know its values are +1 and -1 with probability 0.5 each. I don't really know how to use that. Can you give me a hint? Thank you very much.

  • @ilansch1
    @ilansch1 11 years ago

    What if s is unknown?

  • @tomasnobrega8087
    @tomasnobrega8087 3 years ago

    Thank you, the content is great.

  • @celestem6453
    @celestem6453 9 years ago +1

    Awesome video!

  • @noramaliaab
    @noramaliaab 10 years ago +1

    Thank you so much!! The explanation is very helpful. Thank you again :D

  • @aryamanful
    @aryamanful 5 years ago

    Thanks boss. This is great!

  • @xesan555
    @xesan555 8 years ago

    Thanks for a valuable contribution.

  • @HaVuong2023
    @HaVuong2023 10 years ago

    Thanks