Origin of Markov chains | Journey into information theory | Computer Science | Khan Academy

  • Published on Apr 27, 2014
  • Introduction to Markov chains
    Watch the next lesson: www.khanacademy.org/computing...
    Missed the previous lesson? www.khanacademy.org/computing...
    Computer Science on Khan Academy: Learn select topics from computer science - algorithms (how we solve common problems in computer science and measure the efficiency of our solutions), cryptography (how we protect secret information), and information theory (how we encode and compress information).
    About Khan Academy: Khan Academy is a nonprofit with a mission to provide a free, world-class education for anyone, anywhere. We believe learners of all ages should have unlimited access to free educational content they can master at their own pace. We use intelligent software, deep data analytics and intuitive user interfaces to help students and teachers around the world. Our resources cover preschool through early college education, including math, biology, chemistry, physics, economics, finance, history, grammar and more. We offer free personalized SAT test prep in partnership with the test developer, the College Board. Khan Academy has been translated into dozens of languages, and 100 million people use our platform worldwide every year. For more information, visit www.khanacademy.org, join us on Facebook or follow us on Twitter at @khanacademy. And remember, you can learn anything.
    For free. For everyone. Forever. #YouCanLearnAnything
    Subscribe to Khan Academy’s Computer Science channel: / channel
    Subscribe to Khan Academy: th-cam.com/users/subscription_...

Comments • 84

  • @wirechair
    @wirechair 7 years ago +81

    This was amazing. A little bit cloudy on the transition states, but this has been so much more enlightening than the scholarly literature about it.

  • @TheWebPotato
    @TheWebPotato 4 years ago +4

    This is one of the best videos I have ever seen. Thank you.

  • @katherinetheawesom
    @katherinetheawesom 8 years ago +9

    Great explanation and perfect pacing. Thanks!

  • @thereverend8478
    @thereverend8478 5 years ago

    I love your channel! Your videos always make these things easy!

  • @yenshen7181
    @yenshen7181 3 years ago +3

    This was so beautifully done.

  • @axelnnz
    @axelnnz 6 months ago +1

    Brilliantly explained. I used Markov chains in a uni course, but I never gave much thought to the actual dynamics behind the probabilities. Thanks

  • @nimasanjabi626
    @nimasanjabi626 6 years ago +69

    One of the most inspiring videos I've seen during the last two years. I am an AI student, and the first video that shook me like this was from Vsauce about Zipf's law.

    • @SuperBhavanishankar
      @SuperBhavanishankar 3 years ago

      hi

    • @dsm5d723
      @dsm5d723 3 years ago +3

      I sussed out an older Google PageRank programmer and Complexity Theorist on a social network, and this is the "math detail" that Google used to steal the stochastic identity matrix of society, meaning the collective-personal memory connection. Historically, when pre-literate man built memory palaces we have trouble imagining, devices like sticks and rocks were ENCODING devices, not offloading devices. This is how the brain works, in terms of linear and exponential (metaphorical) connections present in nature and cognition, and people are losing the ability to remember collective events, as their connection to events is shaped by search results and ranking. At the personal and episodic level, all sorts of biological triggers, such as smell, are better System 1 access points to memory than text (literacy). When I searched for events I recall vividly as happening, cultural ones, Google does not connect a past "alternative history" to the present cultural moment, where my accurate memory has been either reframed or excluded.
      I see it as a recursion of humanity down to what is deemed essential on both sides of input/output: resources in the Economic Lagrangian. Here in the US, it is tearing society apart. Last year, I called bullshit on the Google claim of Quantum Supremacy, as that is a movable goalpost, one that does not assume a finite-dimensional and coherent quantum search space. Tang of Austin did the work on the classical clean-up. As a non-math student but with an education in the humanities, Nassim Nicholas Taleb is the one I am currently absorbing. Notice the nature of abstract mathematical processes like this one, at the base level of connection to mathematical reasoning: in physics, nobody can give a structure/function (element/process) definition for the things they describe. Math has the property, due to the errors in reasoning over some recent centuries, of splitting sets and comparing for equivalence. Here, the dependence of looping the function to fit the ratios predetermined by natural laws reveals itself in inverse. Don't say the word fractal around anyone working with equations; after the Leibniz notation won out, it was a buried-in-the-sauce notion. Sorry to chew your ear. We are nowhere close to AI; audio/visual calculators are all we really have, and how we use them is what matters.

    • @jamesb9567
      @jamesb9567 2 years ago +1

      @@dsm5d723 I didn't understand a single thing. Nice.

  • @jcamargo2005
    @jcamargo2005 1 year ago

    Interesting story about Nekrasov & Markov; I didn't know this one. It shows that failed, obscure, and forgotten theories sometimes influence the progress of knowledge.

  • @daviddavidson1090
    @daviddavidson1090 7 years ago +6

    I wish you would cite sources on videos like this that are more about historical facts than your original contributions.
    Good video though!

  • @adityabasu1802
    @adityabasu1802 3 years ago

    Greatly made, so inspiring. Hats off, truly!

  • @klam77
    @klam77 7 years ago +46

    Wow!
    Plato --> Bernoulli --> Nekrasov --> Markov! Philosophy is mixed in! Love it!
    This isn't Sal Khan narrating?

    • @lastua8562
      @lastua8562 3 years ago +3

      Right. But I cannot understand how Mr. Khan knows so much that he can teach all the other videos.

  • @_crispins
    @_crispins 6 years ago

    excellent backgrounder/intro!

  • @mubchamp
    @mubchamp 8 years ago

    Brilliant explanation. :-)

  • @johnfajer7691
    @johnfajer7691 1 year ago

    Great video, thank you!

  • @jonathancauchi6457
    @jonathancauchi6457 4 years ago

    Great video and explanation

  • @sciWithSaj
    @sciWithSaj 3 years ago

    amazing
    need more such videos

  • @reshetech
    @reshetech 16 days ago

    This series of videos, with its breadth of knowledge, its ability to connect seemingly unrelated worlds, the clarity of its explanations, and its visual and auditory beauty, reminds me of the masterpiece "Cosmos" by Carl Sagan and Ann Druyan. Many thanks to the talented creators for crafting such a masterpiece.

  • @bibhutimohapatra7412
    @bibhutimohapatra7412 3 years ago

    great explanation

  • @sergiolucas38
    @sergiolucas38 2 years ago

    Great video, thanks :)

  • @a21871
    @a21871 2 years ago

    Cool! Thank you!

  • @cotillion137
    @cotillion137 9 years ago

    Hey Brit, good video - where did you find the quote from Bernoulli about the universe being governed by ratios?

  • @bingeltube
    @bingeltube 5 years ago

    Very recommendable

  • @julianocamargo6674
    @julianocamargo6674 3 years ago

    Brilliant video.

  • @83vbond
    @83vbond 2 years ago

    Have seen only the first 1:30 yet, and it is already one of the most beautiful introductions I have ever seen in a science video on TH-cam. Thank you

  • @MaitreyiSinha
    @MaitreyiSinha 8 years ago +3

    That was fantastic! Do you have more such videos on the Markov Property?

    • @KhanAcademyLabs
      @KhanAcademyLabs  8 years ago +1

      +Maitreyi Sinha Sorry, that's the only one I made. However, you can see an applied version here: th-cam.com/video/3pRR8OK4UfE/w-d-xo.html

  • @sniff4643
    @sniff4643 3 years ago

    great video

  • @g2baron
    @g2baron 1 year ago

    Thank you!

  • @diegofloor
    @diegofloor 7 years ago

    This is really good!

    • @conortherk8002
      @conortherk8002 7 years ago

      diegofloor EYY... DATS PRETTY GUD!

  • @photon_phi902
    @photon_phi902 3 years ago

    Could this be used for quantum-state Markov chains? And for subatomic particles?

  • @choejunehyeok3358
    @choejunehyeok3358 3 years ago +2

    Fantastic video! But I need to ask something: at 6:19, don't 0 and 1 have to change to make it right? It said that there are more black beans in state 0 than white beans, so obviously it shouldn't be a 50:50 chance??? Please comment

  • @rockthemic12
    @rockthemic12 1 year ago +1

    Very informative. But the background music is distracting.

  • @vinayseth1114
    @vinayseth1114 7 years ago +1

    3:34 - Could someone please tell me the name of the theologian-turned-mathematician? Seems like an interesting guy; I'd love to read up on him, but the name wasn't clear in the video.
    Oh, and brilliant video! Love Khan Academy for sharing these gems for free!

  • @tanyd6627
    @tanyd6627 1 year ago

    So much information in a 7-minute video!!😁

  • @hrivera4201
    @hrivera4201 2 years ago +6

    previous lesson: th-cam.com/video/PtmzfpV6CDE/w-d-xo.html
    next lesson: th-cam.com/video/WyAtOqfCiBw/w-d-xo.html

  • @luckypichuchannel837
    @luckypichuchannel837 2 years ago

    One point that starts everything. ;) In summary

  • @kebman
    @kebman 2 years ago

    OK, so Bernoulli thus described the basis of frequentist statistics?

  • @fallenIights
    @fallenIights 3 years ago +2

    What kind of ending was that

  • @doodelay
    @doodelay 3 years ago

    Gawdayum this is good

  • @marjavanderwind4251
    @marjavanderwind4251 4 years ago

    This is a great explanation, but the transition matrix shown at 6:58 is wrong. The rows should add up to 1, instead of the columns.

    • @rodneycummings7319
      @rodneycummings7319 4 years ago +1

      Why can't the matrix be transposed whereby the columns do add up to 1?
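
Both conventions in this exchange are consistent: a row-stochastic matrix (rows sum to 1) updates row vectors, and its transpose (columns sum to 1) updates column vectors. A minimal sketch, using an illustrative two-state matrix rather than the video's actual numbers:

```python
import numpy as np

# Illustrative 2-state transition matrix (NOT the video's numbers),
# written row-stochastic: P[i, j] = Pr(next state = j | current = i).
P = np.array([[0.5, 0.5],
              [0.3, 0.7]])
assert np.allclose(P.sum(axis=1), 1.0)  # rows sum to 1

# Row convention: a distribution is a row vector, updated as pi @ P.
pi = np.array([1.0, 0.0])
step_row = pi @ P

# Column convention: transpose P (now columns sum to 1) and apply
# it to a column vector. Both give the same next-step distribution.
assert np.allclose(step_row, P.T @ pi)
```

So the transposed, column-stochastic form carries the same information; what matters is that the matrix orientation and vector orientation agree.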

  • @PhDip
    @PhDip 3 years ago

    Only the last 2 minutes of the video really speak to the topic; the first 5 minutes speak to a *very* distantly related philosophy of stochasticity, starting with the Hellenistic geometers.

  • @TheResonating
    @TheResonating 9 years ago +2

    I'm confused, are Markov chains both independent and dependent? Because the next outcome only depends on the current one and not the previous ones before the current.

    • @DeJayHank
      @DeJayHank 9 years ago

      TheResonating It DOES depend on the previous outcome, because it decides which state we are in right now.

    • @johannesgh90
      @johannesgh90 7 years ago

      The next outcome is dependent on the state (because it decides current probabilities), and the state is dependent on the last outcome and not any outcomes before that. To move to the example ... which cup you draw from is dependent on what you drew last (white or black piece), but not dependent on what you drew before that.
      You might say that it IS dependent on the state two steps back because it decided the probabilities of what outcome you got last step, but that is the point ... the probabilities of something happening become irrelevant to the calculations once you know if it happened or not.
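
That memoryless point can be made concrete with a tiny simulation of the two-cup process described above; the draw probabilities here are made-up assumptions, not the video's exact bean counts:

```python
import random

# Hypothetical chance of drawing a white bean from each cup (state).
# These numbers are illustrative assumptions only.
P_WHITE = {0: 0.5, 1: 0.8}

def next_state(state, rng):
    """The next cup depends ONLY on the current cup: draw a bean,
    and let white send you to cup 0, black to cup 1. Draws made
    before the current one are irrelevant to this step."""
    return 0 if rng.random() < P_WHITE[state] else 1

rng = random.Random(0)
state = 0
history = []
for _ in range(5):
    state = next_state(state, rng)
    history.append(state)
```

Note that `next_state` takes only the current state as input: once you know which cup you are in, earlier outcomes cannot change the probabilities, which is exactly the point of the reply above.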

  • @JD-jl4yy
    @JD-jl4yy 6 years ago

    2018.

  • @otiebrown9999
    @otiebrown9999 5 years ago

    Remember Claude Shannon!

  • @eatfruitsalad345
    @eatfruitsalad345 3 years ago +2

    is there a part 2? ending felt a little sudden

    • @nutron3333
      @nutron3333 3 years ago

      See the description!

  • @stevenlam5841
    @stevenlam5841 4 years ago

    I don't take seriously those that peddle Plato. You're an educational channel! Where's Aristotle?

  • @santerisatama5409
    @santerisatama5409 2 years ago

    Doesn't the idea of Markov chains violate the undecidability of Halting problem?

  • @EdgarFGirtainIV
    @EdgarFGirtainIV 4 years ago

    Who did the music in this video?

    • @kimberleythorsen8384
      @kimberleythorsen8384 2 years ago

      I wanted to ask about this too, but wouldn't call it music; it sounds like binaural beats or isochronic tones or something in those realms, perhaps to help with cognitive recall? That would be in innocence; I thought it was a little unnerving/anxiety-inducing...lol

  • @theSpicyHam
    @theSpicyHam 1 year ago

    it's like a generator

  • @obsoletepowercorrupts
    @obsoletepowercorrupts 11 months ago

    A fact is an intervention. It is truth. A singular thing we know has a limit and is thereby finite. In order to be infinite, one must also occupy the finite, otherwise there would be a place the infinite could not go, and so it would no longer be infinite. A fact can make its way into the mind by sheer luck or by pure brilliance which in its completeness would be capable of being by intervention. If it can be, then it will be. Brilliance takes will. So, for Pavel Nekrasov, his free will is an instance that describes the divine. As a particle in a bell jar vacuum of the mind, like a particle, the fact exists because it must exist. When forced to form a word not your own (a lie), one can at some point find a way to say "no" (or to have already said it), so as to defy the false. An untrue word is not a lie if it is simply not yet true. It can become true. That defiance can be fearlessness, a path of truth and thusly love. Love exists. Love is. It is better to be afraid for than to be afraid of, and each time, that division forms an asymptote to fearlessness. Will is of intervention. The gift. The increased anecdotal instances are empirical evidences that make the truth clearer, a discovery more than constructing an invention. The equilibrium result does not account for the observer as an external entity, messaging. Every state transition is a message.
    My comment has no hate in it and I do no harm. I am not appalled or afraid, boasting or envying or complaining... Just saying. Psalm 23: Giving thanks and praise to the Lord and peace and love. Also, I'd say Matthew 6.

  • @WalterSamuels
    @WalterSamuels 8 months ago

    "Transision"...

  • @lastua8562
    @lastua8562 3 years ago

    The ending was quite sad.

  • @kebman
    @kebman 2 years ago

    Which Nekrasov? Pavel Alekseevich Nekrasov (1853-1924)?

  • @Hiphop101ize
    @Hiphop101ize 6 years ago

    How do you watch videos at double speed on your phone?

    • @2024comingforyou
      @2024comingforyou 3 years ago

      Click on the 3 vertical dots at the top right corner, you'll find out

  • @YogeshPersonalChannel
    @YogeshPersonalChannel 5 years ago

    The original video is here: th-cam.com/video/o-jdJxXL_W4/w-d-xo.html

  • @robertc6343
    @robertc6343 2 years ago

    I’m not sure this video really explains the ORIGIN of the Markov chain. It just introduces it. Again, why would the state vector converge to a stable set of numbers?

  • @sidhollander949
    @sidhollander949 9 years ago

    SOUND!!!!! Are you kidding?

  • @bubbleboy821
    @bubbleboy821 2 years ago

    That ending is infuriating

  • @quantaloop4002
    @quantaloop4002 6 months ago

    Not as clearly as Sal was able to explain.

  • @moneyeye24
    @moneyeye24 2 years ago

    This video is not good. It makes the Markov process sound dependent. A Markov process is a random process in which the future is independent of the past. @ 6:22 "Markov proved that when every state in the machine is reachable, when you run these machines in a sequence, they reach equilibrium... no matter where you start, once you begin the sequence, the number of times you visit each state converges to some specific ratio, or probability." The probability of getting a light bean or a black bean is still 0.5, still independent, although one will stay in cup 0/state 0 when one picks up 1 light bean. If the light beans and black beans are equally large numbers in both cups/states, the probability of both state 0 and state 1 should be 0.5. The rule only forces one to stay in cup 0/state 0 after the result is shown, and it doesn't affect the probability of the next choice, so the next choice itself is still independent. I don't see the point of introducing the Markov process this way, which only confuses people.
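
For what it's worth, the equilibrium claim in the video is about long-run visit frequencies, not about successive draws being independent. A quick simulation (with made-up transition probabilities, not the video's bean counts) shows the visit frequencies converging to the same ratio regardless of the starting state:

```python
import random

# Illustrative 2-state chain: P[i][j] = Pr(move from state i to j).
# These numbers are assumptions for the sketch, not from the video.
P = [[0.5, 0.5],
     [0.2, 0.8]]

def visit_freqs(start, steps, seed):
    """Run the chain and return the fraction of time spent in each state."""
    rng = random.Random(seed)
    counts = [0, 0]
    state = start
    for _ in range(steps):
        state = 0 if rng.random() < P[state][0] else 1
        counts[state] += 1
    return [c / steps for c in counts]

# From either starting state, the frequencies approach the stationary
# distribution pi = (2/7, 5/7) ≈ (0.286, 0.714) for this matrix.
f0 = visit_freqs(0, 100_000, seed=1)
f1 = visit_freqs(1, 100_000, seed=2)
```

The dependence is real (the next draw's odds depend on the current cup), yet the long-run ratios still settle, and that coexistence is exactly what Markov was demonstrating against Nekrasov.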

  • @akshaydalvi2317
    @akshaydalvi2317 1 year ago

    The video is too disturbing and annoying, with constant shaking and needless activity going on within it. Maybe you are trying too hard to make the content entertaining. Learning is my first objective here; entertainment comes second.

  • @tanyd6627
    @tanyd6627 1 year ago

    The Spanish translation is not good! 🥲