Claude Shannon's Information Entropy (Physical Analogy)

  • Published Nov 11, 2024

Comments • 144

  • @ArtOfTheProblem
    @ArtOfTheProblem  3 years ago

    Link to series playlist: th-cam.com/play/PLbg3ZX2pWlgKDVFNwn9B63UhYJVIerzHL.html


  • @realfuzzhead
    @realfuzzhead 11 years ago +59


    To whoever runs ArtOfTheProblem, I want to personally thank you for all of the time you have spent making these videos. I came across one of them on a math forum a few months ago and proceeded to consume every video you have made. Your videos led me to absolutely fall in love with information theory and persuaded me to declare a major in computer science. Claude Shannon has replaced Richard Feynman as my favorite scientist. I can't thank you enough; these videos literally changed the path I'm taking through life.

    • @johnallard2429
      @johnallard2429 7 years ago +1

      He's the same guy, Khan academy liked his videos and brought them onto the site

    • @ArtOfTheProblem
      @ArtOfTheProblem  5 years ago +2

      stay tuned for more!

    • @ArtOfTheProblem
      @ArtOfTheProblem  3 years ago +5

      hey! just checking in, how did your major end up, where did it lead you?

    • @johnallard2429
      @johnallard2429 3 years ago +7

      @@ArtOfTheProblem Hey! Thanks so much for checking in. I ended up graduating w/ a BS in Computer Science from UC Santa Cruz and I'm currently four years into my career as a backend software engineer in Silicon Valley. I just left the first startup I joined out of college and joined a new one and I'm currently working on building a platform for training data management for enterprise ML applications. I really wanted to thank you again for making these videos, they altered the course of my studies and my career and, hence, my life. I still, to this day, read about information theory in my own time and still hold Claude Shannon up to be one of my favorite scientists. (I just noticed I'm posting from a new email address but it's me!)

    • @ArtOfTheProblem
      @ArtOfTheProblem  3 years ago +6

      @@johnallard2429 wow congrats. This note made my morning and is the reason I started this channel. You're probably way ahead of me now, i'm struggling to simplify my understanding of transformers at the moment for the next video :)

  • @iliyasone
    @iliyasone 11 months ago +3

    Thank God I found this video again.
    This is still the best explanation of Shannon's formula on TH-cam.

    • @ArtOfTheProblem
      @ArtOfTheProblem  11 months ago

      nobody finds this anymore, glad you did!

  • @ako969
    @ako969 6 years ago

    This Shannon guy was way ahead of his time: the father of data compression at a time when there was no Internet, no computers, not even binary files.

    • @ako969
      @ako969 6 years ago

      Ok. Not just data compression: all digital encoding. This guy rivals Alan Turing, if not betters him. (Essentially the first real computer guy in terms of 'digitizing'.)

  • @mycityofsky
    @mycityofsky 8 years ago +4

    This is the most intuitive video I've ever seen about Shannon entropy, thank you!

  • @dangerlibya2010
    @dangerlibya2010 8 years ago +3

    I'm studying computer science and I wish all my professors explained things just like you !! that would've saved a lot of time !

  • @5ystemError
    @5ystemError 11 years ago +1

    Thanks for uploading a new video!!! I just started watching these a few days ago and was bummed out when I saw you hadn't uploaded one in a little while. These are awesome.

  • @emmamovsesyan
    @emmamovsesyan 2 years ago

    I consider that Khan Academy is one of the best things that we have in this internet era

  • @soumyadeeproy6611
    @soumyadeeproy6611 5 years ago +2

    Extremely intuitive explanation, so well explained that I wonder why the heck I couldn't understand it until you explained it this way!

    • @ArtOfTheProblem
      @ArtOfTheProblem  5 years ago

      I'm thrilled to hear this video helped you. it was an original explanation of this concept I don't think many are aware of

  • @SirusDas
    @SirusDas 7 years ago

    This is the best explanation on the INTERNET! Amazing, awesome, and thank you!

  • @GuruKal
    @GuruKal 8 years ago +2

    Definitely one of my favorite explanations of all time

  • @midevil656
    @midevil656 11 years ago

    Could never understand the information entropy unit in class, but you nailed it here. Thanks!

  • @319quang
    @319quang 8 years ago +36

    These videos are extremely informative and very well done! Thank you for all your hard work! :)

    • @ArtOfTheProblem
      @ArtOfTheProblem  8 years ago +6

      +quang nguyen Thanks for feedback! Our new series has begun

  • @spandanhetfield
    @spandanhetfield 7 years ago +6

    To anyone wondering about this: why would you ask questions such as "AB vs. CD" in the first case, and "A vs. BCD" in the second case? The reason is that this ensures both answers occur with 50% probability. If at every step you ask a question whose two answers are equally likely, the AVERAGE number of questions you'll have to ask (in other words, the average depth of that tree of questions and answers) will be minimal. Why? You'll need a little basic computer science experience to think this one through; maybe a good reason to take a course on data structures!
    More interestingly, this is also the reason why binary search is more efficient than splitting a sorted list into 3 parts. It's a simple proof, I encourage you to think it through :)

    • @PedroTricking
      @PedroTricking 5 years ago

      > If at every step, you ask 2 questions of equal probability, the AVERAGE number of times you'll have to ask the questions (in other words, follow that tree structure of question/answers), will be minimum. Why
      Why???? Proof please
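Not a full proof, but the claim is easy to check numerically. A sketch in Python, using the probabilities from the video; the depth tables are just the number of yes/no questions each symbol costs in each tree:

```python
# Distribution from the video: P(A)=0.5, P(B)=0.125, P(C)=0.125, P(D)=0.25.
p = {"A": 0.5, "B": 0.125, "C": 0.125, "D": 0.25}

# Tree 1: ignore the probabilities and always split the four symbols in half
# ("AB vs CD" first), so every symbol costs exactly 2 questions.
balanced = {"A": 2, "B": 2, "C": 2, "D": 2}

# Tree 2: make every question a 50/50 split of the remaining probability
# ("Is it A?", then "Is it D?", then "Is it B?").
skewed = {"A": 1, "B": 3, "C": 3, "D": 2}

def avg_questions(depths):
    # Expected number of questions = sum over symbols of P(symbol) * depth.
    return sum(p[s] * depths[s] for s in p)

print(avg_questions(balanced))  # 2.0
print(avg_questions(skewed))    # 1.75, which equals the entropy H = sum of p*log2(1/p)
```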

  • @baseballsbetter
    @baseballsbetter 11 years ago +1

    Great video! Glad to see you are back

  • @kc122
    @kc122 11 years ago

    It's great to see a new video after a while :) thank you!

  • @pasansamarakkody4053
    @pasansamarakkody4053 1 year ago

    Simply great explanation. Thanks!

  • @tmstani23
    @tmstani23 5 years ago +1

    Great video and super interesting topic. I think this definition of information is very counter-intuitive relative to the everyday, common-sense use of the word "information". People often think of information as a source of knowledge or as having some inherent use. The common-sense definition of information is more akin to education and implies some utility, whereas this definition remains indifferent to utility and is closer to complexity, or an implied amount of substance. This makes sense, since entropy and information are both increasing in the wild. It is fascinating that information can be proven to be entropic. I wonder if there is a limit to information, or if increasing entropy means the universe is infinite in possibilities? Maybe we can only observe an information state, but information itself is infinite.

  • @JankoKandic
    @JankoKandic 11 years ago +1

    Good to have you back :)

  • @marcinkovalevskij5998
    @marcinkovalevskij5998 7 years ago

    Thank you for all your hard work making these videos. Every video is made putting attention to the smallest detail. And that intro sound...

    • @ArtOfTheProblem
      @ArtOfTheProblem  7 years ago +1

      thanks so much, working now on a Bitcoin video I'm excited about

  • @niharikarastogi7169
    @niharikarastogi7169 7 years ago +2

    I like the creativity u put in explaining anything.
    Simply wow.. :)

    • @ArtOfTheProblem
      @ArtOfTheProblem  7 years ago

      Thanks for your feedback , much appreciated

  • @RimstarOrg
    @RimstarOrg 11 years ago +10

    Very clear explanation. Thanks.


  • @pezaventura
    @pezaventura 9 years ago +2

    These videos are amazing!

  • @KenCubed
    @KenCubed 11 years ago

    I am greatly enjoying these!

  • @bartziengs9602
    @bartziengs9602 7 years ago +1

    Very nice video and clear explanation!

  • @diego898
    @diego898 11 years ago +2

    Thank you for continuing! These are fantastic!

  • @ncaralicea
    @ncaralicea 7 years ago

    Very nice intuitive explanation. Thank you!

  • @necrosudomi420thecuratorof4
    @necrosudomi420thecuratorof4 1 year ago

    Bro, I watch lots of vids and it's the first time I've seen a new approach that describes it this well. Pretty damn well articulated, thanks for sharing it.

  • @6san6sei6
    @6san6sei6 11 years ago

    i simply love your videos.

  • @onecanina
    @onecanina 11 years ago +2

    Niiiiice!! Thank you, i was waiting for the longest time!

  • @ThanadejR
    @ThanadejR 3 years ago

    Super clear explanation.

  • @ky-effect2717
    @ky-effect2717 9 months ago

    great explanation

  • @hl2mukkel
    @hl2mukkel 11 years ago +4

    HE IS ALIVE! =D WOOHOO

  • @jinnycello
    @jinnycello 7 years ago

    Very easy to understand. THank you!

  • @Chr0nalis
    @Chr0nalis 8 years ago +8

    The beginning of the video is misleading and implies that the second machine is not random. It is in fact random and the difference between them is that the first generates a uniform distribution whereas the second doesn't.

    • @ravenimperium1756
      @ravenimperium1756 7 years ago

      Hmm, is it not so that the sampling (from the distribution) is random, but the actual values are not? For example, I can imagine a distribution that's just a single line at a specific value; random sampling from that distribution will then always give you the same value.
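The distinction the two comments above are drawing can be made concrete with Shannon's formula: both machines are "random", they just have different distributions, and the degenerate single-value distribution is the limiting case. A sketch:

```python
import math

def entropy(probs):
    # H = sum of p * log2(1/p), skipping zero-probability outcomes.
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

print(entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0 bits: uniform, most unpredictable
print(entropy([0.5, 0.125, 0.125, 0.25]))  # 1.75 bits: still random, less uncertain
print(entropy([1.0, 0.0, 0.0, 0.0]))       # 0.0 bits: sampling is "random", output is not
```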

  • @cy9987
    @cy9987 4 years ago

    Wow, such a well made video! You totally deserve more views and subs, my good sir!

  • @brunomartel4639
    @brunomartel4639 4 years ago +1

    Awesomely explained! What's the song at the end? I love it, it's so optimistic!

    • @ArtOfTheProblem
      @ArtOfTheProblem  4 years ago

      thanks, all the music is original for this series

  • @77Fortran
    @77Fortran 9 years ago +1

    Incredible video, thank you! :)

  • @steventmhess
    @steventmhess 11 years ago

    Beautiful work.

  • @taylor8294
    @taylor8294 11 years ago

    Great, great video

  • @Username-rm1rl
    @Username-rm1rl 3 years ago

    There's an error/typo at 1:26 where the 2nd question repeats "Is it B?", it should read "Is it C?" or "Is it D?"

  • @Qoooba95
    @Qoooba95 7 years ago +2

    It is very well explained indeed, but I still have a certain doubt. I like to think about entropy in terms of random variables (like the Wikipedia definition with a discrete random variable X, where the possible events are values of X), but I fail to understand the following: how do you determine the entropy of an English word, for example? What is the random variable in that case? I see that for the set of English characters we could determine the entropy according to the formula presented in the video, as X would represent the character that appears as the message, and each character has a certain probability of appearing, so we have a probability distribution. But I've seen people determine the entropy of a word by calculating the probabilities of the characters as appearance rates (ABA: 1/3 for B, 2/3 for A) and treating that as the probability distribution... If anyone could address that and shed some light, I would be extremely grateful! Thanks!
    EDIT:
    I just realised that, given a sequence of characters that appear one after another independently, the entropy of such a message, considered as a value of a random vector, would be the sum of the entropies of the random variables that make up the vector (is that right?). So it makes more sense to me now and seems natural (every additional character provides, on average, the same amount of information), but I'm still puzzled by the entropy of a given English word... I hope someone can respond. Also, an incredibly interesting topic! And this channel is just great!

    • @MN-sc9qs
      @MN-sc9qs 5 years ago

      Hi. Maybe this will help. What was presented has each random letter being statistically independent of the following letter. However, what you seem to be describing is the situation where the random letters are not statistically independent, which is true for the English language.
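The "appearance rate" calculation discussed above can be sketched like this. It treats the word's letter frequencies as the probability distribution, which implicitly assumes the letters are independent and identically distributed (not true of real English, as the reply notes):

```python
from collections import Counter
import math

def empirical_entropy(word):
    # Use each letter's appearance rate as its probability,
    # e.g. "ABA" -> P(A)=2/3, P(B)=1/3.
    counts = Counter(word)
    n = len(word)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

print(empirical_entropy("ABA"))   # about 0.918 bits per letter
print(empirical_entropy("ABCD"))  # 2.0 bits per letter: all letters equally likely
```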

  • @stevodestructo
    @stevodestructo 11 years ago

    Love your videos. Thank you.

  • @chenphillis8612
    @chenphillis8612 7 years ago

    very clear explanation! thank you

  • @sovansam2793
    @sovansam2793 6 years ago

    Thanks a lot, man... this clear explanation got me to the point.

  • @gaboqv
    @gaboqv 4 years ago

    Another way to write Shannon's equation would be with log base 1/2 of p: that is, how many times you would have to halve to approach a probability equal to p, and therefore how many levels of questions are needed to fairly accommodate a symbol with that probability.
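That identity, log base 1/2 of p equalling the video's log2(1/p), is easy to check numerically. A small sketch:

```python
import math

for p in [0.5, 0.25, 0.125, 0.3]:
    # log_{1/2}(p) counts how many halvings get you down to probability p...
    halvings = math.log(p, 0.5)
    # ...and it equals the log2(1/p) used in the video.
    assert math.isclose(halvings, math.log2(1 / p))
    print(p, halvings)  # 0.5 -> 1, 0.25 -> 2, 0.125 -> 3 question levels
```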

  • @kamalgurnani924
    @kamalgurnani924 7 years ago

    Very nice explanation! thanks for the help :)

  • @DarshanKalola
    @DarshanKalola 7 years ago

    Great video!

  • @anthonyvance7
    @anthonyvance7 8 years ago

    Great video. Where can I find the cool music at the end?

    • @ArtOfTheProblem
      @ArtOfTheProblem  8 years ago

      +Anthony Vance Thanks, cam posts the music here (cameronmichaelmurray.bandcamp.com/album/art-of-the-problem-volume-1-gambling-with-secrets) though that song isn't yet listed

    • @anthonyvance7
      @anthonyvance7 8 years ago

      +Art of the Problem Great, thanks. I love the work of your team. Inspiringly creative.

  • @nm800
    @nm800 5 years ago

    Please could anybody explain the equivalence of the two formulas at 6:21? THANKS

    • @notmychairnotmyproblem
      @notmychairnotmyproblem 4 years ago

      This is a year late but by using negative exponent properties we can express 1/p as p^(-1). And then by applying properties of logarithms, we can use the power rule to bring the -1 to the outside of the expression (hence the negative sign in the front).
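That log identity is exactly what makes the two ways of writing the formula agree; a quick check with the distribution from the video:

```python
import math

probs = [0.5, 0.125, 0.125, 0.25]
h_positive = sum(p * math.log2(1 / p) for p in probs)  # H = sum of p * log2(1/p)
h_negative = -sum(p * math.log2(p) for p in probs)     # H = -(sum of p * log2(p))
print(h_positive, h_negative)  # both 1.75
```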

  • @serkansandkcoglu3048
    @serkansandkcoglu3048 2 years ago

    At 3:49, for D, it should have been 0.25X2 instead of 0.25X4, right?

  • @raghavdevgon5124
    @raghavdevgon5124 7 years ago

    Thanks for such videos :)

  • @nic741
    @nic741 1 month ago +1

    At timestamp 3:49 there is a mistake: it should be 0.25(2), not sure why it's 4. Great video btw.

    • @ArtOfTheProblem
      @ArtOfTheProblem  1 month ago

      glad you found it, it's an oldie!

    • @nic741
      @nic741 1 month ago

      @@ArtOfTheProblem I had a related question to the video. Say we played the same game, but instead of ABCD we used a fair die with 1/6 probability for each face.
      I use a strategy where I first ask if the outcome is >4. If yes, then I ask if the outcome is {4}, and if no then I ask if the outcome is {5}. We do this until we have a tree diagram similar to the one in your video (timestamp 3:34).
      My question is whether the probability of finding the correct outcome, say {4}, remains the same as at the beginning, that being 1/6, or whether it drops to 1/3 after the first question?
      Similar resources I found indicate that it should stay at 1/6, but I can't see why. After asking the first question, it seems to me like the probability of finding {4} should fall to 1/3.
      The end goal is the same as yours in the video: to find the average number of questions using this strategy and compare it against the information entropy.
      I realize my question comes a decade after the video was first posted, but I would appreciate your input if you have the time.
      Thank you :)
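A sketch of the die question above (assuming the intended tree splits {1,2,3} vs {4,5,6} first, then halves each side). Both probability views in the question are right; they just condition on different things:

```python
import math

# Question depths per face under the halving strategy:
# Q1 splits {1,2,3} vs {4,5,6}; each half then takes 1 or 2 more questions.
depth = {1: 2, 2: 3, 3: 3, 4: 2, 5: 3, 6: 3}

avg_questions = sum(d * (1 / 6) for d in depth.values())
print(avg_questions)  # about 2.67 questions per roll on average
print(math.log2(6))   # about 2.585 bits: the entropy lower bound

# The unconditional probability of ending at face 4 is still 1/6; the 1/3 is
# the probability of face 4 *given* the first answer was "it's in {4,5,6}".
p_yes = 3 / 6
p_4_given_yes = (1 / 6) / p_yes
print(p_yes * p_4_given_yes)  # 1/6: both views agree
```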

  • @jamesimmo
    @jamesimmo 7 years ago

    Very good video, thanks, but I see (from my UoM physics course notes) that the natural logarithm is used instead of the base-2 logarithm: is this choice arbitrary or does it vary to suit a given situation?
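On the base question above: the choice only changes the unit, not the physics. log2 gives entropy in bits, the natural log gives it in nats, and the two differ by a constant factor of ln(2). A quick sketch:

```python
import math

p = 0.25
bits = math.log2(1 / p)  # 2.0 bits of surprise
nats = math.log(1 / p)   # about 1.386 nats: the same surprise, base e
print(bits, nats, nats / math.log(2))  # dividing by ln(2) converts nats to bits
```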

  • @javipdr19
    @javipdr19 11 years ago +1

    Yay! New video :)

  • @SashankDara
    @SashankDara 11 years ago

    Just awesome !

  • @mouseeatcats
    @mouseeatcats 4 years ago

    Really helpful thanks

  • @ImmacHn
    @ImmacHn 10 years ago +3

    So entropy is good for encryption but bad for communication, right?

    • @nathanhart3240
      @nathanhart3240 9 years ago +2

      +Immac (RockLeet) Good incite! Entropy is good for data encryption (because more guesses/questions are needed to crack it), but entropy is bad for data compression (because maximally compressed data, by definition, has the highest entropy for its size). Data compression is good for communication.
      Your statement is correct when referring to compression, but certain aspects of communication like bandwidth efficiency or error-correcting codes can improve with entropy. :-)

    • @nathanhart3240
      @nathanhart3240 9 years ago

      +Immac (RockLeet) Sorry for misspelling "insight"

  • @FernandoGonzalez-qg7em
    @FernandoGonzalez-qg7em 8 years ago +7

    Why is the number of outcomes given by 1/p??
    If the probability was .1 then the number of outcomes would be 10, and if p was .9 the number of outcomes would be 10/9?? I didn't get that =/
    But these videos are amazing, thank you for making such a good video!

    • @ArtOfTheProblem
      @ArtOfTheProblem  8 years ago +4

      +Fernando Gonzalez - use base 2 numbers in your examples to make it clearer. If p = 0.25 then # outcomes = 4 (1 in 4 chance = 25%), if p=0.125 then # outcomes = 8 (1 in 8 chance = 12.5%)...etc.

    • @johnhobson2106
      @johnhobson2106 6 years ago

      Using the example in the video, would you be kind enough to explain what the eight outcomes are for C for example? It seems like there are still only 4 possible outcomes, and that to get 8 outcomes after dividing 1 by p, you would have to assume that all the outcomes are equally likely (ie. also p=.125).

    • @mappingtheshit
      @mappingtheshit 5 years ago

      @John Hobson Here they are, 8 possibilities: 000, 001, 010, 100, 110, 101, 011, 111. I hope this helped you. Otherwise, ask.
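One way to read the 1/p step in this thread: a symbol of probability p is as surprising as one outcome among 1/p equally likely outcomes. The "number of outcomes" can be fractional (p = 0.9 gives 10/9), and that is fine because only log2(1/p) is ever used. A sketch:

```python
import math

for p in [0.5, 0.25, 0.125, 0.9]:
    outcomes = 1 / p            # "equivalent number of equally likely outcomes"
    info = math.log2(outcomes)  # bits of surprise for one symbol
    print(p, outcomes, info)
# p=0.9 gives 10/9 "outcomes" and about 0.152 bits: a near-certain
# symbol carries almost no information.
```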

  • @arefsmiley4320
    @arefsmiley4320 8 years ago

    It was a great explanation. Thanks!

  • @paul1964uk
    @paul1964uk 11 years ago +2

    There is an arithmetical error at 3:47 (should be p_D*2)

    • @ArtOfTheProblem
      @ArtOfTheProblem  11 years ago

      shoot. good catch I'll add annotation

  • @Scottastrong
    @Scottastrong 3 years ago

    There is a typo at 3:52, the last term in #bounces should be .25*2. If this is changed the #bounces=1.75.

    • @Scottastrong
      @Scottastrong 3 years ago

      And great video BTW!

  • @mudarsaied6258
    @mudarsaied6258 5 years ago

    Hey, we did not count the probability of CD in the first machine? The first question was "is it AB?", then you continued only with "yes" as an answer. What if it was "no"?

  • @resistancefighter888
    @resistancefighter888 8 years ago +2

    I don't get it, if we need to ask 1.75 questions to guess the next symbol of machine 2, doesn't it mean it produces MORE information?

    • @YumekuiNeru
      @YumekuiNeru 8 years ago

      +resistancefighter888 I think of it as more information meaning more questions needed to sort through it

    • @resistancefighter888
      @resistancefighter888 8 years ago

      Thanks a lot!

    • @kikones34
      @kikones34 8 years ago +6

      You're seeing it the other way around. Imagine that persons A and B both have a certain amount of information. To get all the information from A, you have to ask them a minimum of 100 questions. To get all the information from B, you have to ask just a single question. Does A have more or less information than B?
      It's not a very good analogy, but maybe it can help you understand better :D

    • @vimukthi.herath
      @vimukthi.herath 7 years ago

      "What path would the other guard show me had I asked him for the path?" And both indicate the same path (the wrong one), which leaves you with the correct one.

  • @dongzhiyang6550
    @dongzhiyang6550 7 years ago

    Fantastic! Thank you

  • @chaithanyaks6553
    @chaithanyaks6553 9 years ago +2

    I am in

    • @agarcia3391
      @agarcia3391 8 years ago +1

      That's because every question we ask makes the number of outcomes 2 times bigger (since we are asking yes/no questions). Look at the tree of machine 1, it always has 4 outcomes so we have to ask log2(4) = 2 questions each time. however, in the tree of machine 2 you can stop with less questions, if the letter is 'A' you'll end with 2 outcomes (letter 'A' or the question "is it 'B'?"), so log2(2) = 1, just 1 question.

  • @TableRaw
    @TableRaw 7 years ago

    wow what a video :o thank you

  • @khkan
    @khkan 1 year ago

    5:49 In your mind visualize the binary tree. Then you can know what the "#outcomes" variable means.

  • @JeffreyBos97
    @JeffreyBos97 7 years ago

    Thanks a lot, this was very useful!

  • @ramanaraju7770
    @ramanaraju7770 7 years ago

    Very good

  • @akshithbellare7568
    @akshithbellare7568 5 years ago

    Why do the outcomes equal 1/p?

  • @degiatronglang6103
    @degiatronglang6103 7 years ago

    Great video, thank you. This makes me realize how stupid school is: they make beautiful things seem so dull and complex.

  • @bingeltube
    @bingeltube 6 years ago

    Recommendable

  • @elizabethjohnson7103
    @elizabethjohnson7103 9 years ago

    very nice.

  • @HSAdestroy
    @HSAdestroy 10 years ago

    Why don't we ask machine 1's two questions for machine 2?

    • @ptyamin6976
      @ptyamin6976 10 years ago

      Because you want your series of questions to reflect the probabilities produced by each machine
      Machine 1: 25% A, 25% B, 25% C, 25% D
      BUT Machine 2: 50% A, 12.5% B, 12.5% C, 25% D

    • @amihartz
      @amihartz 10 years ago +2

      Because you know more about machine 2. So you would be asking more questions than you have to.
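The cost of that mismatch can be computed directly (a sketch; the question depths are read off the two trees in the video):

```python
# Machine 2's distribution.
p = {"A": 0.5, "B": 0.125, "C": 0.125, "D": 0.25}

# Machine 1's balanced tree charges 2 questions for every symbol.
cost_machine1_tree = sum(p[s] * 2 for s in p)        # 2.0

# The tree matched to machine 2's probabilities.
depth = {"A": 1, "B": 3, "C": 3, "D": 2}
cost_matched_tree = sum(p[s] * depth[s] for s in p)  # 1.75

# Matching the questions to the distribution saves 0.25 questions per symbol.
print(cost_machine1_tree, cost_matched_tree)
```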

  • @profie24
    @profie24 6 years ago

    Wonderful!

    • @profie24
      @profie24 6 years ago

      When you ask yourself what entropy is in information theory and get the point instantly: nice work!

  • @juliangoulette7600
    @juliangoulette7600 9 years ago +3

    I've heard somewhere that randomness is the most compact way to store information because it lacks pattern

    • @Variecs
      @Variecs 8 years ago +5

      +Julian Goulette that makes no sense

    • @aggbak1
      @aggbak1 8 years ago

      thats dumb m8

    • @bobaldo2339
      @bobaldo2339 7 years ago +1

      Well, if we define "entropy" as disorder, as in physics, then it would seem the more "entropy" the more "information". So, if physicists are concerned that it might be possible to "lose information" in a black hole (for instance - something they nearly all seem to find abhorrent) they are fearing disorder might become more ordered. If randomness is a sort of ultimate disorder, then it must produce the most "information". Retrieving that information is another subject.

  • @sitnarf7804
    @sitnarf7804 5 years ago

    This is indeed art, but the tearing of the page in the end was painful. : )

  • @MauricioMartinez0707
    @MauricioMartinez0707 6 years ago

    You are a god thank you

  • @ShreyAroraev3
    @ShreyAroraev3 5 years ago

    5:33 why?

  • @Dzinuljak1
    @Dzinuljak1 5 years ago

    great!

  • @devincope6450
    @devincope6450 6 years ago

    wait... what about a and d or a and c... so on?

    • @notmychairnotmyproblem
      @notmychairnotmyproblem 4 years ago

      2 years late but I think that has to be a typo. I've been puzzled on that part too.
      Assuming you're talking about 1:25

  • @stephlrideout
    @stephlrideout 10 years ago

    I don't know about you, but I read #bounces as "hashtag bounces" and not "number of bounces". Oh look, youtube even made it a hashtag. Thanks, internet.

  • @kevinmcinerney9552
    @kevinmcinerney9552 7 years ago

    At @3:50 it should be a .... + 0.25 x 2 and not ....+ 0.25 X 4

  • @heidyhazem7854
    @heidyhazem7854 6 years ago

    great (y)


  • @XenoContact
    @XenoContact 9 years ago

    How the fuck do you ask 1.75 questions?

    • @ArtOfTheProblem
      @ArtOfTheProblem  9 years ago +8

      +XenoContact In the same way people have 2.5 children. It's just an average (expected) of # questions per symbol. Hence the example at the end of this video with 175 questions / 100 symbols
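The "175 questions per 100 symbols" average can also be seen by simulation (a sketch; the per-symbol question depths come from the tree in the video):

```python
import random

random.seed(42)
symbols = ["A", "B", "C", "D"]
weights = [0.5, 0.125, 0.125, 0.25]
depth = {"A": 1, "B": 3, "C": 3, "D": 2}  # questions needed to identify each symbol

draws = random.choices(symbols, weights=weights, k=100_000)
avg = sum(depth[s] for s in draws) / len(draws)
print(avg)  # close to 1.75 questions per symbol, i.e. ~175 questions per 100 symbols
```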

  • @afridgowun6623
    @afridgowun6623 7 years ago

    This explains next to nothing. It's only clear for those who have already been confused about it for years, not for me, who never thought about it and was just starting to grasp its meaning. Sorry, but the explanation is so fast that it couldn't hold my attention, and this is not a work of art, because it left me confused.