As a software engineer, I cannot help but be amazed at how you managed to break the information down clearly and logically to its most important components while still being completely excited and in awe of the stuff you are talking about. My old professors and teachers could seriously learn a thing or two from you!
Her enthusiasm while talking about Unicode is inspiring.
This series has made me seriously consider computer science as a possible degree option. Thanks, guys/gals of Crash Course!
Go for it! It's not as scary as it seems at first :D
It'll be a backup if theatre/surgical assistant doesn't work out. Definitely looking into it seriously now though :D
Be warned - computer science is becoming a much more competitive field as labour becomes more plentiful from rapidly developing countries in Asia.
Well, it's always something to consider. It seems like an area that's going to gain popularity fairly quickly, and it could just end up being a hobby, but thanks for the heads up. Always appreciate people sharing their experiences.
I'm doing electrical engineering and find that it's a good mix of computer science and engineering, since we do a lot of the things computer science majors do (except for programming), and there is an insanely good job market for electrical engineers.
My brain hurts and my decision to watch 20 of these at 1am is still not regretted
Same tho, it's 2:30 am for me but progress never sleeps!
It’s 3:21 am for me
Pi, please make your banner 3.14159265358979 etc.
Rachel Alaine 6:18 am
Programmer here who finds this series seriously interesting. It's also nice to see a very well-informed tutor and the friendly way she covers subjects in detail. Keep up the good work!
I find this so compulsively watchable even though it features math. -John
vlogbrothers "Imaginary numbers are just as beautiful as imaginary stories." -- John Green (probably slightly misquoted).
I'm glad you like it even with the math. I hope you stick around because the most fun parts of CS aren't so math heavy :)
vlogbrothers for whatever reason, I always enjoyed the math behind Binary. Beautifully simple.
I'm learnding.
John, I love your world history videos; they are so informative and engaging :) and it's inspiring how you lead us to reflect on our own lives, making history more powerful. I wish I'd been so engaged when I was younger! I love this current series for computer science as well. Thank you Crash Course :D!!!!!!!!!!!!
vlogbrothers if my grammar and adjectives were a grain of sand, John Green's would be the universe. (Sorry, Hank.)
She's awesome! The people behind the animations and editing don't get the recognition they deserve either - really high quality stuff :).
“Of course not everything is a positive number - like my bank account in college.” Oof.
I read this as she was saying it lol, clever.
called out
I am a 26 year old trying to get a bachelor's in computer science and this video series really makes computer science sound so simple.
Thank you so much! I'm 23 years old and FINALLY binary system makes sense to me! So pleased!!! Feels like learning a new language in 10 minutes.
Now that you know binary, you should be able to convert among binary, hexadecimal, and octal with ease.
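For anyone who wants to try that, here's a minimal Python sketch of converting among the three bases (the value 42 is just an arbitrary example):

```python
# One value, three notations: binary, octal, and hex are just different
# groupings of the same bits (3 bits per octal digit, 4 per hex digit).
n = 0b101010                     # 42, written in binary
print(bin(n), oct(n), hex(n))    # 0b101010 0o52 0x2a

# Parsing back from strings with an explicit base:
print(int("101010", 2), int("52", 8), int("2a", 16))  # 42 42 42
```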
Man, so much love for this series. Never has encoding or Unicode been so exciting!
Also, we cannot stress enough how absolutely REVOLUTIONARY it is for both our operations and the objects of our operations to be encoded basically in the same way. The algorithm that encodes your mp3 AND the encoded file itself are written with ones and zeroes. In a digital context, _nothing_ is really sub-symbolic anymore, everything is written.
As soon as she explained how binary is used for colors and numbers, I paused the screen and was blown away at each individual pixel on my phone, knowing that each one was its own string of ones and zeros. When I hit the play button, the true scope of what I'm holding in my hands to type this hit me.
You can tell you’re really passionate about what you’re teaching, and I love that
CAN WE JUST TAKE A SECOND TO TALK ABOUT THE STOCK FOOTAGE OF A PERPLEXED BUSINESSMAN WORKING ON A LAPTOP WITH HIS FEET IN A PUBLIC POOL FOR SOME REASON
Just embrace it.
Turn off caps lock
Thank you! I was thinking the same thing!!
What do you call a family of eight rabbits?
A Rabbyte!
(sorry, I'll leave now...)
Ha...
Hareowing
xD
trash joke
@@spryth2741 says the trash himself
Dude, I'm so glad this came out *exactly* when I started my computer eng course
Your obvious enthusiasm for CS is inspiring.
PBS needs to give all of you a raise. This series is sick!
Does anyone else feel guilty when they skip a video on Crash Course?
You ignited my interest in csci again!! I was seriously considering dropping my csci intro class because my teacher is really disorganized and doesn't know how to teach. You've given me hope!!
I came here after my computer engineering class and I've just started to understand what my lecturer was saying. thank youuu
Oh my goodness, did Carrie Anne just say, "Unicode, one code to rule them all?!" I had to pause the video to catch a breath because it was so funny. I literally shed laughing tears. Her delivery of the line was too cool, very matter-of-fact, only the slightest smirk to give away the reference to the Lord of the Rings. Thank you, Carrie Anne, that made my day!
"Of course not everything is a positive number, like my bank account in college." I know that feeling sister.
The narrative and visual sequences are very effective. This has accelerated my comprehension on this subject greatly, thank you!
Correction: When discussing the prefixes, it would have been more up-to-date to talk about how KB used to be 1,024 bytes, but it's now 1,000 bytes, and the old 1,024 measurement is now known as the kibibyte, or KiB.
+1, good summary of that confusing nonsense
KB Still commonly means 1024 bytes despite also commonly meaning 1000 bytes. Kibibyte was created to solve this ambiguity but wasn't widely adopted.
en.wikipedia.org/wiki/Kilobyte
KB = 1024, Kb = 1000. In networking, kB is 1000 bytes transferred and not 1024. Which kinda feels like cheating the end user out of internet speed, but there you go.
Nobody uses "kibi" because the change in vernacular doesn't actually help, and would just make things more difficult.
For one example, CPU cache lines are often defined as 4KB in size - that's 4096 bytes. That's useful because that represents 64 sets of 64 bytes, which can be nicely divided into 512 64-bit numbers, or 1024 32-bit numbers, or 256 128-bit numbers - all common sizes programmers work with, and keeping it defined in powers of 2 keeps things nicely divisible. Software developers who keep track of their cache don't want to start referring to them as "4 point oh nine six KB" cache blocks, or purely in bytes (which would defeat the purpose anyway), and hardware developers don't want to change things because nobody wants to design a physical hardware bus with a bandwidth of 31-and-a-quarter 32-bit numbers just for the sake of being 24 bytes (per KB) closer to metric units. And nobody wants to say "kibibyte" because it sounds stupid.
The one and only part of the tech sector that _does_ want the change though are hard drive manufacturers - because then they can sell you a "500GB" hard drive that's actually 465GB.
KingBobXIV, you bring up some good points on why it makes sense to have a unit that means 1024 bytes.
The problem is that calling it "kilobyte" is just bad, because "kilo" already means 1000 and now "kilobyte" can be interpreted both ways. If it had been called something else to begin with (like "kibibyte", for example...) and "kilobyte" had never meant 1024 bytes, this problem would have been avoided.
Unfortunately, people called it kilo, it stuck, and now it's near impossible to fix since, like you said, "nobody wants to say 'kibibyte' because it sounds stupid".
Also, nice profile pic, that meme is aging well.
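For the curious, a quick Python sketch of the arithmetic behind both points above - decimal units for the drive, powers of two for the block sizes (the 500 GB figure is just the example from the comment):

```python
# A "500 GB" drive is sold in decimal gigabytes, but an OS counting
# in binary units (GiB) reports a smaller number for the same bytes.
advertised = 500 * 10**9          # 500 GB as advertised
print(advertised / 2**30)         # ~465.66 -> the "465GB" above

# The power-of-two convenience: a 4096-byte block divides evenly
# into the sizes programmers actually use.
print(4096 // 8, 4096 // 4, 4096 // 16)   # 512 1024 256
```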
istg my prof took three one-hour lectures to cover this much and i still didn't get it, but this HELPED A LOT.
I wish all teachers were like Carrie, you really get motivated to study more!
Superb series, lots of love to Carrie Anne and Crash Course ❤❤❤
At 9:27, Carrie Anne mentions 16 bits with space for over a million codes. But actually, in 16 bits the max value is 2^16 = 65,536. Right?
She also says there are more than 120,000 characters across over 100 types of script, plus space for mathematical symbols and even graphical characters like emoji. So with 16 bits, is it possible to refer to all of them, when the size is limited to 65,536?
My brain is not braining
This is one of the best videos on this topic I've ever watched. Thank you for this.
Halting and Catching Fire.
Kudos to the Crash Course team for adding such references.
I hope this girl is a professor now, this is so well explained.
That moment when Carrie Anne is talking about ASCII extensions and you painfully remember the barely readable texts in your language... and suddenly, your language is mentioned as an example. Glory days :D
All the hosts right now are awesome (not that the past ones haven't been). I'm a software dude, so this isn't much new stuff for me, but it still makes me excited just hearing the "Hi, I'm Carrie Anne..." (in a childish glee sort of way). So bubbly and enthusiastic.
As a guy majoring in computer engineering, it's always refreshing to see videos like this after learning them in class awhile back.
David Park It's nice when people put CS in a nice easy-to-follow format.
Few Minute Programming Tru dat
1 KB (kilobyte) = 1000 bytes (10^3)
1 KiB (kibibyte) = 1024 bytes (2^10)
Due to the confusion, the new term "kibibyte" was defined to represent powers of 2, whereas "kilobyte" represents powers of 10.
Tom Scott is like "Someone is talking about Unicode and emoji on YouTube without me!"
No, he's on holiday right now. Maybe when he comes back.
Now Tom Scott should collaborate with crashcourse.
Linguistics CC
No, he isn't. He actually hates the Emoji shtick. Though maybe he shouldn't have built that emoji only messenger and an emoji keyboard then ...
09:22 Uh oh. 16 bits is only spacious enough for 65,536 characters, not 'over one million'. That's the first mistake I've spotted, though. LOVE these videos and your presentation!
Most computers don't use a sign bit, they use two's complement. There is a great video by Ben Eater that explains it
Doesn't 2's complement effectively create a sign bit anyway?
5 = 0 0000101
-5 = 1 1111011
You can still use the last bit to determine the sign of the number
Circuitrinos It does, yes, but it makes things like adding and subtracting positive and negative numbers MUCH more convenient and easy
The original idea was to only use the first bit as a sign (that was called 1's complement), but that left us with an issue of there being -0 and +0 (10000000 and 00000000 respectively). This was then changed to the 2's complement format, which handily kept the sign bit property but got rid of the -0 +0 problem.
Spencer White Not true for floating-point values; like she mentioned in the video, most computer architectures use the IEEE 754 standard.
Yes it does (I'm majoring in computer engineering).
But you are referring to only one type of signed bit representation called signed magnitude representation.
2's complement and 1's complement are still considered signed bit representations.
en.wikipedia.org/wiki/Signed_number_representations
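A minimal Python sketch of this for anyone following along, using 8 bits to match the 5 / -5 example above (the helper name is made up):

```python
def bits8(value):
    """Two's-complement bit pattern of a signed integer, in 8 bits."""
    return format(value & 0xFF, "08b")

print(bits8(5))    # 00000101 -> MSB is 0, so positive
print(bits8(-5))   # 11111011 -> MSB is 1, so negative

# The one's-complement problem: flipping all bits of +0 gives a second,
# "negative" zero, which two's complement avoids.
print(format(0b00000000 ^ 0xFF, "08b"))   # 11111111 would be -0
```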
This series is simply amazing and unraveled the mystery of what goes on inside the machine when we program something. Thank you so much :)
My God, that was a lot in one video. I haven't invested much time or thought in this subject before, and trying to understand not only binary translation, but the different bit systems and the whole encoding thing was a lot. I hope you guys will go a bit more in depth on these. Until then, I'll be watching this a couple more times.
I'm 32 and I missed out on the Tech revolution because I was poor and lived in the rural south. I'm not completely useless but this series is really helping me catch up! thanks
I’m currently studying information technology at my local trade school, and I just had this urge to learn binary (I’m not entirely sure why). Once again, Crash Course helps! :)
This show is really awesome. Because I'm of the generation where mass public computers were born, a lot of it comes from common sense. But this show explains the logic behind it, which I love.
Thank you
Brief Pedantic correction:
Most computers don't use plain signed numbers anymore; they use two's complement. It's kind of like the signed numbers Carrie Anne talks about, but where your nth digit represents its value, only negative.
This means that -8 in 4-bit notation would be 1000, -7 is 1001, -6 is 1010, etc. This allows positive and negative numbers to be added together using simple addition without any complicated rules (so long as you don't get an overflow error).
Pedantic correction over.
What they said is correct. In two's complement, the MSB still encodes sign. They very carefully did not go into specifics about the other bits for integers. One of the authors talks about it in another comment thread.
Yeah, I know, that's why it's pedantic. She's right, but not exactly right.
So how does the computer avoid confusing the numbers -7 and 9, if they're both 1001?
They don't. 9 would be too large for a four-bit signed number to hold and would cause the overflow error I mentioned. It's the upper and lower bounds of numbers she mentioned in the video. A 4-bit number in two's complement can only represent values from -8 to 7.
For a more practical example of this error, look up the Gandhi nuke bug from Civ 5.
tl;dr: taking a small unsigned number and subtracting more than it holds causes a massive positive output.
The Gandhi bug was in the original Civilization, not in Civ 5.
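Whichever game it was in, the wraparound itself is easy to reproduce. A little Python sketch of the 4-bit case described above (Python integers don't overflow, so we mask to 4 bits by hand to mimic the hardware):

```python
MASK = 0b1111                     # keep only 4 bits, like a 4-bit register

def as_signed(bits):
    """Read a 4-bit pattern as two's complement (range -8..7)."""
    return bits - 16 if bits & 0b1000 else bits

print(as_signed(0b1001))          # -7: same pattern that would be 9 unsigned

# The "tiny number minus a bigger one" bug: 1 - 2 wraps to the top.
aggression = (1 - 2) & MASK
print(aggression, as_signed(aggression))   # 15 as unsigned, -1 as signed
```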
Damn, that is pure pedagogy, thanks a lot!
I really really love this Crash Course!!
I only regret not finding this sooner! Thanks!
Loved this. Answers most of my questions. Hats off to previous generations.
very good explanation
I love that everyone is freaking out about the two's complement thing. This is a crash course! It's not going to have every single detail, and that's kind of complicated to explain and isn't really all that relevant for most people. Just saying bits are signed is enough.
2's complement is very relevant to a binary computing crash course and worth talking about
THANK YOU for speaking passionately and with great interest in what you are actually saying, it keeps me awake and engaged. Wish other channels would take note!
Regarding the 1000 vs 1024 controversy: on the one hand, it's vastly more useful to think in terms of powers of 2. On the other hand, the sheer joy of writing or saying "kibibyte" in a formal context creates a high that can last for days.
So interesting! I am a French student (15 yo) and I loved this video! Thanks to the translator who allowed me to understand it!
I am digging this video series so far. Lovely work.
Outstanding series! Thank you to the presenter and all those behind the camera!
Love the TNG - Best of Both Worlds reference :D
You are so knowledgeable! Thank you
I LOVE this series!! I watched the Harvard CS50 on edx but this helps me understand the topics they talked about more in depth.☺️
TE // TeenagEdifier if you're interested I make some basic programming videos too!
Watched Harvard CS50 as well; did not learn a lot about computer science, but a lot about what is wrong with our younger generation
@@billhutchens9666 so you don't recommend Harvard CS50? I was about to start with it
@@ramonebneter3019 I am old school, you might like it. I would not avoid something because of someone else's opinion
I don't normally leave comments but I feel like thanking someone for this. Amazing! So easy to understand even if I have no idea about computers. Keep up the good work, guys!
Great and very informative video. I covered all this in school, but it's good to revise it again! Thanks
"like my bank account in college" (negative) 😂 I love you Anne 👍
I love this course so much!!
Each video I learn a looot & can combine info that I've seen before.
It's so very well displayed & explained, THANKS Carrie Anne! ;)
watching this series is just me going “OH” “so that’s where it comes from”
im using this for my computer science course next year and i have to say this makes it really interesting
Joe Greaves If you go to collage I think you should learn grammar
College*
Anybody else LOOOVES the way she teaches?
Most screens actually display pictures in 24-bit, not 32-bit, though: 8 bits per colour. Some file formats include an extra 8 bits for alpha (transparency).
Then there is the kilo/kibi debate.
Also a number with a decimal point isn't floating point, it's non-integer. It can be represented in a computer in floating point but you can also use fixed point.
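To make the fixed- vs floating-point distinction concrete, a small Python sketch (the 1/256 scale factor is an arbitrary choice for the example):

```python
# Fixed point: commit to a scale up front and store a plain integer.
SCALE = 256                       # 8 fractional bits
stored = int(3.25 * SCALE)        # 832, an ordinary integer
print(stored / SCALE)             # 3.25 recovered exactly

# Floating point: significand + exponent travel with the value instead.
print((3.25).hex())               # 0x1.a000000000000p+1, i.e. 1.625 * 2^1
```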
Learning binary is like discovering a new language, it's such an awesome feeling!
US debt SHAAAAADE :) I love this series! Thank you, CC and Thought Cafe!
Awesome, this is the best Crash Course since CC Astronomy.
This episode hurt my brain but I tried my best to understand it haha
I'm very late to the party, but I would like to clarify a few things. First, modern color depth is 24 bits (3 bytes, 1 each for red, green, and blue, which gives us the RGB and hex colour notations). Second, disambiguating "memory" and "storage" is important in this context; in this video, "memory" refers to hard drives (commonly called "storage"), sold in gigabytes or terabytes, to which "memory addresses" do not apply. Finally, the Unicode UTF-8 standard deserves its own video (and it has many) and can encode a character in as little as 1 byte (ASCII-compatible), up to 32 bits (or 4 bytes).
You said the Unicode uses 16 bits, with space for over a million codes, but 16 bits can only encode 65,536 different symbols.
Some symbols are encoded across two 16 bit units. I think the two most significant bits in the first 16 bit unit encode whether it requires another 16 bits.
As Quarthinos said, some of them use two units. For example, all the emoji flags are actually two Unicode characters.
quarthinos Wikipedia says, "Unicode defines a codespace of 1,114,112 code points in the range 0x0 to 0x10FFFF." That's a little more than 2^20. Without a lot of bit shuffling, that would fit most conveniently in 3 bytes. UTF-32 uses 32 bits, fixed length. UTF-16 uses 16 bits, with, as you say, a scheme for identifying some characters as 32-bit. Not sure why they did not do UTF-24, except for the common processor aversion to transfers not on a word boundary, and perhaps a desire to leave room for extraterrestrial languages. (Actually, I do know that slightly suboptimal packing of the symbol space can make coding/decoding a lot easier.)
Yeah, that was a minor misscripting there. Unicode has 1,114,112 code points maximally (it's actually got a huge amount of empty space in there, too). And UTF-8 (not UTF-16) is the most common (because no matter how much data you think you use in your Java program, I assure you the amount of UTF-8 webpage data beats that), which can use 1-4 bytes for each code point, and the UTF-16 mentioned in the video can actually use 2 or 4 bytes for each code point, rather than the implied "only 2". Both UTF-8 and UTF-16 are variable-length maps onto the full code space, and can represent any code point by having special sequences of significant bits mean "I need multiple units to represent this code point".
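A quick Python check of the variable widths being discussed - byte counts per character (the sample characters are arbitrary):

```python
# UTF-8 grows from 1 to 4 bytes as code points climb; UTF-16 uses
# 2 bytes inside the 16-bit range and 4 (a surrogate pair) beyond it.
for ch in "A", "é", "€", "🙂":
    print(ch, len(ch.encode("utf-8")), len(ch.encode("utf-16-le")))
# A 1 2   (UTF-8 is ASCII-compatible: same single byte as ASCII)
# é 2 2
# € 3 2
# 🙂 4 4  (beyond 16 bits, so even UTF-16 needs two units)
```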
At 3:05, how come 1 added 3 times = 1 and not 0 (like she did before)?
I love how at 9:56 they synced the video of her saying "YouTube videos" and put it in this YouTube video!
This may be the only binary we all agree on
musashi939 I think both the left and right have made it out to be a much bigger deal than it really is
Gender is what you identify as. Sex is what you biologically are.
+Josh McKown - Cue Bon Jovi's "Dead or Alive"
It's like the Twin Towers: there used to be two of them and now it's a really sensitive subject
VytenisR1
...take the like, and don't tell a soul who you got it from
Pixels are stored in 32 bits, but 8 of them are for the transparency channel, which is not used for photographs (although it can be used for filters on instagram, snapchat, photoshop etc). The other 24 bits are divided into 3 8-bit channels that can represent light intensity and color in many different ways, the most well-known being RGB (red/green/blue).
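Here's what pulling those channels apart looks like in a minimal Python sketch (the pixel value is an arbitrary example):

```python
pixel = 0xFF3366CC                # 32-bit ARGB: alpha, red, green, blue

alpha = (pixel >> 24) & 0xFF      # transparency channel (ignored for photos)
red   = (pixel >> 16) & 0xFF
green = (pixel >> 8)  & 0xFF
blue  =  pixel        & 0xFF
print(alpha, red, green, blue)    # 255 51 102 204
```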
6:40 (float) Won't this representation method create redundancies?
Like, if I have 625 as the significand and 1 as the exponent I get the number 6250,
but if I have 6250 as the significand and 0 as the exponent I also get 6250... how is that sorted out?
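The short answer, as far as I understand IEEE 754, is normalization: the standard stores the significand in a canonical 1.xxx × 2^exponent form (in binary, not decimal), so each representable value gets exactly one bit pattern, apart from special cases like zero and subnormals. A quick Python check:

```python
import struct

a = 625e1     # 625 * 10^1
b = 6250e0    # 6250 * 10^0 -- same value, written differently

# Both normalize to the same significand/exponent, hence identical bits:
print(a.hex())                                        # 0x1.86a0000000000p+12
print(struct.pack(">d", a) == struct.pack(">d", b))   # True
```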
And they still don't even answer how those placeholders represent the exponent or the significand, by which process the computer multiplies, or by what rule the character 1 is on and 0 is off. Millions of unanswered questions. So hard to find videos that actually teach you something.
Great video. Small nitpick, UTF-16 ran out of two-byte representations already which IIRC is why UTF-32 was created. Supplementary plane codepoints such as emoji require four bytes. But I think (as other commenters have mentioned) UTF-8 seems to be the most ubiquitous standard now due to its more space-conserving representation and backwards-compatibility with ASCII.
@Peterolen Sure, I just interpreted the comment about this from the episode to imply that UTF-16 can store all codepoints using a two-byte representation ("one chunk" of 16 bits), which as you've just said is not the case anymore. UTF-32 is the only encoding with a guaranteed one-to-one mapping between single "chunks" of 32 bits/4 bytes and Unicode codepoints. Meaning it's also the only one where you can use raw string length to determine how many characters are in the string (though this doesn't account for combining characters such as diacritical marks).
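For what it's worth, you can see the unit-counting difference in Python (strings there count code points, so len behaves like the UTF-32 view):

```python
s = "🙂"                                  # one code point, U+1F642
print(len(s))                             # 1 code point
print(len(s.encode("utf-16-le")) // 2)    # 2 UTF-16 units: a surrogate pair
print(len(s.encode("utf-32-le")) // 4)    # 1 UTF-32 unit: the 1:1 mapping
```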
👍 for the ST:TNG reference 😉
I remember taking a beginner's computer science class back in 6th grade (1994) and dropped out because my teenage brain thought it was boring. I wish I had developed an interest back then now that I can objectively see the impact computer science has had in our collective society.
"Not everything is a positive value, like my bank account in college"
I am a little surprised that Crash Course hasn't had a math-based course yet, but I am not complaining; it's just something I realized.
Math = the best
I can’t stop staring at her eyes following the prompt. She covers it up well but it’s still noticeable. These actors are getting good at this stuff.
Can we have a crash course biochemistry please!
this is my favorite cc intro
"The most common version of Unicode uses 16 bits" Isn't this incorrect? The most common version of Unicode is UTF-8, which can use anywhere from 8 to 32 bits, depending on the character.
You are a bullet! So fast, giving 8 bits of knowledge in 1 sec
Crazy imagining how many read/write 1s and 0s are on a disk
Quick correction on Unicode:
Depending on how you define "used", 16-bit encoding (either UTF-16 or UCS-2) may not be the most common. While it's the main format on Windows, and used heavily in Asia, most transmission of text is done in UTF-8, which is an 8-bit encoding (sort of).
I almost choked on that national debt burn.
Yes, I'm easily amused...
thank you sooo much , from india . 👌👌👌👌😊😊❤❤
Considering I'm a novice, my mind is blown by the idea of a 16-bit universal code!
"inconceivable!"
This series is just perfect. It's a topic I'm passionate about. Can't wait to get into the meat of the topic! Such a great introduction. Thank you all for making this happen!
I kinda wish you had at least briefly mentioned two's-complement notation, as your description of the implementation of negative numbers in binary was misleading, and I think it's a very mathematically interesting notation. Great video though!
Joshua Baker you can't blame em considering all the info covered in one video
There's a comment up thread from one of the authors for this episode's script: They considered it, but decided against it to prevent confusion. It doesn't really matter unless you're trying to make adders, and I don't know that they're gonna get that far into the weeds.
What was misleading about it? The MSB in 2's complement does indicate the sign. 1111 (-1) + 0001 (1) = 0000 (0)
This is such a wonderful series, bravo!
4:57 that gave me anxiety seeing that laptop about to slide
So glad this is a series you're doing, Crash Course. I sit my GCSE exams in May/June and want to do game design and computer science as a career
the pain...
I have never watched awesome videos like this before...