Funnily enough, at 10:10 there might be a mistake: after adding "00000 111", the number "1 0000 111 0000" lengthens to "1 0000 111 00 111", so it "feels" like there is an additional 0 between the two triples of "1". And it doesn't feel like we needed to expand due to the number being too big (we are between 256 and 512). But I didn't have the time to check it.
Yeah, somehow that 0 got in between. I didn't notice this while editing, so thanks. I'll pin this comment.
Somebody else also noticed that the condition in the C version of the algorithm is wrong.
`str[i] < '0' && str[i] > '9'` will always return false, since it's checking if str[i] < 48 and str[i] > 57, which is never true. The condition should be `str[i] < '0' || str[i] > '9'`
My apologies for these mistakes.
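For readers following along, here is a rough, self-contained sketch (not the exact code from the video) of how the corrected check might sit inside the conversion loop; the -1 error return is just a placeholder for illustration:
```c
#include <stdio.h>

/* Illustration only: atoi-style conversion with the corrected guard.
 * The || makes the function bail out on the first non-digit character. */
int string_to_int(const char *str)
{
    int result = 0;
    for (int i = 0; str[i] != '\0'; i++) {
        if (str[i] < '0' || str[i] > '9')   /* corrected: || instead of && */
            return -1;                      /* crude error signal for the sketch */
        result = result * 10 + (str[i] - '0');
    }
    return result;
}

int main(void)
{
    printf("%d\n", string_to_int("4327"));  /* prints 4327 */
    printf("%d\n", string_to_int("43a7"));  /* prints -1   */
    return 0;
}
```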
your byte format sucks bruv 😐
@@zionmelson7936 I was formatting the 1s and 0s separately, so one could see there was an additional number there. I didn't go for the actual formatting it should have.
@@CoreDumpped No worries, Core. Programming is hard.
This channel is criminally underrated. This is top-tier content, for free.
To everyone in this chat, Jesus is calling you today. Come to him, repent from your sins, bear his cross and live the victorious life
You're absolutely damn right, my friend.
@@JesusPlsSaveMe we got people glazing Jesus before GTA 6
"And on this channel, we hate black boxes."
*subscribed*
While on the topic, I know it's a bit early for the channel to explain it now, but whenever you get to architectures, please don't forget endianness explanation, there are always explanations of how but not of why. Great video as always!!
Yeah, there is a video about endianness already on the list.
Ah, that Little Endian vs Big Endian discussion. ;)
There is simply no "why"; computing machines had to pick one of the two orderings, and either one is a valid choice.
Please can you make a video on bitshifting and bitmasking? I assumed bitshifting would be used for this, but your explanation of the algorithm here was excellent. Thanks!
It's funny that right now at my job, I am dealing with serializing ASCII characters and you are making this video. I'm really glad I'm here George. Nicely done.
I'm learning C and tried to do it, kind of failed, and right after that he makes this video.
how did you send a comment 6 hours before the video uploaded?
@@vladsiaev12 they pay for early access
@@vladsiaev12 probably a member of the channel
This is not casting, this is converting. Casting is a grammatical operation (forcing the compiler to treat data as having a certain type, but not actually doing any conversion).
Casting sometimes requires conversion.
“10” - 2 in JavaScript both casts *and* converts “10” into 10 in order to return 8
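To illustrate the distinction being discussed, here is a small C sketch of my own contrasting a parsing conversion, a converting cast, and a reinterpreting cast:
```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Conversion: parse the characters "42" and build the integer 42. */
    int parsed = atoi("42");

    /* Converting cast: the compiler emits a real float-to-int conversion. */
    int truncated = (int)3.9;            /* becomes 3 */

    /* Reinterpreting cast: same bits, just viewed through another type.
     * The bytes of the string are NOT turned into the number 42.        */
    const char *s = "42";
    const unsigned char *raw = (const unsigned char *)s;

    printf("%d %d %d\n", parsed, truncated, raw[0]);  /* 42 3 52 ('4') */
    return 0;
}
```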
Amazing. I’m literally addicted to learning like this through your videos. They’re awesome! I can’t wait for the next one, and yes, I would love a video on conversion of the binary values back to string, to understand how the print function works!
A video on how computers represent negative and floating-point numbers. That would be amazing!
Jesus is the only way to salvation and to the father.
Please repent today and turn away from your sins to escape judgement 🙏🙏 There is no other way to get to the father but through him.
@@JesusPlsSaveMe I cannot tell if this is a funny way of saying that my idea is insane, or if this is genuinely an ad for Christianity.
For negative numbers look into two's complement, and for floating-point numbers look into IEEE 754.
@@xM0nsterFr3ak I figured out the basics, but a video on how that stuff is actually dealt with in the CPU would be amazing!
@@oglothenerd Negative integers are simple. Like @M0sterFr3ak mentioned, it is two's complement: you first find the binary representation of the number (for example, for an 8-bit number 10, its binary representation is 00001010). With the binary representation, all you need to do is invert all the bits (from 00001010 to 11110101) and add one at the LSB (from 11110101 to 11110110), and then you will have -10.
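A tiny C illustration of that invert-and-add-one recipe (my own sketch, assuming 8-bit values):
```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t x = 10;                  /* 0000 1010                              */
    uint8_t inverted = ~x;           /* 1111 0101                              */
    uint8_t negated  = inverted + 1; /* 1111 0110, the two's complement of 10  */

    /* Reinterpreted as a signed 8-bit value, that bit pattern is -10. */
    printf("%d\n", (int8_t)negated); /* prints -10 */
    return 0;
}
```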
This is an excellently planned out documentary. The planning and wit required to explain ASCII conversions and binary maths is excellent. And what a great narrative voice too!
I'm really happy I found this channel... I somewhat knew how it worked, but this just makes it really clear. You are great at explaining things. I am eagerly waiting for more videos
Another way to do it:
1. Take the string as argument
2. Access every character
3. Use fixed values with switch cases for every character from '0' to '9',
like
switch(str[i])
case '1': 0001
4. Do bit shifting to create a BCD value containing all characters
5. Convert BCD to binary
6. return binary
It may or may not be faster
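Here is a rough sketch of how steps 3 to 5 might look in C (my own guess at the idea: pack one 4-bit BCD nibble per digit, then convert; the BCD-to-binary step here uses plain multiplications for simplicity):
```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const char *str = "532";
    uint32_t bcd = 0;

    /* Steps 2-4: pack each digit into a 4-bit BCD nibble by shifting. */
    for (int i = 0; str[i] != '\0'; i++)
        bcd = (bcd << 4) | (uint32_t)(str[i] - '0');

    /* Step 5: convert BCD to binary (done here with multiplications;
     * a shift-only alternative is the reverse double dabble mentioned
     * elsewhere in this thread).                                       */
    uint32_t value = 0;
    for (int shift = 28; shift >= 0; shift -= 4)
        value = value * 10 + ((bcd >> shift) & 0xFu);

    printf("0x%X -> %u\n", (unsigned)bcd, (unsigned)value);   /* 0x532 -> 532 */
    return 0;
}
```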
Once you get into SIMD instruction extensions, then a plethora of performance optimizations become available to you.
I love channels that demystify these things
tks
Person reveal. You're a young lad. One of those prodigies I keep hearing about.
I talked to my colleagues about this exact problem, specifically the one you mentioned in the end, great video!
This channel is pure gold.
6:57 For anyone who doesn't understand how it works, look up binary arithmetic.
Basically, binary addition follows these rules:
1. 0 + 0 = 0
2. 0 + 1 = 1
3. 1 + 0 = 1
4. 1 + 1 = 0 with carry 1 (the carry is used when adding the bit at the next position, i.e. the digit to the left of the one we were looking at)
5. 1 + 1 + 1 = 1 with carry 1
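As a small illustration of those carry rules, here is a sketch (my own) that adds two numbers one bit at a time, exactly like the pencil-and-paper method:
```c
#include <stdio.h>

/* Add two numbers one bit at a time, applying the sum/carry rules above. */
unsigned ripple_add(unsigned a, unsigned b)
{
    unsigned sum = 0, carry = 0;
    for (int i = 0; i < 32; i++) {
        unsigned bit_a = (a >> i) & 1u;
        unsigned bit_b = (b >> i) & 1u;
        unsigned s = bit_a ^ bit_b ^ carry;                    /* sum bit   */
        carry = (bit_a & bit_b) | (carry & (bit_a ^ bit_b));   /* carry out */
        sum |= s << i;
    }
    return sum;
}

int main(void) { printf("%u\n", ripple_add(23, 4)); }  /* prints 27 */
```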
Not the topic I expected after the last videos, but still a very welcome one.
One of the best channels, hands down!
You my friend have done the impossible. You have actually made programming make sense.
Just want to say that you are the one I was searching for. You answer the same questions as mine, and in the way I wanted. Hope you get more well known.
From now on I respect my computer, doing this whole process within microseconds...
Thanks for the best video...
11:50 ...yes please! :)
I can sleep in peace now, I had exactly this question today and yes chair I was looking for double w.
When it gets to converting decimal fractions as strings to floats things get a lot more complicated. Looking forward to seeing a new video about this case in the future!
This is actually easy, the way I think about it:
Since "0" is 48, we subtract 48 from each character to get the real digit first, then multiply by the correct power of 10. So once the number "1234" is input, turn the digits into binary (1, 10, 11, 100), then multiply and add (the computer does need to know which index to start with, which isn't so hard), and we get the number before the next one is input. This process happens so fast we can't notice it.
This is the way. Would love to see a performant way to do the same with floating points numbers. This kind of video is what I really like to watch.
Using the IEEE-754 binary floating-point 32 or 64 format, you would have to manually decode the floating point. First bitcast the floating point to an unsigned integer of the same size, i.e. float -> ui32 or double -> ui64, then using the encoding specification you extract the sign, exponent and mantissa from the integer.
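A minimal C sketch of that decoding, assuming the binary32 format (the "bitcast" is done with memcpy to keep it well-defined):
```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
    float f = -6.25f;

    /* "Bitcast": copy the raw bits of the float into a same-sized integer. */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);

    /* IEEE-754 binary32 layout: 1 sign bit, 8 exponent bits, 23 mantissa bits. */
    uint32_t sign     = bits >> 31;
    uint32_t exponent = (bits >> 23) & 0xFFu;   /* biased by 127 */
    uint32_t mantissa = bits & 0x7FFFFFu;       /* implicit leading 1 for normals */

    printf("sign=%u exponent=%u (unbiased %d) mantissa=0x%06X\n",
           (unsigned)sign, (unsigned)exponent,
           (int)exponent - 127, (unsigned)mantissa);
    return 0;
}
```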
Hi, thanks for this video. What tools do you use for your animations? They are amazing.
11:04 in the if you wrote && (and) instead of || (or). Great video!
This is actually easy, the way I think about it:
Since "0" is 48, we subtract 48 from each character to get the real digit first, then multiply by the correct power of 10. So once the number "1234" is input, turn the digits into binary (1, 10, 11, 100), then multiply and add (the computer does need to know which index to start with, which isn't so hard), and we get the number before the next one is input. This process happens so fast we can't notice it.
I mean, we could even start backwards and just tell the computer how long the number is ourselves, but that means we'd have to pass the length as a parameter, so the other way is better.
0:07 Yes... Just yes. Maybe this will be SUPER slow but yes)
I have this in mind:
1. Represent each character in string with 4-bit binary number (Using Unicode)
2. Make BCD number from all characters
3. Convert BCD to binary.
Now you have a number.
For example:
"532"
1. || "5" = 0101 || 3 = 0011 || 2 = 0010 ||
2. 0101 0011 0010
(BCD to Binary algorithm)
3. "532" = 1000010100
__________
Now I'll watch video)
----------------------------------
PS: Subtracting 48 is a very clever solution!! Now we can do the same thing as I did.
But initially I just wanted to use a table to store the Unicode value and the number, like this:
| Unicode Number | Number in Binary |
and use this table to convert each symbol to a number, but yeah, we can just subtract the '0' encoding to get the number!
This is so well explained, I don't think I'll ever be able to forget this.
The way I agree
This channel is very underrated
Great video! I would really like to see a video explaining the problem with null values inside languages and how to avoid them, that would be very educative!
I work on a php application where someone in the past reimplemented the string to number conversion...
And if you have questions...
Yes, it involved a loop with a bunch of ifs to check each digit
Yes, they messed it up
Yes, changing the usages of the function to "(int)$value" fixed a lot of bugs
Yes, the person who did it (according to git blame) still works there but was promoted to manager
No, we don't do code reviews or anything like that
Goldfield casually existing on YouTube 😮💨
This channel is perfect to watch alongside taking CS50 to start my programming journey. Pretty excited about understanding everything in this video and learning more. Thanks for the quality videos.
literally Str(number) - 0x30 for 0-9, Str(uppercase letter) - 0x41 for A-Z, Str(lowercase)-0x61 for a-z
Converting between the two is as simple as
char(lower) = char(upper) ^ 0x20
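A quick C illustration of those ASCII tricks (my own example):
```c
#include <stdio.h>

int main(void)
{
    char digit = '7';
    char upper = 'G';

    int value  = digit - 0x30;       /* '7' (0x37) -> 7                  */
    char lower = upper ^ 0x20;       /* 'G' (0x47) -> 'g' (0x67)         */
    char back  = lower ^ 0x20;       /* flipping bit 5 again restores it */

    printf("%d %c %c\n", value, lower, back);   /* 7 g G */
    return 0;
}
```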
Before watching the response, this was the algorithm I came up with:
```
base = 10
str = "1030"
println(string_to_int(str, base))
fn string_to_int(str: string, base: int) {
    let number = 0
    each (index, char) of str {
        let digit = lookup_from(char)
        let exp = base ** (len(str) - index - 1)
        number += digit * exp
    }
    return number
}
```
great job thank you
i would love an explanation about formatting numbers into strings as well!
I would like a future video about converting an int to a string, but I am more interested in the much more complicated process of converting a float to a string.
My man your videos are awesome. Can you do an explanation on how the clock is used to move the process forward from the transistor level? For example, how do transistor gates use the clock to take the next instruction into the instruction register at the right time?
Another video! I'm glad I checked your channel, since there was no notification. Typical of YouTube, sadly. Though it probably has to do with the delay between the last part and this video. YouTube deprioritizes notifications if you normally have a 1-week cadence and then suddenly release a video a month later. Honestly, being a YouTuber is a ton of work.
Great, that's a perfect illustration of what happens internally with the atoi() function. Ah, I noticed there is a minor difference between converting a numeric string to a binary integer vs converting a numeric string to a BCD number: multiplying by 10 vs shifting by 4 bits (since BCD numbers represent each numeric digit in 4 bits).
I find it rather interesting that on the IBM mainframe there is a single machine instruction (CVD) which can convert a numeric string (up to 31 digits) to a BCD number. Likewise, there's another instruction (CVB) which can convert these BCD numbers into integers.
Nice, I will show my class this. Well explained.
Yes, we need that too, and don't forget to upload the remaining part of the CPU episode.
It reminds me about the college times! I really like this stuff, thank you!
Great video, as always. Got me curious to understand how the process works with negative numbers.
Beautiful explanation, especially that code at the end. Thank you very much
Revolutionary idea of getting the actual number
very powerful explanation. thanks jhon
"Shipping to Alaska, Hawaii, Puerto Rico, and International addresses is currently not available." -> pity I was actually looking for a new chair
Anyway, good video, it's nice to see easier topics now and then.
I would love to see the video about the reverse algorithm!
your AI voice is fine. don't change it... GOLD content as always!
please do explain the process from getting from an integer to "string"/output. Keep up the great work!
add 48 to it and convert to char
dude, you're going to the moon, and I'm liking your videos all the way there
Thank God I never thought about this before I saw the title of this video
ASCII allows for the use of a bitmask to get the number itself. Probably the preferred way to convert these BCD numbers to an integer is reverse double dabble; there's a wiki article about it. This algorithm gets rid of expensive and area-intensive multiplications (the first concern for a CPU, the second for an FPGA/custom silicon, depending on your architecture) and relies on fast/small shifts and add/sub operations.
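For the curious, here is a rough C sketch of what reverse double dabble can look like; treat it as an illustration under the assumption of a packed-BCD input with one digit per nibble, not a reference implementation:
```c
#include <stdint.h>
#include <stdio.h>

/* Reverse double dabble sketch: convert a packed-BCD value (one decimal
 * digit per nibble) to plain binary using only shifts and subtractions,
 * no multiplications.                                                    */
uint32_t bcd_to_binary(uint32_t bcd, int result_bits)
{
    uint32_t bin = 0;

    for (int i = 0; i < result_bits; i++) {
        /* Shift the combined (bcd : bin) register right by one bit: the
         * lowest bit of the BCD part becomes the new top bit of the
         * binary part.                                                   */
        bin = (bin >> 1) | ((bcd & 1u) << (result_bits - 1));
        bcd >>= 1;

        /* After the shift, any BCD nibble that is 8 or more must be
         * corrected by subtracting 3 (the inverse of the classic
         * double-dabble "add 3 if >= 5" step).                           */
        for (int n = 0; n < 8; n++) {
            uint32_t nibble = (bcd >> (4 * n)) & 0xFu;
            if (nibble >= 8)
                bcd -= 3u << (4 * n);
        }
    }
    return bin;
}

int main(void)
{
    /* 0x0532 is packed BCD for the decimal string "532". */
    printf("%u\n", (unsigned)bcd_to_binary(0x0532, 10));   /* prints 532 */
    return 0;
}
```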
How to convert a number to a string: the key instrument is integer division. Let's consider the number 4327. Dividing by 10 we obtain 432 and remainder 7. Now, we already know how to convert a single digit to its corresponding ASCII code: just add 48, or ord('0'). So in this one step we obtained the so-called least significant digit (7) and are left with 432. Now, we just have to repeat the same procedure until we are left with no more digits (when the last division yields 0 as the quotient).
PS: Integer division is just a single processor instruction and actually gives both the quotient and the remainder in one go so it's pretty fast.
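A short C sketch of that divide-by-10 loop (my own example; digits come out least significant first, so they are reversed at the end):
```c
#include <stdio.h>

/* Integer-to-string by repeated division, as described above.
 * Digits come out least-significant first, so the buffer is reversed.   */
void int_to_string(unsigned n, char *out)
{
    char tmp[16];
    int len = 0;

    do {
        tmp[len++] = (char)('0' + n % 10);  /* remainder -> ASCII digit */
        n /= 10;                            /* quotient -> next step    */
    } while (n > 0);

    for (int i = 0; i < len; i++)           /* reverse into the output  */
        out[i] = tmp[len - 1 - i];
    out[len] = '\0';
}

int main(void)
{
    char buf[16];
    int_to_string(4327, buf);
    printf("%s\n", buf);                    /* prints 4327 */
    return 0;
}
```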
Man I love this channel so much, this would've been so helpful back when I was learning to do this kinda stuff lol
Well, actually, there is a limit for integer numbers (as well as floats), at least in C. And there are also negative numbers. So the more proper function is a little bit more complex.
I wrote mine like this:
#include <stdint.h>
#include <stdbool.h>
int64_t StrToNum(char *Str) {
    int64_t Result = 0;
    uint32_t Index = 0;
    bool IsNegative = false;
    if (Str[0] == '-') {
        IsNegative = true;
        Index = 1;
    }
    while ((Str[Index] != '\0') && (Str[Index] >= '0') && (Str[Index] <= '9')) {
        Result = (Result * 10) + (Str[Index] - '0');  /* multiply-by-10-and-add, as in the video */
        Index++;
    }
    return IsNegative ? -Result : Result;
}
Thank you so much, this was a question I had from some time ago. I would love to see the continuation of this video :)
Always high quality content 😊
Can you make a video about how to virtual memory works in OS? Thanks a lot. All of your videos are so useful.
I would like you to explain and give an example of the end process that you asked about.
Underrated channel
That's an amazing video! I was just wondering, how would that work with negative numbers?
Your videos are a blessing!
I had to learn this when making my own programming language, and I wish I had found this video sooner .-.
Great video, and it is a very introductory version of the algorithm. However, this is not an efficient algorithm, because the ALU can't parallelize the multiplications and the additions. You should see Andrei Alexandrescu's lecture on this! But this could be a cool continuation of this video.
Thanks for the advice, I'll take a look at the lecture as soon as I get some free time. I'm assuming it is related to SIMD but if not I'm sure I'll enjoy it anyways.
this channel is really good!
Subscribed, wanna see the second part
I really appreciate your videos. They answer a lot of the questions that were stuck in my mind.
I have another confusion related to streams and buffers in the C language: the unusual behavior of scanf when it encounters the newline character ('\n'). Can you please make a video on streams?
epic explanation
Please make a video about big and little endianness, I always forget the order and don't understand the order of bits itself in comparison to the byte order.
Good content, please keep it up!
Please make a video about the reverse function, Binary to Numerical String.
The sequential method in the video also solves the issue when the input string is something like '0987'.
This is a masterclass. Can you please share your resources or some books to read?
Amazing! Thank you very much for doing this!
Great content as always!
This is just soo beautiful. 😍
I think it's more intuitive to first multiply the digits by their magnitudes of 10 and then add them up. After that, the better algorithm that you showed in the video would have been clearer, I think.
11:55 spoiler, it's the double dabble. Look for Sebastian Lague's visualizing data with displays video
Simply awesome
Thanks again for this amazing content
Nicely done, thank you ❤
Please create a video explaining how CPUs handle floating-point numbers.
I've done string to float/double conversion myself,
but with a different approach.
Stuff skipped in this video:
- Sign of a value
  To apply a sign,
  multiply the output value by -1 if a '-' is found at the start of the string.
- Decimal parsing
  The same way as string to int,
  but:
  - do it 2 times
  - when the '.' is found, instead of multiplying the value, just divide the decimal part by 10 for each iteration
  - and check that the value is not too large
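A naive C sketch of that approach (my own illustration; real strtod implementations are much more careful about rounding and overflow):
```c
#include <stdio.h>

/* Naive string-to-double: parse the integer part by multiply-and-add,
 * then parse the fractional part by scaling each digit down by a growing
 * power of 10. Sign handling included.                                    */
double naive_str_to_double(const char *s)
{
    double result = 0.0;
    int negative = 0;

    if (*s == '-') { negative = 1; s++; }

    /* Integer part: same multiply-by-10-and-add loop as for integers. */
    while (*s >= '0' && *s <= '9') {
        result = result * 10.0 + (*s - '0');
        s++;
    }

    /* Fractional part: each digit is worth 10x less than the previous. */
    if (*s == '.') {
        double scale = 0.1;
        s++;
        while (*s >= '0' && *s <= '9') {
            result += (*s - '0') * scale;
            scale /= 10.0;
            s++;
        }
    }
    return negative ? -result : result;
}

int main(void)
{
    printf("%f\n", naive_str_to_double("-12.75"));  /* prints -12.750000 */
    return 0;
}
```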
I've always found it rather beautiful that ASCII encodes decimal characters as 0x30 to 0x39 in hex, so mentally you can just remove 0x3 and know what the number is.
I would love to see an explanation for the reverse!
Arigatouu keep em coming 🔥🔥🔥
please continue>
Hello sir, I am your fan🥹🥹. I wanted to ask you something, can I use the information you mentioned in your videos to make content in Uzbek?
i saw the primeagen on the screen there. neat.
Thanks for your video
The C robustness check should have an || not an &&, and the Python one will raise a ValueError if the digit is between "2" and "8". And it doesn't need an f-string.
Also, I wonder how much electricity we'd have saved globally if 0-9 were binary 0-9.
Please please do a video explaining operating system
8:55 Wouldn't it be more efficient, or the same in terms of time complexity, to start doing these multiplications from the right (the digit 7) to the left, and just multiply by powers 10^i, where i grows from 0 until it reaches the last digit (the 4th one in this case)? So you have 7 * 10^0 + 2 * 10^1 + 3 * 10^2 + 7 * 10^3.
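For comparison, a small C sketch (my own) of this right-to-left variant; both versions do the same amount of work per digit, the left-to-right one just avoids tracking the power of ten explicitly:
```c
#include <stdio.h>
#include <string.h>

/* Right-to-left idea: each digit is multiplied by its own power of ten,
 * which is built up incrementally instead of being recomputed.          */
int str_to_int_right_to_left(const char *s)
{
    int result = 0;
    int place = 1;                      /* 10^0, 10^1, 10^2, ...        */

    for (int i = (int)strlen(s) - 1; i >= 0; i--) {
        result += (s[i] - '0') * place;
        place *= 10;
    }
    return result;
}

int main(void)
{
    printf("%d\n", str_to_int_right_to_left("7327"));  /* prints 7327 */
    return 0;
}
```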
Great explanation as always.
I have a little question about 11:07. Are the conditions supposed to be like that?
In the C example `str[i] < '0' && str[i] > '9'` will always return false, since it's checking if str[i] < 48 and str[i] > 57, which is never true. Maybe `str[i] >= '0' && str[i] <= '9'` is what was meant? The Python condition will return true on chars that are NOT numeric chars, since it's checking if char > 48 and char > 57, equivalent to `char > 57`, equivalent to `char > '9'`. I suggest `'0' <= char <= '9'`.
Yes, my bad. The condition should be || (or) instead of && (and).
@@CoreDumpped Oh I just realised I wrote the suggestions for checking for numerical chars, not the opposite.
So in C it would be like you wrote in the reply, `str[i] < '0' || str[i] > '9'`, and in Python basically the same, `char < '0' or char > '9'`. Or you could be fancy and use De Morgan's laws, `not('0' <= char <= '9')`.