Adding new functions without changing any existing function isn't what I would call a breaking change. I thought the solution would go toward using decimal to add precision, but this way is almost as good.
It is called a breaking change because it can affect some source code in F#. I believe it also requires your app to be recompiled (which you would do anyway).
@IAmTimCorey OK, I'm only in C#, and since I have to manually edit from (7.5) to (7, 500), it's a manual change from one overload to the other, and recompiling doesn't change anything if I don't edit. That's what I understand from your video.
Correct.
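For readers following along, here is a minimal sketch of the edit being discussed, assuming the video's example is the .NET 9 integer-based overloads of TimeSpan.FromSeconds (which match the (7.5) → (7, 500) change described above):

```csharp
using System;

// Old call site: the double overload, subject to binary floating-point rounding.
TimeSpan before = TimeSpan.FromSeconds(7.5);

// New call site: the .NET 9 overload takes whole seconds plus milliseconds,
// sidestepping the double representation entirely.
TimeSpan after = TimeSpan.FromSeconds(7, 500);

Console.WriteLine(before); // 00:00:07.5000000 (7.5 happens to be exact in binary)
Console.WriteLine(after);  // 00:00:07.5000000
```

Nothing forces the edit; the old overload still compiles. The change is only required if you want the exact integer-based behavior.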
I had assumed double would've been enough to avoid those precision errors. Guess I was wrong :P
Any time you have a number with a decimal point, you will have some level of precision issues. Even the decimal type is only precise to about 28-29 significant digits. Typically, the 15-17 significant digits of a double are good enough, but there are edge cases.
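As a quick illustration of those limits (a sketch; exact formatting can vary by runtime):

```csharp
using System;

// double is binary floating point: 0.1 and 0.2 are not exactly representable.
double d = 0.1 + 0.2;
Console.WriteLine(d == 0.3);        // False
Console.WriteLine(d.ToString("R")); // 0.30000000000000004

// decimal is base-10 floating point: these particular values are exact.
decimal m = 0.1m + 0.2m;
Console.WriteLine(m == 0.3m);       // True
```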
Thanks Tim!
You are welcome.
It looks like with this fix I get the general idea: adding alternative methods that have the same functionality as the old ones. Do you have any other examples of updated implementations like this?
There are other fixes like this in .NET 9 even. I'll be covering them later, but you can look at how params works now to see another example like this.
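For anyone curious, a rough sketch of the params change Tim mentions, assuming C# 13 on .NET 9, where params now accepts collection types such as ReadOnlySpan&lt;T&gt;:

```csharp
using System;

static class Demo
{
    // Classic params: a call like Sum(1, 2, 3) allocates an int[] every time.
    public static int Sum(params int[] values)
    {
        int total = 0;
        foreach (int v in values) total += v;
        return total;
    }

    // C# 13 params collections: same call syntax, but the compiler can use a
    // stack-allocated span, so existing call sites keep compiling while
    // overload resolution prefers the allocation-free version.
    public static int Sum(params ReadOnlySpan<int> values)
    {
        int total = 0;
        foreach (int v in values) total += v;
        return total;
    }
}
```

A call like Demo.Sum(1, 2, 3) compiles unchanged; it simply binds to the new span overload, which mirrors how the TimeSpan fix adds overloads instead of breaking callers.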
@@IAmTimCorey got it. thanks.
Pardon me if I missed it, but I didn't quite understand why this precision issue exists with only certain values.
Because of the way computers store and manipulate numbers in binary rather than the decimal form they are shown to us in, some fractions that are exact in one number base are inexact in the other.
These are some of the things formal software engineering education teaches, but self-taught programmers often skip or have no clue about. The most common version of this is that you should never compare floats or doubles for exact equality, only check whether they are within a small margin of error (an epsilon) of each other. So many game bugs and so much inconsistent behavior are due to this beginner mistake.
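A quick sketch of that equality point (the tolerance value here is an arbitrary illustration; pick one appropriate to your domain):

```csharp
using System;

double a = 0.1 + 0.2;
double b = 0.3;

Console.WriteLine(a == b); // False: exact equality fails on doubles

// Compare within a tolerance (an epsilon) instead.
const double tolerance = 1e-9;
Console.WriteLine(Math.Abs(a - b) < tolerance); // True
```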
It's the difference between double and decimal. Decimal stores the value differently than double does; however, decimal uses more memory.
The precision "issue" always exists. It is just that it doesn't really show up at the level we notice it at for most numbers. That's the difference. Most of the time we don't care about numbers that small. But sometimes, the precision issues add up to something visible to us. As others have said, this is why it is important to actually understand the tradeoffs with various numeric types.
Why didn't they use decimal?
Good question. My guess is two reasons. First, decimal is expensive; you don't get that additional precision for free. And remember that even with additional precision, you don't get absolute precision. Second, double is the commonly used type for fractional numbers in C# because it is precise enough for most situations. If you had a double and you tried to pass it in as a decimal, it would not work; there is no implicit conversion from double to decimal. Therefore, you would need to explicitly cast every value being passed in that was not already a decimal. That would add a lot more overhead and expense to an already more expensive process.
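To illustrate the conversion point (a minimal sketch; the explicit cast is where the friction and the cost would land on every call site):

```csharp
using System;

double d = 101.832;

// decimal m = d;        // does not compile: no implicit double-to-decimal conversion
decimal m = (decimal)d;  // an explicit cast is required

Console.WriteLine(m);
```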
Interesting and occasionally helpful. I wouldn't, however, consider the previous behavior a bug; it works as intended.
It can be considered a bug because it leads to unexpected behavior. If you put in 101.832, you expect 1 minute 41 seconds and 832 milliseconds. That's not what you get. It can also NOT be considered a bug if you aren't expecting exact precision with your conversions. That's why I said "maybe even a bug fix", because it depends on your expectations and your point of view.
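For the concrete case, a sketch comparing the two overloads, again assuming the .NET 9 TimeSpan.FromSeconds overloads are what the video covers:

```csharp
using System;

// The double overload: 101.832 cannot be represented exactly in binary, so
// this has historically come out a tick short, e.g. 00:01:41.8319999.
TimeSpan viaDouble = TimeSpan.FromSeconds(101.832);

// The .NET 9 integer-based overload: 1 minute, 41 seconds, 832 milliseconds, exactly.
TimeSpan viaParts = TimeSpan.FromSeconds(101, 832);

Console.WriteLine(viaDouble);
Console.WriteLine(viaParts); // 00:01:41.8320000
```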
@IAmTimCorey fair enough
Tim's getting busy these days. Not too long ago we had longer videos, more than a few a week, and the free content was better, but either way, cheers.
For years now, I have been putting out two videos per week. I put out a "coding" video on Mondays at 8am EST and a DevQuestions video on Thursdays at 8am EST. That pattern hasn't changed in years. More recently, people have asked me for shorter, 10-minute training videos rather than just long videos. I've accommodated that request, and people have said that they appreciate this type of video.
I have also mixed in the occasional full course on YouTube. For example, just recently I released a full course on Binary in C# on YouTube. In that case, because I wanted people to get the content more quickly than once a week, I released three per week. That was a special occasion.
More recently, I've also started releasing YouTube Shorts in addition to the normal two videos per week. I'm still working out the pattern that makes sense for them.
By the way, I keep these patterns consistent in almost all cases, including when I am on vacation or traveling to speak at conferences. For example, I spoke at a conference last week. I was away from the office from Wednesday to Friday, yet a video came out on Thursday and this video came out on the following Monday.
Bottom line is that I feel like you are unfairly judging what I do based upon a misinterpretation or a misunderstanding.
Can't fight the bug? Bandage it! 😁
I know you are joking, but this is a really good way to protect existing code while allowing good code going forward. Breaking changes are to be avoided whenever possible. This is a great solution.
@IAmTimCorey I think, under the hood, it's using the usual if-else. 😅