The error is funnier in languages that have unsigned types The sum/2 will end somewhere inside the array and not throw any errors, just search weird places
The amd64 ISA has hardware overflow flags on registers. So there is saturating arithmetic in some languages that just returns maximum value of the type in the event of a rollover.
YES! It's even sneakier if using unsigned types because the values won't even become negative but wrap around to smaller but still wrong values so that often you don't get page faults but simply wrong answers.
It's funny how so many comments point out the performance cost of adding one arithmetic operation. They overlook the fact that arithmetic operation between two already loaded registers is almost instant vs the cache miss monster which is accessing the large array at random positions. You won't measure any significant difference.
@@Herio7 Speed may be a reasonable choice sometimes if it's significant and you can assert that your use case won't be problematic regarding correctness. But even then, speed won't even change by a significant amount here.
There is a LeetCode binary search problem specifically designed to teach you about this. left + (right - left) / 2 is algebraically equivalent to (left + right) / 2 and avoid overflow. It's a handy identity to memorize for coding interviews.
6:13 "when your integer becomes 33 bits" He probably means when it requires 32 bits unsigned or more than 32 bits signed. The 2.1 billion he mentions before is roughly 2^31 so he's mostly talking about the range limits of signed 32-bit integers. Unsigned 32-bit integers go up to over 4 billion.
I love that remark about not having a bug in HIS code because Python protects you from it. But also recognizing this is can be a real problem sometimes and teaching us how to avoid it.
It’s kind of self justifying though. It’s often considered a bad thing that Python doesn’t subject you to the horrors of systems programming. But if Python was trying to make you aware everything’s secretly an object and full of smarts we wouldn’t use it!
@@Freshbott2 Python is intended to mix beginner friendly with being a fully capable language. Preventing oversight type bugs like this is just part of being approachable to those who wouldn't have the experience
Fun fact: l + (r-l)/n is a general formula for equal division where n is the number of subsections. (l+r)/2 is just a special case that happens to work for binary search, but when you move up to ternary search (two division points), (l+r)/3 no longer cuts it and you need the general formula.
@@scottbeard9603lets say L=10 and R=40. You want to divide into 3 equal pieces instead of 2. Formula (L+R)/3 would tell you to split at (40+10)/3 = 50/3 = 16 (and 33 if you do 2(50/3). General formula L + (R-L)/N will tell you to split at 10+30/3 = 20 (and 30 if you do 10+2(30/3)), giving you a split into 3 equal parts.
@@scottbeard9603 since someone else already replied with an example - here have the working out of why it doesnt work for 3, but does work for 2^^ lets have a look at the case of n = 2: l + (r-l)/2 = (2l)/2 + (r-l)/2 = (2l + r - l)/2 = (l + r)/2 now lets look at what happens when you try the same with n = 3: l + (r-l)/3 = (3l)/3 + (r-l)/3 = (3l + r - l)/3 = (2l + r)/3 != (l+r)/3 in general: l + (r-l)/n = (nl)/n + (r-l)/n = (nl + r - l)/n = ((n-1) l + r)/n so technically that way of writing it down does work... as long as you dont omit the n-1 mind you, having the computer work out that division point that way will only get way worse as you add more subdivisions - better to keep the numbers small^^
In java in particular, you could just do: (l+r) >>> 1. This should give the correct answer even if l+r overflows, by treating the resulting sum as unsigned during the "division by 2" step.
@@MichaelFJ1969 That's the thing: They're not 32 bits - they are 31 bits. Java does not have unsigned integers. And because of the properties of the twos complement bit representation used in signed numbers, l+r will have the same representation as if l and r were unsigned - but division does not have this property. Luckily we can do a bitshift to emulate it. I also know that llvm can be weird about unsigned pointer offsets (or so the rust docs say), so this trick will probably (?) also work in C/C++ - but I'd have to dig into the docs to make sure of that.
That's actually what OpenJDK's (the reference Java implementation) binary search does too. If anyone's curious what >>> does in Java, it's the unsigned right shift. Right shifting by 1 is the same as dividing by two, but a typical right shift (>>) or division by two will leave the leading bit intact, so if the number was negative, it will stay negative. An unsigned right shift fills the leading bits with zeroes, so when the value is interpreted as a signed value, it's always positive.
This scenario reminds me about calculating the average using floating point values. The first instinct would be just to sum everything then divide by the amount of values, but floats get more imprecise the further from zero they go. So the average might be off if the intermediate sum is too big. A better approach might be going by partial average, since the intermediate values are smaller. But there are other techniques too that go over my head. I remember one day finding a paper with dozens of pages just detailing how to average floating point numbers, it's one of those problems that at first appear to be simple but are anything but that.
TBF, when Java was written in the 90s, there weren't too many cases where arrays with 2^30 elements were being used. It's not really too surprising it went undetected for so long.
At the time, Java's demand for 32-bit processing was remarkable and often wasteful. Today, waste has been normalized to the point people get offended if you remark a 100GB patch is excessive, and Java has been awkwardly forced into smartcards.
In the 90s, engineers would have never done l+r halved and floor bs, they would have done r-l, shift, and indexed address, because any Intel 8086 can do this in the address generator unit almost twice as fast, in fact some processors would even fuse this instruction sequence. Java, Python. Estrogen for Programs! Nothing else!
Programmers should know to check their indexes before arbitrarily throwing them into an array though, that was a problem way back even in the days of C, it’s not a Java-specific bug
@@0LoneTech I always assumed they needed the 32 bit thing because they wanted to do a lot of linked lists or something, which are pretty useless in a 16-bit address space. They sure didn't do it for speed (I hope)!
Let's try to offer an alternative that does what so many seem to be expecting: division before the addition: m = l/2 + r/2 This breaks when l and r are both odd as there are two individual roundings, for example 3/2+5/2=3 in integer arithmetic, which prevents the algorithm from making progress beyond that point. What you could do is add the last bit back with some bitwise operations (which boils down to "if both l and r are odd, then add 1 to the result"): m = l/2 + r/2 + (l & r & 1) Or just do it as in the video. The /2 will almost certainly be optimized into >>1 by the compiler if it is advantageous on the target cpu.
I'm sure most of us plebe would use reminder operator to add remainders if we for some reason wanted to complicated the code by making 4 divisions instead of only one.
@@rabinadk1 Sounds like a huge extra can of worms and also likely a bit slower (although might not be noticeable compared to the cost of memory accesses).
4:31 The problem is even worse if your language *doesn't* do overflow or bounds-checking, (more common than you might think!) in which case your code will be looking at memory outside your array & Very Bad Things will happen. The way to prevent this in your code is to subtract your current position (CP) from the size of your array, (integer) divide that by 2, add it to CP, giving your next CP. This will work for any size of array that is no larger than your largest possible positive integer. This, of course, is how you handle a task like this in assembler.
In assembler (well, in x86 at least), you can just add the numbers anyway, then do the division with a `rotate right with carry` instruction and are done. :D
Happened to me a decade ago in my home build embedded application. Eeprom with 65536 bytes. Using unsigned index variables only. No signed index at all !! The overflow wrap around is sufficient to cause that bug. Would have caused an endless loop at startup in the eeprom search. Luckily I ve spotted it in a code review befor Eprom was more than half full/used. Thanks for that video.
This is called hadd or rhadd (h=half, r=right) in OpenCL, and there's mix() for arbitrary sections of floating point values. It's an ancient issue; compare e.g. Forth's */ word, which conceptually does a widening multiply before a narrowing divide.
@@dojelnotmyrealname4018, it is very common! It's arithmetic average of two values, and any codebase is bound to have more than a few of those. What's not common at all is operating with values that risk overflowing. Especially in 64 bits; with 32 it's much more of a concern.
Binary Search can be used for more complex tasks, like finding the answer to a problem where you know the range of the possible answers and the problem boils down to solving a monotonic function. I recently stumbled upon a bug with the (l + r) / 2 when l and r can be negative numbers. I was using C++, and in C++ the integer division round in the direction of 0; so, for example, 3 / 2 = 1 and -3 / 2 = -1. But I was expecting -3 / 2 = -2, as it is the nearest integer less than the actual result. In Python that is the behaviour of the integer division: -3 // 2 = -2.
These are insidious bugs, I ran into one due to different rounding modes for floating points (towards zero, nearest, round up, round down). I guess the integer division is rounding towards zero.
That's bit shifting. It exists in all languages, but it's a "low level" operation. Anyways, since each bit in a binary number is a power of two, shifting the representation is the same as doubling/dividing by 2 depending on endianess. Now most languages implement their bitshift operators indépendantly of endianess tho
In this case of right shifting, the unit bit is discarded, so it's integeger division, the remainder is discarded. Note that the unit sigit being the same as the potential remainder only works because the divisor is the same of the base (like dividing 42/10 as remainder 2, the last digit. )
There is a little bit of significance to the number 17. If you ask people to pick a number from one to twenty, apparently 17 is the most popular choice. Plus, if you have a unbalanced twenty sided die that rolls 17 all the time, people are less likely to realize somethings up as it's not as likely to be noticed vs a die that keeps rolling 20. Especially since it will get obfuscated by different modifiers that get added to the roll so it's usually never just a 17. Also, it's prime.
@@asynchronousongs I have no sources, but it does make sense that 17 is the most "random-looking" number from the group. Evens are roundish, so they're out. Same with multiples of 5. Single digit numbers are too simple. 13 has a reputation for being either lucky or unlucky. 19 is too close to the maximum. 11's two digits match. The only number left is 17. Would be interested to see if someone can find an actual source though.
I was wondering at first why just r/2 + l/2 wouldn't work, but with integers, the parts would get floored separately and that would give wrong answers when both r and l are odd.
Plus, division is expensive compared to addition and subtraction. I'm not sure how much difference it would make in practice, but it makes sense to do it only once instead of twice if you can
@@ats10802b Not a big Java user here, but can't you index with a fewer-bit integer in Java? I thought it would be implicitly cast. The point is. you could define the l and r variables as 1-byte or 2-byte ints, then you don't need the billion-element array to recreate the bug.
IIRC Java does arithmetic in int and long only, so shorts and bytes won't actually have this problem. Also, this bug didn't get spotted for so long because nobody had enough RAM to hold arrays of such sizes.
@@tylerbird9301 I think what he's getting at is that the overflow doesn't actually happen at the indexing part of the code, but at the addition part of it. So if L and R are defined at a smaller datatype, then when you added them they'd still overflow resulting in a negative number when it gets used as the array index.
I remember coding a binary-search function by hand once (and it was probably susceptible to this edge case). I specifically wanted it to search for a specific value and return its index, OR if the value was ultimately not found, return the index of the first greater-than value (for use as an insertion point). Nothing too complicated technically, but DANG was it frustrating to debug.
I remember hitting that bug and investigating a bit on it. But it happened so rarely, that we decided not to put any effort into it. Don't remember exactly, would assume we didn't have test cases for that because you don't test standard libraries. "Millions of people use that every single day, and you think _you_ found a bug in that?!?" Still holds an appropriate amount of humility. But _sometimes_ ...
Quite interesting that this was a hidden problem in Java for so long. In microarchitectures this is a well known problem for decades as the numbers don't need to be absolut big, it's enough for both the left and right side number to be greater than half of the max. number representation. There are also patterns where you divide both l and r by 2 with bitshifting every iteration and add or substract them together according the controlflow.
Whenever he said "l+r" I already knew from experience to write some kind of code to prevent an overflow. Has no one else had to endure a C++ class where everything is trying to make your program break?
People who only know high-level languages will not likely be thinking of overflows. If they got a crazy error message about an array being indexed at negative one million or whatever they might eventually realise what is going on, but hand on my heart I'm sure it would take me a while. Hours or days. I wouldnt have anticipated it before time.
@@3rdalbum I primarily use Javascript and that language is so poorly designed that it has me writing custom code to verify an integer instead of it having that built-in type. I think because I am so used to having to fight against the language constantly and being forced to adopt an extremely defensive style of coding, I was more aware of such an error, despite javascript being a higher level language.
I remember a kind-of similar bug in Intersystems Caché: 4 * 3 / 6 returned 2, but 4 / 6 * 3 returned 2.00001 (and this little butterfly ended up destroying the city of New York). This commutative/grouping operations that are the basics in arithmetic, in our code some times they create a lot of mess when released into production environments. I think it is important to add comments in the code so the newer generations understand why we make these changes. E.g.: int m = l + (r - l) / 2; // =====> Do NOT change to m = (l + r) / 2 =====> it can potentially lead to a positive int overflow
Looks like your first example is just an example of floating point coercion (assuming you expected integers), along with the fact that floating point numbers are an approximation of a value, so due to the underlying math behind them you sometimes get tiny weird rounding errors.
One has to be a fool to think computers can support ℝ numbers. They do not, it is absolutely not possible. As such they do not support associativity or commutativity, or many other properties, even the more obvious. 1+ε-1 may not be ε. etc... Even Integer support is partial, at best. Newbies think they can solve the problem by using "fuzzy" rounding or other tricks. Nerds know they have to proof the code they wrote.
@@pvandewyngaerde You just have to keep track of the number you're currently on. For example, if you have two numbers, the average is (a + b) / 2. Let's call this A1. If you add a third number 'c', it's (a + b + c) / 3, which you can rewrite as (a + b) / 3 + c / 3. We can then rewrite the first term as A1 * (2 / 3), giving us A1 * (2 / 3) + c / 3, allowing us to divide before adding. Just to continue the example, if we call our last result A2, then when we add a fourth number 'd', we can compute A2 * (3 / 4) + d / 4.
If you are dealing with super large numbers, just use a 64-bit integer. If you can max out 9 quintillion in the addition then used BCD (binary coded decimal), it's guaranteed to max out your memory before you can max it out. Also, it will be a performance hog.
Integer overflow is how the infamous infinite lives bug on the original Super Mario Bros. for NES works. Lives are stored as a signed value so if you go over what fits you get weird symbols for your number of lives and it becomes impossible to lose cuz the way the math is set up you can’t subtract from a full negative number and wrap back into the positives, but can add to a full positive and wrap into the negatives. Since it only checks if you’re equal to 0 not lower than it, you end up with infinite lives. But it overflows into other important memory and causes other bugs as well.
That's weird that it corrupts other memory. I would have expected lives to be a single unsigned 8 bit variable. Especially since once you get more than 99 the game's print routine indexes out of bounds and starts showing adjacent graphics instead of digits. So obviously the game designers figured 'eh, nobody will get that many extra lives' and didn't bother to check.
That has its own problems. As someone else pointed out, if l=3 and r=5 and your language's integer division operator rounds down, 3/2 + 5/2 gives you 3 and the algorithm loops forever.
A topic I would love to see Computerphile cover: relocatable code. How can a binary be loaded in an arbitrary memory location and still have valid addresses for load/store/branch? How did that work in the old days vs now? What is real mode and what is protected mode? Are there different strategies on different platforms?
The binary is loaded a different address each time ( I'm assuming PIE is set when compiling) and instructions within the binary are always at a constant offset from the base of the ELF with which it's loaded As long as the elf knows it's loading address the instructions are always at a constant offset
Real mode is an outdated operating mode, Intel is removing in X86S. Modern UEFI BIOS'es make it really hard to enter real mode (aka 16 bit mode) outside of SMM. Pushing UEFI boot with secure boot forward does just that. For now SMM (system management mode) starts in real mode and very first thing a CPU does AFTER entering SMM - switches into long protected mode (64 bit mode) inside a SMM mode.
One more thing about different modes. Modern ARM cores are majorly 64bit only (doesnt support 32bit neither for whole OS nor for just applications nor for ARM TrustZone (ARM variant of SMM mode on x86)
A crossover with Numberphile would be nice here. You could have them show why (r+L)/2 = L+(r-L)/2. It's not too hard to show here, though. So I'll try: (r+L)/2 = r/2 + L/2 = r/2 + L/2 + (L/2 - L/2) = r/2 - L/2 + (L/2 + L/2) = (r-L)/2 + L = L + (r-L)/2 Bonus question: why not use the second step? 2 reasons: 1. Division is generally the slowest arithmetic operation, so you want to do as few of them as possible. 2. The fastest math uses integers. Integer division will mean the .5 part gets dropped. So, if both r and L are odd, the midpoint will be off by 1. At least, those are my answers.
Our Prof had a rubber shark he called Bruce. It was supposed to remind us of trouble with integers lurking in the deep. I can't remember the exact lesson. Prof Bill Pulling. Great guy.
Recently wrote an N-dimensional N-ary search function. It's fun to try and scale these basic algorithms, solving all the bugs that show up and improving your understanding.
Would it not be more efficient to just divide both by 2 inside then add them? Then you just have (r>>1) + (l>>1). Though you might have to add the remainder if both are odd, so (r>>1)+(l>>1) + 1&r&l Pretty much the same number operations but you’re not limited to using signed ints.
Would have thought that L/2 + R/2 would have been better than L + (R - L)/2. L/2 is just a bit shift right, so pretty fast. Handling the case where both L and R are odd is not too difficult (just add one).
@@Andersmithy I suspect Mike's implementation is more reliable across architectures and compliers. Dividing by 2 is just a bit shift so L >> 1 + R >> 1 which has got to be pretty quick. The issue is you have to add (L && 1) && (R && 1) to the result to account for both being odd numbers. But there are no branches in the code. So ans = L >> 1 + R >> 1 + ((L && 1) && (R && 1)) (The compiler *could* have a branch in the logical bit - after all, if L && 1 is zero it doesn't have to do the R && 1 part) However, Mike's code is just L + (L - R) >> 1 so, yeah - I suspect subtraction is pretty much implemented in hardware. The thing is, L >> 1 and R >> 1 could be pre-computed and stored (as two equally long arrays), then, maybe my solution would be a femtosecond faster :)
funny i had that problem a few months ago while implementing a binary search on a Microcrontroller. I had to use a Word as the Index so i only had 65535 as Max index and i noticed that the overflow was not handled correctly in the basic binary search. It didnt even ocure to me that such an obvius problem was unknown for a long time.
Basically any code with a sum is prone to this bug then. I do wish languages have a built in error checker where it checks if the output sum is smaller than either input and raises an error if found, although that would also add ungodly overhead. Seems more like an implementation detail than a logical bug though.
I mean for one you can add negative numbers, so you cant just check if its larger. Secondly there is a way to do this, its just not the default since the overflow can be desired behavior. In lower level languages you can just check the CPU register that tells you if there has been an overflow, in java you cant do that but it has a Math.addExact(a,b) that does exactly what you want, it throws an error if there was an overflow. there isnt much overhead in that function, i assume, at least not much more than the java overhead.
On the DEC PDP and VAX machines the default language was BASIC and that by default checked for overflow. It may not be trendy but that may have been a good thing. I did have COBOL code that used PIC 9 (i.e. a single character as a a digit from 0 to 9) and that happily looped trying to get to the 10th element of an array as "9" + "1" just saves "0" into the single character.
In Java and C# the array length is an int. You can easily solve the problem by casting to long first. If you need larger arrays or matrices, they are usually sparse arrays or matrices, which need a more sophisticated datatype implementation anyway.
"You can easily solve the problem by casting to long first."...until the boundaries of long are breached, then we are back to square one. Rule #1: if you are writing a general-purpose routine, always, always write in a 'safe' way. Never, ever assume a limit when none has been implemented.
@@ChrisM541 As i said, the array length is guaranteed to be an integer in Java and C#. The range of int and long are exactly specified. Doesn't make sense to program 'safer' than the language spec. In fact. the current OpenJDK version even takes advantage of the fact, that int is signed: int mid = int mid = (low + high) >>> 1;
Hey, It was a really great explanation, appreciate the effort. There is one small doubt, what if we do L/2 + R/2, this should also work as both L and R in the range so L/2 and R/2 are in the range too.
@alstuart @miamore_un **Assuming Java:** It gives the incorrect result for both even and odd numbers, because L and R aren't defined variables. That's easy to fix by correcting the variable names to l and r. (Java variable names are case-sensitive.) It also gives the incorrect result for odd numbers due to integer division, but that's also easy to fix by changing it to l/2.0+r/2.0 . Regardless, their intent is correct. Who knows, maybe they were writing code in a language that doesn't do integer division like that, and which has case-insensitive variable names!
People nowadays completely ignore hardware and operating system, which has 80% impact on actual performance. OOP languages like Java, Python and Javascript led us to this madness.
I wouldn't have thought to use pointers that way, obviously you subtract l from r and add that offset/2 to l. No overflows possible if you use pointer arithmetic correctly
Funnily enough I just last week watched some C++ convention talk, might've been Kevlin Henney or someone else, mentioning this exact issue where the integers were so big they overflowed before getting the average. Might've actually been about satellite arrays. Perhaps it was Billy Hollis after all, but someone anyway. I was thinking maybe you'd just halve them first, but then again it's possibly two float arithmetic operations which isn't as lovely. Although, you could probably get away with a >> 1 type of trick and get almost free out of jail. Anyway Pound's method is pretty obviously better when it's just one halving and addition/subtraction.
this is more of a standard practice of embedded software engineering than a "bug" in the binary search. you always consider intermediate overflows when doing data unit conversions or working with fixed point decimal numbers.
l + (r - l) * 0.5 That is a standard linear interpolation function. Basically, you walk the value from 'l' to 'r', with the given progress value, which is 0.5 with the example above.
Java primitives are signed. Also, operating with signed numbers can be less error prone than with unsigned numbers. For example, in C/C++ when you take unsigned number and subtract another unsigned number but greater to it, you'll get the wrong result. For example doing the following operation with unsigned 8 bit numbers: 1-3 won't be -2, it will be 254.
Firstly, the problem remains with unsigned integers; the incorrectly calculated index just might access defined memory and lead to more confusing misbehaviour, such as an infinite loop or incorrect answer. Secondly, it can be useful to apply an offset to an index, such as the (r-l)/2 value here, and in other algorithms it wouldn't be odd for such a step to be negative. Thirdly, Java doesn't know the number is an index until it's used to index, and the algorithm isn't array specific. There do exist languages which can restrict index types to match, like Ada or Clash. In Python negative indices index from the right.
The even funnier part is if they were unsigned the bug might have been harder to spot, as the overflow still happens, but this time it starts from 0, so it will give a valid index inside the array, thus not throwing an error. The result would still be incorrect, just harder to notice since no exception was thrown so it would be obvious something was fishy
@@antoniogarest7516That just shifts the over/underflow boundaries around, though. -100-100 isn't 56 either. Java was designed with a 32 bits is enough attitude, Python switches to arbitrary precision, and Zig allows you to specify (u160 and i3 are equally valid types). Ada is also quite happy to have a type going from 256 to 511.
I wonder if any languages have a compiler optimization that tries to simplify mathematical operations: so it would take "l + (r - l) / 2" and turn it back into "( l + r) / 2" 🤔
That style of optimization exists, e.g. in gcc's -ffast-math, which enables some for floating point processing. However, the buggy code has undefined behaviour which the corrected code does not, so this erroneous change should not be produced by an optimization pass.
Generally speaking the answer is no. Compiler optimizations shouldn't change the behavior of the code, so something like this would be considered a bug. Optimizations WILL do things such as replacing multiplication with bitshifting when possible, or reordering math when it doesn't make a difference (for example, multiple consecutive addition operations)
@@antonliakhovitch8306 That optimization would be correct only if the numbers are ideal numbers, behaving exactly like math says. The computer language integers do not do that, they operate in modulo arithmetics.
I like to just implement binary search using bitwise operations. So, for the index that you are looking at, just go through the bits one by one (starting at most significant bit and starting with index zero), set them to 1, and then reset them to 0 if the index is too high. Just make sure to check whether you are within bounds. This way, you don’t need math and therefore can‘t run into integer overflows.
That sounds so wasteful and also weird. Say have an index of 12, everything up to 16 is too big (that's like at least 27 comparisons), then you reach 8, 8 is okay, 8 + 4 is too big, so we turn 4 off again (right?), then 8+2 is okay, and 8+2+1 is okay. So we end up with 11? Or do we stop at 8, which again is not correct. What reason do we ever have to turn off the 8?
What would have happened if the code had used unsigned integer as the type? Just incorrect results, right ? Is that why they chose to use signed integer so if something goes wrong, at least you know about it with an exception?
I love how people comment about using shifts instead of dividing by 2, not realizing that any sane compiler or interpreter of any sane programming language in the last 25 years does this anyway. Don't try to outsmart the compiler in optimizations. You will fail. Let it optimize for you and only then check if there's anything you could improve (and if you can, you probably don't need this advice). Anyway, shifts would probably be a better choice in this code from a style point of view: (l+r)>>>1 looks better than l+(r-l)/2. This only works in Java though.
That would break in the case of L and R being odd. If L and R were 5 and 7, doing integer division on both would return 2 and 3 because it removes the decimal, and the calculated midpoint would be 5.
L+R and R-L should have the same remainder mod 2 and will truncate the same way i believe L + (R/2 - L/2) maybe would be different than (L + R)/2? Testing an example L=1, R=6 (L+R)/2 = 3.5 → 3 L +( R/2 - L/2) = 1 + 3 - 0 = 4 Yup
Java has 'unsigned bit shift right' operator '>>>' which works perfectly for division by powers of 2. (l + r) >>> 1 will work just fine. Intermediate result, the sum of two integers can at most overflow by a single bit which occupies the sign bit. No information is lost, it's just treated as negative number. So, when shifted back with single zero it produces correct positive number.
One other thing I would add is a defensive early return that checks if left is less than right as we assume, because if it is greater than right then it will lead to an underflow. if (l > r) { return new Error("Left pointer is greater than Right pointer. It is a programming bug."); } int midpoint = l + (r - l) / 2;
That won't ever be the case as long as we're searching the _index_ of an array, which can only be a positive integer. First; the following expression: (L + (R - L) / 2) will never underflow, as long as L>=0 and R>=0. This is because L is added back into the result of (R - L) / 2. So even if (R - L) / 2 would be a negative number we add back L to correct for it. This is simply because L is added back into the result of ((R - L) / 2). So even if the result of (R - L) / 2 is negative, it is corrected by the fact that we add back L. Secondly, it would be a compiler/runtime bug, because of the while loop counts from L until R which are bounded by 0...length of array, and only shrunk towards the midpoint resulting in a positive integer. In case we can have negative indices, then we'd need to condtionally check the bound and then (R + L) / 2 instead, otherwise fallback to the previous equation.
If you use signed integers, it would never happen in C, because signed integer overflow is undefined. Basically, compiler (CAN) assume that `l` will never be less than `r` because you never change them in that way. So, be carefull with assumtions like that.
Doesn't happen in assembler because we have a carry flag (RCR to divide 33-bit result by two). Also we know the indices are unsigned values. So, we can use the full range of the register size.
rubbish. Overflow happens in assembler in exactly the same way. Yes you have an overflow flag BUT you have to check it, it doesnt stop the overlfow in the first place and I bet you a car that most assembler programmers would not bother to check for overflow unless they have some idea the numbers were going to get large. Just like its a non event in a language like C unless you know you will be processing large numbers.
@@turdwarbler sure, we need a more verbose definition of the function which matches our assumptions. Only through intrinsics does C have access to the instructions or machine state to handle function definition with the assumptions made - the compiler is not going to grok what the math intends to use the instructions designed for this purpose.
It's actually not a more complicated process it's actually less computationally complex, divides are computationally complex so the smaller your divide is the better (L+R)/2 will always be larger than (R-L)/2, division in a computer is done with N number of additions, so the the new method is actually much more simple for a computer to perform ((R-L)/2)+L
This only changes the place where a bug appears. If an array is defined indexed from -1,000,000,000 to 1,000,000,000 (minus a billion to plus a billion) then then very first step in a binary search will overflow with the new formula. But the original formula will not. To fully fail safe the code would require an if statement to choose the expression that does not overflow...
Why not just divide both numbers by 2 then add them? (I know normally subtraction is faster than division but since it's dividing by 2 just bit shifting should work, plus if you use bit shifting of an integer instead of division then you can do away with flooring the result as well).
How do you handle finding muliple matches in a binary search? I assume when find a match that you have to start a linear search to the left and right of the that and continue until you find something that doesnt match?
You can keep going with binary search, though many implementations only return the midpoint upon match, not the high and low range. Searching for any match, left or right edge are distinct goals, often expressed as looking for the first or last index a value could be inserted while maintaining order.
@@landsgevaer I guess you can do an unsigned right shift (>>> in Java) instead of dividing. This would work fine if both numbers were nonnegative (which is the case with array indices), but breaks for negative numbers.
@@vytah You can shift the l and the r by one bit as an alternative to /2, but then you still need the plus operator to combine them, right? That is essentially the same as using l // 2 + r // 2 (integer division) in python instead of (l+r) // 2 as in the initial video.
that doesn't quite work as integer division isn't -associative- distributive (1 + 1) / 2 = 1 (1 / 2) + (1 / 2) = 0 To avoid overflow while keeping the division you'll need to do (l / 2) + (r / 2) + ((r & l) % 2) which can be optimized by any decent compiler to (l >> 1) + (r >> 1) + (r & l & 1) but at this point the solution shown in the video seems a lot better to me as it's simply l + (r - l) >> 1
If the type of the length was an unsigned int, could it just become a loop without ever letting the user know? (example: if the new m for the user's luck becomes the first average, it would loop again until it got back to the back sum and go back to the initial average) Edit: fix the type of L, R and M should be unsigned int, actually (and length as well, I suppose?)
I didn't understand, being from a c programming background how is array size limited by a language? Isn't that an addressable restriction depending on the architecture?
@@xtifr Still a way to solve the problem by making sure the numbers do not overflow. May not be computationally superior, but it *IS* a solution to the problem. I'll admit that doing an integer subtraction then a division, then an addition is computationally more efficient. I was looking at it from a pure mathematics perspective.
I guess I didn't notice you writing it that way in the first video, but I've known about this weird way of getting half the distance between two indices for a long time and specifically avoided it because of this known issue. I generally use C and adding a single extra instruction that takes one clock is not a big deal for a compiled language. Even at a billion entries it's more than fast enough to not warrant worrying about it. This is one of those edge cases that can ruin your year and is probably one of the reasons that Knuth came up with that stupid phrase. I guess I should've paid more attention in that first video and chided you for it, but I was fighting my Python installation that didn't want to use numpy. Bad Mike!
If l and r are integers, and we do integer division, then l/2 + r/2 may not necessarily be the midpoint of l and r. In Java for example, 5/2 + 9/2 = 2 + 4 = 6, but the midpoint of 5 and 9 is supposed to be 7.
Is should also work if you change the division by 2 to a logical right shift by 1: >>> 1 because logical right shift doesn’t treat the top bit as a sign
Couldn't you do L>>2 + R>>2 (bit shift L and R over by one and add them together)? Effective (L+R)/2 is also equal to L/2 + R/2. Normally for efficiency you want to reduce if's and division unless the division is a multiple of 2.
Dividing by two via this method is not recommended. The Java compiler already optimizes divisions, and l/2+r/2 is much more human-readable. Remember, the person updating your code 5 years from now might be an intern still in college; write it with them in mind. This is also true for C++. I'm assuming it's also true for most other languages. There is a Stack Overflow question for "which-is-better-option-to-use-for-dividing-an-integer-number-by-2" which goes into more detail about which is more appropriate for the situation. I'll post a URL directly to the question in my next comment, as TH-cam comments containing URLs are held for review.
If your machine model uses two's complement to represent negative numbers (i.e. left bit for the sign, like almost every computer; Java even requires it on a language level), then another solution would be, to replace the division-by-2 by an unsigned shift-right operation. This works, because an addition of non-negative numbers can at most overflow by one bit, i.e. into the sign-bit, but never beyond that, e.g. 01111111+01111111=11111110. An unsigned shift-right operation by 1 always shifts-in a zero on the left, thereby undoing the overflow while being mathematically equivalent to a truncated division-by-2 (exactly what we need), e.g. 11111110>>>1=01111111.
Why not just use the normal maths answer of r/2+l/2? Is it just an attempt to save on a slightly costly division operand or am I missing something else?
I would rather go with L/2 + R/2. That is just 2 Bit shift operations and a addition afterwards. Especially if I'm coding for embedded hardware. The compiler will probably catch that for me, but i can make sure, that it will happen by applying the bit shift manually. I don't know if a embedded processor can actually perform a bitshift on load of a variable. Or bitshift after a addition. That might be a specific instruction you could perform. Maybe the compiler is intelligent enough to see, what you're trying to do and compiles that error away.
nice edge case. I wonder if the speed penalty introduced by the extra arithmetic could be reduced by computing the truncated average with right shifts and bitwise AND? like (l >> 1) + (r >> 1) + ( l & 1 & r) (Is this even correct? I did not doubt this enough, maybe.)
It is correct and slightly faster than even (l + r) / 2, but we can do better than that. Integer.MAX_VALUE + Integer.MAX_VALUE does not overflow if we view the bit patterns as representing unsigned ints. So we can simply do: (l + r) >>> 1 Summing all l in: 0 < l < 100000 with all r in: l < r < 100000, this is 35% faster than l + (r - l) / 2, and 21% faster than (l + r) / 2 on my system. So (l + r) / 2 had a speed penalty to begin with.
Is there not a better way than “l+ (r - l)/2”? Couldn’t you instead just do “l/2 + r/2” and since dividing by 2 is just a bit shift it would only require two bit shifts and an addition, rather than one addition, one subtraction and a divide/bit shift? Also since the divide/bit shift and then addition it would ensure that the integer doesn’t overflow just like the used formula does.
Just a random little fact: You can use underscores to make numbers more readable in java (for example: 1_000_000.)
It's also a thing in Python 3.6+, JS, Rust, C++14 (though with an apostrophe instead of an underscore), and probably a bunch of others
@@Qbe_Rootand c#
Niice.
That will come in handy for a project I'm working on.
Many thanks! :D
🖤
In pretty much any language! Rust, python, C#, etc.
The error is funnier in languages that have unsigned types
The sum/2 will end somewhere inside the array and not throw any errors, just search weird places
Takes longer to discover the bug, but it's more fun I promise!
That's exactly the comment, I was going to write. 🤝🖖
The amd64 ISA has hardware overflow flags on registers. So there is saturating arithmetic in some languages that just returns maximum value of the type in the event of a rollover.
YES! It's even sneakier if using unsigned types because the values won't even become negative but wrap around to smaller but still wrong values so that often you don't get page faults but simply wrong answers.
This is why in C++ unsigned integer overflow is undefined behaviour.
It's funny how so many comments point out the performance cost of adding one arithmetic operation. They overlook the fact that arithmetic operation between two already loaded registers is almost instant vs the cache miss monster which is accessing the large array at random positions. You won't measure any significant difference.
Yep. That might have been a valid concern 40/50 years ago, but not today
I'm more baffled that those people preferred speed over correctness in logN algorithm...
@@Herio7 Speed may be a reasonable choice sometimes if it's significant and you can assert that your use case won't be problematic regarding correctness. But even then, speed won't even change by a significant amount here.
true that, its the people who never measured performance themself and rely on their (very) limited knowledge of the insanely sophisticated CPU
The people who will spend an hour and a megabyte to save a microsecond and part of a byte
There is a LeetCode binary search problem specifically designed to teach you about this.
left + (right - left) / 2 is algebraically equivalent to (left + right) / 2 and avoid overflow. It's a handy identity to memorize for coding interviews.
Doing this before leetcodes
Now try (left & right) + ((left ^ right) >> 1)
Or better yet: (left>>1) + (right>>1)
note that it can still overflow in general.
A/2 + B/2 however can't ( i think )
@@__Just_This_Guy__ have you tested it on two odd numbers
6:13 "when your integer becomes 33 bits" He probably means when it requires 32 bits unsigned or more than 32 bits signed. The 2.1 billion he mentions before is roughly 2^31 so he's mostly talking about the range limits of signed 32-bit integers. Unsigned 32-bit integers go up to over 4 billion.
I love that remark about not having a bug in HIS code because Python protects you from it. But also recognizing this is can be a real problem sometimes and teaching us how to avoid it.
It’s kind of self justifying though. It’s often considered a bad thing that Python doesn’t subject you to the horrors of systems programming. But if Python was trying to make you aware everything’s secretly an object and full of smarts we wouldn’t use it!
@@Freshbott2 Python is intended to mix beginner friendly with being a fully capable language. Preventing oversight type bugs like this is just part of being approachable to those who wouldn't have the experience
Fun fact: l + (r-l)/n is a general formula for equal division where n is the number of subsections. (l+r)/2 is just a special case that happens to work for binary search, but when you move up to ternary search (two division points), (l+r)/3 no longer cuts it and you need the general formula.
Someone please explain this with an example. It looks so obvious but I don’t know what it means 😂
@@scottbeard9603lets say L=10 and R=40. You want to divide into 3 equal pieces instead of 2. Formula (L+R)/3 would tell you to split at (40+10)/3 = 50/3 = 16 (and 33 if you do 2(50/3). General formula L + (R-L)/N will tell you to split at 10+30/3 = 20 (and 30 if you do 10+2(30/3)), giving you a split into 3 equal parts.
@@scottbeard9603 since someone else already replied with an example - here have the working out of why it doesnt work for 3, but does work for 2^^
lets have a look at the case of n = 2:
l + (r-l)/2 = (2l)/2 + (r-l)/2 = (2l + r - l)/2 = (l + r)/2
now lets look at what happens when you try the same with n = 3:
l + (r-l)/3 = (3l)/3 + (r-l)/3 = (3l + r - l)/3 = (2l + r)/3 != (l+r)/3
in general:
l + (r-l)/n = (nl)/n + (r-l)/n = (nl + r - l)/n = ((n-1) l + r)/n
so technically that way of writing it down does work... as long as you dont omit the n-1
mind you, having the computer work out that division point that way will only get way worse as you add more subdivisions - better to keep the numbers small^^
"Fun fact" -- you obviously don't know the meaning of the word "fun"
Well, I had fun.
5:23 the graphic is wrong. it'd result in a negative sum as r>l. it should say r-l.
In java in particular, you could just do: (l+r) >>> 1.
This should give the correct answer even if l+r overflows, by treating the resulting sum as unsigned during the "division by 2" step.
I think you're wrong: If l and r are both 32 bits in size, then l+r will be 33 bits. Where do you store this intermediate upper bit?
@@MichaelFJ1969 That's the thing: They're not 32 bits - they are 31 bits. Java does not have unsigned integers. And because of the properties of the twos complement bit representation used in signed numbers, l+r will have the same representation as if l and r were unsigned - but division does not have this property. Luckily we can do a bitshift to emulate it.
I also know that llvm can be weird about unsigned pointer offsets (or so the rust docs say), so this trick will probably (?) also work in C/C++ - but I'd have to dig into the docs to make sure of that.
That's actually what OpenJDK's (the reference Java implementation) binary search does too.
If anyone's curious what >>> does in Java, it's the unsigned right shift. Right shifting by 1 is the same as dividing by two, but a typical right shift (>>) or division by two will leave the leading bit intact, so if the number was negative, it will stay negative. An unsigned right shift fills the leading bits with zeroes, so when the value is interpreted as a signed value, it's always positive.
Java doesn't have unsigned integers? Wow that is terrible.
iam convinced best teachers are really good communicators
This scenario reminds me about calculating the average using floating point values. The first instinct would be just to sum everything then divide by the amount of values, but floats get more imprecise the further from zero they go. So the average might be off if the intermediate sum is too big. A better approach might be going by partial average, since the intermediate values are smaller. But there are other techniques too that go over my head.
I remember one day finding a paper with dozens of pages just detailing how to average floating point numbers, it's one of those problems that at first appear to be simple but are anything but that.
Kahan sum is your friend.
Yep. It's really a matter of "pick your poison".
TBF, when Java was written in the 90s, there weren't too many cases where arrays with 2^30 elements were being used. It's not really too surprising it went undetected for so long.
At the time, Java's demand for 32-bit processing was remarkable and often wasteful. Today, waste has been normalized to the point people get offended if you remark a 100GB patch is excessive, and Java has been awkwardly forced into smartcards.
In the 90s, engineers would have never done l+r halved and floor bs, they would have done r-l, shift, and indexed address, because any Intel 8086 can do this in the address generator unit almost twice as fast, in fact some processors would even fuse this instruction sequence. Java, Python. Estrogen for Programs! Nothing else!
Programmers should know to check their indexes before arbitrarily throwing them into an array though, that was a problem way back even in the days of C, it’s not a Java-specific bug
@@0LoneTech I always assumed they needed the 32 bit thing because they wanted to do a lot of linked lists or something, which are pretty useless in a 16-bit address space. They sure didn't do it for speed (I hope)!
Before 64 bit operating systems, you couldn't have an array that big anyway.
Let's try to offer an alternative that does what so many seem to be expecting: division before the addition:
m = l/2 + r/2
This breaks when l and r are both odd as there are two individual roundings, for example 3/2+5/2=3 in integer arithmetic, which prevents the algorithm from making progress beyond that point.
What you could do is add the last bit back with some bitwise operations (which boils down to "if both l and r are odd, then add 1 to the result"):
m = l/2 + r/2 + (l & r & 1)
Or just do it as in the video. The /2 will almost certainly be optimized into >>1 by the compiler if it is advantageous on the target cpu.
I'm sure most of us plebe would use reminder operator to add remainders if we for some reason wanted to complicated the code by making 4 divisions instead of only one.
I planned to use a floating point to remove doing the bitwise operation and later cast it back to int.
@@rabinadk1 Sounds like a huge extra can of worms and also likely a bit slower (although might not be noticeable compared to the cost of memory accesses).
My first idea🤣 Thx for highlighting the problem it would have and for the solution ;0)
I wish I had teachers like you guys on my university. Thank you for making such great quality videos.
4:31 The problem is even worse if your language *doesn't* do overflow or bounds-checking, (more common than you might think!) in which case your code will be looking at memory outside your array & Very Bad Things will happen. The way to prevent this in your code is to subtract your current position (CP) from the size of your array, (integer) divide that by 2, add it to CP, giving your next CP. This will work for any size of array that is no larger than your largest possible positive integer. This, of course, is how you handle a task like this in assembler.
this comment is the whole video summed up.
In assembler (well, in x86 at least), you can just add the numbers anyway, then do the division with a `rotate right with carry` instruction and are done. :D
Happened to me a decade ago in my home build embedded application. Eeprom with 65536 bytes. Using unsigned index variables only. No signed index at all !! The overflow wrap around is sufficient to cause that bug. Would have caused an endless loop at startup in the eeprom search. Luckily I ve spotted it in a code review befor Eprom was more than half full/used. Thanks for that video.
ho li fuk that takes me back. i remember eprom's from my TSR days in the middle 1980s
Rust has a function for "midpoint between two numbers" for this exact situation somehow.
😂 what
C++ just recently added std::midpoint function as well.
This is called hadd or rhadd (h=half, r=right) in OpenCL, and there's mix() for arbitrary sections of floating point values. It's an ancient issue; compare e.g. Forth's */ word, which conceptually does a widening multiply before a narrowing divide.
It's almost as if this particular operation is remarkably common actually.
@@dojelnotmyrealname4018, it is very common! It's arithmetic average of two values, and any codebase is bound to have more than a few of those. What's not common at all is operating with values that risk overflowing. Especially in 64 bits; with 32 it's much more of a concern.
Binary Search can be used for more complex tasks, like finding the answer to a problem where you know the range of the possible answers and the problem boils down to solving a monotonic function. I recently stumbled upon a bug with the (l + r) / 2 when l and r can be negative numbers. I was using C++, and in C++ the integer division round in the direction of 0; so, for example, 3 / 2 = 1 and -3 / 2 = -1. But I was expecting -3 / 2 = -2, as it is the nearest integer less than the actual result. In Python that is the behaviour of the integer division: -3 // 2 = -2.
@@GeorgeValkov yep, I learned about it after I found the bug.
These are insidious bugs, I ran into one due to different rounding modes for floating points (towards zero, nearest, round up, round down). I guess the integer division is rounding towards zero.
@@GeorgeValkovwhat's that? I'm new to c++, so i don't really understand what that means
That's bit shifting. It exists in all languages, but it's a "low level" operation. Anyways, since each bit in a binary number is a power of two, shifting the representation is the same as doubling/dividing by 2 depending on endianess. Now most languages implement their bitshift operators indépendantly of endianess tho
In this case of right shifting, the unit bit is discarded, so it's integeger division, the remainder is discarded. Note that the unit sigit being the same as the potential remainder only works because the divisor is the same of the base (like dividing 42/10 as remainder 2, the last digit. )
The way of explaining this was really good.
There is a little bit of significance to the number 17. If you ask people to pick a number from one to twenty, apparently 17 is the most popular choice. Plus, if you have a unbalanced twenty sided die that rolls 17 all the time, people are less likely to realize somethings up as it's not as likely to be noticed vs a die that keeps rolling 20. Especially since it will get obfuscated by different modifiers that get added to the roll so it's usually never just a 17. Also, it's prime.
wait what why? source?
@@asynchronousongswhy would someone go on the internet and confidently spread misinformation 🙃
@@asynchronousongs I have no sources, but it does make sense that 17 is the most "random-looking" number from the group. Evens are roundish, so they're out. Same with multiples of 5. Single digit numbers are too simple. 13 has a reputation for being either lucky or unlucky. 19 is too close to the maximum. 11's two digits match. The only number left is 17. Would be interested to see if someone can find an actual source though.
17 is known to be "the least random number", the "Jargon File" say.
I was expecting him to choose 42. 😢
I was wondering at first why just r/2 + l/2 wouldn't work, but with integers, the parts would get floored separately and that would give wrong answers when both r and l are odd.
wouldn't
r
i thik it should be r >> 1 + l >> 1
Plus, division is expensive compared to addition and subtraction. I'm not sure how much difference it would make in practice, but it makes sense to do it only once instead of twice if you can
@@dualunitfold5304 that is true, but integer division by 2 can be done by a single bitshift?
@@nimcompoo Yeah you're right, I didn't think about that :D
For demonstration purposes, could also have defined l and r as byte or short integers...
The array index are always int
@@ats10802b Not a big Java user here, but can't you index with a fewer-bit integer in Java? I thought it would be implicitly cast.
The point is. you could define the l and r variables as 1-byte or 2-byte ints, then you don't need the billion-element array to recreate the bug.
i don't think you can have anything other than int for indicies
IIRC Java does arithmetic in int and long only, so shorts and bytes won't actually have this problem.
Also, this bug didn't get spotted for so long because nobody had enough RAM to hold arrays of such sizes.
@@tylerbird9301 I think what he's getting at is that the overflow doesn't actually happen at the indexing part of the code, but at the addition part of it. So if L and R are defined at a smaller datatype, then when you added them they'd still overflow resulting in a negative number when it gets used as the array index.
I remember coding a binary-search function by hand once (and it was probably susceptible to this edge case). I specifically wanted it to search for a specific value and return its index, OR if the value was ultimately not found, return the index of the first greater-than value (for use as an insertion point). Nothing too complicated technically, but DANG was it frustrating to debug.
Sounds like you should've used a hash table
I remember hitting that bug and investigating a bit on it. But it happened so rarely, that we decided not to put any effort into it.
Don't remember exactly, would assume we didn't have test cases for that because you don't test standard libraries. "Millions of people use that every single day, and you think _you_ found a bug in that?!?"
Still holds an appropriate amount of humility. But _sometimes_ ...
Quite interesting that this was a hidden problem in Java for so long. In microarchitectures this is a well known problem for decades as the numbers don't need to be absolut big, it's enough for both the left and right side number to be greater than half of the max. number representation.
There are also patterns where you divide both l and r by 2 with bitshifting every iteration and add or substract them together according the controlflow.
Whenever he said "l+r" I already knew from experience to write some kind of code to prevent an overflow.
Has no one else had to endure a C++ class where everything is trying to make your program break?
C++ programmers have a different mindset 😂
@@ali.ganbarov _C++_ has a different mindset, it will try to break _you_
People who only know high-level languages will not likely be thinking of overflows. If they got a crazy error message about an array being indexed at negative one million or whatever they might eventually realise what is going on, but hand on my heart I'm sure it would take me a while. Hours or days. I wouldnt have anticipated it before time.
@@3rdalbum I primarily use Javascript and that language is so poorly designed that it has me writing custom code to verify an integer instead of it having that built-in type.
I think because I am so used to having to fight against the language constantly and being forced to adopt an extremely defensive style of coding, I was more aware of such an error, despite javascript being a higher level language.
I remember a kind-of similar bug in Intersystems Caché: 4 * 3 / 6 returned 2, but 4 / 6 * 3 returned 2.00001 (and this little butterfly ended up destroying the city of New York).
This commutative/grouping operations that are the basics in arithmetic, in our code some times they create a lot of mess when released into production environments.
I think it is important to add comments in the code so the newer generations understand why we make these changes. E.g.:
int m = l + (r - l) / 2; // =====> Do NOT change to m = (l + r) / 2 =====> it can potentially lead to a positive int overflow
Looks like your first example is just an example of floating point coercion (assuming you expected integers), along with the fact that floating point numbers are an approximation of a value, so due to the underlying math behind them you sometimes get tiny weird rounding errors.
One has to be a fool to think computers can support ℝ numbers. They do not, it is absolutely not possible. As such they do not support associativity or commutativity, or many other properties, even the more obvious. 1+ε-1 may not be ε. etc...
Even Integer support is partial, at best.
Newbies think they can solve the problem by using "fuzzy" rounding or other tricks.
Nerds know they have to proof the code they wrote.
No need to add comment. Add a unit test with big number. It will fail when someone changes it.
I can see a similar overflow problem happening when summing up for an 'average'
😬
Divide by how much if you dont know the number of items yet ?
@@pvandewyngaerde
You just have to keep track of the number you're currently on. For example, if you have two numbers, the average is (a + b) / 2. Let's call this A1.
If you add a third number 'c', it's (a + b + c) / 3, which you can rewrite as (a + b) / 3 + c / 3. We can then rewrite the first term as A1 * (2 / 3), giving us A1 * (2 / 3) + c / 3, allowing us to divide before adding.
Just to continue the example, if we call our last result A2, then when we add a fourth number 'd', we can compute A2 * (3 / 4) + d / 4.
@@DanStoza That would lead to less accurate results though, computers are bad with accuracy in divisions.
If you are dealing with super large numbers, just use a 64-bit integer. If you can max out 9 quintillion in the addition then used BCD (binary coded decimal), it's guaranteed to max out your memory before you can max it out. Also, it will be a performance hog.
Nicely explained.
Integer overflow is how the infamous infinite lives bug on the original Super Mario Bros. for NES works. Lives are stored as a signed value so if you go over what fits you get weird symbols for your number of lives and it becomes impossible to lose cuz the way the math is set up you can’t subtract from a full negative number and wrap back into the positives, but can add to a full positive and wrap into the negatives. Since it only checks if you’re equal to 0 not lower than it, you end up with infinite lives. But it overflows into other important memory and causes other bugs as well.
That's weird that it corrupts other memory. I would have expected lives to be a single unsigned 8 bit variable. Especially since once you get more than 99 the game's print routine indexes out of bounds and starts showing adjacent graphics instead of digits. So obviously the game designers figured 'eh, nobody will get that many extra lives' and didn't bother to check.
5:11 You can also do ((x/2) + (y/2)). Some people may find this easier.
That has its own problems. As someone else pointed out, if l=3 and r=5 and your language's integer division operator rounds down, 3/2 + 5/2 gives you 3 and the algorithm loops forever.
A topic I would love to see Computerphile cover: relocatable code. How can a binary be loaded in an arbitrary memory location and still have valid addresses for load/store/branch? How did that work in the old days vs now? What is real mode and what is protected mode? Are there different strategies on different platforms?
Yes! I support your request!
The binary is loaded a different address each time ( I'm assuming PIE is set when compiling) and instructions within the binary are always at a constant offset from the base of the ELF with which it's loaded
As long as the elf knows it's loading address the instructions are always at a constant offset
Real mode is an outdated operating mode, Intel is removing in X86S.
Modern UEFI BIOS'es make it really hard to enter real mode (aka 16 bit mode) outside of SMM. Pushing UEFI boot with secure boot forward does just that.
For now SMM (system management mode) starts in real mode and very first thing a CPU does AFTER entering SMM - switches into long protected mode (64 bit mode) inside a SMM mode.
One more thing about different modes. Modern ARM cores are majorly 64bit only (doesnt support 32bit neither for whole OS nor for just applications nor for ARM TrustZone (ARM variant of SMM mode on x86)
At 5:23 is it supposed to be R-L instead of L-R? Maybe it works both ways I don’t know.
Yeah, it should have been R-L. It won't work both ways since (L-R)/2 will be a negative value.
A crossover with Numberphile would be nice here. You could have them show why (r+L)/2 = L+(r-L)/2.
It's not too hard to show here, though. So I'll try:
(r+L)/2
= r/2 + L/2
= r/2 + L/2 + (L/2 - L/2)
= r/2 - L/2 + (L/2 + L/2)
= (r-L)/2 + L
= L + (r-L)/2
Bonus question: why not use the second step? 2 reasons:
1. Division is generally the slowest arithmetic operation, so you want to do as few of them as possible.
2. The fastest math uses integers. Integer division will mean the .5 part gets dropped. So, if both r and L are odd, the midpoint will be off by 1.
At least, those are my answers.
Our Prof had a rubber shark he called Bruce. It was supposed to remind us of trouble with integers lurking in the deep. I can't remember the exact lesson. Prof Bill Pulling. Great guy.
I like the related ternary search used to find a minimum in a convex array/function.
Recently wrote an N-dimensional N-ary search function. It's fun to try and scale these basic algorithms, solving all the bugs that show up and improving your understanding.
Would it not be more efficient to just divide both by 2 inside then add them? Then you just have (r>>1) + (l>>1).
Though you might have to add the remainder if both are odd, so
(r>>1)+(l>>1) + 1&r&l
Pretty much the same number operations but you’re not limited to using signed ints.
Well why not l + (r-l)>>1 and cut out the comparison. The addition/subtraction is nothing compared to division
@@rb1471 right, I was thinking l could be bigger than r, but that can never happen :)
(l + r) >>> 1 works as long as they are signed (limited to max positive int value). >>> is unsigned shift.
Would have thought that L/2 + R/2 would have been better than L + (R - L)/2. L/2 is just a bit shift right, so pretty fast. Handling the case where both L and R are odd is not too difficult (just add one).
So you’re performing the same number of operations, but swapping a subtraction for division. But also you’re branching to add 1/4 of the time?
@@Andersmithy I suspect Mike's implementation is more reliable across architectures and compliers. Dividing by 2 is just a bit shift so L >> 1 + R >> 1 which has got to be pretty quick. The issue is you have to add (L && 1) && (R && 1) to the result to account for both being odd numbers. But there are no branches in the code.
So ans = L >> 1 + R >> 1 + ((L && 1) && (R && 1))
(The compiler *could* have a branch in the logical bit - after all, if L && 1 is zero it doesn't have to do the R && 1 part)
However, Mike's code is just L + (L - R) >> 1 so, yeah - I suspect subtraction is pretty much implemented in hardware.
The thing is, L >> 1 and R >> 1 could be pre-computed and stored (as two equally long arrays), then, maybe my solution would be a femtosecond faster :)
5:23 There's a typo. It should be R-L, not L-R
funny i had that problem a few months ago while implementing a binary search on a Microcrontroller. I had to use a Word as the Index so i only had 65535 as Max index and i noticed that the overflow was not handled correctly in the basic binary search. It didnt even ocure to me that such an obvius problem was unknown for a long time.
Basically any code with a sum is prone to this bug then. I do wish languages have a built in error checker where it checks if the output sum is smaller than either input and raises an error if found, although that would also add ungodly overhead. Seems more like an implementation detail than a logical bug though.
I mean for one you can add negative numbers, so you cant just check if its larger. Secondly there is a way to do this, its just not the default since the overflow can be desired behavior. In lower level languages you can just check the CPU register that tells you if there has been an overflow, in java you cant do that but it has a Math.addExact(a,b) that does exactly what you want, it throws an error if there was an overflow. there isnt much overhead in that function, i assume, at least not much more than the java overhead.
you could also use a bigint library that doesnt have any overflow, but that will most definitely have much more overhead than using a intrinsic type.
Overflows are often used as clock signals, particularly in embedded electronics, so preventing them by default isn't ideal.
There are compiling flags for most languages that do exactly that, it's just that they are debugging flags and almost no one uses them unfortunately.
On the DEC PDP and VAX machines the default language was BASIC and that by default checked for overflow. It may not be trendy but that may have been a good thing.
I did have COBOL code that used PIC 9 (i.e. a single character as a a digit from 0 to 9) and that happily looped trying to get to the 10th element of an array as "9" + "1" just saves "0" into the single character.
The logical right shift >>> will also fix it, because it'll return from negative to positive if necessary: (l+r) >>> 1.
There’s a mistake in the visual 5:23 it should be R-L not L-R
Great job with the lighting.
7:45 Why do you pronounce null as "nord"?
Naught
@@noel.friedrich
For me it sounds exactly like when someone advertises NordVPN, so I spelled it "nord".
In Java and C# the array length is an int. You can easily solve the problem by casting to long first.
If you need larger arrays or matrices, they are usually sparse arrays or matrices, which need a more sophisticated datatype implementation anyway.
"You can easily solve the problem by casting to long first."...until the boundaries of long are breached, then we are back to square one.
Rule #1: if you are writing a general-purpose routine, always, always write in a 'safe' way. Never, ever assume a limit when none has been implemented.
@@ChrisM541 As i said, the array length is guaranteed to be an integer in Java and C#. The range of int and long are exactly specified.
Doesn't make sense to program 'safer' than the language spec.
In fact. the current OpenJDK version even takes advantage of the fact, that int is signed:
int mid = int mid = (low + high) >>> 1;
You can also interpret the sum of two signed integers unsigned (has one more bit) and divide it by 2 to get it back into range
Hey,
It was a really great explanation, appreciate the effort.
There is one small doubt, what if we do L/2 + R/2, this should also work as both L and R in the range so L/2 and R/2 are in the range too.
I was thinking the same, but l + (r - l) / 2 is only 1 division, so maybe it is more efficient.
L/2 + R/2 gives the incorrect result when L and R are both odd numbers. Can you think of why?
@alstuart @miamore_un
**Assuming Java:**
It gives the incorrect result for both even and odd numbers, because L and R aren't defined variables. That's easy to fix by correcting the variable names to l and r. (Java variable names are case-sensitive.)
It also gives the incorrect result for odd numbers due to integer division, but that's also easy to fix by changing it to l/2.0+r/2.0 .
Regardless, their intent is correct. Who knows, maybe they were writing code in a language that doesn't do integer division like that, and which has case-insensitive variable names!
@@alstuart Not sure my other comment notified you correctly. Sorry if this is a double-ping!
People nowadays completely ignore hardware and operating system, which has 80% impact on actual performance.
OOP languages like Java, Python and Javascript led us to this madness.
IntelliJ/JetBrains IDEs are awesome.
I wouldn't have thought to use pointers that way, obviously you subtract l from r and add that offset/2 to l. No overflows possible if you use pointer arithmetic correctly
Funnily enough I just last week watched some C++ convention talk, might've been Kevlin Henney or someone else, mentioning this exact issue where the integers were so big they overflowed before getting the average. Might've actually been about satellite arrays. Perhaps it was Billy Hollis after all, but someone anyway.
I was thinking maybe you'd just halve them first, but then again it's possibly two float arithmetic operations which isn't as lovely. Although, you could probably get away with a >> 1 type of trick and get almost free out of jail. Anyway Pound's method is pretty obviously better when it's just one halving and addition/subtraction.
this is more of a standard practice of embedded software engineering than a "bug" in the binary search. you always consider intermediate overflows when doing data unit conversions or working with fixed point decimal numbers.
also you can do (r + l) >> 1. shifting right will effectively divide by two regardless of overflow
"That's a Python comment, not a Java comment"
THE PAIN! Every goddamn time!
Appropriate punishment for using end-of-line comment. 😂
Nice, I thought, the solution would be: m = l/2 + r/2, but maybe you get in trouble for odd number of l and r.
Integer division can't result in a decimal. In Java, 5/2 == 2, not 2.5
So for (l, r) = (1, 3), you get (1/2) + (3/2) = 0 + 1 = 1.
I thought the same at first, but addition is a simpler operation than division is why they did it this way. You could also do r - (r - l) / 2.
@@thomasbrotherton4556 Division by 2 is a simple bit shift
This was my first thought, though I was thinking with a bit shift as that should be faster: m = (l >> 1) + (r >> 1);
@@SomeNerdOutThere ... + (l & r & 1) . odd numbers fixed.
l + (r - l) * 0.5
That is a standard linear interpolation function. Basically, you walk the value from 'l' to 'r', with the given progress value, which is 0.5 with the example above.
Why does Java use signed integers for an array structure that doesn't allow negative integers?
Java primitives are signed. Also, operating with signed numbers can be less error prone than with unsigned numbers. For example, in C/C++ when you take unsigned number and subtract another unsigned number but greater to it, you'll get the wrong result. For example doing the following operation with unsigned 8 bit numbers: 1-3 won't be -2, it will be 254.
Firstly, the problem remains with unsigned integers; the incorrectly calculated index just might access defined memory and lead to more confusing misbehaviour, such as an infinite loop or incorrect answer. Secondly, it can be useful to apply an offset to an index, such as the (r-l)/2 value here, and in other algorithms it wouldn't be odd for such a step to be negative. Thirdly, Java doesn't know the number is an index until it's used to index, and the algorithm isn't array specific. There do exist languages which can restrict index types to match, like Ada or Clash. In Python negative indices index from the right.
The even funnier part is if they were unsigned the bug might have been harder to spot, as the overflow still happens, but this time it starts from 0, so it will give a valid index inside the array, thus not throwing an error. The result would still be incorrect, just harder to notice since no exception was thrown so it would be obvious something was fishy
Java doesn't do unsigned. The creators never liked the idea, and it's just the way the language is.
@@antoniogarest7516That just shifts the over/underflow boundaries around, though. -100-100 isn't 56 either. Java was designed with a 32 bits is enough attitude, Python switches to arbitrary precision, and Zig allows you to specify (u160 and i3 are equally valid types). Ada is also quite happy to have a type going from 256 to 511.
needing a second to think of the number right before 1.2B is super relatable, lol
I wonder if any languages have a compiler optimization that tries to simplify mathematical operations: so it would take "l + (r - l) / 2" and turn it back into "( l + r) / 2" 🤔
That style of optimization exists, e.g. in gcc's -ffast-math, which enables some for floating point processing. However, the buggy code has undefined behaviour which the corrected code does not, so this erroneous change should not be produced by an optimization pass.
I think you made the code more complicated and expensive by doing that.
Generally speaking the answer is no. Compiler optimizations shouldn't change the behavior of the code, so something like this would be considered a bug.
Optimizations WILL do things such as replacing multiplication with bitshifting when possible, or reordering math when it doesn't make a difference (for example, multiple consecutive addition operations)
@@antonliakhovitch8306 That optimization would be correct only if the numbers are ideal numbers, behaving exactly like math says. The computer language integers do not do that, they operate in modulo arithmetics.
@@JarkkoHietaniemi Multiple consecutive additions are fine to reorder, even with overflow
Bit captivated by the lighting in this ep. Made me feel like Dr Mike was giving me a Voight-Kampff test.
I like to just implement binary search using bitwise operations.
So, for the index that you are looking at, just go through the bits one by one (starting at most significant bit and starting with index zero), set them to 1, and then reset them to 0 if the index is too high. Just make sure to check whether you are within bounds.
This way, you don’t need math and therefore can‘t run into integer overflows.
That sounds so wasteful and also weird. Say have an index of 12, everything up to 16 is too big (that's like at least 27 comparisons), then you reach 8, 8 is okay, 8 + 4 is too big, so we turn 4 off again (right?), then 8+2 is okay, and 8+2+1 is okay. So we end up with 11? Or do we stop at 8, which again is not correct. What reason do we ever have to turn off the 8?
What would have happened if the code had used unsigned integer as the type? Just incorrect results, right ? Is that why they chose to use signed integer so if something goes wrong, at least you know about it with an exception?
I love how people comment about using shifts instead of dividing by 2, not realizing that any sane compiler or interpreter of any sane programming language in the last 25 years does this anyway. Don't try to outsmart the compiler in optimizations. You will fail. Let it optimize for you and only then check if there's anything you could improve (and if you can, you probably don't need this advice).
Anyway, shifts would probably be a better choice in this code from a style point of view: (l+r)>>>1 looks better than l+(r-l)/2. This only works in Java though.
ok so some may have already posted this but couldn't you just do (R*0.5)+(L*0.5) to get the mid point? or am i missing something ?
5:52 Isn't that the same as R/2 + L/2?
That would break in the case of L and R being odd. If L and R were 5 and 7, doing integer division on both would return 2 and 3 because it removes the decimal, and the calculated midpoint would be 5.
Are there any situations when (l+r)/2 and l+(r-l)/2 give different values when (r-l)/2 is truncated?
L+R and R-L should have the same remainder mod 2 and will truncate the same way i believe
L + (R/2 - L/2) maybe would be different than (L + R)/2?
Testing an example
L=1, R=6
(L+R)/2 = 3.5 → 3
L +( R/2 - L/2) = 1 + 3 - 0 = 4
Yup
Java has 'unsigned bit shift right' operator '>>>' which works perfectly for division by powers of 2. (l + r) >>> 1 will work just fine. Intermediate result, the sum of two integers can at most overflow by a single bit which occupies the sign bit. No information is lost, it's just treated as negative number. So, when shifted back with single zero it produces correct positive number.
One other thing I would add is a defensive early return that checks if left is less than right as we assume, because if it is greater than right then it will lead to an underflow.
if (l > r) {
return new Error("Left pointer is greater than Right pointer. It is a programming bug.");
}
int midpoint = l + (r - l) / 2;
That won't ever be the case as long as we're searching the _index_ of an array, which can only be a positive integer.
First; the following expression: (L + (R - L) / 2) will never underflow, as long as L>=0 and R>=0. This is because L is added back into the result of (R - L) / 2.
So even if (R - L) / 2 would be a negative number we add back L to correct for it.
This is simply because L is added back into the result of ((R - L) / 2). So even if the result of (R - L) / 2 is negative, it is corrected by the fact that we add back L.
Secondly, it would be a compiler/runtime bug, because of the while loop counts from L until R which are bounded by 0...length of array, and only shrunk towards the midpoint resulting in a positive integer.
In case we can have negative indices, then we'd need to condtionally check the bound and then (R + L) / 2 instead, otherwise fallback to the previous equation.
If you use signed integers, it would never happen in C, because signed integer overflow is undefined. Basically, compiler (CAN) assume that `l` will never be less than `r` because you never change them in that way. So, be carefull with assumtions like that.
why don't simply change the formula: m = (L/2) + (R/2) instead? We know (L+R) / 2 = L/2 + R/2 and L/2 < L and R/2 < R; so, it would avoid overflow.
I wish you would link the relevant video(s) in the description... At the start he refers to another video I haven't seen and need to go look for.
I guess not that big a deal... Only 3 videos back...
I remember this error.... probably the most famous bug of all time
I think I've never encounter it, because I've used shift operation (>>1) instead of divide by 2.
The compiler would too
Doesn't happen in assembler because we have a carry flag (RCR to divide 33-bit result by two). Also we know the indices are unsigned values. So, we can use the full range of the register size.
rubbish. Overflow happens in assembler in exactly the same way. Yes you have an overflow flag BUT you have to check it, it doesnt stop the overlfow in the first place and I bet you a car that most assembler programmers would not bother to check for overflow unless they have some idea the numbers were going to get large. Just like its a non event in a language like C unless you know you will be processing large numbers.
@@turdwarbler sure, we need a more verbose definition of the function which matches our assumptions. Only through intrinsics does C have access to the instructions or machine state to handle function definition with the assumptions made - the compiler is not going to grok what the math intends to use the instructions designed for this purpose.
It's actually not a more complicated process it's actually less computationally complex, divides are computationally complex so the smaller your divide is the better (L+R)/2 will always be larger than (R-L)/2, division in a computer is done with N number of additions, so the the new method is actually much more simple for a computer to perform ((R-L)/2)+L
This only changes the place where a bug appears. If an array is defined indexed from -1,000,000,000 to 1,000,000,000 (minus a billion to plus a billion) then then very first step in a binary search will overflow with the new formula. But the original formula will not. To fully fail safe the code would require an if statement to choose the expression that does not overflow...
Why not just divide both numbers by 2 then add them? (I know normally subtraction is faster than division but since it's dividing by 2 just bit shifting should work, plus if you use bit shifting of an integer instead of division then you can do away with flooring the result as well).
How do you handle finding muliple matches in a binary search? I assume when find a match that you have to start a linear search to the left and right of the that and continue until you find something that doesnt match?
You can keep going with binary search, though many implementations only return the midpoint upon match, not the high and low range. Searching for any match, left or right edge are distinct goals, often expressed as looking for the first or last index a value could be inserted while maintaining order.
You can also do bit wise operations to get the midpoint and that won’t overflow either
How?
One could check that l < midpoint < r and flip the highest bit if not satisfied, is that what you mean?
@@landsgevaer I guess you can do an unsigned right shift (>>> in Java) instead of dividing. This would work fine if both numbers were nonnegative (which is the case with array indices), but breaks for negative numbers.
@@vytah You can shift the l and the r by one bit as an alternative to /2, but then you still need the plus operator to combine them, right?
That is essentially the same as using l // 2 + r // 2 (integer division) in python instead of (l+r) // 2 as in the initial video.
that doesn't quite work as integer division isn't -associative- distributive
(1 + 1) / 2 = 1
(1 / 2) + (1 / 2) = 0
To avoid overflow while keeping the division you'll need to do
(l / 2) + (r / 2) + ((r & l) % 2)
which can be optimized by any decent compiler to
(l >> 1) + (r >> 1) + (r & l & 1)
but at this point the solution shown in the video seems a lot better to me as it's simply
l + (r - l) >> 1
@@skeetskeet9403While it's true integer (floor) division is not associative, what you showed here was it doesn't have the distributive property.
I don't know if its just me, but what about (ceil(l/2) + floor(r/2)), this should be correct right??
If the type of the length was an unsigned int, could it just become a loop without ever letting the user know? (example: if the new m for the user's luck becomes the first average, it would loop again until it got back to the back sum and go back to the initial average)
Edit: fix the type of L, R and M should be unsigned int, actually (and length as well, I suppose?)
I didn't understand, being from a c programming background how is array size limited by a language? Isn't that an addressable restriction depending on the architecture?
some languages do a lot of data management in the background with fancy data structures. some languages use dynamic arrays by default
I'm used and mostly using. The minus approach to find the middle.
Idk that thing was counter intuitive by then.
Would not a simpler fix be to just divide the two original numbers (L and R in this case) by two, and then add the result?
That's still 3 operations (two divisions and an addition), so no, not really.
@@xtifr Still a way to solve the problem by making sure the numbers do not overflow. May not be computationally superior, but it *IS* a solution to the problem.
I'll admit that doing an integer subtraction then a division, then an addition is computationally more efficient. I was looking at it from a pure mathematics perspective.
@@jeromethiel4323consider what happens if you do this with numbers 1 and 3. In integer math the result is not 2.
I'm not sure in high level languages but right bit shifting both values before adding could be faster.
I guess I didn't notice you writing it that way in the first video, but I've known about this weird way of getting half the distance between two indices for a long time and specifically avoided it because of this known issue. I generally use C and adding a single extra instruction that takes one clock is not a big deal for a compiled language. Even at a billion entries it's more than fast enough to not warrant worrying about it. This is one of those edge cases that can ruin your year and is probably one of the reasons that Knuth came up with that stupid phrase. I guess I should've paid more attention in that first video and chided you for it, but I was fighting my Python installation that didn't want to use numpy. Bad Mike!
Could you just do l/2 and r/2 then add both answers. Is that efficient?
If l and r are integers, and we do integer division, then l/2 + r/2 may not necessarily be the midpoint of l and r.
In Java for example, 5/2 + 9/2 = 2 + 4 = 6, but the midpoint of 5 and 9 is supposed to be 7.
What about doing 2 divisions and adding the two halfs?
Is should also work if you change the division by 2 to a logical right shift by 1: >>> 1 because logical right shift doesn’t treat the top bit as a sign
The sun was setting quite beautifully through the window
Couldn't you do L>>2 + R>>2 (bit shift L and R over by one and add them together)? Effective (L+R)/2 is also equal to L/2 + R/2. Normally for efficiency you want to reduce if's and division unless the division is a multiple of 2.
Dividing by two via this method is not recommended. The Java compiler already optimizes divisions, and l/2+r/2 is much more human-readable. Remember, the person updating your code 5 years from now might be an intern still in college; write it with them in mind. This is also true for C++. I'm assuming it's also true for most other languages.
There is a Stack Overflow question for "which-is-better-option-to-use-for-dividing-an-integer-number-by-2" which goes into more detail about which is more appropriate for the situation. I'll post a URL directly to the question in my next comment, as TH-cam comments containing URLs are held for review.
If your machine model uses two's complement to represent negative numbers (i.e. left bit for the sign, like almost every computer; Java even requires it on a language level), then another solution would be, to replace the division-by-2 by an unsigned shift-right operation. This works, because an addition of non-negative numbers can at most overflow by one bit, i.e. into the sign-bit, but never beyond that, e.g. 01111111+01111111=11111110. An unsigned shift-right operation by 1 always shifts-in a zero on the left, thereby undoing the overflow while being mathematically equivalent to a truncated division-by-2 (exactly what we need), e.g. 11111110>>>1=01111111.
Why not use l/2 + r/2 ? is it only because two separate divisions are computationally more intensive?
Because if l=1 and r=3, you get m=1 then r=0 and everything crashes since r
Why not just use the normal maths answer of r/2+l/2? Is it just an attempt to save on a slightly costly division operand or am I missing something else?
You could also go the more jank route and cast the l and r to longs, do the calculations, and then cast the resulting m back to an int
An important point to remember is that this sort of calculation will often appear in the middle of some deep loop, so performance is an issue.
I would rather go with L/2 + R/2.
That is just 2 Bit shift operations and a addition afterwards.
Especially if I'm coding for embedded hardware. The compiler will probably catch that for me, but i can make sure, that it will happen by applying the bit shift manually.
I don't know if a embedded processor can actually perform a bitshift on load of a variable. Or bitshift after a addition. That might be a specific instruction you could perform.
Maybe the compiler is intelligent enough to see, what you're trying to do and compiles that error away.
If L and R are both odd, you'd need to compensate and add another 1. The extra test has just nullified any benefit you thought you'd managed to get.
why not just change the order of operations? (R/2 + L/2)
Rounding. But yes.
nice edge case. I wonder if the speed penalty introduced by the extra arithmetic could be reduced by computing the truncated average with right shifts and bitwise AND?
like
(l >> 1) + (r >> 1) + ( l & 1 & r)
(Is this even correct? I did not doubt this enough, maybe.)
It is correct and slightly faster than even (l + r) / 2, but we can do better than that. Integer.MAX_VALUE + Integer.MAX_VALUE does not overflow if we view the bit patterns as representing unsigned ints. So we can simply do:
(l + r) >>> 1
Summing all l in: 0 < l < 100000 with all r in: l < r < 100000, this is 35% faster than l + (r - l) / 2, and 21% faster than (l + r) / 2 on my system.
So (l + r) / 2 had a speed penalty to begin with.
@@jmodified beautifully simple. Thanks for making the effort to benchmark!
I congratulate you for your enthusiasm!
@@jmodified I was looking for this answer.
Is there not a better way than “l+ (r - l)/2”? Couldn’t you instead just do “l/2 + r/2” and since dividing by 2 is just a bit shift it would only require two bit shifts and an addition, rather than one addition, one subtraction and a divide/bit shift? Also since the divide/bit shift and then addition it would ensure that the integer doesn’t overflow just like the used formula does.