Yeah but when you spend a year coding it, you can apply it to millions of lights on hundreds of trees and it won't take hundreds of man-hours measuring locations and entering them. (It doesn't matter if you ever actually need to do that or not.)
@@MushookieMan Even with potentially millions of points and hundreds of man-hours creating them? I'd rather find a faster and more predictable way, like arranging a path for the lights to follow using CAD software so you always know the path of the lights and can figure out everything from that. Then you take that model and "map it to your tree." Kind of the opposite way to get the same results, and seems more likely if you're doing a hundred trees anyway.
If he's running these spreadsheets live I feel like he's opening himself to displaying a few risqué patterns, maybe some human anatomy, maybe a rickroll or two (tho that'd be hard with 500 pixels)
Instead of using the brightest pixel, use a base image and then look for the greatest increase in brightness. That would likely reduce the number of 'bad' pixel positions found.
When you do this procedure in a completely dark room and your base image is basically completely black, the two methods produce the same result, since the brightness of a pixel corresponds to its difference from the (black) base image.
That patent notice was way more reasonable than I expected. Sounds like they're not saying you specifically infringed any patents, just that anyone who wants to maybe sell something that does this should be aware that whatever method they come up with /might/ be patented and /might/ get them into trouble. Still kinda silly to patent a method of measuring stuff, but again I was expecting some kind of patent troll, not a fairly 'innocuous' message.
Patent trolling isn't what it used to be. The courts have caught on to the practice, and sometimes it's really difficult to pursue an infringement case when your patent isn't that strong and your biggest opportunity is an educational video creator who isn't actually selling or licensing an infringing product.
Nope, given the demonstration basis of the application, the patent trolls were complete douches and should be beaten to death as being part of the problem blocking progress.
@@TheOneAndOnlyNeuromod But they weren't blocking anything? How would you know they were part of blocking progress? For all you know, this could be the first and only legal document they've sent to anyone. Maybe they were wrong, but they also didn't do any harm.
For my own tree, I photographed in a dark room of course - but I made each LED light up pure blue, then had my code look for the reddest group of pixels. This worked out pretty well since the LED was bright enough to show up as white on the camera (so with a large red (and green) component), while all the reflections elsewhere were mostly blue (with small red components). I like your error correction technique and will definitely incorporate it into my own code - I'll happily convert to the GIFT format as well. I have mixed feelings on swapping to pre-rendered shows in CSV format: it's a great way to fix the compatibility issues you ran into with the code submitted last year, but it also means any particular CSV will only work completely as intended on the tree it was designed for. LibreOffice also can't display more than 1024 columns, while your tree requires 1501 - something I'm not sure counts as an issue since these will be programmatically generated anyways, but I'll bet *someone* out there (without Excel) would have tried making one with spreadsheet formulas, especially considering this is *the* premier spreadsheet appreciation channel on YouTube.
In all fairness, I wouldn't even attempt to work with such a CSV in an office program. Since you're already coding something to find the coordinates, you might as well just load it into pandas and go from there...
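For anyone curious, "load it into pandas" really is only a couple of lines. The inline three-row string below is a hypothetical stand-in for a real coordinates file, and the headerless x,y,z layout is an assumption about the format, not something from the video:

```python
import io

import pandas as pd

# Hypothetical stand-in for a GIFT-style coordinates file:
# one headerless x,y,z row per LED (layout assumed for illustration).
gift_csv = io.StringIO("0.1,0.2,0.3\n-0.4,0.0,0.9\n0.2,-0.1,1.5\n")

coords = pd.read_csv(gift_csv, header=None, names=["x", "y", "z"])
num_leds = len(coords)        # 3 in this toy file
tree_top = coords["z"].max()  # height of the topmost LED
```

From there, filtering, neighbour distances, and plotting are all one-liners, which is exactly why a dataframe beats a spreadsheet for this.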
Very clever, but just turning down the shutter speed, gain, or aperture on the camera would be a much more controllable and principled way of achieving the same thing.
While a web-based application may not be ideal for working with this much data, I was able to make over 1501 columns in Google Sheets, so someone should still be able to make something with spreadsheet formulas without Excel.
@@natbroyles1814 That's encouraging to hear - given whose channel this is, I'll be very disappointed if literally nobody goes the route of making something with just spreadsheet formulas.
another downside of CSV format is it can't be dynamic (e.g., you couldn't have one that listened to the computer microphone and made a live visualisation, or one based on weather data)
Matt: "it's not super obvious that they are out of sync with this effect" Me, who's been unable to take their eyes off that one contradictory LED in the middle since the start of the video: "oh good" :') Needless to say I am very happy that the video was all about addressing this issue! Looking forward to seeing what everyone can come up with this year! :D
When Matt turned on the colors for the wires in the graph, I was confused for a moment because everything looked the same color until he said they were red and green. Very fitting for a Christmas tree ;) Curse my genes for not letting me enjoy this color combination. The idea of moving the out-of-bounds points was really interesting.
Let me get this straight. The 'Organization' has developed a technology (for finding lights in 3D space or for turning them on) so incredibly good and magical and difficult, that if a few people were inspired to try their hand at the same thing, they might independently, accidentally arrive at that same, brilliant, patent-worthy solution?
For better or worse, something doesn't need to be "incredibly good, magical and difficult" to get patent protection. It just has to be an invention, and for there to be no prior art in place. There are certain patentability criteria (including novelty, usefulness, and non-obviousness in the US), but the bar is surprisingly low. To be fair to the organisation in question, they were at least just alerting the existence of patents related to this, which is much better than what most such organisations would do. Also, there's nothing barring the discussion of the contents of a patent. Half the point of the things is to encourage the invention to be described, though the big problem there is that many patents are described so broadly as to be useless.
That legal document sounds like, we love your work, but we have patents vaguely related to this that we would be legally required to defend, so keep being awesome, just remind everyone to check existing patents so no one has to waste money on lawyers. Nicest legal document I've ever seen
Still has an air about it: "it would be a shame if your kneecaps were to burst". An extremely douchy patent-trolling letter sent to a man who isn't even in a commercial space, thinly veiled as a non-threat. Hope their coal heart develops a crack just now.
I learned from a lawyer that checking out patents can be disadvantageous, because triple damages can be awarded for intentional infringement. Though "unintentional infringement" sort of indicates that the patent covers something rather obvious that shouldn't warrant a 20-year monopoly.
@@EmbeddedSorcery Pretty sure patenting something is done quite easily, but it's usually only once something gets taken to court that it's declared as BS and will be struck down.
I'm so happy for your tree! mine got low grades, failed a bunch of classes, and hated being in school; he stays at Walmart during the holidays. Glad to see your tree has grown into the giant it is today!
Inspired by this last year, I made my own, but being frustrated by the slowness of scanning, I was able to (nearly) triple the speed by setting 3 consecutive LEDs to red, green, and blue respectively, so I was effectively scanning for 3 LEDs at a time by finding the brightest point in each colour, rather than doing each individually.
If you also do some monte carlo runs, and just turn every pixel on, but choosing a random color from RGB CMY W for each of them, you could probably do all of the locating much faster, and all at "once", just by keeping track of which pixels were which colors in which frames. The same way you can use white noise to measure the audio response of a room, rather than doing a frequency sweep and taking a measurement at each frequency. Just take all the measurements at once, but in this case, do so with enough runs that the color sequence for each pixel is unique, so you can use that sequence to uniquely identify each pixel. If you use 7 primary colors (RGB CMY and white), how many runs do you have to do so that each pixel has like 95% chance of having a unique string of consecutive colors so that you can uniquely identify each one? That would make a good Math video!
@@gorak9000 Take the number of distinguishable colors and use that as the base of the number system to flash each LED's ID - similar to what he does in the video, but not base 2; base 32 or higher, hopefully. Probably just 2 pictures needed. Perhaps only 3 tree rotations, and of course use "bundle adjustment" so LEDs seen in up to 6 total pictures get their locations optimized. Most importantly, turn the camera exposure way down so it doesn't get blown out!
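A toy sketch of that base-k scheme (the colour set and all numbers here are assumptions, not anything from the video): encode each LED's index as base-k "digits", one colour per photo, then decode the observed colour sequence back into the index.

```python
# Hypothetical set of 7 colours the camera can reliably tell apart.
COLORS = ["R", "G", "B", "C", "M", "Y", "W"]

def frames_needed(num_leds, num_colors):
    # Smallest f with num_colors ** f >= num_leds.
    f = 1
    while num_colors ** f < num_leds:
        f += 1
    return f

def encode(index, frames):
    # Express the LED index as base-len(COLORS) digits, one colour per photo.
    k = len(COLORS)
    digits = []
    for _ in range(frames):
        digits.append(index % k)
        index //= k
    return [COLORS[d] for d in reversed(digits)]

def decode(colour_sequence):
    index = 0
    for c in colour_sequence:
        index = index * len(COLORS) + COLORS.index(c)
    return index

f = frames_needed(500, len(COLORS))  # 4 photos with 7 colours (7**3 = 343 < 500)
assert all(decode(encode(i, f)) == i for i in range(500))
assert frames_needed(500, 32) == 2   # 2 photos if you can distinguish 32 colours
```

So with 7 colours it's 4 photos per viewpoint rather than 2, but that's still a huge speed-up over 500 individual photos.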
Oh no! I've just realised this video is a year old, and there's no follow-up video running the submissions on the new tree :( No pressure Matt, but I still hope to see this again some day.
I feel like the check using the brightest pixel might also be a fault with false positives from reflections. Perhaps each image needs to be compared with an all-off image, then you can subtract and select pixels with the greatest difference.
Yeah, I was thinking that too. There are all sorts of checks that might help. Like making sure that the next light is not too far from the previous one.
@@akshaydalvi1534 He did that for fixing problems after the fact, but you could include a weighting while scanning the pixels, so that bright pixels a long distance away from the previously scanned light are valued less than bright pixels close to the previous light.
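That weighting could be as simple as dividing brightness by a function of the distance to the previous LED's position. The falloff constant and candidate numbers below are invented for illustration:

```python
# Toy sketch: score each candidate bright pixel by brightness divided by
# (1 + falloff * squared distance from the previously located LED), so a
# bright but distant reflection loses to a dimmer pixel near the expected spot.

def pick_pixel(candidates, prev_xy, falloff=0.01):
    # candidates are (x, y, brightness) tuples
    def score(c):
        x, y, b = c
        dist2 = (x - prev_xy[0]) ** 2 + (y - prev_xy[1]) ** 2
        return b / (1 + falloff * dist2)
    return max(candidates, key=score)

prev = (100, 100)
candidates = [
    (105, 98, 200),   # near the previous LED, fairly bright
    (400, 350, 255),  # brightest overall, but a distant reflection
]
best = pick_pixel(candidates, prev)
assert best == (105, 98, 200)  # the nearby pixel wins despite lower brightness
```

The falloff constant trades off trust in brightness against trust in continuity; too aggressive and a genuinely far-away next LED gets penalised.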
Obviously the next step would be to put the tree on a rotating platform like laser scanners use, and use code to evaluate all the lights from more than 4 angles, could keep a light on and spin the tree 360 and check how far the light gets from the center of frame to tell how far it is from the center of the tree. Spinning something plugged into the mains seems like a brilliant idea to me.
You can also do this for three lights at a time by turning one on green, one red, one blue and analyse the color channels instead of just the brightness. That should speed up the process a lot.
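A toy version of the three-at-a-time idea: find the brightest pixel independently in each colour channel of one photo. The tiny hand-made "image" of (r, g, b) tuples is just for illustration:

```python
# Light LEDs n, n+1, n+2 as pure red, green, blue, then locate the
# brightest pixel per channel in a single photo.

def brightest_per_channel(image):
    positions = []
    for ch in range(3):  # 0 = R, 1 = G, 2 = B
        best = max(
            ((x, y) for y, row in enumerate(image) for x, _ in enumerate(row)),
            key=lambda p: image[p[1]][p[0]][ch],
        )
        positions.append(best)
    return positions  # [(x_red, y_red), (x_green, y_green), (x_blue, y_blue)]

image = [
    [(10, 0, 0), (0, 0, 0), (0, 0, 250)],
    [(0, 0, 0), (240, 5, 5), (0, 0, 0)],
    [(5, 230, 5), (0, 0, 0), (0, 0, 0)],
]
assert brightest_per_channel(image) == [(1, 1), (0, 2), (2, 0)]
```

In practice you'd want to subtract a base image per channel first, since white-ish reflections bleed into all three channels.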
Using more than one camera is a problem because you'd need to either perfectly set each camera's position relative to the others, or perfectly measure each camera's position. Could work though.
Patent law (in the US) requires that the technology is non-obvious to someone in the associated industry. If picking the brightest pixel and using its position for the coordinates is not obvious, I don't know what is.
That doesn't mean a patent won't be issued, or that it's easy to file a protest. It means you can go to court and try to convince people who aren't in the industry that it is obvious.
There's something missing. A star (traditional tree topper) could be used as a reference point to set a limit on the Z coordinates for Christmas trees. I understand Matt was going for the ultimate generalization for any shape and any lights, but it seems like a missed opportunity for this theme. Awesome in so many ways! Thank you.
An easy solution is to use the fact that each light has exactly two edges - one to each neighbour. So, the disorderly lights should be quite obvious when comparing the RGB values of each triple of lights as an outlier. The same fact can simplify locating the lights in the first stage.
That's also a good performance optimization. The lights are at a known fixed distance from each other, so you only need to check for the brightest pixel within a relatively small radius. Also reduces the maximum error quite significantly.
@@sacwingedbatsatadbitsad4346 This doesn't really work if you want everything to be automatic though, he used the original scan to determine the maximum distance in the first place, so you still need the "unbounded" scan to get started with your optimized scan... You could do some Bayesian stuff to start out with an unbounded distance and adjust it downwards as you go.
@@sacwingedbatsatadbitsad4346 That only works if you start from a known good point though. If you start from a bad point the whole tree will be ruined.
You have a wonderful ability to entertain and inspire, simultaneously! I mean this is so silly from one perspective - just blinking lights on a plastic Christmas tree. But from another perspective, this is incredibly creative and ingenious! I am genuinely moved and inspired by you, Matt! Both as a lifelong student as well as an educator! Thank you for spreading and sharing your love of mathematics and learning 😊
The patent thing is ridiculous. Programming Christmas tree lights can literally be a Computer Vision assignment, even with "sophisticated" methods like lens distortion correction and pin hole camera model measurements. Not to mention object tracking is literally built into Blender.
The only real requirement for patents is that no one has patented it before and it isn't complete bullshit (I.e. you can't patent perpetual motion machines because they aren't real). 🤷♂️ Besides, I'm sure Blender contains several patented tools, that's not really a reasonable benchmark.
@@danieljensen2626 In fairness, it would be a bit weird if an open-source project like blender contained patented tools. Good god, imagine the legal nightmares that would cause.
Finding lights in 3D space is a patented technique used by video game manufacturers. It's the basis for all VR and motion capture tech. This is a very simple application of a multi-billion-dollar industry.
It can be, until you realize people would buy some solution letting them do that "easily". THEN it's patentable. You can patent literally taking two measurements with a ruler and adding them together if the end goal is not completely obvious. In this case, "we want to control Christmas lights programmatically, people would actually pay for this, and here's one implementation doing that" is probably what's covered.
Matt, I am incredibly impressed that you managed to get all 500+ LEDs to function properly as intended on your very first try with zero mistakes. Brilliant.
I'd most probably have had the LEDs blink when detecting the position. Looking for pulsating pixels is a lot more reliable than looking for the brightest ones. Camera noise and all that. Happy Yuletide!
IIRC from last year, he does the mapping in a dark room. The LEDs should be the only light-source and significantly brighter than genuine sensor noise. That means the false measurements are when the LED itself is obscured and a reflection _of that LED_ is visible. Hence blinking wouldn't help, since the reflection would also blink.
@@1FatLittleMonkey Well, in this video you saw how a large number of the LEDs were just not detected anywhere close to a realistic location. This indicates that finding the brightest pixel is just not a good method in practice. Reliability would likely be improved by doing "quality control" on the data, like blinking and/or a minimum level.
I don't think camera noise is really a significant issue here, but that would help eliminate any persistent bright spots from any other light sources. You'd still have problems with possible reflections from the actual LED you're locating though.
@@__nog642 In which case Johnnie's suggestion of blinking lights would not help. If the scan is done in a dark room, then the reflection would merely blink with the bulb that is blinking.
Sounds like one of those overly broad patents that patent trolls love to use to pressure small people and organizations into settling. Or maybe it's one of the major corporations behind the fancy light-up Christmas trees I've seen a lot of lately. They may have the patent, but I agree, it's probably not enforceable. What would be enforceable is a copyright on their code, but to enforce that they'd have to prove you copied their code. Lighting up a Christmas tree, or 3D measurement of markers, is as you say an industry standard, and used broadly. I wish them luck should they try to enforce such a patent, and would enjoy a legal review of such a case. Also, I'm not a lawyer, nor do I play one on TV.
@@Unsensitive I doubt that very much, since Matt specifically stated that the company did not ask for the video to be taken down or for the project to stop. Without seeing the patents yourself, you can't possibly know what part of the process is patented. It's a bit small-minded to assume it's something like "tHeY pAtEnTed ChRiStMaS"
@@gwaptiva A lot of prior art does get missed; that’s where patent attorneys and IP consultants like me get to earn their money :-) In fairness to the USPTO, it can take a *lot* of effort to smoke out all the potentially relevant prior art. It’s not necessarily described in obviously related patents; you’ll often find it hiding in the background descriptions of inventions that are only tangentially related. A lot of times it’s not in patent filings at all, but rather found in products already on the market that may practice the claimed art, that neither the patent applicant nor the patent office were aware of. For selfish reasons, I’m glad that there are lots of prior art issues; I love digging through patents and the details of their claims and tracking down prior art. It’s interesting and always a great mental workout - and pays well too :-)
Matt: *has whole tree full of physical christmas lights he could use* "I made a model" *takes out regular lightbulbs tied together on a string* I have really appreciated many of the models he's made. Ever since that collab with Steve Mould, I think Matt has really taken the advice about models to heart. This one got me though, lol
That untested code running on a tree video was the first of yours I saw! I’ve since read your book and bought another for a friend, and been watching the channel a while. Hoping to make some code for the tree this year! 🌲 Merry christmas
An extra step you could do when verifying whether a bulb is correctly placed: for any bulb that is only adjacent to one proximal bulb, check whether that proximal bulb is itself proximal to both of its own adjacent bulbs. It would only affect a few bulbs, of course, but it would be a few extra bulbs that are more accurately positioned.
I thought you were going to come up with a solution that found the 2 adjacent lights with the largest distance between them, the worst offenders, and calculated new "better" coordinates and updated the lights, gradually refining the model to match with observations. The initial model could have the lights randomly positioned in space as the process gradually nudges coordinates to match observations. You could keep on doing it from different angles until such time as you are happy with the model in 3d.
When he had figured out which points were wrong, I thought he was going to run a recalibration program that would light up specific lights around the tree to get the scale of the new camera POV, retake the photos for the misaligned LEDs, and update the points.
Hey, considering that the differentiation between correct and incorrect LED positions was done by comparing both neighbouring LEDs, you could afterwards add all the neighbours of the green ones back into the green set. This works because those neighbours would always have one correct distance (otherwise the other neighbour would not be green) and one incorrect one. Using this you can decrease the length of the red dotted lines by 2 for each line. Anyway, great video, really enjoyed it.
Yes! I was going to comment something like this myself. To see why it works, consider a long line of correct points with one incorrect one in the middle, then the same string with two consecutive incorrect points, which are close enough to each other to look correct. The two edges leading to the incorrect points would be red, while the other edges would be green. With the "all-red" strategy, you correctly identify the incorrect point when there's only one, but it doesn't identify either problem point if there's two consecutive ones (because the links would be ...-green-red-green-red-green-... which doesn't have any points that are wrong on both sides). With the "any-red" strategy, you identify three incorrect points if there's only one, or four points if there's actually two, because you get all the incorrect points, plus the two correct points that happen to border any number of incorrect points. With the "any-red-minus-borders" strategy, you would first identify the incorrect points and their neighbors, even if two incorrect points were accidentally close enough to have a green link, but then reject those border points which are likely correct, leaving only the ones which are actually incorrect. This strategy would only generate a false negative if there were three or more incorrect points in a row, which all happened to be close enough together to give two consecutive green links. Which is unlikely if they're as random as they look, and I don't think we ever see that in our data. One other shortcoming of this strategy, which I think is actually the reason we see so many long lines of corrected points, is the rate of false positives. If there is only one correct point in a line of incorrect ones, that one will have red links in both sides and be labeled as incorrect. Same with two correct points, as they only have one green link between them, neither is identifiable as a border point. 
You need three consecutive correct points in order to interrupt a string of incorrect ones, and I suspect there are a lot of points in this data where that doesn't happen. I expect a lot of those long lines had one or two correct ones in the middle, maybe several that weren't consecutive, that were accidentally caught up in the correction method.
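The "any-red minus borders" strategy described above fits in a few lines. Everything here (the distance threshold, the coordinates) is made up for illustration:

```python
import math

# Flag every point touching a long (suspect) link, then unflag points that
# also have a short link to an unflagged neighbour: those are border points
# and are probably fine themselves.

def classify(points, max_link=1.5):
    n = len(points)
    def link_ok(i, j):
        return math.dist(points[i], points[j]) <= max_link
    # Pass 1: suspect = touches at least one over-long link.
    suspect = set()
    for i in range(n - 1):
        if not link_ok(i, i + 1):
            suspect.update((i, i + 1))
    # Pass 2: a suspect with a short link to a non-suspect neighbour is a
    # border point; trust it and unflag it.
    for i in sorted(suspect):
        for j in (i - 1, i + 1):
            if 0 <= j < n and j not in suspect and link_ok(i, j):
                suspect.discard(i)
                break
    return suspect

# A mostly well-behaved string where point 2 jumped far away.
pts = [(0, 0, 0), (1, 0, 0), (9, 9, 9), (3, 0, 0), (4, 0, 0)]
assert classify(pts) == {2}  # both neighbours of the jump survive pass 2
```

As noted above, it still fails on three or more consecutive bad points whose mutual links happen to look short, but that's rare in data this noisy.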
12:45 Oh yes, now that you mention it, I can see some of the lights not following the herd. It's not like they've been bothering me for the first 12 and a half minutes of the video so much that I couldn't pay attention to whatever you've been talking about.
A consistent way to scan the tree without nearly as much error would be to take a snapshot before any lights change. Then for each LED you can cycle through full red, full green, and full blue, taking a snapshot of each, giving you three samples, which lets you account for noise or color-correction issues. Then you do a weighted average (the weight being the increase in color value, or 0 if it got darker) over every pixel's XY position for each of your three samples, giving three candidate pixel positions. Finally, do one last weighted average over those three positions, with the weight being the reciprocal of the distance to the previous LED's screen position, greatly suppressing outliers and biasing the pixel position to be closer to the previous one.
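The brightness-weighted average step above might look like this minimal sketch, using a tiny made-up grayscale grid and ignoring pixels that got darker, as suggested:

```python
# Locate the LED as the centroid of per-pixel *increase* over a base
# (all-off) image, weighted by the size of the increase.

def weighted_centroid(base, lit):
    total = sx = sy = 0.0
    for y, row in enumerate(lit):
        for x, v in enumerate(row):
            delta = v - base[y][x]
            if delta > 0:  # ignore pixels that got darker (noise)
                total += delta
                sx += delta * x
                sy += delta * y
    return (sx / total, sy / total)

base = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
lit  = [[10, 10, 10], [10, 110, 60], [10, 10, 10]]
cx, cy = weighted_centroid(base, lit)
# The centroid lands between (1, 1) and (2, 1), pulled toward the
# brighter pixel: cx = (100*1 + 50*2) / 150 = 4/3, cy = 1.0.
```

A nice side effect is sub-pixel precision: the centroid can land between pixels, which a plain argmax can't do.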
This reminds me of a project I helped out on when I was a student. The organizers wanted to make a huge, 3D RGB display by having a 3D matrix of LEDs with very thin wires. Turned out that there was a patent covering a 3D RGB displays, so they went with red LEDs instead. In our case, the distance between the LEDs was indeed measured by a ruler, when cutting the wires to the correct length. I can't remember which company actually had that patent, but judging from this video I guess it was the same. It was already 15 years ago so it will probably expire soon.
That honestly seems like a patent that shouldn't be able to exist. For anyone who might read this and doesn't understand why; it is far too general of an idea.
@@CottidaeSEA The tragic thing is, companies are forced to make as many useless patents as possible lest they be sued into nonexistence by another company who decided to make said useless patents instead. Even if the company is "good", they are forced into the system.
@@thorvaldspear Yeah, because if they don't, someone else will. Then they have to pay loads of money. The entire patent system is flawed as it is right now.
Christmas trees are by their very nature 3d objects and the practice of stringing lights on them is as old as electric lights. Prior art invalidates any patent especially as flashing bulbs are older than me. Look at 3D neon sculptures as prior art also.
@@coolmonkey619 He's saying it's problematical (absurd is the word that comes to my mind) that someone claims a patent on measuring the 3D coordinates of an object (natural law). And if the patent is about using photography to do so, it comes under the subject of obviousness (equally absurd).
They shouldn't have been issued patents maybe, but unfortunately this happens all the time, and they can still fight in court, and it could still cost a new company millions of dollars and years of time.
After relocating 'red' points into linear sequences between 'green' points, we ought to do another pass and ask each red point "If your neighbors were relocated here, would you have been a green point?" If so, we can turn the point green, Return it to its original scanned position, and redraw the red sequence into two red sequences, from Start to Returned to End green points. This lets us remove false negatives based on having a neighbor who was a straggler, and enhances the accuracy of new red sequences we draw. Excellent video!
The patent stuff is ridiculous. I’m sure Matt qualifies as ‘person having ordinary skill in the art’, and the very fact they independently invented the thing proves the patent is just an obvious non-novel invention. Unfortunately, patents are easy to get and costly to challenge. I was thinking for the animation standard, you could probably use an existing standard like Alembic.
Even if someone used matts idea, the patent is incredibly weak anyway so they’d have to prove it’s innovative. The patent would get thrown out very quickly because it’s the first thing I thought of when he started talking about the mapping process
It does sound weird; it seems like something you could do manually, basically looking at the tree and measuring coordinates one by one. I'm not sure what can be patented in this - the lights, turning on lights one by one, measuring the coordinates of an object in space - I can't think of a reasonable explanation.
The thing is, this might seem obvious now, but it could have been revolutionary and patentable in a slightly different manner (but similar process) 35 years ago, then improved and updated into another patent 15 years ago, and then improved into tons of application-specific patents 10 years ago... And now it seems ridiculous that it's patented, because any kid could do it on their laptop in a few hours following a Python + OpenCV tutorial. That's the main problem with patent law (well, law in general), especially in software: not keeping up with technological (and general knowledge) advancements.
27 minutes in, and it would appear that your original way of finding the bulb by looking for the brightest point was picking up other light sources. Limiting it to the brightest light source within a sphere the length of the cable would stop those from being mispicked. Let's see if I have any idea what I'm talking about. :)
@@sebastianjost You can scan twice. One time to find the average size, and then a second scan where you take advantage of the average size you deduced.
This could give rise to an entire series of videos, really. Using potentially noisy sensor data to learn things about the real world has... obvious applications. Matt could go so far as to talk about Kalman filters applied with the original pixel data.
It looks like you can optimize your solution for the outliers by checking each of the 3 coordinates independently. It looks like typically only x, or y, or z is wrong, but not often 2 or 3 at the same time (since you don't get a null island). However, you chose to ignore that information when you interpolated. Suggested: for every red point adjacent to a green one, check if changing only x, xor only y, xor only z brings it within distance of the green. If so, change that coordinate and colour it green. Repeat until no new greens are found. Only then resort to interpolation for all the others.
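A rough sketch of that per-axis check. The choice to overwrite the bad axis with the neighbour's value is my assumption, since the comment doesn't say what replacement value to try:

```python
import math

# Before interpolating a flagged point away entirely, test whether
# overwriting just one of its coordinates with the neighbour's value
# brings it within range; if so, only that axis was misread.

def single_axis_fix(bad, good, max_dist=1.0):
    for axis in range(3):
        candidate = list(bad)
        candidate[axis] = good[axis]
        if math.dist(candidate, good) <= max_dist:
            return tuple(candidate)
    return None  # no single-axis change explains the error

good = (1.0, 2.0, 3.0)
bad = (1.5, 2.0, 9.0)  # only z is wildly off
assert single_axis_fix(bad, good) == (1.5, 2.0, 3.0)
```

The point is that the x and y readings of the bad point survive, so you keep real measurement data instead of replacing all three coordinates with an interpolated guess.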
Each measurement plots a position on a plane, four planes in a box. By running straight, perpendicular lines through the box to see where they intersect, you get the 3D coordinate. Large errors should be obvious. Lights with insufficient data could be indicated at the time of scanning, allowing re-measurement with a blocking branch moved out of the way.
Please make a video explaining what you were saying when you mentioned that when you “add up every conceivable distance in a ball” I sat and thought about it for a long time and I think an explanation would be super awesome
Question regarding brightness: Last time you recommended to not go above (50, 50, 50) for large amounts of LEDs because (a) that was bright enough already and (b) your power supplies didn't manage more. Does that still apply?
Asked this in Matt's github repo and got this response from Matt: "Yes, I was running the spinning effects with a max of 50 per value but you could push to 100 if you're only using one colour (for whole-tree effects). I might try to have some control to scale brightness up or down if people's CSV effects are too bright/dim."
I was inspired last year and did a 4 foot wreath this year. The brightest pixel method gave about 10% bad points so I switched to mouse clicks and coordinates. Worked well for 200, wouldn't want to do 500 x 2 rounds. Posted a video of the animation with code linked in the description. Thanks for the idea Mr Parker and Merry Christmas!
Use color to aid in your mapping. Simplest method: turn on 3 (RGB) or 6 (RGBCMY) or 7 (just add white) LEDs in a cluster, and then in one image you can map 7 LEDs. You know the sequence of the colors, so if one is missing or obscured, just put it in the middle between the 2 you can see. Or you can ramp up to a Monte Carlo approach where you pick a random color (from that set) for every LED in the whole string and turn them all on at once. Figure out how many iterations you have to do so that each pixel has a string of "random" colors that uniquely identifies it (i.e., pixel 1 is R B Y C C G Y Y etc. in consecutive frames, and that sequence is enough to uniquely locate that pixel compared to all the others). Flash up the sets of random values, take a quick video (30 frames/second), and you should be able to map all of the LEDs pretty quickly. You can use the same interpolation trick as in my simpler example to fill in obscured pixels when you can see other pixels on either side of them. I'm sure you could come up with even more elaborate methods that would work even better, and the scanning and mapping could be really fast - it's just a trade-off between how much code you want to write and test, and how short you want the mapping process to be.
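As for how many random-colour frames you'd need, that's a birthday-problem calculation. A quick sketch with assumed numbers (7 colours, 500 LEDs, a 95% target):

```python
# With k colours chosen uniformly at random per frame, after f frames each
# LED carries one of k**f possible sequences. The chance that all n LEDs
# get distinct sequences is the birthday-problem product.

def prob_all_unique(n, k, f):
    m = k ** f
    p = 1.0
    for i in range(n):
        p *= (m - i) / m
    return p

frames = 1
while prob_all_unique(500, 7, frames) < 0.95:
    frames += 1
# frames ends up at 8: 7**8 ~ 5.8 million sequences makes collisions among
# 500 LEDs unlikely (about a 98% chance they're all distinct).
```

Eight frames at 30 fps is well under a second per viewpoint, which shows why the random-colour approach is so much faster than one-LED-at-a-time, even with the extra frames needed to beat collisions.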
Also, if you have the human identify where the first pixel is, or just map that pixel individually first, that would aid a lot in using the cluster or monte carlo type solutions to converge on the solution faster. Or use individual mapping to find every 10th or 20th pixel, and then use another method to fill in the rest like a string of known colors - the longer I think about it, the more ideas I come up with :)
Realistically, manually mapping 200 lights with the mouse took about 15 minutes. Unless you find the problem itself more interesting than coding the animations (I did not), it's a bad trade. My bad points, unlike Mr Parker's, all clustered together near the origin in a little cloud. I'm kind of assuming random noise in the sensor feed or a bug in the Python library. Sensor noise could be overcome with a high sample rate and an exclusion algorithm, but again... 15 minutes.
Location of light sources with cameras is done by differential imaging, not by just looking for the brightest spot. You make one image with all lights off, then an image with one light on, and you make the mathematical difference of the pixel brightness values.
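For anyone wanting to try the differential-imaging approach, here is a minimal Python/NumPy sketch; the function name and the toy 4×4 images are mine, purely for illustration:

```python
import numpy as np

def locate_led(base_gray, lit_gray):
    """Pixel with the largest brightness increase between the all-off
    base image and the one-LED-on image (both greyscale arrays)."""
    # Signed arithmetic so a darker pixel can't wrap around to "bright"
    diff = lit_gray.astype(np.int16) - base_gray.astype(np.int16)
    y, x = np.unravel_index(np.argmax(diff), diff.shape)
    return int(y), int(x)

# Toy example: a dimly lit 4x4 "room" with one LED switched on.
base = np.full((4, 4), 10, dtype=np.uint8)
lit = base.copy()
lit[2, 3] = 200
print(locate_led(base, lit))  # (2, 3)
```

In a dark room this reduces to the brightest-pixel method, as noted below, but with ambient light it rejects static bright spots.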
Okay - I 100% LOVE the breakdown and analysis of "This is how I dealt with the bad data I got from the scan" -- it's an amazingly thorough walkthrough of how one should consider writing an error detection/correction algorithm. HOWEVER. This is "I wrote a function which produces bad data, then spent incredible effort to fix the data." FIX THE FUNCTION. Fix your SCAN function so that it gives you good data to START with, then error-check and correct it. There are plenty of suggestions; I won't waste time retreading them. The result isn't so bad, but only because you have noise in the build and your animations use many groups of LEDs. If you did a single-point plane translation through the LEDs you would see far more irregular motion, especially in a carefully-gridded-out build. Using a known-inconsistent scan function and brute-correcting the data is bad design.
This reminds me of me trying to play with machine learning and having to fight with bad datasets. I always ended up spending most of the time cleaning my data. Honestly it's probably the reason why I gave up with ML as a hobby
A suggestion for better coordinates. After adjusting the "wrong" points onto the arithmetic progressions between the "correct" points, you should run another pass through them, and if a point's original position was close enough to its new position (within a parameter), adjust it back and the series of neighbors accordingly (tune the parameter until it "looks good"). This is because your process would definitely consider some of the correct points wrong if both of their neighbors were wrong (or if they were a run of two correct points amidst the wrong ones). Maybe I would also do several runs of this adjustment. It will produce even better coordinates - definitely with fewer of those very long straight-line series.
I'd add that knowing the lights are joined by physical wiring, we can add a simple test for the location of a point in relation to the previous point. If you know that there is no more than say, 10 cm between successive lights, this can be used to validate.
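A minimal sketch of that wiring-constraint check; the 10 cm limit, the function name, and the toy coordinates are assumptions for illustration:

```python
import math

MAX_SPACING = 0.10  # metres - hypothetical limit between neighbouring bulbs

def flag_implausible(points, max_spacing=MAX_SPACING):
    """Indices of lights implausibly far from their predecessor on the
    string - candidates for re-scanning or interpolation."""
    return [i for i in range(1, len(points))
            if math.dist(points[i], points[i - 1]) > max_spacing]

pts = [(0.00, 0.0, 0.0), (0.05, 0.0, 0.0),  # plausible step
       (1.00, 1.0, 1.0),                    # a mis-located light
       (0.12, 0.0, 0.0)]                    # its successor also gets flagged
print(flag_implausible(pts))  # [2, 3]
```

Note that a bad point flags both of its incident gaps, so a run of flags usually brackets a single bad measurement.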
That's funny to think that what Matt came up with in a couple days, this organization spent so much time and effort on that they felt a patent was required.
Ugh, when they're selling a 7.5' tree with 400 lights on it to suckers for $915, you think they want people to know that you could do the same thing basically for free? Or someone to release some open source software that allows anyone to do the same thing with any generic lights and the Christmas tree you already have? With prices like that, I'd give them 3 or so years to flame out on their own - who spends $915 on a Christmas tree??
The perceived need for a patent is not governed by the amount of work which was necessary to generate the IP, but the perceived profitability of said IP, if deployed correctly
@@NathanTAK no. Fixed, yes, but if you couldn't have profit exclusivity for a while, sadly a lot of real research would also not be done, because investing elsewhere is more profitable
@@the_retag I love how you just assume that that's the end of that argument. Maybe I'm _fine_ with research going somewhere else. It is not morally tenable that people be given a monopoly over ideas, that is, the ability to extract money from other creators under the threat of violence. That is the antithesis of scientific progress. But we've _had_ that system for long enough that people (REASONABLE people, like you) think that it's OK, because you've never known anything else. If we lived in a world that never invented patents and you heard that someone wanted to introduce them, you'd be horrified. But nope. You've been conditioned to accept it.
When you started talking about the lights being close to their neighbors, it started to feel very "advent of code". It's a Christmas themed set of code challenges that become available each day in December (like an advent calendar, get it?)
I have a question regarding the eventual running of effects: What will the refresh rate of the tree be? (i.e. how many rows of the csv will be processed per second?)
@@danieljensen2626 - The update speed of "neopixel" (programmable RGB LED) chains is limited by the length of the chain. Typically, you need at least 30 microseconds per LED, plus 50 microseconds at the end of the chain (that's assuming there's no other processing going on and you're just pushing raw data down the wire at 800 kHz). For a 500-LED chain, that works out to a maximum of about 66 fps. But I expect Matt's code lets you add a controllable delay between updates, so you can vary the speed of the effect dynamically without changing the rest of the code.
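The arithmetic above can be sketched as a quick back-of-envelope calculation, using the 30 µs/LED and 50 µs reset figures quoted (actual timing depends on the specific LED chip):

```python
def max_fps(n_leds, us_per_led=30, reset_us=50):
    """Upper bound on the update rate of a single chain, given the
    per-LED latency and end-of-frame reset time quoted above."""
    frame_us = n_leds * us_per_led + reset_us  # time to push one frame
    return 1_000_000 / frame_us

print(round(max_fps(500), 1))  # 66.4 - the fps ceiling for a 500-LED chain
```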
This is an excellent addition to the SCU (Stand-Up Cinematic Universe). I sincerely hope you continue this tradition every year for the foreseeable future.
Everyday use of Pythagoras: In my profession as a ETCP Certified Arena Rigger, Pythagoras is the core equation for everything we do when calculating steel rope lengths in bridles for suspending chain hoists over a specific location on the floor that is located between two load carrying beams in the ceiling. A bridle is like a Y, where the steel ropes transfer the weight from a vertical chain hoist to be split between two beams. This is the adaptation between an arena's beam structure and a touring show's structure. A large show can install several hundred temporary steel rope connections to the beams and they are all calculated on the fly by the head Rigger of the building. All this typically starts early in the morning on show day and is removed immediately after the show (takes around 3 hours), since all the equipment is part of the touring production and has to go in the trucks to be installed in the next arena.
I wonder if you can identify the location of the LEDs faster by taking a picture with all even LEDs on, then odds. Then do all where (LED #)/2 is even/odd. Repeat 9 times, and use bitmasks to correlate/identify each LED. You'd only need to take ~80 pictures instead of 2000.
Someone else suggested using R and G and B simultaneously to speed it up. Perhaps doing both your suggestion and that at the same time could speed it up further.
You'll likely run into all kinds of fun with occlusion and disambiguation with bulbs that are behind others if trying to map simultaneously like this. It's not impossible to handle, and a fun computer vision problem, but I wonder that taking lots of photos is easier if the option is available.
I would have taken many pictures of the lights from multiple angles and just taken the median of the observed coordinates for every light. Since some lights will be obscured from some angles you probably have to just ignore observations that don't pass some threshold for brightness.
A sequence of 9 photos will allow you to simultaneously encode all 500 lights uniquely using their binary IDs. This would be much faster than 500 photos for each view of the tree. To take things further, if you used the RGB capabilities of the lights, you could encode them in fewer than 9 photos, e.g. in base 4 (off, R, G, B), base 5 (off, R, G, B, R+G+B) or an even higher base if you allow other combinations of the primary colors. I agree with other comments that suggest subtracting each photo from the "off" tree. I love the idea of using the physical constraints to validate the position of each LED!
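A minimal sketch of that photo-encoding idea (the function names are mine, not from any project discussed): each photo shows one digit of every LED's index at once, so ⌈log_base(500)⌉ photos suffice.

```python
import math

def photo_schedule(n_leds, base=2):
    """Digit each LED shows in each photo (0 = off, 1..base-1 = a colour),
    so an LED's digit sequence across the photos spells out its index."""
    n_photos = max(1, math.ceil(math.log(n_leds, base)))
    # digits[k][i] is LED i's digit in photo k, most significant first
    return [[(i // base ** (n_photos - 1 - k)) % base for i in range(n_leds)]
            for k in range(n_photos)]

def decode(digits, base=2):
    """Recover an LED's index from the digits observed across photos."""
    idx = 0
    for d in digits:
        idx = idx * base + d
    return idx

sched = photo_schedule(500)                 # binary schedule
print(len(sched))                           # 9 photos for 500 LEDs
print(decode([row[137] for row in sched]))  # 137
```

With base 4 (off/R/G/B) the same code needs only 5 photos for 500 LEDs.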
Yes, this makes sense, and the lazy part of me wants less setup time for getting a Christmas tree configured. I'm sure this would work great in an ideal environment, but not every camera sensor has sufficient dynamic range for capturing images this way. I assume binary is more robust than coloring the lights and working with a number system higher than base 2. I guess we should be asking why there aren't yet colors in one- and two-dimensional barcodes. This topic of base 2 versus a higher base could be interesting in its own right as a video. In general, optimizing the problem at every point in the process could be interesting to cover, both for the sake of learning and as a community trying to help everyone out.
I think this could have something to do with that patent. I saw a configuration part of some of those commercial products and it looked like it was doing exactly this.
Considering he's using a webcam (which can take 30, or maybe 60, pictures per second, but let's be overly conservative and say that it takes 5 frames per light to reduce statistical noise, arriving at about 80s for 500 lights) I don't think that the scanning time is that big of an issue. Rotating the tree seems to be a manual process for him anyway (there was no mention of a motor or anything), and that probably takes 30-60 seconds to do each time anyhow. Doing anything where multiple lights are on at the same time when scanning will introduce error (no clustering method is perfect), which was already bad enough with the straightforward method. If you instead light up only 3 lights at a time (one red, one blue, one green, since the hardware already filters the image into three color channels) you could reduce it to about 30s per orientation, but again, reducing the brightness of the lights will increase the error in locating them.
To me, the statistics looked like they revealed some systematic error in the mapping code. It seems like it would have been worthwhile to build the error detection into the mapping process. For each of those chains of red, the system could run a program to collect a second set of data from four more images.
I wonder if it's just integer overflow. It's unlikely since integers in python are huge... but it looks so much like what you would get when an integer overflows.
I think the problem lies with going for the "brightest" pixel - if the LED is obscured from the camera, the "brightest" pixel is just some random pixel in the background somewhere if you're doing this with regular room lights on (considering the tree itself is rather dark), which I think accounts for why most of the wrongly located pixels are way off the tree, and tend to be towards the edge of the field of view.
@@gorak9000 but why would that result in so many positions having one coordinate at max value? Why would there be so many on the outer most surfaces of the bounding box?
I'm years late to the party, but in a perfect world the hanging wire would form a catenary (y = a*cosh(x/a), apparently), which you can compute for any length of wire. It would be interesting to see whether this method is more or less accurate for long runs of wire. Maybe something clever could be done where the average hang distance a lamp sees is somehow used.
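For reference, the catenary and its closed-form arc length (s(x) = a·sinh(x/a)) can be written out directly; this is a generic sketch, not tied to any particular tree measurement:

```python
import math

def catenary_y(x, a):
    """Height of a hanging wire at horizontal offset x; a sets the sag."""
    return a * math.cosh(x / a)

def arc_length(x1, x2, a):
    """Wire length along the catenary between x1 and x2, using the
    closed-form antiderivative s(x) = a*sinh(x/a)."""
    return a * (math.sinh(x2 / a) - math.sinh(x1 / a))

# A symmetric 2-unit-wide span with sag parameter a = 1:
span = arc_length(-1.0, 1.0, 1.0)  # 2*sinh(1), about 2.35 units of wire
```

Given a known wire length between two mapped lights, one could solve for `a` numerically and interpolate the lights in between along the curve.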
12:45 Got so happy when you acknowledged the lights-not-following-the-pattern issue that I was focusing on for the first 10 minutes of the video!!! Haha!! Well done mate, can't wait to see what the viewers will come up with this year!! 🎄 Oh and Merry Christmas!!
Great video, I love the enthusiasm and dedication you put into your videos. Whether it's flipping coins, rolling dice, decorating a Christmas tree or calculating pi, you always go the extra mile to prove math can be fun.
While getting warnings about patents after the first video does seem a bit much (locating objects via light and camera angles and what not is something that is pretty basic stuff in computer vision text books), the kind of algorithm Matt then went into for this video to correct the errors for the lights is exactly the kind of stuff that would be potentially patentable. Going to that sort of algorithm seems like a somewhat natural progression, but I'm only saying that after having seen Matt do it. Until he actually vocalized his idea I hadn't made that leap. So I wouldn't be surprised if there was a patent for that particular trick or an extension of it (i.e. "Now that we know which lights we suspect of being off and have an approximate region where they should be, do another pass with the camera to figure it out, throwing out any results that would fall outside of the expected range, blah blah blah"). So I think in this case the warning could be legitimately in good faith and for patents people would think are legitimate.
@@Primalmoon I would suspect any algorithm using Cartesian coordinates and the Pythagorean theorem would be long expired, especially since the Babylonians discovered the Pythagorean theorem before Pythagoras.
Movie and video game makers have been tracking the location of (moving) points in space for well over 20 years so I can't imagine that there is a still current patent covering this that couldn't be easily dismissed by reference to prior art, putting aside that the geometry involved has been known for thousands of years.
Almost the end of january. Any word on whether it's going to happen. I'd really like to see my code on a real tree rather than the simulated one. Although that was still fun.
There are a lot of ways to improve and speed up the LED location process. Just to mention a couple: you could compare the picture you have taken with a picture with all the lights turned off, and then restrict the search to only those pixels whose luminosity has increased by at least some arbitrary amount. Another way is to restrict the search to only the nearby area, limited by the cable length. To speed up the search, you could use your "binary address blink" and take just one picture for every bit, then reconstruct both the position and the address from a very limited set of pictures (9 instead of 500).
YESSS!!! You actually did do GIS. That means you use the Z-axis correctly: height. That's the part of Minecraft I love to hate: those coders messed up the 3D space. (I work with maps for a living, so I am allowed to cherish such little mistakes.) But Matt did it right! Thank you!
If you have a distribution (of distances), or more accurately two distributions (real distances plus erroneous distances between the lights), then to find a threshold that separates real from erroneous distances with the smallest error, you could bin them into a histogram and use, for instance, Otsu's method to automatically find the best threshold.
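A self-contained sketch of Otsu's method applied to a histogram of neighbour distances; the synthetic two-cluster data and the bin count are illustrative assumptions:

```python
import numpy as np

def otsu_threshold(values, n_bins=64):
    """Otsu's method: choose the bin edge that maximises the
    between-class variance, splitting 'real' neighbour distances
    from erroneous ones."""
    hist, edges = np.histogram(values, bins=n_bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = edges[0], -1.0
    for k in range(1, n_bins):
        w0, w1 = p[:k].sum(), p[k:].sum()   # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:k] * centers[:k]).sum() / w0   # class means
        mu1 = (p[k:] * centers[k:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, edges[k]
    return best_t

# Synthetic example: 400 plausible gaps near 5 cm, 50 bad ones near 80 cm.
rng = np.random.default_rng(0)
d = np.concatenate([rng.normal(0.05, 0.005, 400),
                    rng.normal(0.80, 0.05, 50)])
t = otsu_threshold(d)  # lands in the gap between the two clusters
```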
Happy Crimbo Matt! I celebrated mine on the 23rd though. I bought a "Parker Calendar" and all the holidays were, quote, "nearly correct." Have a great holiday, and here's to a happy New Year's Day on Jan 3rd!
It's mind-boggling how something like hanging a chain of LEDs around a Christmas tree and its subsequent "LED gaps ranked by size" chart still follows Zipf's law.
I think the reason it follows Zipf's law is that the Christmas tree is a cone, so randomly placed lights will have a logarithmic gap-size distribution for the same reason binary search trees have log(n) search time. This one is fairly explainable.
Matt is the embodiment of the motto "Why spend hours to fix it manually, when you can spend a year coding it."
Yeah but when you spend a year coding it, you can apply it to millions of lights on hundreds of trees and it won't take hundreds of man-hours measuring locations and entering them. (It doesn't matter if you ever actually need to do that or not.)
@@Sauvenil It always matters. Hard-code it every time
@@Sauvenil But you still can't make money off of it, because it's already been patented.
a true engineer.
@@MushookieMan Even with potentially millions of points and hundreds of man-hours creating them? I'd rather find a faster and more predictable way, like arranging a path for the lights to follow using CAD software so you always know the path of the lights and can figure out everything from that. Then you take that model and "map it to your tree." Kind of the opposite way to get the same results, and seems more likely if you're doing a hundred trees anyway.
"In theory, it cannot go wrong"
The biggest clue that Matt is a mathematician and not a programmer. Bugs are the single constant in the universe.
He's well aware of "undocumented features" 🙂
If he's running these spreadsheets live I feel like he's opening himself to displaying a few risqué patterns, maybe some human anatomy, maybe a rickroll or two (tho that'd be hard with 500 pixels)
I think that's the joke :P
Maybe you didn't get the clear tongue-in-cheek tone of "in theory"
Also weird supposition to make on a video wholly dedicated to Matt programming
instead of using the brightest pixel, use a base image, then look for the greatest increase in brightness, that would likely reduce the number of 'bad' pixel positions found.
That's a good idea. It's also trivial and likely patented.
And do it in a room with reduced light to reduce the amount of noise (false positives) around the tree.
Could also predefine the shape of the tree so it ignores pixels outsides the bounds.
When you are doing this procedure in a completely dark room and your base image is basically completely black then these 2 methods will produce the same result, since the brightness of a pixel would correspond to the difference compared to the (black) base image.
Reflections from obscured LEDs would not be resolved correctly though. Still need the distance-based filtering to automatically correct stuff :)
That patent notice was way more reasonable than I expected. Sounds like they're not saying you specifically infringed any patents, just that anyone who wants to maybe sell something that does this should be aware that whatever method they come up with /might/ be patented and /might/ get them into trouble.
Still kinda silly to patent a method of measuring stuff, but again I was expecting some kind of patent troll, not a fairly 'innocuous' message.
It seems so vague that I'm not sure it was worth mentioning them...
@@fv9422 It was vague because Matt chose not to list the patents or name the patent holder.
Patent trolling isn't what it used to be. The courts have caught on to the practice, and sometimes it's really difficult to pursue an infringement case when your patent isn't that strong and your biggest opportunity is an educational video creator who isn't actually selling or licensing an infringing product.
Nope, given the demonstration basis of the application, the patent trolls were complete douches and should be beaten to death as being part of the problem blocking progress.
@@TheOneAndOnlyNeuromod But they weren't blocking anything? How would you know they were part of blocking progress? For all you know, this could be the first and only legal document they've sent to anyone. Maybe they were wrong, but they also didn't do any harm.
For my own tree, I photographed in a dark room of course - but I made each LED light up pure blue, then had my code look for the reddest group of pixels. This worked out pretty well since the LED was bright enough to show up as white on the camera (So with a large red (and green) component), while all the reflections elsewhere were mostly blue (With small red components).
I like your error correction technique and will definitely incorporate it into my own code - I'll happily convert to the GIFT format as well.
I have mixed feelings on swapping to pre-rendered shows in CSV format: It's a great way to fix the compatibility issues you ran into with the code submitted last year, but it also means any particular CSV will only work completely as intended on the tree it was designed for. LibreOffice also can't display more than 1024 columns, while your tree requires 1501 - something I'm not sure counts as an issue, since these will be programmatically generated anyway, but I'll bet *someone* out there (without Excel) would have tried making one with spreadsheet formulas, especially considering this is *the* premier spreadsheet appreciation channel on YouTube.
In all fairness, I wouldn't even attempt to work with such csv in an office program. Since you are coding something already to find coordinates, might as well just load it into pandas and go from there...
Very clever, but just adjusting the shutter speed, gain, or aperture on the camera would be a much more controllable and principled way of achieving the same thing.
while a web-based application may not be ideal for working with this much data, I was able to make over 1501 columns in google sheets, so someone should still be able to make something with spreadsheet formulas w/o excel
@@natbroyles1814 That's encouraging to hear - given whose channel this is, I'll be very disappointed if literally nobody goes the route of making something with just spreadsheet formulas.
another downside of CSV format is it can't be dynamic (e.g., you couldn't have one that listened to the computer microphone and made a live visualisation, or one based on weather data)
Matt: "it's not super obvious that they are out of sync with this effect"
Me, who's been unable to take their eyes off that one contradictory LED in the middle since the start of the video: "oh good" :')
Needless to say I am very happy that the video was all about addressing this issue! Looking forward to seeing what everyone can come up with this year! :D
same
314th like
IT'S DRIVING ME INSANE!
Ooh, no, I hadn't even seen the second one, now I'm twice insane.
@Wotzinator I noticed 4. It was quite bothersome.
Found the self-selecting group mildly on the spectrum. (Source: mildly on the spectrum)
When Matt turned on the colors for the wires in the graph, I was confused for a moment because everything looked the same color until he said they were red and green. Very fitting for a Christmas tree ;) curse my genes for not letting me enjoy this color combination. The idea of moving the out-of-bounds points was really interesting
Wow, red and green are very different colors.
@@kellymoses8566 But to some, when cones in the eyes are too eager, they might as well both appear yellow
@@kellymoses8566 He's red/green color blind...
You don't use one of the softwares that adjust colors for your problem?
@@jursamaj how would software help distinguish between red and green on YouTube?
Let me get this straight. The 'Organization' has developed a technology (for finding lights in 3D space or for turning them on) so incredibly good and magical and difficult, that if a few people were inspired to try their hand at the same thing, they might independently, accidentally arrive at that same, brilliant, patent-worthy solution?
For better or worse, something doesn't need to be "incredibly good, magical and difficult" to get patent protection. It just has to be an invention, and for there to be no prior art in place. There are certain patentability criteria (including novelty, usefulness, and non-obviousness in the US), but the bar is surprisingly low.
To be fair to the organisation in question, they were at least just alerting the existence of patents related to this, which is much better than what most such organisations would do.
Also, there's nothing barring the discussion of the contents of a patent. Half the point of the things is to encourage the invention to be described, though the big problem there is that many patents are described so broadly as to be useless.
Is the 'organization' called twinkly?
Welcome to software patents.
... El Psy Kongroo? (re: The Organization)
@@talideon but a patent needs to be non-obvious. This method is the very first thing everyone would try when solving this problem
That legal document sounds like, we love your work, but we have patents vaguely related to this that we would be legally required to defend, so keep being awesome, just remind everyone to check existing patents so no one has to waste money on lawyers. Nicest legal document I've ever seen
still has an air about it: "it would be a shame if your kneecaps were to burst"
extremely douchy patent-trolling version sent to a man who isn't even in a commercial space, thinly veiled as a non-threat. Hope their coal heart develops a crack just now
I learned from a lawyer that checking out patents can be disadvantageous, because triple damages can be awarded for intentional infringement. Though "unintentional infringement" sort of indicates that the patent covers something rather obvious that shouldn't warrant a 20-year monopoly.
Quack lawyers.
It doesn't matter how nice it is, patent law is a blight on humanity.
@@EmbeddedSorcery Pretty sure patenting something is done quite easily, but it's usually only once something gets taken to court that it's declared as BS and will be struck down.
Longest gift I've ever waited for was the followup video to this.
10:02 Matt is giving off some big "You get a kid a present, and all they want to do is play with the box" energy
Matt is a cat confirmed.
@@Anvilshock ...
I'm so happy for your tree! mine got low grades, failed a bunch of classes, and hated being in school; he stays at Walmart during the holidays. Glad to see your tree has grown into the giant it is today!
Inspired by this last year, I made my own, but being frustrated by the slowness of scanning, I was able to (nearly) triple the speed by setting 3 consecutive LEDs to Red, Green and Blue respectively, so I was effectively scanning for 3 LEDs at a time by finding the brightest point in each colour, rather than doing each individually.
Great idea
If you also do some Monte Carlo runs, and just turn every pixel on but choose a random color from RGB CMY W for each of them, you could probably do all of the locating much faster, and all at "once", just by keeping track of which pixels were which colors in which frames. The same way you can use white noise to measure the audio response of a room, rather than doing a frequency sweep and taking a measurement at each frequency. Just take all the measurements at once, but in this case, do enough runs that the color sequence for each pixel is unique, so you can use that sequence to uniquely identify each pixel. If you use 7 primary colors (RGB, CMY and white), how many runs do you have to do so that each pixel has, say, a 95% chance of having a unique string of consecutive colors, so that you can uniquely identify each one? That would make a good Math video!
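The "how many frames for unique sequences" question has a birthday-problem answer. A quick sketch, assuming independent uniform colour choices from a 7-colour set (which is my simplification of the scheme above):

```python
from math import prod

def p_all_unique(n_leds=500, colours=7, frames=1):
    """Probability that n_leds independent random colour sequences of
    length `frames` (colours**frames possible sequences) are all
    distinct - a standard birthday-problem product."""
    space = colours ** frames
    if n_leds > space:
        return 0.0  # pigeonhole: collisions are guaranteed
    return prod(1 - i / space for i in range(n_leds))

# How many frames before every LED's sequence is unique with >95% odds?
frames = 1
while p_all_unique(frames=frames) < 0.95:
    frames += 1
print(frames)  # 8
```

So under these assumptions, roughly a quarter-second of 30 fps video would already assign every LED a distinguishable colour signature.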
@@gorak9000 Take the number of distinguishable colors and use that as the base of the number system to flash the LEDs' IDs - similar to what he does in the video, but not base 2; base 32 or higher, hopefully. Probably just 2 pictures needed.
Perhaps only 3 tree rotations and of course use "bundle adjustment" so LEDs seen in up to 6 total pictures get their locations optimized.
Most importantly, turn the camera exposure way down so it doesn't get blown out!
With binary encoding plus RGB it should only take about log₂(n/3) images from each camera
Oh no! I've just realised this video is a year old, and there's no follow-up video running the submissions on the new tree :( No pressure Matt, but I still hope to see this again some day.
There is. It's on his second channel. It's titled "I run untested viewer-submitted code on my 500-LED christmas tree"
@@HappyGardenOfLife no, that’s the original video he is referencing in this video, from Jan 2021
Yeah I was searching for it and I never found it. Weird, I haven't seen anyone else mention that. I do hope to see it one day!
Love that this is Matt's idea of real world application of Pythagorean theorem as if this is something everyone does.
it’s not?
I feel like the check using the brightest pixel might also suffer from false positives caused by reflections. Perhaps each image should be compared with an all-off image; then you can subtract and select the pixels with the greatest difference.
Yeah, I was thinking that too. There are all sorts of checks that might help. Like making sure that the next light is not too far from the previous one.
@@gizmoguyar he did that one
Also, putting the tree against a dark background will probably help
@@akshaydalvi1534 he did that for fixing problems after the fact, but you could include a weighting while scanning the pixels, so that bright pixels a long distance away from the previous scanned light are valued less than bright pixels close to the previous light
Maybe take the average of the pixel coordinates in the possible zone (near the previous light) weighted by brightness ?
Obviously the next step would be to put the tree on a rotating platform like laser scanners use, and use code to evaluate all the lights from more than 4 angles, could keep a light on and spin the tree 360 and check how far the light gets from the center of frame to tell how far it is from the center of the tree.
Spinning something plugged into the mains seems like a brilliant idea to me.
Just need to oscillate +/- 180 deg.
You can also do this for three lights at a time by turning one on green, one red, one blue and analyse the color channels instead of just the brightness. That should speed up the process a lot.
Just use more cameras at once.
Using more than one camera is a problem because you'd need to either perfectly set each camera's position relative to each other, or perfectly measure each camera's position
Could work though
@@masheroz That's no fun though, if you can get the tree spinning fast enough you could make a persistence of vision display out of it.
Patent law (in the US) requires that the technology is non-obvious to someone in the associated industry. If picking the brightest pixel and using its position for the coordinates is not obvious, I don't know what is.
That doesn't mean a patent isn't issued, or it's easy to file a protest. It means you can go to court and try to convince people not in the industry it is obvious.
@@ch94086 imagine how convincing Matt Parker's videos would be if an unrelated third party were to use it as evidence.
At 13:05 he finally points to the light I've been staring at through the entire video
There's something missing. A star (traditional tree topper) could be used as a reference point to set a limit on the Z coordinates for Christmas trees. I understand Matt was going for the ultimate generalization for any shape and any lights, but it seems like a missed opportunity for this theme. Awesome in so many ways! Thank you.
An easy solution is to use the fact that each light has exactly two edges - one to each neighbour. So, the disorderly lights should be quite obvious when comparing the RGB values of each triple of lights as an outlier. The same fact can simplify locating the lights in the first stage.
That's also a good performance optimization. The lights are at a known fixed distance from each other, so you only need to check for the brightest pixel within a relatively small radius. Also reduces the maximum error quite significantly.
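A minimal sketch of that radius-bounded search; the window logic, names, and toy image are mine, for illustration:

```python
import numpy as np

def brightest_near(img, prev_xy, radius):
    """Brightest pixel within `radius` of the previously located LED,
    ignoring bright spots (e.g. reflections) farther away."""
    h, w = img.shape
    y0, x0 = prev_xy
    # Clamp the search window to the image bounds
    ys = slice(max(0, y0 - radius), min(h, y0 + radius + 1))
    xs = slice(max(0, x0 - radius), min(w, x0 + radius + 1))
    dy, dx = np.unravel_index(np.argmax(img[ys, xs]), img[ys, xs].shape)
    return (int(ys.start + dy), int(xs.start + dx))

img = np.zeros((10, 10))
img[5, 5] = 100   # the LED we expect
img[0, 9] = 255   # a brighter reflection far from the previous light
print(brightest_near(img, prev_xy=(5, 4), radius=2))  # (5, 5)
```

As the replies note, this needs a trusted starting point, otherwise one bad fix propagates down the whole string.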
@@sacwingedbatsatadbitsad4346 This doesn't really work if you want everything to be automatic though, he used the original scan to determine the maximum distance in the first place, so you still need the "unbounded" scan to get started with your optimized scan... You could do some Bayesian stuff to start out with an unbounded distance and adjust it downwards as you go.
@@sacwingedbatsatadbitsad4346 That only works if you start from a known good point though. If you start from a bad point the whole tree will be ruined.
You have a wonderful ability to entertain and inspire, simultaneously! I mean this is so silly from one perspective - just blinking lights on a plastic Christmas tree. But from another perspective, this is incredibly creative and ingenious! I am genuinely moved and inspired by you, Matt! Both as a lifelong student as well as an educator! Thank you for spreading and sharing your love of mathematics and learning 😊
The patent thing is ridiculous. Programming Christmas tree lights can literally be a Computer Vision assignment, even with "sophisticated" methods like lens distortion correction and pin hole camera model measurements. Not to mention object tracking is literally built into Blender.
The only real requirement for patents is that no one has patented it before and it isn't complete bullshit (i.e., you can't patent perpetual motion machines because they aren't real). 🤷♂️
Besides, I'm sure Blender contains several patented tools, that's not really a reasonable benchmark.
@@danieljensen2626
In fairness, it would be a bit weird if an open-source project like Blender contained patented tools.
Good god, imagine the legal nightmares that would cause.
@@scalesconfrey5739 Blender contains Intel's Embree and Nvidia's OptiX, which I am fairly sure are patented.
Finding lights in 3D space is a patented technique used by video game manufacturers. It's the basis for all VR and motion capture tech. This is a very simple application of a multi-billion-dollar industry.
It can be, until you realize people would buy some solution letting them do that "easily". THEN it's patentable. You can patent literally taking two measurements with a ruler and adding them together if the end goal is not completely obvious. In this case, "we want to control Christmas lights programmatically, people would actually pay for this, and here's one implementation doing that" is probably what's covered.
Matt, I am incredibly impressed that you managed to get all 500+ LEDs to function properly as intended on your very first try with zero mistakes. Brilliant.
They don't. Some of them are wrong, as you can see
@@thewhitefalcon8539 watch the rest of the video
Really hope the follow up video happens at some point.
I'd most probably have had the leds blink when detecting the position. Looking for pulsating pixels is a lot more reliable than looking for the brightest. Camera noise and all that. Happy Yuletide!
IIRC from last year, he does the mapping in a dark room. The LEDs should be the only light-source and significantly brighter than genuine sensor noise. That means the false measurements are when the LED itself is obscured and a reflection _of that LED_ is visible. Hence blinking wouldn't help, since the reflection would also blink.
@@1FatLittleMonkey Well, in this video you saw how a large part of the LEDs were just not detected anywhere close to a realistic location. This indicates that finding the brightest pixel is just not a good method in practice. Reliability would likely be improved by doing "quality control" of the data, like blinking and/or a minimum level.
I don't think camera noise is really a significant issue here, but that would help eliminate any persistent bright spots from any other light sources. You'd still have problems with possible reflections from the actual LED you're locating though.
Reflection can still cause other points to be detected if the LED is behind a small branch.
@@__nog642
In which case Johnnie's suggestion of blinking lights would not help. If the scan is done in a dark room, then the reflection would merely blink with the bulb that is blinking.
Measuring active markers with multiple cameras in 3D space is an industry standard. Good luck enforcing those patents.
I have a patent on thinking, but the world stopped doing it ... I get nothing in royalties :(
@@zakuraayame5091 I have patented Pi but no one's paying (something about mathematicians and college students being poor)
I have a patent on posting TH-cam comments. The two of you owe me a thumb up in compensation.
Sounds like one of those overly broad patents that patent trolls love to pressure small people and organizations into settling.
Or maybe it's a major corporation that has a patent on Christmas trees that do fancy light-up things, which I've seen a lot of lately.
They may have the patent, but I agree, it's probably not enforceable.
What would be enforceable is a copyright on their code to do so, but to enforce that one would have to prove that they copied your code.
Lighting up a Christmas tree, or 3D measurement of markers, is as you say an industry standard, and used broadly.
I wish them luck should they try to enforce such a patent, and would enjoy a legal review of such a case.
Also, I'm not a lawyer, nor do I play one on TV.
@@Unsensitive I doubt that very much, since Matt specifically stated that the company did not ask for the video to be taken down or for the project to stop. Without seeing the patents yourself, you can't possibly know what part of the process is patented. It's a bit small-minded to assume it's something like "tHeY pAtEnTed ChRiStMaS"
The famous patent that allows you to "measure a distance". It's been there since ancient times. Everyone knows that.
Euclidean distance or a different metric?
Unfortunately, the US Patent Office doesn't take "prior art" into account very much...
It's American, intelligence doesn't apply
@@gwaptiva A lot of prior art does get missed; that’s where patent attorneys and IP consultants like me get to earn their money :-)
In fairness to the USPTO, it can take a *lot* of effort to smoke out all the potentially relevant prior art. It’s not necessarily described in obviously related patents; you’ll often find it hiding in the background descriptions of inventions that are only tangentially related. A lot of times it’s not in patent filings at all, but rather found in products already on the market that may practice the claimed art, that neither the patent applicant nor the patent office were aware of.
For selfish reasons, I’m glad that there are lots of prior art issues; I love digging through patents and the details of their claims and tracking down prior art. It’s interesting and always a great mental workout - and pays well too :-)
@@DEtchells I knew I should've listened to my parents and learnt a proper job 😁
Matt: *has whole tree full of physical christmas lights he could use* "I made a model" *takes out regular lightbulbs tied together on a string*
I have really appreciated many of the models he's made. Ever since that collab with Steve Mould, I think Matt has really taken the advice about models to heart. This one got me though, lol
When he pulled out those two light bulbs, my first thought was "somehow this is Steve Mould's fault."
I can't find the January video, was it not done?
Merry Christmas Matt ❤ you're the best. Thanks for making me fall in love with mathematics again and again 💫
This ☝️
Happy Christmas Matt. And thank you for all the awesome maths-ness
That untested code running on a tree video was the first of yours I saw! I’ve since read your book and bought another for a friend, and been watching the channel a while. Hoping to make some code for the tree this year! 🌲 Merry christmas
An extra step you could do in verifying if a bulb is correctly placed would be to check, for any bulb that is only adjacent to one proximal bulb, if its proximal bulb is itself proximal to both of its adjacent bulbs. It would only affect a few bulbs of course, but it would be a few extra bulbs that are more accurately positioned.
I thought you were going to come up with a solution that found the 2 adjacent lights with the largest distance between them, the worst offenders, and calculated new "better" coordinates and updated the lights, gradually refining the model to match with observations.
The initial model could have the lights randomly positioned in space as the process gradually nudges coordinates to match observations.
You could keep on doing it from different angles until such time as you are happy with the model in 3d.
When he had figured out which points were wrong, I thought he was going to run a re-calibration program which would light up specific lights around the tree to get the scale of the new camera POV, then retake the photos for the misaligned LEDs and update the points
But this way he doesn't have to observe the tree a million times
Where is the video of running the new animations?
Hey, considering that the differentiation between correct and incorrect LED positions was done by comparing both neighbouring LEDs, you could afterwards add all the neighbours of the green ones back into the green set. This would work because those neighbours would always have one correct distance (otherwise the other neighbour would not be green) and one incorrect one. Using this you can decrease the length of the red dotted lines by 2 for each line. Anyway, great video, really enjoyed it.
Yes! I was going to comment something like this myself. To see why it works, consider a long line of correct points with one incorrect one in the middle, then the same string with two consecutive incorrect points, which are close enough to each other to look correct. The two edges leading to the incorrect points would be red, while the other edges would be green.
With the "all-red" strategy, you correctly identify the incorrect point when there's only one, but it doesn't identify either problem point if there's two consecutive ones (because the links would be ...-green-red-green-red-green-... which doesn't have any points that are wrong on both sides).
With the "any-red" strategy, you identify three incorrect points if there's only one, or four points if there's actually two, because you get all the incorrect points, plus the two correct points that happen to border any number of incorrect points.
With the "any-red-minus-borders" strategy, you would first identify the incorrect points and their neighbors, even if two incorrect points were accidentally close enough to have a green link, but then reject those border points which are likely correct, leaving only the ones which are actually incorrect.
This strategy would only generate a false negative if there were three or more incorrect points in a row, which all happened to be close enough together to give two consecutive green links. Which is unlikely if they're as random as they look, and I don't think we ever see that in our data.
One other shortcoming of this strategy, which I think is actually the reason we see so many long lines of corrected points, is the rate of false positives. If there is only one correct point in a line of incorrect ones, that one will have red links on both sides and be labeled as incorrect. Same with two correct points: as they only have one green link between them, neither is identifiable as a border point. You need three consecutive correct points in order to interrupt a string of incorrect ones, and I suspect there are a lot of points in this data where that doesn't happen. I expect a lot of those long lines had one or two correct points in the middle, maybe several that weren't consecutive, that were accidentally caught up in the correction method.
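For concreteness, here is a rough Python sketch of the baseline strategy the video uses: flag a point red only when both of its neighbour links are implausible, then respace each red run evenly between its green anchors. The threshold, function names, and the assumption that the string starts and ends on green points are all mine, not Matt's actual code.

```python
import numpy as np

def classify_and_fix(points, max_gap):
    """Flag LEDs whose distances to BOTH neighbours are implausible (red),
    then respace each red run evenly between its green anchors.
    Assumes the string starts and ends on green points."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    gaps = np.linalg.norm(np.diff(points, axis=0), axis=1)
    ok = gaps <= max_gap  # ok[i]: the link between point i and i+1 looks plausible
    green = np.array([(ok[i - 1] if i > 0 else False) or
                      (ok[i] if i < n - 1 else False) for i in range(n)])
    fixed = points.copy()
    i = 0
    while i < n:
        if green[i]:
            i += 1
            continue
        j = i
        while j < n and not green[j]:
            j += 1                              # red run covers indices i..j-1
        lo, hi = points[i - 1], points[min(j, n - 1)]
        for k in range(j - i):                  # place run points evenly between anchors
            t = (k + 1) / (j - i + 1)
            fixed[i + k] = lo + t * (hi - lo)
        i = j
    return green, fixed

pts = [[0, 0, 0], [1, 0, 0], [10, 10, 10], [3, 0, 0], [4, 0, 0]]
green, fixed = classify_and_fix(pts, max_gap=1.5)
print(list(green))     # [True, True, False, True, True]
print(fixed[2])        # [2. 0. 0.]: the outlier lands midway between its anchors
```

The refinements proposed in this thread (border rejection, re-greening neighbours) would be extra passes over the `green` array before the respacing step.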
Very fitting for the spreadsheet CSV format. Christmas Spectral Vector format, that is.
12:45 Oh yes, now that you mention it, I can see some of the lights not following the herd. It's not like they've been bothering me for the first 12 and a half minutes of the video so much that I couldn't pay attention to whatever you've been talking about.
noticed it from the start
When are you going to run the code?
A consistent way to scan the tree without nearly as much error would be to take a snapshot before any lights change.
Then for each voxel you can cycle between full red, take a snapshot, full green, take a snapshot, and full blue, taking a snapshot one last time, giving you three samples, which lets you account for noise or color correction issues.
Then you can do a weighted average (the weight being the change in color value, or 0 if it got darker) on every pixel's XY position for your three samples and you get three possible pixel positions.
Then you can do one last weighted average on the three positions, the weight being the reciprocal of the distance to the last voxel's screen position, greatly eliminating any outliers, and biasing the pixel position to be closer to the previous one.
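The weighted-average procedure above can be sketched like this, with small NumPy arrays standing in for the snapshots. The function names and the `eps` fudge factor (to avoid division by zero) are invented for illustration.

```python
import numpy as np

def weighted_centroid(base, lit):
    """Sub-pixel position estimate: centroid of the per-pixel brightness
    increase over the base frame (decreases are weighted zero)."""
    w = np.clip(lit.astype(float) - base.astype(float), 0, None)
    ys, xs = np.mgrid[0:w.shape[0], 0:w.shape[1]]
    return np.array([(xs * w).sum(), (ys * w).sum()]) / w.sum()

def fuse_estimates(estimates, prev, eps=1e-6):
    """Combine the R/G/B position estimates, each weighted by the
    reciprocal of its distance to the previous LED's screen position."""
    ws = np.array([1.0 / (np.linalg.norm(e - prev) + eps) for e in estimates])
    return (np.array(estimates) * ws[:, None]).sum(axis=0) / ws.sum()

base = np.zeros((10, 10))
lit = base.copy()
lit[3, 4] = 10.0                           # LED lights up at column 4, row 3
print(weighted_centroid(base, lit))        # [4. 3.]

ests = [np.array([4.0, 3.0]), np.array([4.0, 3.1]), np.array([8.0, 9.0])]
print(fuse_estimates(ests, prev=np.array([4.0, 3.0])))  # ≈ [4. 3.]: the outlier barely counts
```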
This reminds me of a project I helped out on when I was a student. The organizers wanted to make a huge 3D RGB display by having a 3D matrix of LEDs with very thin wires. Turned out that there was a patent covering 3D RGB displays, so they went with red LEDs instead.
In our case, the distance between the LEDs was indeed measured by a ruler, when cutting the wires to the correct length.
I can't remember which company actually had that patent, but judging from this video I guess it was the same. It was already 15 years ago so it will probably expire soon.
That honestly seems like a patent that shouldn't be able to exist. For anyone who might read this and doesn't understand why; it is far too general of an idea.
@@CottidaeSEA The tragic thing is, companies are forced to make as many useless patents as possible lest they be sued into nonexistence by another company who decided to make said useless patents instead. Even if the company is "good", they are forced into the system.
@@thorvaldspear Yeah, because if they don't, someone else will. Then they have to pay loads of money.
The entire patent system is flawed as it is right now.
Christmas trees are by their very nature 3D objects, and the practice of stringing lights on them is as old as electric lights. Prior art invalidates any patent, especially as flashing bulbs are older than me. Look at 3D neon sculptures as prior art too.
Challenge it and force them to defend it. Bet you'd win.
Was just rewatching the previous year's video! What a lovely gift this was
Happy Christmas ☃️🌲
I watched it for the first time like last week. I ignored it the first time because I hadn't watched any other Matt videos yet
As someone in the patent application process, the terms non-obviousness and natural law are very applicable here.
Wat
@@coolmonkey619 He's saying it's problematic (absurd is the word that comes to my mind) that someone claims a patent on measuring the 3D coordinates of an object (natural law). And if the patent is about using photography to do so, it comes under the heading of obviousness (equally absurd).
They shouldn't have been issued patents maybe, but unfortunately this happens all the time, and they can still fight in court, and it could still cost a new company millions of dollars and years of time.
Passing Suggestion: the US Patent Office (and England's) and Matt Parker ought to do a cross over video on this.
How does it feel to be part of the problem?
After relocating 'red' points into linear sequences between 'green' points, we ought to do another pass and ask each red point "If your neighbors were relocated here, would you have been a green point?" If so, we can turn the point green, Return it to its original scanned position, and redraw the red sequence into two red sequences, from Start to Returned to End green points.
This lets us remove false negatives based on having a neighbor who was a straggler, and enhances the accuracy of new red sequences we draw. Excellent video!
The patent stuff is ridiculous. I’m sure Matt qualifies as ‘person having ordinary skill in the art’, and the very fact they independently invented the thing proves the patent is just an obvious non-novel invention.
Unfortunately, patents are easy to get and costly to challenge.
I was thinking for the animation standard, you could probably use an existing standard like Alembic.
Even if someone used Matt's idea, the patent is incredibly weak anyway, so they'd have to prove it's innovative. The patent would get thrown out very quickly, because it's the first thing I thought of when he started talking about the mapping process
Unfortunately, depending on the jurisdiction you're in, proving that to court may still be quite expensive.
@@NAG3V they’d have to sue him in the uk and that just wouldn’t work out for them
It does sound weird; it seems like something you could do manually, basically look at the tree and measure coordinates one by one. Not sure what can be patented in this: the lights, turning on lights one by one, measuring the coordinates of an object in space? I mean, I can't think of a reasonable explanation.
The thing is, this might seem obvious now, but it could have been revolutionary and patentable in a slightly different manner (but similar process) 35 years ago, then improved and updated into another patent 15 years ago, and then improved into tons of application-specific patents 10 years ago... And now it seems ridiculous that it is patented, because any kid could do it on their laptop in a few hours following a Python + OpenCV tutorial.
That's the main problem of patent laws (well, laws in general), especially in software, not following technological (and general knowledge) advancements.
27 minutes in, and it would appear that your original way of finding the bulb by looking for the brightest point was picking up other light sources. Limiting it to the brightest light source within the sphere of the length of cable would stop those from being mis-picked.
Let’s see if I have any idea what I’m talking about. :)
Was thinking the same thing. Limiting each bulb to a sphere during the initial scan might have made the secondary grouping algorithm unnecessary.
But how do you know the sphere size in advance?
@@sebastianjost You can scan twice. One time to find the average size, and then a second scan where you take advantage of the average size you deduced.
Or just turn the lights off
This could give rise to an entire series of videos, really. Using potentially noisy sensor data to learn things about the real world has... obvious applications. Matt could go so far as to talk about Kalman filters applied with the original pixel data.
I worked as an intern at Harvard this summer and wrote some code to find molecular knots based on a similar idea.
It looks like you can optimize your solution for the outliers by checking each of the 3 coordinates independently. It looks like typically only x, or y, or z is wrong, but not often 2 or 3 at the same time (since you don't get a null island). However, you chose to ignore that information when you interpolate.
Suggested: for every red point adjacent to a green, check if changing only x, xor only y, xor only z brings it within distance of the green. If so, change that coordinate and color it green. Repeat until no new greens are found. Only then resort to interpolation for all others.
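A sketch of that single-axis check, assuming one known-good green neighbour. The repair rule here (copying the neighbour's value on the tested axis) is my own choice; it happens to be the best any single-axis change can do for closing the distance, so it doubles as a feasibility test.

```python
import numpy as np

def single_axis_fix(red_pt, green_pt, max_gap):
    """If overwriting just one coordinate of the red point with its green
    neighbour's value brings it within max_gap of that neighbour, return
    the repaired point; otherwise return None."""
    red_pt, green_pt = np.asarray(red_pt, float), np.asarray(green_pt, float)
    for axis in range(3):                 # try x, then y, then z
        trial = red_pt.copy()
        trial[axis] = green_pt[axis]
        if np.linalg.norm(trial - green_pt) <= max_gap:
            return trial
    return None

print(single_axis_fix([10, 0, 0], [0, 0, 1], 1.5))   # [0. 0. 0.]: fixing x alone suffices
print(single_axis_fix([10, 10, 0], [0, 0, 0], 1.5))  # None: two axes are wrong
```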
Each measurement plots a position on a plane, four planes in a box. By running straight, perpendicular lines through the box to see where they intersect, you get the 3D coord. Large errors should be obvious. Lights with insufficient data could be flagged at the time of scanning, allowing re-measurement with a blocking branch moved out of the way.
This would also be a great method for generating a report which could detect a systematic measurement error
"You can already see we've definitely got a Christmas tree."
Me, being red-green colorblind: So we meet again...
same lol
Please make a video explaining what you were saying when you mentioned that you "add up every conceivable distance in a ball". I sat and thought about it for a long time, and I think an explanation would be super awesome.
Question regarding brightness: Last time you recommended to not go above (50, 50, 50) for large amounts of LEDs because (a) that was bright enough already and (b) your power supplies didn't manage more. Does that still apply?
This is a very good point, and rather important to anyone looking to submit an animation! Hopefully this gets answered.
He did mention he got a new power supply. Not sure what to make of that but thought I’d mention it
Asked this in Matt's github repo and got this response from Matt:
"Yes, I was running the spinning effects with a max of 50 per value but you could push to 100 if you're only using one colour (for whole-tree effects). I might try to have some control to scale brightness up or down if people's CSV effects are too bright/dim."
I was inspired last year and did a 4 foot wreath this year. The brightest pixel method gave about 10% bad points so I switched to mouse clicks and coordinates. Worked well for 200, wouldn't want to do 500 x 2 rounds. Posted a video of the animation with code linked in the description. Thanks for the idea Mr Parker and Merry Christmas!
Very nice! It's cool to see when others have taken the idea, but implemented it in their own way.
Use color to aid in your mapping. Simplest method: turn on 3 (RGB) or 6 (RGBCMY) or 7 (just add white) LEDs in a cluster, and then in one image you can map 7 LEDs. You know the sequence of the colors, so if one is missing or obscured, just put it in the middle between the other 2 you can see. Or you can ramp up to a Monte Carlo simulation where you just pick a random color (from that set) for every LED in the whole string, and turn them all on at once. Figure out how many iterations you have to do so each pixel has a string of "random" colors that uniquely identifies it (aka pixel 1 is R B Y C C G Y Y etc. in consecutive frames, and that sequence is enough to uniquely locate that pixel compared to all the others). Flash up the sets of random values, take a quick video (30 frames/second), and you should be able to map all of the LEDs pretty quickly. You can use the same interpolation trick as in my simpler example to fill in obscured pixels when you can see other pixels on either side of them. I'm sure you could come up with even more elaborate methods that would work even better, and the scanning and mapping could be really fast - just a trade-off between how much code you want to write and test, vs how short you want the mapping process to be.
Also, if you have the human identify where the first pixel is, or just map that pixel individually first, that would aid a lot in using the cluster or monte carlo type solutions to converge on the solution faster. Or use individual mapping to find every 10th or 20th pixel, and then use another method to fill in the rest like a string of known colors - the longer I think about it, the more ideas I come up with :)
Realistically, manually mapping 200 lights with the mouse took about 15 minutes. Unless you find the problem itself more interesting than coding the animations (I did not), it is a bad trade. My bad points, unlike Mr Parker's, all clustered together near the origin in a little cloud. I am kind of assuming random noise in the sensor feed or a bug in the Python library. Sensor noise could be overcome with a high sample rate and an exclusion algorithm, but again... 15 minutes.
Location of light sources with cameras is done by differential imaging, not by just looking for the brightest spot. You make one image with all lights off, then an image with one light on, and you make the mathematical difference of the pixel brightness values.
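A minimal sketch of that differential approach, with small NumPy arrays standing in for grayscale captures:

```python
import numpy as np

def locate_led(dark_frame, lit_frame):
    """Position of the largest brightness INCREASE over the all-off frame,
    so static bright spots (windows, other lamps) cancel out."""
    diff = lit_frame.astype(int) - dark_frame.astype(int)
    y, x = np.unravel_index(np.argmax(diff), diff.shape)
    return (int(x), int(y))

dark = np.zeros((50, 50), dtype=np.uint8)
dark[5, 45] = 250             # a lamp that is on in every frame
lit = dark.copy()
lit[20, 10] = 180             # the LED under test, dimmer than the lamp
print(locate_led(dark, lit))  # (10, 20): the constant lamp subtracts away
```

In a genuinely dark room this reduces to the brightest-pixel method, as noted earlier in the comments; the difference only matters when there are other light sources.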
Okay - I 100% LOVE the breakdown and analysis of "This is how I dealt with the bad data I got from the scan" -- it's an amazingly thorough walkthrough of how one should consider writing an error correction/detection algorithm. HOWEVER. This is "I wrote a function which produces bad data, then spent incredible effort to fix the data." FIX THE FUNCTION. Fix your SCAN function, so that it gives you good data to START with, then error check and correct it. There are plenty of suggestions, I won't waste the time re-treading that. The result isn't so bad, but only because you have noise in the build and your animations are using many groups of LEDs. If you did a single-point plane translation through the LEDs you would see far more irregular motion, especially if it was a carefully-gridded-out build. Using a known-inconsistent scan function and brute-correcting the data is bad design.
23:34 When you said, "...to figure out the average distance from the center," this formula popped into my head. Good job.
WHERES THE TREE VIDEO ;-;
YES. Where is the video? Please show us the submissions.
This reminds me of me trying to play with machine learning and having to fight with bad datasets. I always ended up spending most of the time cleaning my data. Honestly it's probably the reason why I gave up with ML as a hobby
Same, only that I absolutely love building, cleaning and analyzing datasets, so I'm still doing ML
A suggestion for better coordinates.
After adjusting the "wrong" points into the arithmetic progressions between the "correct" points you should run another pass through them and if their original position was close enough to the new position (within a parameter), then adjust them back and the series of neighbors accordingly (adjust the parameter for this to "look good").
This is because your process would definitely consider some of the correct points wrong if their both neighbors were wrong (or they were a run of two correct amidst the wrong ones).
Maybe I would also do several runs of this adjustment. It would produce even better coordinates - definitely with fewer of those very long straight-line series.
I'd add that knowing the lights are joined by physical wiring, we can add a simple test for the location of a point in relation to the previous point. If you know that there is no more than say, 10 cm between successive lights, this can be used to validate.
@@mikefochtman7164 he does this?
@@mikefochtman7164 have you watched the video?
@@ollie3x10_8 Yeah, I posted that while watching, then I saw that Matt did that very thing, only better. Sorry.
I love the fact that you differentiate between spheres and balls; so few people do that
That's funny to think that what Matt came up with in a couple days, this organization spent so much time and effort on that they felt a patent was required.
Ugh, when they're selling a 7.5' tree with 400 lights on it to suckers for $915, you think they want people to know that you could do the same thing basically for free? Or someone to release some open source software that allows anyone to do the same thing with any generic lights and the Christmas tree you already have? With prices like that, I'd give them 3 or so years to flame out on their own - who spends $915 on a Christmas tree??
The perceived need for a patent is not governed by the amount of work which was necessary to generate the IP, but the perceived profitability of said IP, if deployed correctly
@@horrorhotel1999 Patent law must be eliminated at all costs
@@NathanTAK No. Fixed, yes. But if you couldn't have profit exclusivity for a while, sadly a lot of real research would also not get done, because investing elsewhere is more profitable
@@the_retag I love how you just assume that that's the end of that argument. Maybe I'm _fine_ with research going somewhere else.
It is not morally tenable that people be given a monopoly over ideas, that is, the ability to extract money from other creators under the threat of violence. That is the antithesis of scientific progress. But we've _had_ that system for long enough that people (REASONABLE people, like you) think that it's OK because you've never known anything else.
If we lived in a world that never invented patents, and you heard that someone wanted to introduce it, you'd be horrified.
But nope. You've been conditioned to accept it.
Funny how Matt's video is probably the best evidence to invalidate the patents on novelty and obviousness grounds.
When you started talking about the lights being close to their neighbors, it started to feel very "advent of code". It's a Christmas themed set of code challenges that become available each day in December (like an advent calendar, get it?)
I have a question regarding the eventual running of effects: What will the refresh rate of the tree be? (i.e. how many rows of the csv will be processed per second?)
that is a fantastic question, both because an answer would be handy, and because it is a sentence that now exists.
@@9cool10 - Actually, Google shows two previous instances of "the refresh rate of the tree".
An important question! I'm guessing he went with 25fps because that's the European standard, but it would be nice to know.
@@danieljensen2626 - The update speed of "neopixel" (programmable RGB LED) chains is limited by the length of the chain. Typically, you need at least 30 microseconds per LED, plus 50 microseconds at the end of the chain (that's assuming there's no other processing going on and you're just pushing raw data down the wire at 800 KHz).
For a 500-LED chain, that would be a maximum of about 65 fps.
But I expect Matt's code lets you add a controllable delay between updates, so you can vary the speed of the effect dynamically without changing the rest of the code.
From the Harvard code linked to it is 60fps
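A back-of-envelope check of the timing claim in this thread, assuming the ~30 µs/LED and ~50 µs reset-latch figures quoted above are right for this hardware:

```python
# WS2812-style chain: ~30 microseconds of data per LED plus a ~50 us
# reset latch at the end of the chain per refresh.
leds = 500
frame_us = leds * 30 + 50            # microseconds to clock out one full frame
max_fps = 1_000_000 / frame_us
print(round(max_fps, 1))             # 66.4, so "about 65 fps" holds up
```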
This is an excellent addition to the SCU (Stand-Up Cinematic Universe). I sincerely hope you continue this tradition every year for the foreseeable future.
I love the fact that the acronym turned out to be GIFT. I did not see that coming!
And now I begin to wonder to myself... " *do these lights go backwards in Antarctica?* "
Waiting for the 'I turned my Christmas tree into a spreadsheet' video
Ahaha, that's a great title! :D
Oh, the algorithm for mapping is so much easier than I thought...
That tree would make my cat crazy though.
Now, I wonder what way to program the lights would excite cats the most.
The "misbehaving" lights give a wonderful charm in my opinion
Everyday use of Pythagoras:
In my profession as a ETCP Certified Arena Rigger, Pythagoras is the core equation for everything we do when calculating steel rope lengths in bridles for suspending chain hoists over a specific location on the floor that is located between two load carrying beams in the ceiling. A bridle is like a Y, where the steel ropes transfer the weight from a vertical chain hoist to be split between two beams. This is the adaptation between an arena's beam structure and a touring show's structure. A large show can install several hundred temporary steel rope connections to the beams and they are all calculated on the fly by the head Rigger of the building. All this typically starts early in the morning on show day and is removed immediately after the show (takes around 3 hours), since all the equipment is part of the touring production and has to go in the trucks to be installed in the next arena.
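For anyone curious what that looks like in numbers, here is a toy bridle calculation; the distances are invented purely for illustration, not taken from any real rig.

```python
import math

# Hypothetical bridle: the hang point sits 3 m horizontally from beam A
# and 5 m from beam B, with both beams 4 m above the bridle apex.
# Each steel leg is just the hypotenuse of its horizontal/vertical run.
leg_a = math.hypot(3, 4)             # 5.0 m of steel to beam A
leg_b = math.hypot(5, 4)             # ~6.4 m of steel to beam B
print(leg_a, round(leg_b, 2))
```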
I wonder if you can identify the location of the LEDs faster by taking a picture with all even LEDs on, then odds. Then do all where (LED #)/2 is even/odd. Repeat 9 times, and use bitmasks to correlate/identify each LED. You'd only need to take ~80 pictures instead of 2000.
Someone else suggested using R and G and B simultaneously to speed it up. Perhaps doing both your suggestion and that at the same time could speed it up further.
Look up Gray coding, and if you are adventurous you should look up Stanford's camera/projector reversing work (I don't remember the project names)
Didn’t he do the binary coding method last year?
@@darealpoopster He did it to find and identify his LEDs with bad coordinates, but not to calculate the coordinates in the first place.
You'll likely run into all kinds of fun with occlusion and disambiguation for bulbs that are behind others if trying to map simultaneously like this. It's not impossible to handle, and a fun computer vision problem, but I suspect taking lots of photos is easier if the option is available.
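The binary-indexing scheme from this thread can be sketched as follows. Function names are made up, and occlusion (the main practical headache the reply above mentions) is ignored.

```python
import math

def binary_schedule(n_leds):
    """Photo k lights exactly the LEDs whose index has bit k set, so
    ceil(log2(n)) photos per viewing angle identify every LED uniquely."""
    bits = max(1, math.ceil(math.log2(n_leds)))
    return [[i for i in range(n_leds) if (i >> k) & 1] for k in range(bits)], bits

def decode(appearances):
    """Recover an LED's index from the set of photos its blob was lit in."""
    return sum(1 << k for k in appearances)

schedule, bits = binary_schedule(500)
print(bits)           # 9 photos per viewing angle instead of 500

led = 297             # simulate reading one blob back off the 9 photos
seen = {k for k in range(bits) if (led >> k) & 1}
print(decode(seen))   # 297
```

Note the edge case raised below: LED 0 is dark in every photo under this scheme, so in practice you'd offset indices by one or add an all-on frame.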
Yes!!!! I loved every bit of watching this last year!!!
I would have taken many pictures of the lights from multiple angles and just taken the median of the observed coordinates for every light. Since some lights will be obscured from some angles you probably have to just ignore observations that don't pass some threshold for brightness.
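That could look something like this, assuming (as the comment implies) that each view yields a full 3D estimate for the LED; the brightness threshold and observation format are invented.

```python
import numpy as np

def robust_position(observations, min_brightness=30):
    """Median of one LED's coordinates across many camera angles, ignoring
    views where it was too dim and therefore likely occluded.
    observations: iterable of (x, y, z, brightness) tuples."""
    kept = np.array([o[:3] for o in observations if o[3] >= min_brightness])
    return np.median(kept, axis=0)

obs = [(1.0, 2.0, 3.0, 200), (1.1, 2.1, 3.0, 180),
       (0.9, 1.9, 3.1, 190), (9.0, 9.0, 9.0, 5)]   # last view: occluded, a dim reflection
print(robust_position(obs))   # [1. 2. 3.]
```

The median is the key choice here: unlike a mean, a single wild reflection that sneaks past the brightness filter still gets voted down.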
@17:25 Thanks for teaching us the difference between a ball and a sphere in a way only people who already knew will notice. :)
What a fun project! And I love the way you explained it all, showing how much fun you have/had yourself!
A sequence of 9 photos will allow you to simultaneously encode all 500 lights uniquely using their binary index. This would be a much faster way to locate all the LEDs vs 500 photos for each view of the tree. To take things further, if you used the RGB capabilities of the lights, you could encode them in fewer than 9 photos, e.g. in base 4 (off, R, G, B), base 5 (off, R, G, B, R+G+B) or an even higher base if you allow for other combinations of the primary colors. I agree with other comments that suggest subtracting each photo from the "off" tree. I love the idea of using the physical constraints to validate the position of each LED!
Yes, this makes sense, and the lazy part of me wants less setup time for getting a Christmas tree configured. I'm sure this would work great in an ideal environment, but not every camera sensor has enough dynamic range for capturing images this way. I assume binary is more robust than coloring the lights and working with a number system higher than base 2. I guess we should be asking why there aren't yet colors in one- and two-dimensional barcodes. The whole base 2 versus base 4 (or higher) question could be interesting in its own right as a video. In general, optimizing the problem at every point in the process could be interesting to cover, for the sake of learning and as a community trying to help everyone out.
I think this could have something to do with that patent. I saw a configuration part of some of those commercial products and it looked like it was doing exactly this.
Considering he's using a webcam (which can take 30, or maybe 60, pictures per second, but let's be overly conservative and say that it takes 5 frames per light to reduce statistical noise, arriving at about 80s for 500 lights) I don't think that the scanning time is that big of an issue. Rotating the tree seems to be a manual process for him anyway (there was no mention of a motor or anything), and that probably takes 30-60 seconds to do each time anyhow.
Doing anything where multiple lights are on at the same time when scanning will introduce error (no clustering method is perfect), which was already bad enough with the straightforward method. If you instead light up only 3 lights at a time (one red, one blue, one green, since the hardware already filters the image into three color channels) you could reduce it to about 30s per orientation, but again, reducing the brightness of the lights will increase the error in locating them.
And how would you find the light that is at index 0? How would you find the z value of any of the lights?
@@stargazer7644 this is probably why binary on/off white light (base 2) was used: it's just simple and robust enough for the occasional curveball
To me, the statistics looked like they revealed some systematic error in the mapping code. It seems like it would have been worthwhile to build the error detection into the mapping process. For each of those chains of red, the system could run a program to collect a second set of data from four more images.
I wonder if it's just integer overflow. It's unlikely since integers in python are huge... but it looks so much like what you would get when an integer overflows.
@@risfutile I'm pretty sure integers in Python are only limited by your RAM
@@risfutile Unless you use some old Python 2 version, integers never overflow in Python.
I think the problem lies with going for the "brightest" pixel: if the LED is obscured from the camera, the "brightest" pixel is just some random pixel in the background somewhere if you're doing this with the regular room lights on (considering the tree itself is rather dark). I think that accounts for why most of the wrongly located pixels are way off the tree, and tend to be towards the edge of the field of view.
@@gorak9000 but why would that result in so many positions having one coordinate at max value? Why would there be so many on the outer most surfaces of the bounding box?
I'd love to see a Bézier curve connecting all the plotted LEDs. Using that can even increase the accuracy of the correction compared to straight lines
I'm a year late to the party, but in a perfect world the hanging wire would form a catenary (y = a*cosh(x/a), apparently), which you can compute for any length of wire. It would be interesting to see if this method is more or less accurate for long runs of wire. Maybe something clever could be done where the average hang distance a lamp sees is somehow used.
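A sketch of how you'd get the catenary parameter `a` from quantities you actually know (the span between two attachment points at equal height and the wire length between them); the function name and the bisection bracket are my own choices:

```python
import math

def catenary_parameter(span, wire_length):
    """Solve for `a` in y = a*cosh(x/a), given the horizontal span between two
    supports at equal height and the length of wire hung between them.
    Uses the arc-length identity: length = 2a*sinh(span/(2a)). Bisection works
    because arc length decreases monotonically as `a` grows (tighter wire)."""
    assert wire_length > span, "wire must be longer than the gap it spans"
    lo, hi = span / 1000, span * 1000  # bracket chosen to avoid math overflow
    for _ in range(100):
        mid = (lo + hi) / 2
        if 2 * mid * math.sinh(span / (2 * mid)) > wire_length:
            lo = mid  # curve too slack for this `a`: increase it
        else:
            hi = mid
    return (lo + hi) / 2
```

With `a` in hand, the expected sag between any two neighbouring lights falls out of the same cosh formula.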
12:45 Got so happy when you acknowledged the lights-not-following-the-pattern issue that I was focusing on for the first 10 minutes of the video!!! Haha!! Well done mate, can't wait to see what the viewers will come up with this year!! 🎄
Oh and Merry Christmas!!
Great video, I love the enthusiasm and dedication you put into your videos. Whether it's flipping coins, rolling dice, decorating a Christmas tree or calculating pi, you always go the extra mile to prove math can be fun.
This shows how ridiculous patents have become and why we need a patent reform.
It's a bit hard to say that so confidently when we don't even know what the patents were for.
While getting warnings about patents after the first video does seem a bit much (locating objects via light and camera angles and what not is something that is pretty basic stuff in computer vision text books), the kind of algorithm Matt then went into for this video to correct the errors for the lights is exactly the kind of stuff that would be potentially patentable.
Going to that sort of algorithm seems like a somewhat natural progression, but I'm only saying that after having seen Matt do it. Until he actually vocalized his idea I hadn't made that leap. So I wouldn't be surprised if there was a patent for that particular trick or an extension of it (i.e. "Now that we know which lights we suspect of being off and have an approximate region where they should be, do another pass with the camera to figure it out, throwing out any results that would fall outside of the expected range, blah blah blah").
So I think in this case the warning could be legitimately in good faith and for patents people would think are legitimate.
@@Primalmoon I would suspect any algorithm using Cartesian coordinates and the Pythagorean theorem would be long expired, especially since the Babylonians discovered the Pythagorean theorem before Pythagoras.
@@Qazqi on that point I agree regarding patents. The context of the patent would help more.
@@millwrightrick1 Check and Checkmate.
Matt will we finally get the follow-up video?
Should "Geographic Information For Trees" be pronounced Jift or gift?
Gift
Neither. It should be pronounced "geographic information for trees."
Gift, because it is a gift (present) to all of us. 😁
Salmon
It should be Hard-G ift ... but we all know the Internet will be arguing if it should be J-ift for years to come!
Never saw your videos before, then last year YouTube suggested one... I watched & ROFLMAO'd... and sent it to a bunch of friends... can't wait for take 2!
Movie and video game makers have been tracking the location of (moving) points in space for well over 20 years, so I can't imagine there's a still-current patent covering this that couldn't be easily dismissed by reference to prior art, quite apart from the fact that the geometry involved has been known for thousands of years.
Almost the end of January. Any word on whether it's going to happen? I'd really like to see my code on a real tree rather than the simulated one, although that was still fun.
There are a lot of ways to improve and speed up the LED location process. Just to mention a couple: you could compare the picture you've taken with a picture with all the lights turned off, then restrict the search to only those pixels whose luminosity has increased by at least some arbitrary amount. Another way is to restrict the search to the nearby area, limited by the cable length.
To speed up the search, you could use your "binary address blink" and take just one picture for every bit, then reconstruct both the position and the address from a very limited set of pictures (9 instead of 500).
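The first suggestion (compare against a lights-off frame) might look like this in pure Python; the grayscale-image-as-list-of-rows format and the threshold value are assumptions for the sketch:

```python
def locate_led(base_image, lit_image, min_increase=30):
    """Return the (x, y) of the pixel with the greatest brightness *increase*
    over the base ("all lights off") frame, or None if nothing brightened by
    at least `min_increase`. Better to flag an obscured LED than to pick a
    random bright pixel in the background."""
    best_gain, best_xy = 0, None
    for y, (base_row, lit_row) in enumerate(zip(base_image, lit_image)):
        for x, (b, l) in enumerate(zip(base_row, lit_row)):
            if l - b > best_gain:
                best_gain, best_xy = l - b, (x, y)
    return best_xy if best_gain >= min_increase else None
```

Returning `None` for the obscured case is the useful part: a missing coordinate is easy to patch up later, a confidently wrong one isn't.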
You know it's math and not computer graphics when the pixels are counted from the bottom
Still can't comprehend why the original software guys decided it was a good idea to count from the top
well of course GIFT goes from under the tree
@@thorvaldspear - Because that’s the way CRTs scan over the screen (left to right, top to bottom).
@@altosack I guess that does make sense; they didn't think about it in terms of a graph, but rather like reading text.
Even a binary tree starts at the top, so you know it's not computer science either.
YESSS!!! You actually did do GIS. That means you use the Z-axis correctly: height. That's the part of Minecraft I love to hate: those coders messed up the 3D space. (I work with maps for a living, so I am allowed to cherish such little mistakes.) But Matt did it right! Thank you!
If you have a distribution of distances (or, more accurately, a distribution of real distances plus a distribution of erroneous distances between the lights), then to find the threshold that separates real from erroneous distances with the smallest error, you could create bins, plot them as a histogram, and then use, for instance, Otsu's method to automatically find the best threshold.
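A sketch of Otsu's method applied directly to a 1-D list of neighbour distances; the bin count is arbitrary and the helper name is mine:

```python
def otsu_threshold(values, num_bins=64):
    """Pick the histogram cut that maximises between-class variance,
    separating plausible neighbour distances from erroneous ones."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / num_bins or 1.0
    hist = [0] * num_bins
    for v in values:
        hist[min(int((v - lo) / width), num_bins - 1)] += 1
    total = len(values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_bin, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t in range(num_bins - 1):       # candidate cut after bin t
        w0 += hist[t]
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1   # class means (in bin units)
        between = w0 * w1 * (m0 - m1) ** 2
        if between > best_var:
            best_var, best_bin = between, t
    return lo + (best_bin + 1) * width  # threshold back in distance units
```

It works well here precisely because the two populations (genuine gaps vs. garbage coordinates) are far apart.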
Happy Crimbo Matt!
I celebrated mine on the 23rd though. I bought a "Parker Calendar" and all the holidays were quote, "nearly correct."
Have a great holiday, and heres to a happy new years day on Jan 3rd!
Merry Christmas Matt 😁
Hey Matt, what’s happening with the testing code video? On the edge of my seat waiting!
I love how you fixed this. This is brilliant.
I love the overlap between the coordinate issues on the Christmas tree and the geometry of the Antarctic eclipses.
How Matt deals with unknown coordinates: *make like a US border, straight line*
It's mind-boggling how something like hanging a chain of LEDs around a Christmas tree and its subsequent "LED gaps ranked by size" chart still follows Zipf's law..
I think the likely reason it follows Zipf's law is that the Christmas tree is a cone, so randomly placed lights will have a logarithmic gap-size distribution for the same reason binary search trees have a log(n) search time. This one is fairly explainable.
13:25 Matt says "it's not super obvious".
Me from the time he starts the effect "THAT LIGHT IS OUTTA PHASE" *twitch* *twitch*
1 year ago today I was introduced to this channel because of this tree... I love this.
Thank You @Stand-up Maths for this great video. Love the LED lights. Merry Christmas Matt Parker. 🎄