Very interesting and the obvious question to me wasn't addressed in the video: if the image is the worst in the area right around and behind the sphere, yet the camera's raw 2-D pixel data already has a sharp, clear image of that area, then wouldn't a blended approach work best for the 3-D view that is attempting to "see" the area right around the sphere? IOW, use the sphere reflection code as is for all parts of the 3-D world *except* for the area around the sphere, which would use and map the undistorted 2-D camera data for that region. It seems that would be ideal.
Am I the only person who cringes when I see good data just being lopped off and thrown away? I'm sure this has been thought about before. What is your experience with this approach?
This is really intuitive logic! Unfortunately it's only possible under very specific circumstances and only kind-of, which is why it remains mostly theoretical. First of all, what you see in the rest of the image is dependent on the lens' field of view. If we go by the classic model, that field of view is zero, because the camera is orthographic and the ball is infinitely small. If we go by the updated one, then we still have part of the thing we want blocked by the sphere. You could "zoom out" further for extra info, but that would need a new modification to the model to define what it actually is that you see in those parts of the image. Also, depending on distance, it's very much out of focus and blurred. And finally, the reflection on the ball steals a bit of brightness, so both the fill-in and the rest of the projection would need to be color calibrated. At the end of it you will still have to perform a patchwork from different image parts, blend them together and still end up with a piece missing that you have to generate somehow. So what actually happens is artists just clone-stamping in Photoshop with a different part of the projection ;] Part of this is covered in Christian Bloch's "The HDRI Handbook 2.0", but I really wanted to stay under 30mins ( and failed^^ ) so it's one of many things I skipped.
Fantastic video. So interesting. Could we reconstruct a 3D model of an environment using two of these spheres with two cameras placed in the same room, separated from each other? In the same way we use photogrammetry to reconstruct a 3D model of an object, could we use merely two spheres and two cameras to reconstruct the volume and shape of a room? Or maybe the shape of an object situated in between the two spheres (but with the cameras facing the spheres from the side, perpendicular to the line connecting the two spheres, so as to reduce distortion). If it is possible, it could be used to get 3D live models from inaccessible places, like the microwave example. On another note, the mirrorball projection is closely related to the lighting problem. How many spheres are necessary to get a complete image of a convex/concave room? Incredible video, thanks.
Do you think the distortion at the edge is distinct enough to train a machine learning model on, so it can correctly crop images of mirror balls automatically? If a lot of people send you -nudes- images of mirror balls, that would be interesting to see. Could also be useful for live video, where you point an orthographic camera at a rolling reflective marble.
Probably very doable... The more primitive solution is to use a special case of the "Hough Transform" called the "Circle Hough Transform". It should be able to roughly detect the edge of the ball, but I never tried implementing it...
@@FrostKiwi Oh cool, i just went down the wikipedia rabbit hole on that transform. And also canny edge detection. And..., well, you know, rabbit hole. Thanks for the answer, by the way!
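For anyone else going down that rabbit hole, here is a minimal pure-Python/NumPy sketch of the Circle Hough Transform mentioned above. This is not FrostKiwi's code, just an illustration of the idea: every edge pixel votes for all centers that would put it on a circle of a candidate radius, and the true circle's center collects the most votes.

```python
import math
import numpy as np

def circle_hough(edge_points, shape, radii):
    """Vote for circle centers (cx, cy) at each candidate radius.
    edge_points: iterable of (x, y) pixels lying on edges.
    shape: (height, width) of the image.
    radii: candidate radii to test.
    Returns the (cx, cy, r) with the most votes."""
    h, w = shape
    best = (0, 0, 0, -1)  # cx, cy, r, votes
    for r in radii:
        acc = np.zeros((h, w), dtype=np.int32)
        # Each edge point votes for every center a distance r away from it.
        for x, y in edge_points:
            for t in range(0, 360, 4):
                cx = int(round(x - r * math.cos(math.radians(t))))
                cy = int(round(y - r * math.sin(math.radians(t))))
                if 0 <= cx < w and 0 <= cy < h:
                    acc[cy, cx] += 1
        cy, cx = np.unravel_index(acc.argmax(), acc.shape)
        if acc[cy, cx] > best[3]:
            best = (cx, cy, r, acc[cy, cx])
    return best[:3]

# Synthetic check: edge pixels of a circle centered at (50, 40), radius 20.
pts = [(50 + 20 * math.cos(math.radians(t)),
        40 + 20 * math.sin(math.radians(t))) for t in range(0, 360, 5)]
print(circle_hough(pts, (100, 100), radii=[15, 20, 25]))  # close to (50, 40, 20)
```

A real implementation (e.g. OpenCV's HoughCircles) uses edge gradient directions to avoid this brute-force voting loop, but the principle for detecting the ball's circular rim is the same.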
But what if you make a special mirror shape that has uniform information density at its front and edge? I mean, a spherical ball has too much info at the front part and too little at the edge, so what if we make the nose more pointy and the edge more steep? To enlarge the area where light can reflect from 90 to 170 degrees from the camera normal, to get better quality? Hmm, sounds like some sort of ellipsoid mirror, huh. I also thought about putting a lens on the edge to get info from 180 degrees, but it'll block light from other angles. Maybe if we bore out the mirror's core to move the blind spot from 180 degrees from the camera to the exact location of the camera. I mean, we don't really need to look at the camera itself in the image, so if we make a hole in the ellipsoid mirror directly in front of the camera, the camera will look behind the mirror through that hole and close this blind spot. Sorry for my English, it's not my native language, I'm Russian.
where does one buy a mirrorball? what keyword do i put into google to find some? "mirrorball" turns up faceted disco balls. is this a butler ball? is "mirror sphere" what i'm looking for? but the ones used in movie making have a handle
If your goal is just capturing light information to light 3D scenes via HDRIs, then going to a garden supply store and buying a cheap 10$ garden globe is sufficient. Those are not perfect mirrors, but depending on your use-case you might get away with cheap alternatives. There are many ways to buy one, depending on the goal. Mine was a Chrome Steel ball bearing bought from redhill balls for the big 10cm one and stainless steel 5cm one for the small one (You can actually see the exact label at 25:37). (Only their 10cm and smaller ones are polished enough to be mirror) The big 10cm one was ~250$ though, small one 50$, If I recall correctly. A colleague attached a mounting via an electric stud welder. Chrome steel rusts a bit by the way and needs to be stored in a plastic bag with a little bit of oil. Then there are professional movie light probe sets. Not sure of their price, but this is what the movie industry uses.
the open captions really ruin some of the visualizations :/ if you spot the reflection of this ball, somewhere in the room... you can even try to reconstruct the occluded areas. bit of an "enhance" moment tho.
Yes! A telecentric lens would indeed not suffer from that, but the asterisk still applies, in that the parallax introduced by the ball's size is still there. Also, depending on factors like size, telecentric lenses have a very shallow depth of field, which introduces its own challenges for imaging the center and edge of the ball sharply.
Great idea to use a mirror ball to capture 3D information. Can this mirror ball be applied to LiDAR to create colorized 3D data? This is Hirakata. I was next to you on the Turkish Airlines flight from Istanbul to Tokyo.
A great honor that you visit me in cyberspace! Theoretically, this is very much possible. Practically this poses a bunch of challenges, besides the modification of the time-of-flight algorithm of the LiDAR device. Instead of 360°, the LiDAR would be measuring 360 degrees' worth of precision inside a small cone, because it's looking at the sphere. So every part of the LiDAR sensor would need to get a boost in precision, especially the mechanical parts. The oblique angle at the edge of the sphere could potentially mess with a couple of assumptions made by the LiDAR device. The laser spot size would grow by sin(𝛼/4) towards the edges, just like resolution drops in the same manner. So something would have to be done about the resulting energy loss in the time-of-flight algorithm and the way noise is handled. Technically nothing speaks against it. If someone were to implement this, they would have to solve some tough engineering challenges.
Do you think this could have applications in Astronomy? One could imagine shooting mirrorballs into space and observing them with a telescope. Or is a camera with too much distance from the ball a limiting factor? I think you will really love this video from PBS Spacetime: th-cam.com/video/4d0EGIt1SPc/w-d-xo.htmlsi=dNkkoYbPDPr8Qc-y
This is the best half-hour I've spent in a long while. Everything here is spectacular and makes me want to purchase a mirror ball immediately. I am not quite wrapping my head around how we can get the reverse side of the sphere when the image is cropped, as I can't seem to figure out what incidence angle will result in a ray that goes behind the ball (other than the tangent, as was mentioned). Are we just getting the behind-the-ball scenes from the region outside of the camera?
Many thanks for the kind words! And no, all this information comes from just an image of half the mirror ball. That's what makes it magical ;] You can see what's behind the sphere near the edges. The image around the ball remains unused. Play around with the distortion visualization in the WebApp to get a feel for it, it's really not intuitive. 360° from half a ball is what you get according to the model; in reality we use perspective projection lenses, so it's a bit less, 346° in the opening example. I shouldn't have mentioned the tangent, it's a theoretical discussion. Take a look at the visualization at 15:02. The ray being reflected downwards lands "behind the sphere", doesn't it? So that's how you can see behind the sphere with only an image of half of it.
@@FrostKiwi I think I have it. It was breaking my brain for a while because I was thinking of light as always reflecting off _in the direction of the normal_ rather than reflected _around_ the normal. That makes much more sense and it's very strange to think about. Thank you for taking the time to reply.
The M.C. Escher thing, and likely the room images as well, could be fixed at the distorted part by simply including the mirror ball back into the room, could it not? In the Escher drawing it's a black sphere, but you should just be able to paste the mirror ball image onto the mirror ball to have the full room, which, with some distortions... could then be walked around in, as you have 3D point information for the entire room depending on where it was on the sphere in relation to the room.
Reinserting the ball definitely works! Here are some samples: imgur.com/a/MrxHR0d The blind spot represents what we cannot see, not the ball itself. It messes with a bunch of concepts, so I didn't introduce it. It also doesn't work for sphere segments with big blind spots, as the perspective mismatch messes with the result. As for the walking around, there is no way to retrieve any depth information from a single mirror ball image, as that requires some kind of parallax to be present. What you could totally do is project the result onto a room-sized box and walk around in that (with many distortions for things not matching the box wall). This is done a lot in VFX work, and in video games "parallax corrected cubemaps" ( th-cam.com/video/ZH6s1hbwoQQ/w-d-xo.html ), as recently implemented in Counter Strike 2 for reflections, play along the same line of thinking. However, the depth information there comes from you creating the room-sized box, not the image itself.
Many thanks! There are many ways to do so. Mine was a Chrome Steel ball bearing bought from redhill balls for the big 10cm one and stainless steel 5cm one for the small one (You can actually see the exact label at 25:37). (Only their 10cm and smaller ones are polished enough to be mirror) The big 10cm one was ~250$ though, small one 50$, If I recall correctly. A colleague attached a mounting via an electric stud welder. Chrome steel rusts a bit by the way and needs to be stored in a plastic bag with a little bit of oil. Then there are professional movie light probe sets. Not sure of their price, but this is what the movie industry uses. If your goal is just capturing light information to light 3D scenes via HDRIs, then going to a garden supply store and buying a cheap 10$ garden globe is sufficient. Those are not perfect mirrors, but depending on your use-case you might get away with cheap alternatives.
i watched the whole video and i still don't understand why the edge of the circle image has any information on what's behind the sphere. you don't have line of sight with what's behind the sphere, so how is it getting to the camera? i need a diagram drawing out lines showing the path of light rays from behind the ball to the camera
Definitely counterintuitive! Take a look at 2:32. There is the big square light at the top of the 2D image. But it is clearly behind the sphere if we consider that the camera is at the door. That's how you see things beyond the 180° line. Play around with the WebApp to get a feel for it. As for a diagram, take a look at 4:50. There is the light ray going down. Take a look where it ends up; it ends up behind the sphere, doesn't it? That's how you see what's behind.
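The ray-diagram argument above can also be checked numerically with nothing but the standard reflection formula R = d - 2(d·n)n. This is my own sketch of the idealized orthographic model, not code from the video:

```python
import numpy as np

def reflect(d, n):
    """Mirror direction d around unit normal n."""
    return d - 2 * np.dot(d, n) * n

# Orthographic camera looks along -z at a unit mirror ball.
d = np.array([0.0, 0.0, -1.0])  # incoming camera ray

# Surface normal near the ball's visible edge (beta = 80 deg off the camera axis).
beta = np.radians(80)
n = np.array([np.sin(beta), 0.0, np.cos(beta)])

r = reflect(d, n)
print(r)
# The reflected ray sits at 2*beta = 160 deg from the behind-camera axis, and
# its z component is negative: it heads *behind* the ball. That is why the
# rim of the ball image shows the scene behind the sphere.
```

At the exact rim (beta = 90 deg) the reflected ray points straight forward, which is the theoretical 360° limit the video talks about.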
I have a project using similar sets of projections in a video editor, with graphs based on quantum spherical harmonics, and have some cool patterns. It would be useful to hear your thoughts on some of these results since I am just getting into this subject
what happens when you have multiple spheres? Can you make a 360º+depth image using only 2? At the very least, you should be able to eliminate the blind spot in the image
You can already eliminate the blind spot by photographing the same ball from 2 different positions, as seen in the WebApp example "Different Rotations". But yeah, changing the position of the ball or using two of them gives you depth information. You can see a simple version of that in the very last paper I showcased at the end of chapter 4, about the light reconstruction using 2 billiard balls. Adapting existing SFM algorithms to a spherical projection context is very much feasible: there are already structure-from-motion algorithms that work on fisheye lenses, and extending that to mirror balls isn't too much of a jump.
Yes, this is one of the things Steve Mould covers when talking about this topic and how to calculate the max safe hole size via (Hole Width / Wavelength)⁴ [ th-cam.com/video/8bXhsUs-ohw/w-d-xo.html ] However, that's for smartphone sized camera lenses only. In my case the lens is just too big. Doing the trick with the wide aperture was easier in my case.
Sure, I'll add a Lat-Long Export function this week :] If possible, pls create an issue in GitHub to track this feature request: github.com/FrostKiwi/Mirrorball/issues
I finished the export feature. You can now export as a Equirectangular / LatLong projection for use in other software or as a high resolution screenshot. Please enjoy
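Not the WebApp's actual implementation, but a minimal NumPy sketch of what such an equirectangular export has to do under the idealized orthographic model: for every lat/long output pixel, build a world direction, find the ball-surface normal as the half-vector between that direction and the camera axis, and sample the ball photo there.

```python
import numpy as np

def mirrorball_to_equirect(ball_img, width=64, height=32):
    """Resample a square mirror-ball photo into an equirectangular
    (lat/long) panorama, assuming an orthographic camera on the +z axis.
    ball_img: (N, N) or (N, N, channels) array of the cropped ball."""
    n = ball_img.shape[0]
    # One world direction per output pixel.
    lon = (np.arange(width) + 0.5) / width * 2 * np.pi - np.pi    # -pi..pi
    lat = np.pi / 2 - (np.arange(height) + 0.5) / height * np.pi  # pi/2..-pi/2
    lon, lat = np.meshgrid(lon, lat)
    dx = np.cos(lat) * np.sin(lon)
    dy = np.sin(lat)
    dz = np.cos(lat) * np.cos(lon)   # +z points from the ball toward the camera
    # Mirror-ball inverse mapping: the surface normal is the half-vector
    # between the view direction (0, 0, 1) and the desired direction.
    hx, hy, hz = dx, dy, dz + 1.0
    norm = np.sqrt(hx**2 + hy**2 + hz**2)
    norm[norm == 0] = 1e-9           # the exact blind spot behind the ball
    u = ((hx / norm) * 0.5 + 0.5) * (n - 1)   # normal.x -> image column
    v = ((hy / norm) * 0.5 + 0.5) * (n - 1)   # normal.y -> image row
    return ball_img[np.round(v).astype(int), np.round(u).astype(int)]
```

Nearest-neighbor sampling keeps the sketch short; a real exporter would interpolate, and the blind spot directly behind the ball simply reuses rim pixels here.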
26:50 What if instead of 2 cameras, one sphere, you had 2 spheres and one camera? maybe you could capture 3d images from a single perspective? Bet the camera would make it weird, though.
Yes but no. As mentioned in 26:10, according to the model, there is no parallax, so no matter where you look from you get the same information, thus not getting depth information and thus not being able to capture 3D. In reality, there is *some* parallax due to the real-life size of the ball. So in reality, two cameras *would* be able to perceive depth, given the dimensions + distance of the sphere. However, pulling depth information from that would be really hard and with smaller balls, essentially impossible.
@@FrostKiwi Oh, you misunderstood. What I meant to say was: imagine you have a 3D image camera setup. Instead of putting cameras on it, you'd put reflective balls. Then with just one camera, you would get 2 perspectives from the reflections on the separate mirrors.
What a wonderful project, I never realized that a sphere could actually capture the full 360 degrees. Loved to see all the visual examples! Especially the 'impossible' locations :-)
I am beyond star struck. Many many thanks for the kind words!
This encouragement will stick with me for a loooong time to come.
I love your videos Posy!
I love your videos Posy!! I took your mouse cursors and did silly things like add a little smiley face to the classic pointer :)
I want to believe you receive at least one photograph of someone's balls reflected in a mirror 🤣
But (more) seriously - what a fantastic video! Thank you!
Definitely my personal favorite of SoME3. I love your editing and narration style. And of course, the memes.
Many thanks for the kind words 🤗
That MC Escher bit was incredible. Really great video, loved the balance between theory + examples :)
Many thanks for the kind words!
Yeah, I was so happy when I saw M.C. Escher's Work and confirming that you can re-project them. Over the last decade I contacted the M.C. Escher foundation 3 separate times to setup a VR view of his works and sent them online demos, but they shot down the idea each time. They are really protective of his work and copyright.
This is the shortest 30+ minute video I have ever watched. At the end I could not believe it was already over, I want more!
Stellarium also uses the mirror ball projection. I was looking into that a few years ago and came across Paul Bourke back then. Thank you so much for all the additional references! I was familiar with most of the people already, but not their work on this.
Phenomenal production quality, this video should have hundreds of thousands of views fr
That rope you bought clearly did its job, because you reined me in real fast and tied everything together quite nicely.
We’re just going to ignore the “LLVM” book with Rem and Shea on it? Ok.
And obviously this was the best video I’ve seen in months, nice work mate.
I had the pleasure of listening to Paul Debevec speak at my college. Practically every image he presented involved a mirror ball!
He is a personal hero of mine.
Ohh, I'm so jealous T___T
Your delivery at the beginning especially is somewhere between wildly confusing and the most interesting thing I've ever seen. Matches the content of the video, too, which is awesome. Great work, and good luck!
45 seconds in and this is the most amazing thing I've seen. "Oh this guy's gotta have like 2 million subscribers, right?" Not even a thousand.
Slowly getting there ;]
Came here from the 3B1B winner announcement thing. This video is so good!
Using security cameras for this! Reconstructing the painter's room!
This video is SUPER underrated omg
Awesome video @Frost Kiwi . Instant subscribe. Please keep doing more awesome stuff.
Coooool!! And music from Posy, hehe. 💎
TH-cam recommended me gold. You even made the same video in different languages, the production quality is outstanding. You've earned my subscription.
You used the music that POSY created for his videos, I love it! That music goes so well with the sense of wonderment
Found this gold. Really awesome stuff for computer graphics enthusiast. The delivery, effort, and humor is on point! Definitely one of the best SoMe I've seen
Many thanks for the kind words!
I haven't watched all the videos but judging in relation to those I have watched (and my shitty entry) this one wins by a landslide. the motivation is perfect (i definitely saw magic) and everything was super clear.
Many thanks for the encouraging words
Dude, please continue making videos! Superb quality..
Dude this is awesome, I actually used this theory in a project while back, would have been so much easier with this video. Thanks for the great work!
wow this is incredibly well done, is available in multiple languages, has a companion WebApp, and even has a paper backing it all up!!! this guy is so talented. well done!
Love the Posy music!
Super interesting video! One application I'm surprised you didn't mention (which i haven't tried but assume must be possible) is to place several balls within the cameras view! Then you'd get multiple 360 panoramas from different viewing angles all just using a single camera! Could be used fore some pretty crazy low budget streaming setups!
True, there are multiple topics I dropped to fit within 30 mins ( and still failed^^ )
The main problem with this approach is sin(α/4) being a cruel mistress and resolution dropping exponentially each jump you make. But it's definitely doable.
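For concreteness: under the orthographic mirror-ball model, taking α as the total field of view captured, the portion of the image radius needed is sin(α/4), so pixel density collapses toward the rim. A quick sketch of that falloff (my own arithmetic, not from the video):

```python
import math

def image_radius(fov_deg):
    """Fraction of the ball-image radius needed to capture fov_deg
    degrees of total field of view (orthographic mirror-ball model)."""
    return math.sin(math.radians(fov_deg) / 4)

print(image_radius(180))  # ~0.707: the front hemisphere fills only ~71% of the radius
print(image_radius(360))  # 1.0: the full sphere reaches the rim

# The outer ~29% of the radius must encode the entire rear hemisphere,
# so each nested mirror-ball "jump" compounds this resolution loss.
```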
This video is fantastic! It's a quirky presentation of a quirky topic. You're clearly enjoying yourself!
This was my favorite video from SoME3, veryyyy cool
0:42 BLAHAJ SPOTTED
This is incredible. I love this so much
One of the most useful videos and some of the most useful science you'll ever need. Especially if you are a filmmaker or game designer.
"Control perception, and you control reality."
Learned so much from this video and it’s so high quality!!! Amazing
i just instantly fall in love with this video, i have never seen such a beautiful topic
Regarding the multisource video feed without parallax using this mirror all technique, I feel like that could have some very interesting robotics applications
True!
Though to be fair, the mathematical model says there is no parallax, but there is a new kind of parallax introduced by the sphere's size, though I couldn't cover this without blowing past the 30min limit.
I just now finished the Japanese version and will upload it on Monday, after which I'll present this to a bunch of research colleagues. We'll see; it's a technique with very niche usability, but maybe it does end up in a practical application after all :]
Amazingly filmed and edited video, with a great theme, and well-done writing. This deserves many more views!!!
It's a crime this video only has 3k views; this quality deserves at least a couple hundred thousand
Such comments are true motivation, thank you so much 😻
As for the views, slowly getting there, view by view ;]
Woah that was incredible. You deserve millions of subs
Posy music makes me happy.
Within 10 seconds, I immediately felt the Posy influence; and of course you shouted out using his music just seconds later lol.
Absolutely amazing videos by amazing person. I can't understand why you are so underrated.
Oh boi! You are such an interesting & funny person. Love the topic and your explanation. Wish I had watched it about 2 months ago when I was in Japan so we could go out and meet each other. But if you will be around Bali in next 4 months let me know! I would like to connect! You are a truly inspiring person!
Great video!
I wrote a program to take rectangular projections out of equirectangular images,
then use those images as wall textures for a VR simulator (floor and ceiling too).
The rooms I needed are too radioactive to be in for extended periods, so 3D scanning isn't feasible.
A nice side benefit was that the extracted images were a perfect fit, and layouts like wall plugs were perfectly sized and placed.
If you combine from multiple sphere locations, you could blend the photos to remove equipment for a more perfect representation.
"are too radioactive to be in for extended periods"
Mamma Mia what are you doing Σ(O_O)
@@FrostKiwi We make radioisotopes for cancer scans and such.
@@rodbotic That's so cool! Sounds like a great job with a valuable service to humanity.
Fantastic video :-) Subscribed!
This is so cool. I had this habit of taking pictures of myself in weird reflective surfaces and one was in a shop with a half-sphere mirror thing. I was wondering if I can use that to reconstruct the shop... Now I have my answer!
Great job with the vid and good luck with the channel! You have a good thing going with it :)
This is a good video to pair with Veritasium's video on the world's roundest object lol
Yes it really is
Great video, with good visuals and execution. Good work!
I really, REALLY, would love a video about the different kind of parallax that this technique introduces! Or at least it's name, so I can study about it.
Not sure there is a specific name for it 🤔
A parallax introduced by the spherical mirror having a physical size...
@@FrostKiwi Golden opportunity! Make a video and you name it yourself 😁😁
AHH I LOVED THIS VIDEO SO MUCH. This is why I love SoME.
Thank you so much! Grant and the whole 3b1b Team really created something special with this event. Bringing together the math and creator community and teasing out their best work.
amaaaaziiiiiiiing I have never heard of this before
My Panopticon has never been so optimized until now!
I thought I recognized Posy's music!
Loved this video. Very nicely explained. Would love to see the extended version :D
The parallax-free recording bit made me think of Ambisonics, with the ball serving as an analogue to the sound-field. An Ambisonic recording encodes a full sphere of sound that can be rotated however you like after the recording is made in as few as four channels. TH-cam uses it for 360-degree video in the 4-channel form. Higher orders give better spatial resolution at the cost of needing (n+1)^2 channels for order n.
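The channel counts quoted here check out: an order-n Ambisonic signal carries one channel per spherical-harmonic component up to degree n, which is where (n+1)^2 comes from. A two-line sanity check:

```python
def ambisonic_channels(order):
    """Channels needed for full-sphere Ambisonics of a given order:
    one spherical-harmonic component per degree l <= order, with
    (2l + 1) components per degree, summing to (order + 1)**2."""
    return sum(2 * l + 1 for l in range(order + 1))

print([ambisonic_channels(n) for n in range(4)])  # [1, 4, 9, 16]
```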
So glad 3B1B sent me here!
Fav SoME video thus far!
"Even a fox can understand LLVM," the book caught my interest.
Yeah, the artwork is really fun! The contents are outdated though...
I bought mostly for the memes
Amazing video!
This is amazing!
I remember a few years ago when mirrorballs were all the rage for Insta and I got so many ads for them but never bothered; you'd have helped sell so many with this video! It's fun and light on the math. What I'd want to know though is how you convert a 2D representation on the camera sensor into a 3D model in the first place?
I assume you mean the 2D representation of the ball becoming the 3D environment? That's what the projection formula is for. You run it for every pixel of the screen and it maps it to the pixels of the 2D image of the mirror ball, no 3D models involved
I worded that poorly. I meant how the environment gets projected onto the ball and then onto the screen, but now typing this I realize that's just the same projection reversed, since it necessarily is a bijection
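That bijection can be written down compactly. Here is a minimal sketch of the classic orthographic sphere-map formula (the video's and WebApp's exact formula may use different conventions; the function name is mine):

```python
import math

def direction_to_mirrorball_uv(x, y, z):
    """Map a unit world direction (orthographic camera looking down -z)
    to mirror-ball image coordinates in [0,1]^2. The halfway vector
    between the direction and the camera axis is the surface normal at
    the reflection point, and its x/y give the image position."""
    m = 2.0 * math.sqrt(x * x + y * y + (z + 1.0) ** 2)
    return (x / m + 0.5, y / m + 0.5)

# The direction pointing straight back at the camera lands in the
# center of the ball's image...
print(direction_to_mirrorball_uv(0.0, 0.0, 1.0))  # (0.5, 0.5)
# ...while the direction directly behind the ball, (0, 0, -1), is the
# singular blind spot: m goes to zero there, at the silhouette edge.
```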
i love this so much 😭😭❤️
this video deserves 9 trillion views
Many thanks for the kind words.
Slowly getting there :D
Any share is highly appreciated
Awesome video! Thank you!
An orthographic lens can be approximated by using a ball smaller than the light-gathering element of the lens.
Hear me out: what if your mirror ball reflection captures another mirror ball? It's mirror balls all the way down!
Yes, that does work :]
Unfortunately, sin(𝛼/4) is a cruel mistress, so image resolution takes a nosedive with each jump...
@@FrostKiwi Not if you use optical zoom, though the math will get a whole lot harder, as you would not be able to rely on a preformatted picture to map the first mirror ball.
On a completely separate thought, one other application for mirror balls could be realistic shading for augmented reality.
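The sin(𝛼/4) falloff mentioned above is easy to play with numerically. A sketch assuming the ideal orthographic mirror-ball model, where a scene direction 𝛼 degrees away from the straight-back-at-the-camera direction appears at image radius sin(𝛼/4):

```python
import math

def image_radius(alpha_deg):
    """Image-plane radius (in units of the ball's radius) at which a
    scene direction alpha degrees from the camera axis appears, under
    the ideal orthographic mirror-ball model: r = sin(alpha / 4)."""
    return math.sin(math.radians(alpha_deg) / 4.0)

# The entire rear hemisphere (alpha = 180..360 degrees) is squeezed
# into the outer ring between r ~ 0.707 and r = 1 -- and the squeeze
# compounds with every nested mirror-ball reflection.
print(round(image_radius(180), 3))  # 0.707
print(round(image_radius(360), 3))  # 1.0
```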
wow this is well made
Wait, you recorded the same video multiple times with different languages?? That's simply amazing! I suppose the German and Russian voicing are late due to low views. We need to change that, I'll watch the video 5 times!
Yooo Posy’s music. Love it
Would it be difficult to allow for taking 2 (or more) pictures of the same mirror ball from 2 different angles, to fill in the missing information, and maybe provide better resolution? What was right on the edge in the first, might be in the middle of the ball in the second picture...
Maybe differences in perspective (camera distance from the ball) and rotation might screw things up a bit?
Yes, you can very much do that! You can see this happening at 5:30 and in the WebApp you can see the results, click the one called "Different Rotations" to try it out yourself.
The merging has been documented by a bunch of people; especially "The HDRI Handbook 2" has a whole chapter on it. Basically, what artists did is match rotations, convert both to equirectangular and fix seams in Photoshop.
And I thought it was unintuitive that mirrors can "see" behind paper! This is fascinating
Very interesting and the obvious question to me wasn't addressed in the video: if the image is the worst in the area right around and behind the sphere, yet the camera's raw 2-D pixel data already has a sharp, clear image of that area, then wouldn't a blended approach work best for the 3-D view that is attempting to "see" the area right around the sphere? IOW, use the sphere reflection code as is for all parts of the 3-D world *except* for the area around the sphere, which would use and map the undistorted 2-D camera data for that region. It seems that would be ideal.
Am I the only person who cringes when I see good data just being lopped off and thrown away? I'm sure this has been thought about before. What is your experience with this approach?
This is really intuitive logic! Unfortunately it's only possible under very specific circumstances and only kind-of, which is why it remains mostly theoretical.
First of all, what you see in the rest of the image is dependent on the lens' field of view. If we go by the classic model, that field of view is zero, because the camera is orthographic and the ball is infinitely small.
If we go by the updated one, then we still have part of the thing we want blocked by the sphere. You could "zoom out" further for extra info, but that would need a new modification to the model to define what it is you actually see in those parts of the image.
Also, depending on distance, it's very much out of focus and blurred. And finally, the reflection on the ball steals a bit of brightness, so both fill-in and the rest of the projection would need to be color calibrated.
In the end you will still have to perform patchwork from different image parts, blend them together, and still end up with a piece missing that you have to generate somehow. So what actually happens is artists just clone-stamping in Photoshop with a different part of the projection ;]
Part of this is covered in Christian Bloch's "The HDRI Handbook 2.0", but I really wanted to stay under 30mins ( and failed^^ ) so it's one of many things I skipped.
Is there any simple JS script available to embed the ball-viewer into a webpage?
This is insanely cool and simple! Thank you, my friend!
Fantastic video. So interesting.
Could we reconstruct a 3d model of an environment using two of these spheres with two cameras placed in the same room, separated from each other?
In the same way we use photogrammetry to reconstruct a 3D model of an object, could we use merely two spheres and two cameras to reconstruct the volume and shape of a room? Or maybe the shape of an object situated in between the two spheres (but with the cameras facing the spheres from the side, perpendicular to the line connecting the two spheres, so as to reduce distortion). If it is possible, it could be used to get 3D live models from inaccessible places, like the microwave example.
On another note, the mirror-ball projection is closely related to the illumination problem: how many spheres are necessary to get a complete image of a convex/concave room?
Incredible video, thanks.
Many thanks for the kind words
Do you think the distortion at the edge is distinct enough to train a machine learning model on, so it can correctly crop images of mirror balls automatically? If a lot of people send you -nudes- images of mirror balls, that would be interesting to see. Could also be useful for live video, where you point an orthographic camera at a rolling reflective marble.
Probably very doable...
The more primitive solution is to use a special case of the "Hough Transform" called the "Circle Hough Transform". It should be able to roughly detect the edge of the ball, but I never tried implementing it...
@@FrostKiwi Oh cool, i just went down the wikipedia rabbit hole on that transform. And also canny edge detection. And..., well, you know, rabbit hole.
Thanks for the answer, by the way!
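For anyone else going down that rabbit hole: the Circle Hough Transform mentioned above fits in a few lines. This is a toy pure-Python version for a known radius (a real detector would use something like OpenCV's `HoughCircles` and search over radii too; the synthetic data here is mine):

```python
import math
from collections import Counter

def circle_hough_center(edge_points, radius):
    """Tiny Circle Hough Transform for a known radius: every edge
    pixel votes for all candidate centers at distance `radius` from
    it; the most-voted cell is the detected circle center."""
    votes = Counter()
    for (x, y) in edge_points:
        for deg in range(0, 360, 2):
            t = math.radians(deg)
            a = round(x - radius * math.cos(t))
            b = round(y - radius * math.sin(t))
            votes[(a, b)] += 1
    return votes.most_common(1)[0][0]

# Synthetic "ball silhouette": a circle centered at (50, 40), radius 20.
edge = [(50 + 20 * math.cos(math.radians(d)),
         40 + 20 * math.sin(math.radians(d))) for d in range(0, 360, 5)]
print(circle_hough_center(edge, radius=20))  # (50, 40)
```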
You should have sent that VFX video along with your resume to universal studios.
But what if you make a special shape mirror that will have unified information density at its front and edge?
I mean, a spherical ball has too much info at the front part and too little at the edge, so what if we make the nose more pointy and the edge more steep? To enlarge the area where light can reflect, from 90 to 170 degrees from the camera normal, to get better quality? Hmm, sounds like some sort of ellipsoid mirror, huh.
I also thought about putting a lens on the edge to get info from 180 degrees, but it would block light from other angles. Maybe we could bore out the mirror's core to move the blind spot from 180 degrees from the camera to the exact location of the camera. I mean, we don't really need to look at the camera itself in the image, so if we make a hole in the ellipsoid mirror directly in front of the camera, the camera will look behind the mirror through that hole and close this blind spot.
Sorry for my English, it's not my native language, I'm Russian.
Oh wait, 28:00, you told about that, ok, nevermind)
where does one buy a mirrorball?
what keyword do i put into google to find some?
"mirrorball" turns up faceted disco balls
is this a butler ball?
is "mirror sphere" what im looking for?
but ones used in movie making have a handle
If your goal is just capturing light information to light 3D scenes via HDRIs, then going to a garden supply store and buying a cheap $10 garden globe is sufficient. Those are not perfect mirrors, but depending on your use-case you might get away with cheap alternatives.
There are many ways to buy one, depending on the goal. Mine was a chrome steel ball bearing bought from redhill balls for the big 10cm one, and a stainless steel 5cm one for the small one (you can actually see the exact label at 25:37). Only their 10cm and smaller ones are polished enough to be mirrors. The big 10cm one was ~$250 though, the small one $50, if I recall correctly. A colleague attached a mounting via an electric stud welder. Chrome steel rusts a bit, by the way, and needs to be stored in a plastic bag with a little bit of oil.
Then there are professional movie light probe sets. Not sure of their price, but this is what the movie industry uses.
the open captions really ruin some of the visualizations :/
if you spot the reflection of this ball, somewhere in the room... you can even try to reconstruct the occluded areas. bit of an "enhance" moment tho.
This was epic.
@7:10 you mention how distortion occurs due to the focal property of camera lenses. Would this distortion be eliminated with a telecentric lens?
Yes! A telecentric lens would indeed not suffer from that, but the asterisk still applies, in that the parallax introduced by the ball's size is still there. Also, depending on factors like size, telecentric lenses have a very shallow depth of field, which introduces its own challenges in imaging the center and edge of the ball sharply.
How can I find your grad thesis? (I'm asking if it's open access)
Great idea to use a mirror ball to capture 3D information. Can this mirror ball be applied to LiDAR to create colorized 3D data? This is Hirakata. I was next to you on the Turkish Airlines flight from Istanbul to Tokyo.
A great honor that you visit me in cyberspace!
Theoretically, this is very much possible.
Practically this poses a bunch of challenges, besides the modification of the time-of-flight algorithm of the LiDAR device.
Instead of 360°, the LiDAR would be measuring 360 degrees' worth of precision inside a small cone, because it's looking at the sphere. So every part of the LiDAR sensor would need a boost in precision, especially the mechanical parts.
The oblique angle at the edge of the sphere could potentially mess with a couple of assumptions made by the LiDAR device. The laser spot size would grow by sin(𝛼/4) towards the edges, just like resolution drops in the same manner. So something would have to be done about the resulting energy loss in the time-of-flight algorithm and the way noise is handled.
Technically nothing speaks against it. If someone were to implement this, they would have to solve some tough engineering challenges.
Do you think this could have applications in Astronomy? One could imagine shooting mirrorballs into space and observing it with a telescope. Or is a camera with too much distance from the ball a limiting factor?
I think you will really love this video from PBS Spacetime: th-cam.com/video/4d0EGIt1SPc/w-d-xo.htmlsi=dNkkoYbPDPr8Qc-y
This is the best half-hour I've spent in a long while. Everything here is spectacular and makes me want to purchase a mirror ball immediately. I am not quite wrapping my head around how we can get the reverse side of the sphere when the image is cropped, as I can't seem to figure out what incidence angle will result in a ray that goes behind the ball (other than the tangent, as was mentioned). Are we just getting the behind-the-ball scenes from the region outside of the camera?
Many thanks for the kind words! And no, all this information comes from just an image of half the mirror ball. That's what makes it magical ;]
You can see what's behind the sphere near the edges. The image around the ball remains unused. Play around with the distortion visualization in the Webapp to get a feel for it, it's really not intuitive.
360° from half a ball is what you get according to the model; in reality we use perspective projection lenses, so it's a bit less: 346° in the opening example.
I shouldn't have mentioned the tangent, it's a theoretical discussion. Take a look at the visualization at 15:02. The ray being reflected downwards lands "behind the sphere" doesn't it? So that's how you can see behind the sphere with only an image of half of it.
@@FrostKiwi I think I have it. It was breaking my brain for a while because I was thinking of light as always reflecting off _in the direction of the normal_ rather than reflected _around_ the normal. That makes much more sense and it's very strange to think about. Thank you for taking the time to reply.
@@JosephCatrambone thank you - "around the normal" was the one keyword I needed to see 😅😁💚
@@FrostKiwi Thanks! now it "clicked"!
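The "reflected around the normal" rule is just r = d − 2(d·n)n. A tiny sketch (the 80° angle is an arbitrary choice of mine) showing how a ray hitting the ball near its silhouette barely deviates and continues on behind the ball:

```python
import math

def reflect(d, n):
    """Reflect direction d around unit normal n: r = d - 2 (d.n) n."""
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2.0 * dot * b for a, b in zip(d, n))

# Orthographic camera looks down -z, so incoming rays travel (0, 0, -1).
# Hit the ball near its silhouette: the normal is 80 degrees away from
# the front pole, i.e. almost perpendicular to the camera axis.
t = math.radians(80)
n = (math.sin(t), 0.0, math.cos(t))
r = reflect((0.0, 0.0, -1.0), n)
print(r)  # z ~ -0.94: the reflected ray continues on *behind* the ball
```

The deviation from the incoming ray is only 180° − 2·80° = 20°, which is exactly why the outermost sliver of the ball's image covers everything behind it.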
Wow 😮 This was really cool Homie 👉
i found one!! add a photo / second-long clip from intro from "What Does It Look Like INSIDE a Spherical Mirror?" by The Action Lab.
How does this not have millions of views??
Many thanks
The M.C. Escher piece, and likely the room images as well, could be fixed at the distorted part by simply including the mirror ball back in the room, could it not? In the Escher drawing it's a black sphere, but you should just be able to paste the mirror-ball image onto the mirror ball to have the full room, which, with some distortions... could then be walked around in, as you have 3D point information for the entire room depending on where it was on the sphere in relation to the room.
Reinserting the ball definitely works! Here are some samples: imgur.com/a/MrxHR0d
The blind spot represents what we cannot see, not the ball itself. It messes with a bunch of concepts, so I didn't introduce it. It also doesn't work for sphere segments with big blind spots, as the perspective mismatch messes with the result.
As for the walking around, there is no way to retrieve any depth information from a single mirror-ball image, as that requires some kind of parallax to be present. What you could totally do is project the result onto a room-sized box and walk around in that (with many distortions for things not matching the box walls). This is done a lot in VFX work, and in video games "parallax corrected cubemaps" ( th-cam.com/video/ZH6s1hbwoQQ/w-d-xo.html ), as recently implemented in Counter-Strike 2 for reflections, play along the same line of thinking. However, the depth information there comes from you creating the room-sized box, not from the image itself.
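For the curious, the core of a parallax-corrected cubemap lookup is just a ray-box intersection. A hypothetical minimal sketch (function and parameter names are mine, the room box is illustrative):

```python
def parallax_corrected_dir(pos, ray, box_min, box_max, probe_pos):
    """Intersect a reflection ray (starting inside the room box) with
    the box walls, then aim from the cubemap probe position at the hit
    point. The redirected vector is what the cubemap is sampled with."""
    # Nearest wall along the ray: check the exit plane on each axis.
    t = min(((box_max[i] if ray[i] > 0 else box_min[i]) - pos[i]) / ray[i]
            for i in range(3) if ray[i] != 0)
    hit = [pos[i] + t * ray[i] for i in range(3)]
    return [h - p for h, p in zip(hit, probe_pos)]

# A ray from a point near the wall of a 2x2x2 room: without this
# correction, the lookup would act as if the room were infinitely far.
print(parallax_corrected_dir(
    pos=(0.8, 0.0, 0.0), ray=(1.0, 0.0, 0.0),
    box_min=(-1, -1, -1), box_max=(1, 1, 1),
    probe_pos=(0.0, 0.0, 0.0)))  # [1.0, 0.0, 0.0]
```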
to make the black hole less obtrusive, just overlay the sphere image again in place of the hole, because the sphere is there, its part of the world.
Great suggestion! Tried that already, and some of the images have a very large blind spot, so it was filling in so much that it became confusing
@@FrostKiwi its great to be smart isnt it?
i wonder what its like?
why is it that the most interesting people on the planet play interesting instruments
You sound SO MUCH like posy
Many thanks! Huge fan of his work!
wow
Great video! So inspiring for multiple projects. Where can I get a high-quality mirrorball? Where did you get yours?
Many thanks!
There are many ways to do so. Mine was a chrome steel ball bearing bought from redhill balls for the big 10cm one, and a stainless steel 5cm one for the small one (you can actually see the exact label at 25:37). Only their 10cm and smaller ones are polished enough to be mirrors. The big 10cm one was ~$250 though, the small one $50, if I recall correctly. A colleague attached a mounting via an electric stud welder. Chrome steel rusts a bit, by the way, and needs to be stored in a plastic bag with a little bit of oil.
Then there are professional movie light probe sets. Not sure of their price, but this is what the movie industry uses.
If your goal is just capturing light information to light 3D scenes via HDRIs, then going to a garden supply store and buying a cheap $10 garden globe is sufficient. Those are not perfect mirrors, but depending on your use-case you might get away with cheap alternatives.
i watched the whole video
i still don't understand why the edges of the circle image have any information on what's behind the sphere
you don't have line of sight with what's behind the sphere,
so how is it getting to the camera?
i need a diagram drawing out lines showing the path of light rays from behind the ball to the camera
Definitely counterintuitive!
Take a look at 2:32
There is the big square light at the top of the 2D image. But it is clearly behind the sphere if we consider that the camera is at the door. That's how you see things beyond the 180° line. Play around with the WebApp to get a feel for it.
As for a diagram, take a look at 4:50 . There is the light ray going down. Take a look where it ends up, it ends up behind the sphere, doesn't it? That's how you see what's behind.
I have a project using similar sets of projections in a video editor with graphs based on quantum spherical harmonics, and have some cool patterns. It would be useful to hear your thoughts on some of these results since I am just getting in to this subject
what happens when you have multiple spheres? Can you make a 360º+depth image using only 2? At the very least, you should be able to eliminate the blind spot in the image
You can already eliminate the blind spot by photographing the same ball from 2 different positions, as seen in the WebApp example "different rotations". But yeah, changing the position of the ball or using two of them gives you depth information. You can see a simple version of that in the very last paper I showcased at the end of chapter 4, about the light reconstruction using 2 billiard balls.
Adapting existing SfM algorithms to work in a spherical projection context is also viable! There are structure-from-motion algorithms which work on fisheye lenses; extending those to mirror balls isn't too much of a jump.
Very well done video, thank you! Off-topic: Does that split keyboard on your shelf have two trackpads? What is its name, or where could I find it?
It's a wireless Kyria v3 with 2 Cirque Trackpads. The Kyria is a splitkb DIY kit.
@@FrostKiwi Thanks!
ENHANCE!!!!!
for a microwave you can just cut a hole in front of the shielding because it will still be small enough to block out most of the radiation
Yes, this is one of the things Steve Mould covers when talking about this topic, including how to calculate the max safe hole size via (hole width / wavelength)⁴ [ th-cam.com/video/8bXhsUs-ohw/w-d-xo.html ]. However, that's for smartphone-sized camera lenses only. In my case the lens is just too big. Doing the trick with the wide aperture was easier in my case.
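To get a feel for that (hole width / wavelength)⁴ scaling, here is a rough back-of-the-envelope sketch. The 2.45 GHz magnetron frequency is standard for microwave ovens; the hole sizes below are illustrative picks, not measurements:

```python
# Leakage estimate for holes in a microwave door, using the
# (hole width / wavelength)^4 power scaling for holes much smaller
# than the wavelength (Bethe small-aperture scaling).
c = 3.0e8                # speed of light, m/s
wavelength = c / 2.45e9  # ~0.122 m at the magnetron frequency

def leakage_factor(hole_width_m):
    """Relative power leaking through a single small hole."""
    return (hole_width_m / wavelength) ** 4

print(f"{leakage_factor(0.001):.1e}")  # ~1 mm mesh hole   -> 4.4e-09
print(f"{leakage_factor(0.030):.1e}")  # 30 mm lens-sized  -> 3.6e-03
```

The fourth power is why the tiny mesh holes are harmless while a full camera-lens-sized opening is a completely different story.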
Can you please add a export feature to the web site. I want to post to Facebook my 360 picture :P
Sure, I'll add a Lat-Long Export function this week :]
If possible, pls create an issue in GitHub to track this feature request: github.com/FrostKiwi/Mirrorball/issues
Tracking the implementation in github.com/FrostKiwi/Mirrorball/issues/16 ❤
I finished the export feature. You can now export as an equirectangular / lat-long projection for use in other software, or as a high-resolution screenshot. Please enjoy
26:50 What if instead of 2 cameras, one sphere, you had 2 spheres and one camera? maybe you could capture 3d images from a single perspective? Bet the camera would make it weird, though.
Yes but no.
As mentioned at 26:10, according to the model there is no parallax, so no matter where you look from you get the same information, thus not getting depth information and thus not being able to capture 3D.
In reality, there is *some* parallax due to the real-life size of the ball. So in reality, two cameras *would* be able to perceive depth, given the dimensions + distance of the sphere.
However, pulling depth information from that would be really hard and with smaller balls, essentially impossible.
@@FrostKiwi Oh, you misunderstood
What I meant to say was: imagine you have a 3D camera setup
Instead of putting cameras on it, you'd put reflective balls
Then with just 1 camera, you would get 2 perspectives from the reflections on the separate mirrors