The Rochester Digital Cloak: A New Age of Invisibility

Published 28 Sep 2024
To see an explanation of the science behind this technology, please visit: The Technology Behind ...
To read the paper on this research, please visit: www.osapublish...
Using the same mathematical framework as the Rochester Cloak, researchers at the University of Rochester have used flat-screen displays to extend the range of angles that can be hidden from view. Their method lays out how cloaks of arbitrary shapes that work from multiple viewpoints may be practically realized in the near future using commercially available digital devices.
The Rochester researchers have shown a proof-of-concept demonstration of such a setup, which is still much lower resolution than the nearly perfect imaging achieved by the Rochester Cloak lenses. But with ever higher-resolution displays becoming available, the “digital integral cloak” they describe in their new Optica paper will continue to improve.
While the Rochester Cloak offered a simple way of cloaking, it worked only over small angles, and cloaking large objects would require large, expensive lenses.
By breaking the information up into distinct pieces, it becomes possible to use currently available digital cameras and digital displays. The Rochester researchers use a camera to scan a background, then encode the information so that every pixel on a screen offers a unique view of a given point on the background for a given viewer position. By doing this for many views, and using lenticular lenses (a sheet of plastic with an array of thin, parallel semicylindrical lenses), they can recreate multiple images of the background, each corresponding to a viewer at a different position. So if the viewer moves from side to side, every part of the background moves accordingly, as if the screen were not there, “cloaking” anything in the space between the screen and the background.
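
As a rough illustration of the encoding step just described (a minimal sketch, not the authors' actual pipeline), the following Python fragment interlaces background captures under a lenticular sheet: each display column under a lenslet is filled from the capture corresponding to one viewing direction. The array shapes and the nearest-view mapping are illustrative assumptions.

    import numpy as np

    def interlace_views(views, pixels_per_lens):
        """Interleave captured background views into one lenticular display image.

        views: array of shape (n_views, height, width), one grayscale capture
               per camera position along the background scan.
        pixels_per_lens: number of display columns under each lenslet; the
               lenslet sends each column toward a different viewing direction.
        """
        n_views, height, width = views.shape
        out = np.zeros((height, width), dtype=views.dtype)
        for col in range(width):
            slot = col % pixels_per_lens                  # position under its lenslet
            view_idx = min(slot * n_views // pixels_per_lens, n_views - 1)
            out[:, col] = views[view_idx][:, col]         # take that view's column
        return out
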
In the current system, it takes PhD student Joseph Choi and his advisor, Professor of Physics John Howell, several minutes to scan, process, and update the image on the screen, i.e. to update the background. But Choi explains that they hope to be able to do this in real time soon, even if at lower resolution.
Their mathematical framework and proof-of-concept setup also demonstrate how any object of a fixed size can be cloaked, even when in motion, so long as the shape of the object remains fixed and does not deform. To do this, one side of the object would be covered in an array of sensors (effectively cameras) and the other side in pixels with tiny lenses over them. Choi and Howell's approach could then be used to identify which sensors need to feed into which pixels so as to show the background as if the object weren't there. A similar trick has been used in advertising, but for one viewing angle only. By using the Rochester group's setup, however, a car, for example, could be made invisible to viewers from multiple positions, not just to a person at a predetermined position.
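
To make the sensor-to-pixel bookkeeping concrete, here is a minimal one-dimensional sketch in Python (my own idealization of the ray mapping, not code from the paper): a ray leaving a display pixel at a given angle must carry the light that entered the sensor side at a transversely shifted position, as if it had crossed the hidden volume in a straight line.

    import math

    def source_sensor(y_out, theta, thickness, sensor_pitch, n_sensors):
        """Find which input sensor feeds a given output pixel and ray angle.

        y_out: transverse position of the output pixel (meters)
        theta: ray angle from the surface normal (radians)
        thickness: separation between sensor side and display side (meters)

        A free-space ray keeps its angle and shifts transversely by
        thickness * tan(theta), so the cloak must route light as if the
        hidden volume were empty.
        """
        y_in = y_out - thickness * math.tan(theta)
        idx = round(y_in / sensor_pitch)
        if 0 <= idx < n_sensors:
            return idx      # sensor whose captured ray feeds this pixel/angle
        return None         # ray would enter outside the sensor array
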

Comments • 27

  • @sinistar6619
    @sinistar6619 8 years ago +4

    This would be neat with something like Samsung's foldable LCD screens, to actually mesh over something instead of a normal rigid one.

    • @johnhowell7005
      @johnhowell7005 8 years ago +1

      That's the hope in the near future. Actually, OLEDs are the best bet.

  • @bignatec1000
    @bignatec1000 8 years ago +5

    Does anyone know how to contact them? I've made a type of 360-degree mirror cloak. I'm still in high school, and it would be cool if I could go to college there.

    • @JosephChoiS
      @JosephChoiS 8 years ago +4

      You would like to contact the University of Rochester? You can apply at enrollment.rochester.edu/apply/. If you want to contact the author (me), you can email me at joseph.choi@rochester.edu. Good luck!

    • @cartersharpnack1578
      @cartersharpnack1578 5 years ago

      How did you build it?

  • @theebigda
    @theebigda 8 years ago +1

    You should contact Pacur. They make the best lenticular lenses out there. It looks like you're currently using a very rough lens. You should try their 100 or 150 lenticules-per-inch lens material.

    • @johnhowell7005
      @johnhowell7005 8 years ago +1

      +Brad Bartkus Thanks for pointing out Pacur. We went with a low-resolution lenticular because we wanted many views. You need many views to get nearly continuous directional viewing. The blurriness you see comes more from insufficient views than from pixel density.

    • @theebigda
      @theebigda 8 years ago

      What does "many" mean? We can print a dozen different angles under their 100-line lens and 16 under their 75-line lens. You may be able to do more, as you're not actually printing the different angles, so you don't have the dot-size limitations that a printing plate does.

    • @johnhowell7005
      @johnhowell7005 8 years ago

      "Many" in this case is 51, but we would like something like 100 views or more.

    • @theebigda
      @theebigda 8 years ago

      The more information you try to fit under the lens, the fewer lenticules you can have per inch. So the image gets rougher, because a rough lens breaks up the detail.

    • @johnhowell7005
      @johnhowell7005 8 years ago +2

      Brad, as you point out, it is a trade-off. As you are probably aware, the number of images you can put under a lens determines the depth of the objects you can put on your device. Most people don't think about depth; they only think of spatial resolution. Try to think of a 3D lenticular that has more than 1 meter's worth of depth content, which this system has. Most lenticular images only have depths of a few inches, meaning you only need a few views. 3D lenticular (small range of viewing angles, but higher image density) can help out. At a 10-meter viewing distance, the eye can only resolve roughly 3 mm of spatial detail. Therefore a 10 lpi sheet with 100 views would be ideal at 10 meters. In fact, it would look so good you probably couldn't see it, assuming your colors matched. Also, standard 3D systems image and emit in the same plane, with no regard for matching the true characteristics of the field. We have done a nonzero-depth emission by measuring the field, mathematically propagating the field, and emitting in a new plane.
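
      As a quick, hedged check of the 3 mm figure in Python (the 1-arcminute acuity used below is the standard textbook value for the human eye, not a number from the video):

          import math

          acuity_rad = math.radians(1 / 60)    # ~1 arcminute: typical visual acuity

          viewing_distance_m = 10.0
          # Smallest transverse feature the eye resolves at that distance
          resolvable_mm = viewing_distance_m * acuity_rad * 1000
          print(round(resolvable_mm, 1))       # ~2.9 mm, the "roughly 3 mm" above

          # A 10 lenticules-per-inch sheet has a lenslet pitch of:
          pitch_mm = 25.4 / 10
          print(pitch_mm)                      # 2.54 mm, just under the eye's limit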

  • @BrandonDeft
    @BrandonDeft 1 year ago

    It's in our sky every day and can be seen if you pay attention.

  • @seanfields7297
    @seanfields7297 7 years ago

    Hmm... So, there's a video playing, but you took out one image? You must be geniuses! Does it work if you walk around past the iPad?

    • @JosephChoiS
      @JosephChoiS 7 years ago +1

      You point out the main problem with this version: it's not a "real-time" device, but rather a static one. We have been working on ways to make it real-time, which would work for people walking in front of the camera. The physics and optics are the same as we proposed; we just need fast processors and arrays of detectors.

    • @seanfields7297
      @seanfields7297 7 years ago +1

      I still think it's only true invisibility if you can go into combat and not be seen by the enemy. They won't watch you on an iPad in a war.

  • @mr_gerber
    @mr_gerber 8 years ago +2

    How many pixels do you use per lens line? How does pixel pitch impact the "angular resolution"/number of views for a given lens radius?

    • @joeschoi
      @joeschoi 8 years ago +2

      Great question. This demo used 20 lenses per inch. The angular resolution depends on this and the pixel density of the display. Basically, you figure out how many screen pixels fit within one lens of the microlens array, and that's how many "views" you get. The lens then determines how large a field of view (FOV) you get. We then defined the "angular resolution" to be FOV/(total number of views), which physically means how far you change in angle before getting a new view. The smaller the better, meaning it looks more continuous.
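
      A small sketch of that counting in Python. The 20 lpi figure and the FOV/views definition come from this thread; the pixel density below is an assumed round number chosen so the result matches the 51 views and 29-degree FOV quoted elsewhere in these comments, not a measured spec:

          lenses_per_inch = 20           # lenticular pitch used in the demo
          pixels_per_inch = 1020         # assumed effective (sub)pixel density

          # Pixels that fit under one lenslet = number of distinct views
          views = pixels_per_inch // lenses_per_inch

          fov_degrees = 29.0             # field of view quoted in this thread

          # Angular resolution: how far you move before a new view appears
          angular_resolution = fov_degrees / views
          print(views, round(angular_resolution, 2))    # 51 views, ~0.57 degrees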

  • @ernestchadwell9069
    @ernestchadwell9069 10 months ago

    Pretty good for a world where no two people ever look at the same object from different points of view.

  • @alfpolo29
    @alfpolo29 8 years ago +2

    The human brain is a miracle of nature... emh... OK, some brains :)

  • @adilmamedov6994
    @adilmamedov6994 3 years ago

    Odi

  • @comickriyaa7301
    @comickriyaa7301 2 years ago

    Which lenses do you use?

  • @AyXiit
    @AyXiit 8 years ago +5

    To me this just seems like you take a picture of the objects and display it on the screen (maybe I didn't understand well).
    Does it work if the objects move?

    • @JosephChoiS
      @JosephChoiS 8 years ago +7

      The difference between that (or even a regular 3D image) and this was explained at 0:56-1:20 in the video, but it may be a little difficult to understand. Basically, you need to know the position and direction of all the light rays, calculate where each light ray should go on the iPad mini, and then display it with lenses. Otherwise, the blocks won't line up when you move left or right. It's not much more than a 3D display, but it is an important piece to make cloaking work.

    • @AyXiit
      @AyXiit 8 years ago +3

      +Joseph Choi Oh OK, I see.
      As they say it works with 51 different views (over an angle of 29 degrees), does that mean multiple observers can view the picture with the objects aligned (meaning the display doesn't just adapt to the position of one observer)?

    • @JosephChoiS
      @JosephChoiS 8 years ago +2

      Yes, @AyXiit, you got it quite right. :) Multiple viewers can see the objects aligned in the background. This is like having a glasses-free 3D TV, but with rays calculated so that there is space to hide something behind it while the objects behind still line up. A glasses-free 3D display alone would not be able to align its images with the objects behind it.

  • @RynaxAlien
    @RynaxAlien 5 years ago +1

    How do you build it?

  • @kokomanation
    @kokomanation 2 years ago

    Actually, you don't need lenticular lenses to achieve this, except for the white upper tablet area.