Metamaterial Apertures for Computational Imaging
John Hunt, Tom Driscoll, Alex Mrozack, Guy Lipworth, Matthew Reynolds, David Brady, David R. Smith
"This imager that we’ve designed in this experiment uses no moving parts, no lenses, and uses only a single detector. It’s equivalently a single pixel. This is made possible by combining two developing technologies. First, we use metamaterials that allow us unique control over light waves, and we use another technique called computational imaging," says Hunt.
"...there’s only a single detector, and we never move it. And the way that we make an image with this system is we have a metamaterial screen, which is covered in patches of metamaterial elements, which are each transparent to different wavelengths of light. So this means that every color or wavelength of light coming from the scene is sampled by different patches of the aperture before it gets to the detector. If you want to collect a lot of data from a scene, you have to, in a sense, multiplex the way that you’re sensing. So one way of doing that, the way that it’s done in traditional cameras, is you have many different pixels. Pixels are spatially separated from each other. Instead of doing this sort of spatial detector multiplexing, what our system does is sort of frequency multiplexing, so that each frequency or wavelength of light that comes into the imaging system samples a different portion of the scene. And then we use some very elegant math, which is developed in the field of computational imaging, to turn that data into a two-dimensional picture of all the scattering elements in the scene.
...for every frequency that we tune our detector to, the metamaterial aperture focuses a different set of points from our scene down onto that detector. So we make a sequence of measurements for different frequencies, and we get a sequence of different intensity measurements that correspond to the sum of the points in the scene that are being focused onto the detector for each frequency."
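The measurement process Hunt describes can be written as a linear system: each frequency's aperture pattern selects a set of scene points, the single detector records their summed intensity, and sweeping the frequency stacks these sums into enough equations to invert for the scene. A minimal sketch of that idea, assuming an idealized noiseless linear model (the pattern shapes, sizes, and random seed here are illustrative, not taken from the actual metamaterial aperture):

```python
import numpy as np

rng = np.random.default_rng(0)
n_scene, n_freq = 32, 48          # scene points; measurement frequencies

# Hypothetical aperture: at each frequency, the metamaterial screen passes
# a different subset of scene points through to the single detector.
patterns = (rng.random((n_freq, n_scene)) < 0.5).astype(float)

scene = np.zeros(n_scene)
scene[[4, 11, 27]] = 1.0          # a few scatterers in the scene

# Frequency sweep: one intensity measurement per frequency, each the sum
# of the scene points focused onto the detector at that frequency.
measurements = patterns @ scene

# The "elegant math": solve the overdetermined linear system for the scene.
scene_est, *_ = np.linalg.lstsq(patterns, measurements, rcond=None)
```

With more measurement frequencies than scene points and sufficiently diverse patterns, the system is well posed and `scene_est` matches the true scene; real computational-imaging reconstructions additionally handle noise and fewer measurements, typically with regularized or sparsity-promoting solvers.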