IEEE Spectrum publishes Light Co. founder Rajiv Laroia's article "Inside the Development of Light, the Tiny Digital Camera That Outperforms DSLRs." A few quotes:
"...molded plastic lens technology had been nearly perfected over the previous five years to the point where these lenses were “diffraction limited”—that is, for their size, they were as good as the fundamental physics would ever allow them to be. Meanwhile, the cost had dropped dramatically: A five-element smartphone camera lens today costs only about US $1 when purchased in volume. (Elements are the thin layers that make up a plastic lens.) And sensor prices had plummeted as well: A high-resolution (13-megapixel) camera sensor now costs just about $3 in volume."
"By using many modules, the camera could capture more light energy. The effective size of each pixel would also increase because each object in the scene would be captured in multiple pictures, increasing the dynamic range and reducing graininess. By using camera modules with different focal lengths, the camera would also gain the ability to zoom in and out. And if we arranged the multiple camera modules to create what was effectively a larger aperture, the photographer could control the depth of field of the final image."
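A quick numerical sketch of the stacking effect described in that passage: averaging N independent captures of the same scene cuts random noise by roughly the square root of N, which shows up as less grain and more usable dynamic range. The signal level, noise figure, and module count below are illustrative assumptions, not measurements from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

true_signal = 100.0   # "ground truth" brightness of a scene patch (arbitrary units)
noise_sigma = 10.0    # per-capture random noise (arbitrary units)
num_captures = 10     # the L16 typically fires about 10 modules per shot

# Simulate independent captures of the same patch and average them.
captures = true_signal + rng.normal(0.0, noise_sigma, size=num_captures)

single_error = abs(captures[0] - true_signal)
stacked_error = abs(captures.mean() - true_signal)

print(f"error of a single capture:  {single_error:.2f}")
print(f"error of the stacked mean:  {stacked_error:.2f}")
# On average the stacked error is about sqrt(10) ~ 3.2x smaller than a single
# capture's, the "larger effective pixel" benefit the quote describes.
```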
"The first and current version of the Light camera—called the L16—has 16 individual camera modules with lenses of three different focal lengths—five are 28-mm equivalent, five are 70-mm equivalent, and six are 150-mm equivalent. Each camera module has a lens, an image sensor, and an actuator for moving the lens to focus the image. Each lens has a fixed aperture of F2.4."
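That lineup is compact enough to write down as a small configuration table. A sketch of one possible representation: the counts, focal lengths, and aperture come from the article, while the structure and field names are invented here for illustration.

```python
# Hypothetical data structure for the L16's module lineup; only the counts,
# focal lengths, and aperture are taken from the article.
L16_MODULES = [
    {"focal_length_mm": 28,  "count": 5, "aperture": "f/2.4"},
    {"focal_length_mm": 70,  "count": 5, "aperture": "f/2.4"},
    {"focal_length_mm": 150, "count": 6, "aperture": "f/2.4"},
]

total_modules = sum(m["count"] for m in L16_MODULES)
assert total_modules == 16  # 5 + 5 + 6 individual camera modules
```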
"Five of these camera modules capture images at what we think of as a 28-mm field of view; that’s a wide-angle lens on a standard SLR. These camera modules point straight out. Five other modules provide the equivalent of 70-mm telephoto lenses, and six work as 150-mm equivalents. These 11 modules point sideways, but each has a mirror in front of the lens, so they, too, take images of objects in front of the camera. A linear actuator attached to each mirror can adjust it slightly to move the center of its field of view."
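A small optics aside, not from the article: by the law of reflection, tilting a mirror by an angle theta steers the reflected line of sight by twice that angle, so a tiny actuator motion is enough to re-aim a folded telephoto module without moving the camera. A toy calculation with made-up numbers:

```python
import math

# Illustrative only: a half-degree mirror tilt steers the optical axis by
# one degree (twice the tilt), shifting where the module is aimed.
mirror_tilt_deg = 0.5                 # hypothetical actuator adjustment
deflection_deg = 2 * mirror_tilt_deg  # reflected ray rotates by 2x the tilt

subject_distance_m = 5.0              # hypothetical subject distance
aim_shift_m = subject_distance_m * math.tan(math.radians(deflection_deg))

print(f"field-of-view center shifts ~{aim_shift_m * 100:.1f} cm at {subject_distance_m} m")
# ~8.7 cm, a noticeable re-framing for a 150-mm-equivalent module.
```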
"Each image sensor has a 13-megapixel resolution. When the user takes a picture, depending on the zoom level, the camera normally selects 10 of the 16 modules and simultaneously captures 10 separate images. Proprietary algorithms are then used to combine the 10 views into one high-quality picture with a total resolution of up to 52 megapixels."
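The selection and fusion algorithms are proprietary, so the sketch below is only a guess at how a camera might pick modules based on the requested zoom level. The 28/70/150 mm focal lengths and the roughly-10-of-16 behavior come from the article; the bracketing rule and thresholds are hypothetical.

```python
# Hypothetical zoom-dependent module selection. The real Light algorithm is
# proprietary; only the focal lengths and the "about 10 of 16 modules per
# shot" figure come from the article.
MODULE_COUNTS = {28: 5, 70: 5, 150: 6}  # focal length (mm) -> modules available

def select_modules(zoom_mm: float) -> dict:
    """Guess: fire the focal lengths that bracket the requested zoom, so
    wider modules supply coverage and longer ones supply fine detail."""
    if zoom_mm < 70:
        return {28: 5, 70: 5}   # 10 modules
    if zoom_mm < 150:
        return {70: 5, 150: 5}  # 10 modules (5 of the 6 tele units)
    return {70: 4, 150: 6}      # lean on the tele modules

print(select_modules(35))   # {28: 5, 70: 5}
print(select_modules(100))  # {70: 5, 150: 5}
```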
"Our first-generation L16 camera will start reaching consumers early next year, for an initial retail price of $1,699. Meanwhile, we have started thinking about future versions. For example, we can improve the low-light performance. Because we are capturing so many redundant images, we don’t need to have every one in color. With the standard sensors we are using, every pixel has a filter in front of it to select red, green, or blue light. But without such a filter we can collect three times as much light, because we don’t filter two-thirds of the light out. So we’d like to mix in camera modules that don’t have the filters, and we’re now working with On Semiconductor, our sensor manufacturer, to produce such image sensors."
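The "three times as much light" figure is simple arithmetic: a Bayer color filter passes only one of the three color bands to each pixel, roughly a third of the usable light, so a filterless pixel of the same size collects about three times as many photons. A toy illustration with made-up numbers:

```python
# Toy illustration of why dropping the color filter array helps in low light.
# The 1/3 transmission is the idealized figure implied by "we don't filter
# two-thirds of the light out"; the photon count is arbitrary.
photons_at_pixel = 300

bayer_pixel = photons_at_pixel / 3  # only its own color band gets through
mono_pixel = photons_at_pixel       # no color filter in front

print(f"color pixel collects ~{bayer_pixel:.0f} photons")
print(f"mono  pixel collects  {mono_pixel} photons")
# Three times the photons means roughly sqrt(3) ~ 1.7x better shot-noise SNR
# per capture, which is the low-light gain Light is after with filterless modules.
```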