EETimes, Phys.org: University of Utah Electrical and Computer Engineering Associate Professor Rajesh Menon has developed a new camera color filter that is said to let in three times more light than a Bayer CFA. The approach is described in an open-access paper in OSA's Optica, Vol. 2, Issue 11, pp. 933-939 (2015): "Ultra-high-sensitivity color imaging via a transparent diffractive-filter array and computational optics" by Peng Wang and Rajesh Menon.
“If you think about it, this [Bayer CFA] is a very inefficient way to get color because you’re absorbing two thirds of the light coming in,” Menon says. “But this is how it’s been done since the 1970s. So for the last 40 years, not much has changed in this technology.”
Menon’s solution is a color filter that lets all the light pass through to the camera sensor, achieved with a combination of hardware and software. The hardware is a glass wafer with precisely designed microscopic ridges etched into one side; the ridges bend the light as it passes through, creating a series of color patterns, or codes. Software then reads these codes to determine the underlying colors.
Instead of passing just three colors, the new filter produces at least 25 distinct codes, or colors, that reach the camera’s sensor, yielding photos that are much more accurate and show almost no digital grain.
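To make the idea of "codes" more concrete, here is a minimal sketch of the kind of forward model at work: light in each spectral channel hitting a DFA cell is redistributed across a small block of underlying sensor pixels rather than being absorbed. The 25-channel count comes from the article; the 3×3 pixel block, the random mixing matrix, and all numerical values are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Illustrative sketch (not the paper's calibration data): one DFA "super-pixel"
# spreads light from N_CH spectral channels onto a small block of sensor pixels,
# each channel producing its own diffraction pattern (its "code").
N_CH = 25        # spectral channels per super-pixel (count from the article)
BLOCK = 3        # assumed 3x3 sensor pixels under one DFA cell (illustrative)
M = BLOCK * BLOCK

rng = np.random.default_rng(0)

# A[m, c] = fraction of channel c's light landing on sensor pixel m.
# In the real system this comes from calibration / diffraction modeling;
# here it is random, purely to show the shape of the forward model.
A = rng.random((M, N_CH))
A /= A.sum(axis=0, keepdims=True)   # light is redistributed, not absorbed

spectrum = rng.random(N_CH)          # incident spectral content of a scene point
sensor_block = A @ spectrum          # what the 3x3 pixel block actually records

print(sensor_block.reshape(BLOCK, BLOCK))
```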
“You get a lot more color information than a normal color camera. With a normal camera, you only see red, green or blue. We can do 25 or more,” Menon says. “It’s not only better under low-light conditions, but it’s a more accurate representation of color.” The new filter can also be cheaper to manufacture because it is simpler to build than current filters, which require three separate steps to produce a filter that reads red, green, and blue light, Menon says.
Talking about small pixel results, the paper says: "As anticipated, the DFA, together with the regularization algorithm, works well for the 1.67μm sensor pixel except at the boundaries of abrupt color change, where crosstalk smears color accuracy. Scalar diffraction calculation estimates the lateral spread of the crosstalk (or spatial resolution) to be ∼13μm. This is approximately three image pixels in our configuration, since one DFA unit cell is 5μm×5μm.
However, in the areas of uniform color (areas #4 and 5), our reconstructions demonstrate negligible distortion and noise. The absolute error between reconstruction and true images averaged over the entire image space is well below 5%. For this object of 404×404 image pixels, it takes roughly 30s to complete reconstruction by regularization without implementing any parallel computation techniques on a Lenovo W540 laptop (Intel i7-4700MQ CPU at 2.40 GHz and 16.0 GB RAM) for simplicity."
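The quoted passage mentions "reconstruction by regularization" without specifying the solver. Below is a minimal sketch of one standard choice, Tikhonov-regularized least squares solved via the normal equations; the problem sizes, system matrix, noise level, and regularization weight are all illustrative assumptions, and the paper's actual regularizer and solver may differ.

```python
import numpy as np

# Minimal Tikhonov-regularized reconstruction sketch. Assumptions (not from
# the paper): a small synthetic system matrix A, a synthetic ground truth x,
# and a hand-picked regularization weight lam.
rng = np.random.default_rng(1)

n_meas, n_unknown = 200, 150         # measurements vs. unknown spectral samples
A = rng.random((n_meas, n_unknown))  # stand-in for the calibrated DFA response
x_true = rng.random(n_unknown)       # stand-in for the true multispectral image
y = A @ x_true + 0.01 * rng.standard_normal(n_meas)  # noisy sensor data

lam = 0.1                            # regularization weight (illustrative)
# Solve min_x ||A x - y||^2 + lam * ||x||^2 via the normal equations.
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_unknown), A.T @ y)

rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.3f}")
```

In the actual system, the matrix would come from calibrating the DFA's diffraction patterns, and the reconstruction would be run over the whole image, which is why the paper quotes roughly 30 s for a 404×404-pixel image without any parallelization.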
Menon has since created a company, Lumos Imaging, to commercialize the new filter for use in smartphones and is now negotiating with several large electronics and camera companies to bring this technology to market. He says the first commercial products that use this filter could be out in three years.