Up to now, state-of-the-art technologies captured 3D information either passively, by computing depth from pictures taken at different perspective angles (triangulation), e.g. in stereoscopic cameras, or actively, by time-of-flight (ToF) measurements. Both traditional technologies consume a large amount of computing power and often require post-processing. The BASF sensor, by contrast, works passively with only one lens and needs no triangulation for 3D data capture; this significantly reduces the demand for computing power and makes 3D data capture very fast: with the new BASF 3D sensor technology, real-time 3D imaging may find its way into our daily lives."
A 120-page patent application, WO2012110924 "Detector for optically detecting at least one object" by Mohamedi Haroun Al, Ingmar Bruder, Felix Eickemeyer, Peter Erk, Stephan Irle, Andreas Pelster, Rüdiger Sens, and Erwin Thiel, explains how it works:
"the invention is based generally on the hitherto unreported and surprising insight that specific optical sensors exist whose sensor signal is not only dependent on a total light power of the illumination of the sensor region, for example of the sensor area, of these sensors but in which a pronounced signal dependence on a geometry of the illumination, for example a size of a light spot of the illumination on the sensor region, for example the sensor area, also exists. This is generally not the case for most conventional optical sensors, in particular for most inorganic semiconductor sensors, since here the sensor signal is generally dependent only on a total power of the illumination, that is to say an integral over the intensity over the entire light spot which is generally independent of the size of the light spot, that is to say the geometry of the illumination, as long as the light spot lies within the limits of the sensor region. It has surprisingly been discovered, however, that in specific optical sensors, for example organic optical sensors, such a dependence of the sensor signal occurs in which the sensor signal on the one hand rises with the total power of the illumination, but on the other hand, even given a constant total power, is dependent on a geometry of the illumination. Examples of such optical sensors are explained in even greater detail below. By way of example, the sensor signal, given the same total power, can have at least one pronounced maximum for one or a plurality of focusings and/or for one or a plurality of specific sizes of the light spot on the sensor area or within the sensor region."
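To make the quoted insight concrete, here is a toy numerical model (purely illustrative, not taken from the application) of a sensor whose signal rises with total power but, at constant power, also peaks for a particular spot size. The Gaussian-in-log geometry factor and the reference radius `r0` are invented for the sketch:

```python
import numpy as np

def fip_signal(total_power, spot_radius, r0=1.0):
    # Hypothetical response: proportional to total power, but with an
    # additional factor that peaks when the spot radius equals r0.
    # The log-Gaussian shape is an assumption for illustration only.
    geometry_factor = np.exp(-(np.log(spot_radius / r0)) ** 2)
    return total_power * geometry_factor

# Constant total power, varying spot size: the signal is NOT constant,
# unlike a conventional photodiode that simply integrates intensity.
radii = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
signals = fip_signal(1.0, radii)
print(signals.round(3))  # symmetric curve peaking at spot_radius == r0
```

A conventional sensor would print five identical values here; the pronounced maximum at one focusing is exactly the behaviour the application describes.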
The application then states that the organic dye-based sensor's response also changes when the light is modulated, and that by modulating the illumination at different frequencies for two sensors, one can separate the illumination data from the geometrical data. The description continues:
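One plausible way to read the two-frequency idea is that the quotient of readings taken at two modulation frequencies cancels the unknown illumination power, leaving a purely geometric quantity. The model below is a sketch under that assumption; the response function and all constants are invented:

```python
import numpy as np

def sensor_signal(power, spot_radius, mod_freq, r0=1.0):
    # Hypothetical model: the response rolls off with modulation
    # frequency in a way that also depends on the spot size.
    geometry = np.exp(-(np.log(spot_radius / r0)) ** 2)
    rolloff = 1.0 / (1.0 + mod_freq * spot_radius)
    return power * geometry * rolloff

def geometry_quotient(power, spot_radius, f1=1e3, f2=1e4):
    # Ratio of signals at two modulation frequencies: the power factor
    # cancels, so the result depends only on the geometry (spot size).
    return (sensor_signal(power, spot_radius, f1)
            / sensor_signal(power, spot_radius, f2))

print(geometry_quotient(1.0, 0.5))  # same value regardless of power...
print(geometry_quotient(7.3, 0.5))  # ...so it encodes only geometry
```

In this toy model the quotient is independent of illumination power but still varies with spot size, which is the property needed to decouple brightness from distance.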
"In contrast to known detectors, in which a spatial resolution and/or imaging of objects is also generally tied to the fact that the smallest possible sensor areas are used, for example the smallest possible pixels in the case of CCD chips, the sensor region of the proposed detector can be embodied in a very large fashion, in principle, since for example the geometrical information, in particular the at least one item of location information, about the object can be generated from a known relationship for example between the geometry of the illumination and the sensor signal."
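The "known relationship" mentioned in the quote suggests a calibration-based scheme: record the signal as a function of object distance once, then invert that curve at run time. A minimal sketch, with entirely invented calibration values and assuming the signal-vs-distance curve peaks at best focus:

```python
import numpy as np

# Hypothetical calibration: sensor signal vs object distance, recorded
# once for a fixed illumination (all values invented for illustration).
cal_distance = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 5.0])    # metres
cal_signal   = np.array([0.20, 0.55, 0.90, 1.00, 0.70, 0.35])

def distance_from_signal(signal, branch="far"):
    # The curve peaks at best focus (2.0 m here), so it can only be
    # inverted on one monotonic branch, nearer or farther than focus.
    if branch == "far":
        d, s = cal_distance[3:], cal_signal[3:]    # falling branch
        return np.interp(signal, s[::-1], d[::-1]) # xp must increase
    d, s = cal_distance[:4], cal_signal[:4]        # rising branch
    return np.interp(signal, s, d)

print(distance_from_signal(0.70))          # ~3.0 m on the far branch
print(distance_from_signal(0.55, "near"))  # ~1.0 m on the near branch
```

The near/far ambiguity is a real consequence of a peaked response; extra information (e.g. a second sensor or a second frequency) would be needed to pick the branch.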
The application does not explain much about the processing or how the distance is actually measured.
The last part of the application is filled with rows of organic chemistry formulas; it is not clear what they mean.