MIT creates algorithms that improve scanning resolution
One day, it could be possible to take a photo with your smartphone and send it to be 3D printed, if 3D imaging scanners could be improved. Christine Daniloff / MIT

Researchers from MIT have succeeded in exploiting the polarisation of light to make the images produced by regular 3D scanners 1,000 times better than they are today – a breakthrough that could make high-resolution 3D cameras in smartphones a possibility and improve cameras in driverless cars during bad weather.

At the moment, the cheap 3D scanners built into phones can get the job done after a fashion, but they still miss intricate details.

"Today, they can miniaturise 3D cameras to fit on cellphones, but they make compromises to the 3D sensing, leading to very coarse recovery of geometry. That's a natural application for polarisation, because you can still use a low-quality sensor, and adding a polarising filter gives you something that's better than many machine shop laser scanners," said Achuta Kadambi, a PhD student in the MIT Media Lab and one of the system's developers.

How light polarisation works

The physical phenomenon of light polarisation affects the way light bounces off objects: light reflected from a surface tends to be polarised, behaving differently from ordinary light, which scatters in all directions.

If light strikes an object squarely, most of it is absorbed, but when sunlight bounces off water or asphalt, the reflected light carries an unusually heavy concentration of one particular polarisation – it is horizontally polarised – and can be dangerously intense.

This is why sports enthusiasts and fishermen wear polarised sunglasses – the lenses contain a filter that blocks intense reflected light of that polarisation, reducing glare to protect your eyes and help you see better.

The ideal 3D scanner would take measurements of polarised light and then, for each point on an object's surface, weigh two equally plausible hypotheses about its orientation, searching over all possible combinations for the one that makes the most sense geometrically – but that search would take a very long time to compute.

So the researchers solved this problem with computer algorithms that make coarse depth estimates by other means – such as measuring the time a light signal takes to reflect off an object and return to its source – and use them to settle the ambiguity.
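The core idea can be sketched very simply: polarisation tells you a surface's orientation only up to a 180° ambiguity, and even a coarse depth map is enough to pick the right candidate. The sketch below is illustrative only – the function names and angle conventions are assumptions, not the researchers' actual code.

```python
def disambiguate_azimuth(candidate_deg, coarse_deg):
    """Choose between the two polarisation azimuth hypotheses
    (phi and phi + 180 degrees) by picking whichever lies closer
    to the orientation implied by a coarse depth estimate."""
    options = (candidate_deg % 360, (candidate_deg + 180) % 360)

    def angular_distance(a, b):
        # Shortest distance between two angles on a circle, in degrees.
        d = abs(a - b) % 360
        return min(d, 360 - d)

    return min(options, key=lambda phi: angular_distance(phi, coarse_deg))

# Polarisation alone says the surface faces either 30 or 210 degrees;
# the coarse depth map suggests roughly 200 degrees, so 210 wins.
print(disambiguate_azimuth(30, 200))  # -> 210
```

Because only a binary choice per point is needed, even a low-quality depth sensor suffices – which is why a coarse sensor plus a polarising filter can beat a much more expensive scanner.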

Algorithms produced images of a much higher resolution

To do this, they paired a Microsoft Kinect – which on its own can only resolve physical features about 1cm across – with an ordinary polarising photographic lens, and took three photos of each object, rotating the polarising filter between shots.

The algorithms they developed compared the light intensities across the resulting images to produce depth maps of much higher resolution – and when the researchers swapped the photographic lens for a high-precision laser scanner, the technique still yielded images of much higher resolution.

The researchers hope that their breakthrough could greatly improve the imaging sensors used in a wide range of fields, including the cameras in self-driving cars: today, the vision algorithms in car computer systems struggle in rain, snow or fog, because water particles scatter light in unpredictable ways.

The invention could also be used to make smartphone cameras so high in resolution that a photo of an object could be sent directly to a 3D printer to be printed out.

The researchers will be presenting the Polarised 3D system in a paper at the International Conference on Computer Vision (ICCV), which takes place 13-16 December in Santiago, Chile.