I assume most readers of this blog are video / photo / gadget / phone / camera geeks, so I am sure you didn’t miss the reviews of the latest Apple iPhone 7 Plus and Google Pixel phones. The two have a lot in common, but one aspect in particular is interesting for our applications: things are slowly moving from photography to computational photography. We are no longer just capturing light through optics and applying some minor processing to the pixel values to make the picture more pleasant to the viewer.
Phones must be slim and light, yet we still expect near-DSLR quality, and this is where computational photography comes into play. The iPhone 7 Plus, for example, uses its two cameras to estimate the depth of the scene and then simulates in software the “bokeh” effect you would normally get from a bulky professional camera with fast optics at a wide aperture.
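To make the idea concrete, here is a minimal sketch in Python with OpenCV. It assumes a rectified stereo pair (the file names, block size, and blur kernel are made up for illustration): estimate a disparity map from the two views, then blend the sharp frame with a blurred copy so that near (high-disparity) pixels stay crisp while the background melts away. A real pipeline is far more sophisticated, with edge-aware depth refinement and lens-shaped blur, but the principle is the same.

```python
import cv2
import numpy as np

# Hypothetical input: two rectified views of the same scene,
# e.g. from the wide and telephoto cameras, so that matching
# points lie on the same scanline.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
color = cv2.imread("left.png")  # the frame we will "defocus"

# Estimate a disparity map: larger disparity = closer to the camera.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Normalize disparity to [0, 1] and use it as a per-pixel blend weight:
# the (near) subject keeps the sharp image, the (far) background
# fades into the blurred copy.
disp = cv2.normalize(disparity, None, 0.0, 1.0, cv2.NORM_MINMAX)
blurred = cv2.GaussianBlur(color, (31, 31), 0)
alpha = disp[..., None]
bokeh = (alpha * color + (1 - alpha) * blurred).astype(np.uint8)

cv2.imwrite("bokeh.png", bokeh)
```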
The Pixel, on the other hand, captures a burst of pictures when you hit the shutter button and then decides what to keep from each frame to assemble the final result.
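Again, as a rough sketch of the core idea (the file names are hypothetical, and a real burst pipeline such as Google’s HDR+ does tile-based alignment and robust merging rather than a plain average): align a handful of short-exposure frames to a reference and average them, which reduces random noise roughly by the square root of the number of frames.

```python
import cv2
import numpy as np

# Hypothetical burst: short-exposure frames of the same scene.
paths = ["burst_0.png", "burst_1.png", "burst_2.png", "burst_3.png"]
frames = [cv2.imread(p).astype(np.float32) for p in paths]

reference = frames[0]
ref_gray = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
accum = reference.copy()

for frame in frames[1:]:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Estimate the global translation between frames
    # (handshake between the shots of the burst).
    warp = np.eye(2, 3, dtype=np.float32)
    _, warp = cv2.findTransformECC(ref_gray, gray, warp,
                                   cv2.MOTION_TRANSLATION)
    # Warp the frame onto the reference and accumulate it.
    aligned = cv2.warpAffine(frame, warp,
                             (frame.shape[1], frame.shape[0]),
                             flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
    accum += aligned

# Averaging N aligned frames cuts noise roughly by sqrt(N).
merged = (accum / len(frames)).astype(np.uint8)
cv2.imwrite("merged.png", merged)
```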
This challenges the concepts of originality and authenticity. The light captured by the camera is no longer the output of the photographic process, but just the first step of a more complex pipeline shaped by a multitude of factors. There is little doubt that this is just the beginning of a trend that will explode in the next few years.