Tuesday, August 12, 2014
Before you toss out your latest iPhone or mobile device in anticipation of this new announcement, let me first explain what this is all about. You may have heard of Lytro by now, the dudes who first came out with a camera that captured digital images and let you choose any point of focus after you had taken the photo.
In essence, it captured a digital image with depth information, allowing a device to render the point of focus digitally. Today, the guys at Pelican can replicate the same technology with a smaller sensor array that fits on your smartphone or iPhone.
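To get a feel for how depth-guided refocus might work, here is a rough Python sketch. The `refocus` helper and its box-blur approach are my own illustration, not Lytro's or Pelican's actual pipeline: pixels whose depth sits near the chosen focal plane stay sharp, while pixels far from it get blurred.

```python
def refocus(image, depth_map, focus_depth, max_blur=2):
    """Synthetic refocus sketch: pixels whose depth is close to
    focus_depth stay sharp; pixels far from it get box-blurred over a
    larger radius. image and depth_map are equal-sized 2-D lists."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Blur radius grows with distance from the focal plane.
            radius = round(min(abs(depth_map[y][x] - focus_depth), max_blur))
            total, count = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += image[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out
```

Because the depth map is captured once, you can call `refocus` again with a different `focus_depth` at any time, which is the whole appeal of the technology.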
What's a sensor array?
Simple. It is a component that contains more than one sensor, as seen below. Real tiny, if you ask me. It is like having a dozen or more camera sensors all working at the same time.
These sensors act in unison to capture the picture with depth information.
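How do a dozen sensors yield depth? The classic trick is triangulation: the same object lands on a slightly different pixel on each sensor, and that pixel shift (the disparity) shrinks as the object gets farther away. A hedged sketch with made-up numbers (the function name and units are my own, not Pelican's API):

```python
def depth_from_disparity(focal_length_px, baseline_mm, disparity_px):
    """Triangulate depth from the pixel shift (disparity) between two
    sensors separated by a known baseline. A bigger shift means the
    object is closer; depth comes back in the baseline's units (mm)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_mm / disparity_px
```

Each extra sensor in the array gives another baseline to triangulate against, which is how a tiny module can still build a usable depth map.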
What they don't tell you is that you need WebGL enabled to view such refocusable images. This is a graphics rendering API that is included in most mobile web browsers like Opera and Chrome (which didn't work, I must add). Apple, unfortunately, does not support it as yet.
I do not doubt that there is a future in such a technology but there are caveats.
Depth Information is Crucial
How does a digital device perceive depth? Well, it needs lots of light to register it. So far, the folks at Pelican have promised an 8-megapixel still image from that one shot, which I think is a bold claim even without going into detail. An image like that will come at the expense of low-light capture.
Study this picture: as you can see, the woman in the foreground is well exposed since there is sufficient light. Then look at the depth chart below.
Each sensor, no matter how good the claimed ISO, has finite sensitivity. The 'camera sensor' will mirror the real-world limitations seen in today's DSLRs: low-light images need longer exposure times and higher ISO. For sensor arrays, this is no different.
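The trade-off above is just exposure reciprocity, and you can put rough numbers on it. A minimal sketch (the helper and its parameters are illustrative, not from Pelican): at a fixed exposure, shutter time scales inversely with ISO, and a dimmer scene stretches it back out again.

```python
def shutter_time(base_time_s, base_iso, new_iso, light_factor=1.0):
    """Exposure reciprocity sketch: the shutter time needed for the
    same exposure scales inversely with ISO and with scene brightness
    (light_factor = 1.0 is the reference light level)."""
    return base_time_s * (base_iso / new_iso) / light_factor
```

So cranking ISO buys you shorter exposures, but on a tiny array sensor that extra gain shows up as noise, which is exactly the low-light caveat.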
Smart Jacket for your iPhone
There are two ways this can work. One is to build it into a smartphone jacket that you can attach to use or take off at any time. The other is to have the sensor array built into your smart device.
Right now, there is no clear direction for this to push forward in. Making a smartphone jacket would mean developing the software as an app to run it. That can be a costly affair, since it may or may not work depending on how much processing power and RAM you have...which sort of explains why some apps crash on iOS and Android without any prior warning.
The other alternative sounds too far-fetched for now...having it built into the smart device itself. So far, only HTC has dabbled with dual cameras. They could add a sensor array, but that's not for me to speculate.
However, one thing is for sure: this technology effectively puts another nail in the coffin of DCCs, mirrorless cameras, and DSLRs. Who the heck would carry those devices if your humble smartphone can do so much more in daylight?