… or will cameras as we know them become obsolete?

Photography in the future will be extended by:

A. Recording much more information about the moment.
B. Making it possible to process the image afterwards.

Apple iPhone 4 – a smartphone that enables mobile applications and includes a decent camera.

Modern computer technology has made it possible to make fantastic adjustments to digital images after capture, in almost no time. This capability is now moving out into the field because the required computational performance has arrived in pocket-sized gadgets like smartphones. The digital darkroom will soon be available everywhere thanks to new high-performance portable devices that combine a powerful processor with a power-efficient graphics accelerator. Thanks to the good match between these devices and the new app-store business model, advanced image processing will be available to anyone at a decent price. Browsing any of the larger app stores turns up tons of easy-to-use photo optimization applications that can be bought for a few dollars – this is already changing photography.

Another dimension of the smartphone is that its camera is networked for immediate communication with every other device on the globe. Within seconds an image can be sent across the world and published on any number of forums reaching millions of users – more on that in another post.

We are only seeing the beginning of a field called computational photography, where computers are deeply involved even in a simple snapshot. This processing capability also enables a new type of photography in which images are computed from sensor data in ways that differ fundamentally from the traditional lens-and-film approach.

Computational photography enables the camera to use multiple exposures, new types of lenses and other ways of capturing more than the image itself. The final image is then created by algorithms that combine all this extra information to optimize a selected aspect of the capture situation. Popular examples are high-dynamic-range (HDR) imaging, stitched multi-exposure panoramas, multi-spectral images and depth images.

HDR photography combines two or more exposures – taken with different exposure settings, specially designed pixels or specially filtered pixels – into an image with improved dynamic range. A multi-exposure panorama is created by shooting many overlapping images in an arc, which are automatically combined into one larger image, replacing a super-wide lens. Smart software can even convert this wide field-of-view panorama into a 3D panorama without the need for a second lens.
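To make the HDR idea concrete, here is a minimal exposure-fusion sketch in Python/NumPy. This is an illustration under stated assumptions, not a production pipeline: the frames are assumed to be already aligned, with values in [0, 1], and the simple mid-gray weighting stands in for the more elaborate measures real fusion algorithms use.

```python
import numpy as np

def fuse_exposures(images):
    """Merge aligned, differently exposed frames (values in [0, 1])
    into one image. Pixels near mid-gray get the highest weight, so
    each region is taken from the exposure that captured it best."""
    stack = np.stack(images)                       # (N, H, W, 3)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalize per pixel
    return (weights * stack).sum(axis=0)           # weighted average

# Hypothetical usage with three bracketed shots loaded as float arrays:
# fused = fuse_exposures([under, normal, over])
```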

The trend has now shifted from adding more and more pixels to adding more and more computing power per pixel. This new trend has one large benefit: it does not increase the demands on expensive parts of the camera, like the lens.

On the drawing board of future cameras there are many other options that can benefit from the added computing capabilities of a modern camera system: noise reduction by merging multiple noisy exposures, sharpness optimization through super-resolution merging of multiple images of the same subject, and anti-shake solutions that use recorded shake information to deblur the image.
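As a concrete illustration of the noise-merging idea, here is a minimal Python/NumPy sketch. It assumes the burst frames are already aligned (a real pipeline must register them first); averaging N frames with independent noise reduces the noise standard deviation by a factor of √N.

```python
import numpy as np

def merge_noisy_frames(frames):
    """Average a burst of aligned noisy exposures; with N frames of
    independent noise the noise level drops by a factor of sqrt(N)."""
    return np.mean(np.stack(frames), axis=0)

# Demonstration with synthetic noise:
rng = np.random.default_rng(0)
clean = np.full((64, 64), 0.5)
frames = [clean + rng.normal(0, 0.1, clean.shape) for _ in range(16)]
merged = merge_noisy_frames(frames)
print(np.std(frames[0] - clean))  # ~0.10: single-frame noise
print(np.std(merged - clean))     # ~0.025: 16 frames, sqrt(16) = 4x less
```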

The most radical of the new methods is light-field photography, where the camera is modified to capture every aspect of the scene at once. Such a camera does not need to be focused at shooting time; the image is focused later in software. The camera uses extra pixels to record not only how much light arrives at each point but also the direction it arrives from. An algorithm then uses this information to compute a narrow-focus image at your selected focus depth, or an image with every single pixel in focus.
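A minimal sketch of that refocusing calculation, assuming the raw light field has already been decoded into a grid of sub-aperture views (the array layout and the alpha focus parameter are illustrative assumptions, not Lytro's actual processing):

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-add refocusing. `light_field` has shape (U, V, H, W, 3):
    a U x V grid of sub-aperture views. Each view is shifted in
    proportion to its position in the aperture and the results are
    averaged; objects at the depth selected by `alpha` line up across
    the views and come out sharp, everything else is blurred."""
    U, V = light_field.shape[:2]
    acc = np.zeros(light_field.shape[2:], dtype=np.float64)
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            acc += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return acc / (U * V)

# Sweeping alpha moves the virtual focal plane through the scene; an
# all-in-focus image can be assembled by keeping the sharpest result
# per pixel across the sweep.
```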

This new technology may enable a very different workflow: capture the complete light field as it enters the camera, then change focus, recompose and much more later, at your desktop back in the office.

The question: does this very different workflow actually improve the results, or does it just consume more of your time because the important decisions are no longer made at shutter time, when they are obvious, at least to a skilled photographer?

Deferring the decisions and storing everything might give you more options in the end, but is it worth it? For point-and-shoot use the answer might be the opposite, since those pictures are taken by someone optimizing for small, convenient equipment, with lower requirements on final quality and no need to know much besides how to release the shutter.

This completely new snapshot approach is now being commercialized by a company called Lytro (www.lytro.com). They are preparing to release a consumer product late this year that captures the light field: all the light rays entering the camera from a scene. Once the light field has been captured, Lytro software will be able to generate multiple 2D perspective images with user-selected parameters.

The biggest drawback of Lytro's approach is that the camera has to trade resolution, and possibly light sensitivity, for more knowledge about light directions, so do not expect high-resolution images from this approach.
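A rough pixel-budget calculation shows why (the numbers are illustrative assumptions, not Lytro's actual specifications):

```python
# Illustrative pixel budget for a plenoptic camera (assumed numbers,
# not Lytro's actual specifications):
sensor_pixels = 10_000_000      # a 10-megapixel sensor
directions = 10 * 10            # 10 x 10 angular samples per microlens
output_pixels = sensor_pixels // directions
print(output_pixels)            # 100000 -> roughly a 0.1 MP final image
```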

For some use cases the light-field camera might be disruptive – think about shooting a movie without the need to focus! Another (probably far-off) feature of light-field content is viewing-time focus control – an upcoming display technology may focus wherever you look!

By recording the depth of the scene, it will be possible to alter the lighting of the scene without the need for a retake. A complete 3D scan of the scene might even make it possible to add, remove or deform objects during post-processing, without the time-consuming editing of shadows and hidden areas that would otherwise be needed.
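As a sketch of how depth enables relighting, here is a minimal Lambertian shading example in Python/NumPy; estimating normals from depth gradients and the chosen light direction are illustrative assumptions, and real relighting also has to handle cast shadows and non-matte surfaces.

```python
import numpy as np

def relight(albedo, depth, light_dir):
    """Relight a captured scene using its depth map and a Lambertian
    surface model: estimate per-pixel normals from the depth gradients,
    then shade with max(0, normal . light_direction)."""
    dzdy, dzdx = np.gradient(depth)
    normals = np.dstack([-dzdx, -dzdy, np.ones_like(depth)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    light = np.asarray(light_dir, dtype=np.float64)
    light /= np.linalg.norm(light)
    shading = np.clip(normals @ light, 0.0, None)
    return albedo * shading[..., None]  # color image under the new light

# Hypothetical usage: move the light to the upper left after capture.
# relit = relight(color_image, depth_map, light_dir=(-1.0, -1.0, 1.0))
```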

Samsung Galaxy Camera has smartphone capabilities and is ready for your apps.