Is Software the Next Optics?

Mikko Kuhna
Researcher, Aalto University School of Science and Technology, Department of Media Technology

Computational photography is a new buzzword in many universities. From an artistic viewpoint, the term has been used to describe creating images and visualizations through programming. Since 2005, the term has also been used by research communities working with new types of imaging techniques (Levoy, 2010). A well-suited definition comes from Stanford University: “Computational photography refers broadly to sensing strategies and algorithmic techniques that enhance or extend the capabilities of digital photography”. In a way, the idea behind computational photography is to challenge the basic principles of photography, which have remained almost unchanged since Niépce’s invention in the 1820s.

The biggest challenge in developing new imaging techniques is the fact that stand-alone cameras are closed platforms. This has led to research habits where new imaging techniques are tested only in a laboratory, or at most captured in the field but processed afterwards. Without proper field tests, many artifacts and unknowns remain in computational photography techniques.

Some might say that camera phones have already changed photography for good — just check the most popular cameras used on Flickr — but with their current feature sets, camera phones have the potential to do much more. Camera phones have a complete operating system, connectivity options and a great touch-screen interface. Still, camera phone apps are clearly derived from stand-alone cameras without taking advantage of all these new possibilities. With the mushrooming of mobile apps, more inventive photography apps have emerged. Unfortunately, there is a clear limitation for camera phone photography apps: the SDKs (software development kits) for mobile cameras are similarly closed systems. Without the ability to control operations such as focusing, exposure metering or file compression, developers’ possibilities are very limited.

FCam is an open framework that challenges the current closed camera platforms (Adams et al., 2010). It is an API (application programming interface) for the Nokia N900 that allows the developer to control nearly all the camera components. The framework was developed by Stanford University in collaboration with Nokia Research Center Palo Alto. The Nokia N900 is currently the only commercial product it runs on, but Stanford University plans to start selling an FCam-based F3 camera (Frankencamera version 3) with an almost full-frame-size image sensor (24 mm x 24 mm).
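To give a feel for what per-shot control means, here is a toy model of FCam's shot-based capture pattern, in which each capture request carries its own explicit parameters (such as exposure and gain) and the returned frame is tagged with the settings actually used. This is an illustrative Python sketch, not the real FCam API, which is a C++ library; the class and field names below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Shot:
    """A single capture request with explicit per-shot parameters."""
    exposure_us: int = 30000  # exposure time in microseconds
    gain: float = 1.0         # analog sensor gain

class Sensor:
    """Toy sensor: queues shots and returns frames tagged with the
    parameters that were requested, mimicking FCam's capture model."""
    def __init__(self):
        self._queue = []

    def capture(self, shot):
        self._queue.append(shot)

    def get_frame(self):
        shot = self._queue.pop(0)
        # A real frame would also carry pixel data; here we keep only tags.
        return {"exposure_us": shot.exposure_us, "gain": shot.gain}

# Request an exposure bracket: three shots with different exposure times.
sensor = Sensor()
for exp in (10000, 30000, 90000):
    sensor.capture(Shot(exposure_us=exp))
frames = [sensor.get_frame() for _ in range(3)]
```

The key design point, as described by Adams et al. (2010), is that parameters belong to the shot rather than to global camera state, so bursts with varying settings (the basis of HDR and similar techniques) become straightforward to express.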

The possibilities of open camera frameworks are certainly interesting — consider what open source has done for software. If open camera frameworks become popular in camera phones, stand-alone camera manufacturers might also be forced to open their interfaces. It could lead to a scenario where photographers download and install apps on their cameras, and HDR (high dynamic range) imaging, panoramas and other techniques are processed in the camera with real-time feedback. Frédo Durand, a professor at MIT, has condensed these future possibilities into a single statement: “software is the next optics”.
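The HDR case mentioned above can be sketched in a few lines: given a burst of differently exposed images, a radiance estimate is recovered by dividing each image by its exposure time and averaging the results with weights that trust well-exposed mid-tones most. This is a minimal illustrative sketch of one common merging scheme, not the specific algorithm used by any camera or by FCam; the function name and the hat-shaped weighting are choices made here for clarity.

```python
import numpy as np

def merge_hdr(images, exposures):
    """Merge differently exposed images (float arrays scaled to [0, 1])
    into a relative radiance estimate.

    Each image divided by its exposure time estimates scene radiance;
    a hat-shaped weight down-weights clipped shadows and highlights."""
    acc = np.zeros_like(images[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, exposures):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # peaks at mid-gray, 0 at clip
        acc += w * (img / t)
        wsum += w
    return acc / np.maximum(wsum, 1e-6)   # avoid division by zero

# Simulate a bracket of a scene whose radiance exceeds one exposure's range.
radiance = np.array([0.1, 0.5, 2.0])
exposures = [0.25, 1.0]
images = [np.clip(radiance * t, 0.0, 1.0) for t in exposures]
merged = merge_hdr(images, exposures)
```

The long exposure saturates on the brightest pixel, but its weight drops to zero there, so the short exposure supplies the estimate — which is exactly the kind of burst-then-merge pipeline that open, programmable cameras make possible on the device itself.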

Students interested in this topic should check out the course Experimental Project in Computational Photography.

Levoy, M. (2010) Experimental Platforms for Computational Photography. IEEE Computer Graphics and Applications, vol. 30, no. 5, pp. 81-87.
Adams, A., et al. (2010) The Frankencamera: An Experimental Platform for Computational Photography. In: ACM SIGGRAPH 2010 Papers, pp. 1-12.
