In the interest of carrying lighter “cameras” around, particularly on hikes where one may not want to haul an SLR, I’ve been traveling and photographing quite a bit with *just* my iPhone, which does instill a surprising level of anxiety. Just as shooting with an SLR requires a certain level of commitment, shooting with just a cell phone and all its limitations commits one to a certain “kind” of photography. What is new, however, is the extent and flexibility of image processing on modern cell phones, which opens up exciting possibilities that are simply not available with traditional SLR cameras. The basic camera sensor in the iPhone is reasonably good, but pretty pedestrian compared to even most dedicated point-and-shoot cameras, and certainly not up to SLR quality. That said, the images from the iPhone have been better than expected, and if there is one thing the Photowalking Utah community has taught me, it is that beautiful imagery can be created with just about any imaging device you can come up with.
The advantage of the iPhone (and other “smart phones”) is that you essentially have a general-purpose computer embedded within it that can perform fairly sophisticated image processing operations. While image montaging is something I’ve been familiar with for quite some time, having used it both in photography and professionally in our science, the image processing options available on cell phones these days are truly stunning, making use of position sensors, gyroscopes and more to inform the output from the camera and create more complex imagery than can traditionally be done with a simple camera. In many ways, image montaging on the iPhone is more sophisticated than most image montaging applications on traditional desktop computers, as the phone has additional metadata available to it that can bootstrap the calculation of camera position. When we first started exploring automated image montaging for connectomics, we worked briefly with Adobe on what would become Photomerge in Photoshop. The Adobe Photoshop engineer John Peterson used an approach that essentially performed an FFT on each image and matched images in the frequency domain, allowing each image to be positioned on the correct edge of its neighbor. Unfortunately, this was an N-squared problem, every image compared against every other, much like Metcalfe’s law, and it required lots of computational effort. These days it would be greatly accelerated by GPU-based computing, but the simpler way to speed things up is to bootstrap the computation using the metadata available on smart phones: position, angle, direction, and even GPS coordinates.
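The frequency-domain matching described above can be sketched with phase correlation, a standard FFT-based technique for finding the translation between two overlapping images (this is an illustrative sketch of the general idea, not necessarily the exact method Photomerge used):

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the (row, col) shift of image a relative to image b
    using FFT-based phase correlation."""
    Fa = np.fft.fft2(a)
    Fb = np.fft.fft2(b)
    # Normalized cross-power spectrum; its inverse FFT peaks at the shift.
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative offsets.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

# Demo: shift a random "image" and recover the offset.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (5, -3), axis=(0, 1))
print(phase_correlation(shifted, img))  # → (5, -3)
```

Run pairwise over N tiles with no prior knowledge, this is the N-squared comparison problem; metadata from the phone’s sensors narrows each tile’s candidate neighbors, so far fewer pairs need to be tested.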
While there are a number of apps for the iPhone now that do this in some form, the best one I’ve experimented with has been Photosynth from Microsoft. On occasion Microsoft produces something really cool that is not hamstrung or shoehorned into the Windows paradigm, and Photosynth is one such example. It’s an excellent bit of code that, in addition to making flat image mosaics, renders immersive images that allow you to pan and zoom around, much like the QuickTime VR that Apple released back in 1994.
While Photosynth does do a pretty good job, in environments with lots of things in “vertical planes” there is the occasional mis-mosaicked panel or blurring error. For the most part, though, it is a useful and fun tool and one more resource in the iPhone photography toolbox.