Tuesday, August 16, 2011
Misinformation: Camera Tech
Exciting new developments in computational photography may render aperture irrelevant
The term “camera” is vague: all it implies is a device that can capture images and, in general, store them. As digital technology advances and computers become ever more intrinsic to the photographic process, even this broad definition will be put to the test. A new technology from Lytro, for instance, allows you to adjust the focus of an image after it has already been taken. The camera’s core concept is the “light field,” which comprises all of the rays of light in a scene. Rather than funneling light rays through a lens and flattening them onto a two-dimensional sensor, the Lytro camera projects these rays onto a sensor fitted with a highly efficient microlens array. Lytro is introducing a new file format (.lfp), and the camera includes an internal “light field engine” that works in concert with desktop software to produce “Living Pictures” that retain multiple focus points throughout the image.
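To make the refocus-after-the-fact idea concrete, here is a minimal NumPy sketch of the classic shift-and-add approach to synthetic refocusing. It is not Lytro's actual pipeline; the `refocus` function, the `(U, V, H, W)` sub-aperture layout and the `alpha` parameter are illustrative assumptions. Each sub-aperture view is shifted in proportion to its position on the lens and the views are averaged, which moves the plane of best focus.

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetic refocusing by shift-and-add (illustrative sketch).

    light_field: array of shape (U, V, H, W), a grid of sub-aperture
    images, one per (u, v) position on the main lens (hypothetical layout).
    alpha: refocus parameter; each view is shifted in proportion to its
    distance from the aperture center before averaging.
    """
    U, V, H, W = light_field.shape
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Integer-pixel shift toward/away from the aperture center;
            # np.roll stands in for a proper sub-pixel resampling.
            du = int(round(alpha * (u - (U - 1) / 2)))
            dv = int(round(alpha * (v - (V - 1) / 2)))
            out += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)
```

With `alpha = 0` the views are averaged unshifted, reproducing the nominal focal plane; varying `alpha` sweeps the focus through the scene, which is exactly the adjustment a light-field file can defer until after capture.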
Once browsers adopt the format, these images will also carry the information needed to adjust focus as they’re shared online. With its first light-field camera, Lytro is promising an instant shutter without lag; both 2D and 3D capability in a single image taken with a single lens (sort of); a continuously refocusable image (discounting motion blur); and better low-light sensitivity, because the camera gathers all of the light in a scene from all angles. Not surprisingly, the first release is planned as a point-and-shoot device for consumers, without video, available by the end of the year. From Lytro’s site: “Relying on software rather than components can improve performance, from increased speed of picture taking to the potential for capturing better pictures in low light. It also creates new opportunities to innovate on camera lenses, controls and design.”
The implications for photography are enormous, particularly for action, event and reportage photographers, perhaps even allowing multiple focal points in an image without the extensive postprocessing needed to stack multiple exposures. The technology, however, isn’t new. Researchers have been aware of light-field theory for three-quarters of a century, and light-field rendering of 2D images from 4D information was proposed by Marc Levoy and Pat Hanrahan as early as 1996. A German company called Raytrix has had plenoptic light-field cameras available for purchase for nearly a year, offering similar possibilities at a somewhat higher price. Adobe has also been experimenting with a plenoptic camera that takes a three-dimensional image using a group of specially configured lenses, combining multiple captures into one behemoth 100-megapixel image.
Opinions abound online about how this technology could, or could not, change photography as we know it. Plenoptics removes many of the physical limitations of a lens, including lens aberrations, missed focus and the binding relationship between depth of field and aperture. Removing focus (and, to an extent, aperture) from the image equation has broad implications, however, culminating in the idea that you could theoretically place a camera anywhere, fire it remotely, make changes on a computer and remove the photographer from the image-taking process entirely. Of course, you can do that already. Whether through focus stacking of multiple exposures or software that gives extensive control over bokeh, these options are already available to photographers and clients. Regardless, removing focus doesn’t remove the photographer. A camera, no matter how advanced, is just a tool; a photographer is someone who knows how to wield that tool effectively.
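The focus stacking mentioned above can be sketched in a few lines of NumPy. This is a deliberately naive illustration, not any particular product's algorithm: for each pixel it keeps the value from whichever exposure is locally sharpest, using gradient magnitude as a crude focus measure (a real pipeline would first align the frames and smooth the sharpness maps).

```python
import numpy as np

def focus_stack(images):
    """Naive focus stack: per pixel, keep the value from the exposure
    that is locally sharpest (largest gradient magnitude).

    images: list of 2-D grayscale arrays of identical shape.
    """
    stack = np.stack([np.asarray(im, dtype=np.float64) for im in images])
    # Per-image sharpness map: gradient magnitude as a focus measure.
    gy = np.gradient(stack, axis=1)
    gx = np.gradient(stack, axis=2)
    sharpness = np.hypot(gx, gy)
    # Index of the sharpest frame at each pixel, then gather those values.
    best = np.argmax(sharpness, axis=0)
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

Given, say, a near-focused and a far-focused frame of the same scene, the result borrows in-focus detail from each, which is the manual route to multiple focal points that plenoptic capture promises to make unnecessary.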
Reprinted from Digital Photo Pro