The camera's resolution is five times better than 20/20 human vision over a 120-degree horizontal field of view.
The new camera can capture up to 50 gigapixels of data, or 50,000 megapixels. By comparison, most consumer cameras take photographs ranging from 8 to 40 megapixels. Pixels are the individual "dots" of data in an image; the higher the pixel count, the better the resolution.
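For a sense of scale, here is a quick back-of-the-envelope comparison in Python. The 50-gigapixel and 40-megapixel figures come straight from the article; everything else is simple arithmetic:

```python
# Back-of-the-envelope pixel-count comparison using the article's figures.
gigapixels = 50
camera_pixels = gigapixels * 1_000_000_000        # 50 gigapixels in raw pixels

consumer_megapixels = 40                          # high end of the consumer range
consumer_pixels = consumer_megapixels * 1_000_000

print(f"Gigapixel camera: {camera_pixels / 1e6:,.0f} megapixels")
print(f"Consumer camera:  {consumer_pixels / 1e6:,.0f} megapixels")
print(f"Ratio: {camera_pixels / consumer_pixels:,.0f}x the pixel count")
```

Run as written, this confirms the article's conversion (50,000 megapixels) and shows the prototype captures 1,250 times the pixels of a high-end consumer camera.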
The researchers believe that within five years, as the electronic components of the cameras become miniaturized and more efficient, the next generation of gigapixel cameras should be available to the general public.
The camera was developed by a team led by David Brady, Michael J. Fitzpatrick Professor of Electrical Engineering at Duke's Pratt School of Engineering, along with scientists from the University of Arizona, the University of California, San Diego, and Distant Focus Corp.
"Each one of the microcameras captures information from a specific area of the field of view," Brady said. "A computer processor essentially stitches all this information into a single highly detailed image. In many instances, the camera can capture images of things that photographers cannot see themselves but can then detect when the image is viewed later."
"The development of high-performance and low-cost microcamera optics and components has been the main challenge in our efforts to develop gigapixel cameras," Brady said. "While novel multiscale lens designs are essential, the primary barrier to ubiquitous high-pixel imaging turns out to be lower power and more compact integrated circuits, not the optics."
The software that combines the input from the microcameras was developed by a team led by Michael Gehm, assistant professor of electrical and computer engineering at the University of Arizona.
"Traditionally, one way of making better optics has been to add more glass elements, which increases complexity," Gehm said. "This isn't a problem just for imaging experts. Supercomputers face the same problem, with their ever more complicated processors, but at some point the complexity just saturates, and becomes cost-prohibitive."
"Our current approach, instead of making increasingly complex optics, is to come up with a massively parallel array of electronic elements," Gehm said. "A shared objective lens gathers light and routes it to the microcameras that surround it, just like a network computer hands out pieces to the individual work stations. Each gets a different view and works on their little piece of the problem. We arrange for some overlap, so we don't miss anything."
The prototype camera itself is two and a half feet square and 20 inches deep. Only about three percent of its volume is optical elements; the rest is the electronics and processors needed to assemble all the information gathered. This, the researchers said, is where further work to miniaturize the electronics and increase their processing power will make the camera more practical for everyday photographers.
"The camera is so large now because of the electronic control boards and the need to add components to keep it from overheating," Brady said, "As more efficient and compact electronics are developed, the age of hand-held gigapixel photography should follow."
Details of the new camera were published online in the journal Nature. Co-authors of the Nature report with Brady and Gehm include Steve Feller, Daniel Marks, and David Kittle from Duke; Dathon Golish and Esteban Vera from Arizona; and Ron Stack from Distant Focus. The team's research was supported by the Defense Advanced Research Projects Agency (DARPA).