On Feb 1, 2017, in London, England, the Queen Elizabeth Prize for Engineering, a global £1 million prize that celebrates ground-breaking innovations in engineering, was awarded to Eric Fossum (USA), George Smith (USA), Nobukazu Teranishi (Japan) and Michael Tompsett (UK) for their work on digital imaging sensors. Smith and Tompsett did their work while at Bell Labs.
Digital imaging sensors
The invention honored by the 2017 QEPrize needs little introduction, since most of us have one in our pockets or purses. Nonetheless, digital image sensors have proven immensely useful beyond Instagram, Snapchat and the selfie. They have driven new research from the ocean’s depths to the outer reaches of space. They have transformed the fields of photography, communication and entertainment. The digital image sensor has also proven critical to the development of robotics and autonomous cars, to multiple applications in defense and security, and to medical imaging used in diagnostics and endoscopic surgery.
The pioneering work was first done at Bell Labs by George Smith and the late Willard Boyle who invented the Charge-Coupled Device (CCD) in 1969 — an invention that earned them the 2009 Nobel Prize in Physics, among many other international awards. Their CCD work was then developed specifically for imaging applications by Michael F. Tompsett, also at Bell Labs, in the 1970s.
In 1980, Nobukazu Teranishi, then at NEC in Japan, invented the pinned photodiode, which vastly improved the signal-to-noise ratio and thus the resolution possible from an image sensor. In 1995, Eric Fossum, then at NASA’s Jet Propulsion Laboratory, invented the complementary metal-oxide-semiconductor (CMOS) active-pixel image sensor, which used up to 100 times less power than comparable CCD image sensors and was much less expensive to produce. The sensor in your smartphone or consumer-level camera today is most likely a CMOS sensor. Although for many years CMOS sensors offered lower image quality, resolution and light sensitivity, recent advances have brought them close to CCD quality in some applications.
A need for improved computer memory led to breakthrough innovations in imaging
The CCD arose from a separate effort aimed at computer memory. Before the 1970s, computing circuits used relays, ferrite cores, delay lines, magnetic drums, tapes and vacuum tubes to store data. It was not until the 1970s that random access memory (RAM) based on semiconductor circuits became commonplace. Before this "modern era," Bell Labs engineers sought to develop useful memory circuits, and in 1966 Bell Labs engineer Andrew H. Bobeck developed magnetic bubble memory. Soon after, George Smith and Willard Boyle set out to create a kind of electronic bubble memory that would be less costly, easier to manufacture and more readily integrated into electronic circuits than magnetic bubble memory. Their work eventually resulted in the charge-coupled device (CCD).
The CCD exploited a common characteristic of semiconductors: their ability to capture patterns formed by (negatively charged) electrons and holes and either store or read those patterns. For Smith and Boyle, these patterns could be used to encode and store digital information. When the device was used as memory, the pattern would be created electronically on its surface; but because the material is light-sensitive, the pattern could also be formed by focusing an image on the silicon surface. Stronger light creates a stronger electrical charge; less light, less charge.
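The two steps above — light integrating into charge, then charge packets being shifted out to a single readout node — can be sketched in a few lines. This is a loose conceptual illustration, not Bell Labs’ actual circuit; the `gain` factor and the list-based "wells" are assumptions for the sketch.

```python
# Minimal sketch of the CCD idea: pixels accumulate charge in proportion
# to incident light, then the charge packets are shifted out one well at
# a time, bucket-brigade style, toward a single output amplifier.

def expose(light_pattern, gain=100):
    """Convert a row of light intensities (0.0-1.0) into charge packets (electrons)."""
    return [int(intensity * gain) for intensity in light_pattern]

def read_out(wells):
    """Shift every charge packet toward the output node, sampling one per clock."""
    wells = list(wells)
    samples = []
    while wells:
        samples.append(wells.pop(0))  # the front packet reaches the output amplifier
        # the remaining packets each move one well closer to the output
    return samples

row = expose([0.0, 0.25, 1.0, 0.5])
print(read_out(row))  # [0, 25, 100, 50] -- stronger light, more charge
```

The key property the sketch preserves is that the charge pattern itself is the stored information: the same mechanism serves as memory (pattern written electrically) or as an imager (pattern written by light).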
Charge coupled device design illustrating storage and transfer. Archive drawing from 1970.
It was this latter characteristic that led to Tompsett’s contribution to image sensing. As part of a Bell Labs team that worked on various applications of the CCD and semiconductors — which included Walter J. Bertram, Rusty R. Buckley, William J. McNamara, David A. Sealer, Theodore A. Shankoff, Carlos H. Séquin, Philip J. Boddy and Hugh A. Watson — Tompsett further developed the CCD so that it could capture an image focused on its surface. He modified Smith and Boyle’s CCD design so that a captured image could be transferred to a storage region of the device and read out while the next image was being captured. Given the low resolution of these early devices, video was the first application Tompsett had in mind, so images were captured every 1/60 of a second, matching the 60-fields-per-second rate of broadcast television. The initial “solid-state” cameras generated black-and-white video images, but by splitting the incoming image through a prism into red, green and blue (RGB) channels, it was possible to collect the information on three CCDs, which led to the first solid-state color TV cameras. These three-chip cameras could be read out through a video amplifier to provide the analog video signal required to reproduce the image on a color television.
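The frame-transfer scheme described above can be sketched as a simple pipeline. This is a simplified model, not Tompsett’s actual device: real CCDs shift the charge image row by row under clocked electrodes, whereas here whole fields move at once, and the field data is placeholder.

```python
# Hedged sketch of the frame-transfer idea: after each field integrates,
# the charge image is moved into a masked (light-shielded) storage region,
# which is read out while the image region integrates the next field.

def capture_fields(scene_fields):
    image_area = None     # light-sensitive region
    storage_area = None   # masked region, shielded from light
    video_out = []
    for field in scene_fields:
        storage_area = image_area          # fast transfer into the shielded area
        image_area = field                 # integrate the next 1/60 s field
        if storage_area is not None:
            video_out.append(storage_area)  # read out during integration
    video_out.append(image_area)            # flush the final field
    return video_out

fields = [[1, 2], [3, 4], [5, 6]]
print(capture_fields(fields))  # [[1, 2], [3, 4], [5, 6]]
```

The design point the sketch captures is the overlap: readout of one field happens concurrently with exposure of the next, which is what made continuous video possible.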
Michael Tompsett (left) and Ed Zimany pictured in 1972 with the first all-solid-state video camera developed at Bell Labs.
The solid-state image sensor replaced the Vidicon tubes developed earlier by RCA, which were based on vacuum-tube technology, were more fragile and more sensitive to electromagnetic distortion, and suffered from “color fringing” caused by misalignment of the three tubes devoted to the RGB channels. Vidicon cameras also suffered from “blooming” distortion under strong illumination. A solid-state video camera was much smaller and lighter than television cameras that used the Vidicon tubes, being about the same size as one of the tubes alone.
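Blooming is easy to illustrate numerically. The sketch below is a toy model with an arbitrary full-well capacity, not any particular sensor’s behavior: once a well saturates, the excess charge spills into its neighbor, smearing a bright spot along the row.

```python
# Hypothetical illustration of "blooming": when a pixel's potential well
# fills under strong illumination, excess charge spills into its neighbor.
# Numbers are arbitrary; real anti-blooming structures drain the excess.

FULL_WELL = 100  # assumed full-well capacity, in electrons

def bloom(row):
    row = list(row)
    for i, charge in enumerate(row):
        excess = max(0, charge - FULL_WELL)
        row[i] = min(charge, FULL_WELL)
        if excess and i + 1 < len(row):
            row[i + 1] += excess  # overflow spills into the next well
    return row

print(bloom([10, 250, 20, 30]))  # [10, 100, 100, 100] -- the spill cascades
```

Note how a single overexposed pixel corrupts every well downstream of it, which is why blooming appears as a streak rather than an isolated bright dot.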
The CCD and Beyond
It was another Bell Labs physics researcher, Tony Tyson, who took the CCD to the ends of the earth and beyond. CCDs were capable of great light sensitivity, and some of their earliest uses were in astronomy, where Tyson used them to discover far-distant galaxies. Some modifications had to be made to the CCD for deep-space photography. Even at room temperature, heat generates an electrical charge on the CCD called “dark current.” This was not a problem for short exposures, but over the long exposures typical of astronomical photography, the dark current could fill up the wells on the surface of the CCD. Jim Westphal of Caltech came up with the solution: cool the chips using liquid nitrogen. Tyson also developed CCD cameras that were used in the deepest oceans to record light emitted by deep-sea volcanic vents, and his astronomical work with CCDs contributed to the discovery of dark energy. He is currently the Chief Scientist on the construction of the Large Synoptic Survey Telescope (LSST), which will employ a very large (3.2-gigapixel) CCD array, the largest digital camera ever built.
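A back-of-the-envelope calculation shows why cooling works so well. The rule of thumb used below — dark current roughly doubling for every ~6 °C rise in temperature — is a common approximation for silicon sensors and is an assumption of this sketch, not a figure from the article; the 100 electrons-per-second rate is likewise illustrative.

```python
# Sketch of why cooling tames dark current on long exposures.
# Assumed rule of thumb: dark current roughly doubles every ~6 C.

def dark_electrons(rate_at_20C, temp_C, exposure_s, doubling_C=6.0):
    """Thermally generated electrons accumulated over one exposure."""
    rate = rate_at_20C * 2 ** ((temp_C - 20.0) / doubling_C)
    return rate * exposure_s

HOUR = 3600
room = dark_electrons(100.0, 20.0, HOUR)    # illustrative 100 e-/s/pixel at 20 C
cold = dark_electrons(100.0, -196.0, HOUR)  # liquid-nitrogen temperature
print(f"room temperature: {room:.0f} e-, cooled: {cold:.2e} e-")
```

At room temperature the well fills with hundreds of thousands of thermal electrons over an hour-long exposure, swamping the faint signal from a distant galaxy; at liquid-nitrogen temperature the accumulated dark charge is a negligible fraction of one electron.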
The global image sensor market today is valued at close to $10 billion annually and is expected to grow at a CAGR of 8.8%, reaching $16 billion annually by 2020. Ninety percent of sensors sold are CMOS, but CCD sensors continue to play a key role wherever higher quality and greater light sensitivity are required. We have only begun to explore the possible uses of these sensors. As we move to the Internet of Things, robotics and autonomous vehicles, the applications for all sensors are rapidly multiplying, and image sensors will be at center stage.
“What is the difference between CCD and CMOS image sensors in a digital camera?” How Stuff Works, http://electronics.howstuffworks.com/cameras-photography/digital/question362.htm.
D. Klein, “The History of Semiconductor Memory: From Magnetic Tape to NAND Flash Memory,” IEEE Solid-State Circuits Magazine, vol. 8, no. 2, pp. 16–22, Spring 2016.
Bryce Bayer at Eastman Kodak later developed a single-chip image sensor using a Bayer filter, laid over the silicon, to create values for RGB. Three-chip CCDs are still used because they are more light sensitive; the Bayer filter absorbs two-thirds of the light. Bayer-filtered CMOS sensors are almost universally used in consumer-level cameras.
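The footnote’s point — that a Bayer filter trades away two-thirds of the light by keeping only one color channel per pixel — can be made concrete with a toy mosaic. The RGGB layout below is the common Bayer arrangement; the image values are placeholder assumptions.

```python
# Toy illustration of a Bayer color filter array: each pixel records only
# the one channel its filter passes (an RGGB 2x2 tile is assumed here),
# so two of every three color samples per pixel are discarded.

BAYER = [["R", "G"],
         ["G", "B"]]  # 2x2 pattern tiled across the sensor

def mosaic(rgb_image):
    """Keep, at each pixel, only the channel its filter passes."""
    channel_index = {"R": 0, "G": 1, "B": 2}
    out = []
    for y, row in enumerate(rgb_image):
        out.append([px[channel_index[BAYER[y % 2][x % 2]]]
                    for x, px in enumerate(row)])
    return out

# A uniform scene where every pixel is (R, G, B) = (90, 90, 90)
gray = [[(90, 90, 90)] * 2 for _ in range(2)]
print(mosaic(gray))  # [[90, 90], [90, 90]] -- one sample per pixel, not three
```

Recovering full RGB at every pixel then requires interpolation (demosaicing) from neighboring pixels, which is why a three-chip camera, with a complete sample of each channel at every pixel, retains an edge in light sensitivity and color fidelity.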