
Publication number: US 20030193594 A1
Publication type: Application
Application number: US 10/383,450
Publication date: Oct 16, 2003
Filing date: Mar 6, 2003
Priority date: Apr 16, 2002
Also published as: US 20110013045
Inventors: Hiok Tay
Original Assignee: Tay Hiok Nam
Image sensor with processor controlled integration time
US 20030193594 A1
Abstract
An image sensor that has one or more pixels within a pixel array. The pixels are arranged within a plurality of rows within the array. Each row of the pixel array can be selected by a row decoder in response to an edge of a control signal. The control signal may be one of a plurality of signals generated by a processor coupled to the image sensor. The processor can control the exposure time of the pixels by varying the control signals. The control signals may also have an embedded narrow pulse that is used to determine the location of a “window” in the pixel array.
Claims (20)
What is claimed is:
1. An image sensor that is connected to a processor which generates a plurality of control signals, the control signals including a first edge separated from a second edge by a control interval, comprising:
a pixel array that contains a plurality of rows of pixels; and,
a selection circuit that selects a row of said pixel array to generate and retrieve pixel data from said pixel array by resetting and reading said selected row of said pixel array, a time interval between the resetting and reading of said selected row being proportional to the control interval between the first and second edges.
2. The image sensor of claim 1, wherein said selection circuit includes a decoder circuit coupled to said pixel array, an address generator coupled to said decoder circuit and a pulse detector coupled to said address generator and the processor.
3. The image sensor of claim 2, wherein said address generator includes a first counter that is started in response to the first edge and a second counter that is started in response to the second edge.
4. The image sensor of claim 3, wherein said selection circuit includes a narrow pulse detector that is coupled to a third counter of said address generator, said third counter being coupled to said decoder circuit.
5. The image sensor of claim 3, wherein said decoder circuit includes a multiplexor coupled to an address decoder, said multiplexor being coupled to said first and second counters.
6. The image sensor of claim 5, wherein said selection circuit includes a row driver coupled to a latch of said decoder circuit, said latch being coupled to said address decoder.
7. The image sensor of claim 1, further comprising a light reader circuit coupled to said pixel array.
8. The image sensor of claim 4, wherein said selection circuit includes a counter/latch that is coupled to said narrow pulse detector and said address generator.
9. The image sensor of claim 6, wherein said selection circuit includes a phase sequence decoder that is coupled to said light reader circuit and said row driver.
10. An image sensor that is connected to a processor which generates a plurality of control signals including a first pulse that has a first width and a second pulse that has a different second width, comprising:
a pixel array that contains a plurality of rows of pixels; and,
a selection circuit that selects a group of rows of said pixel array, the group being a function of a location of the second pulse relative to the first pulse.
11. The image sensor of claim 10, wherein said selection circuit includes a decoder circuit coupled to said pixel array, an address generator coupled to said decoder circuit and a pulse detector coupled to said address generator and the processor.
12. The image sensor of claim 11, wherein said address generator includes a first counter that is started in response to a first edge in the plurality of control signals and a second counter that is started in response to a second edge in the plurality of control signals.
13. The image sensor of claim 12, wherein said selection circuit includes a pulse detector that is coupled to a third counter of said address generator, said third counter being coupled to said decoder circuit.
14. The image sensor of claim 12, wherein said decoder circuit includes a multiplexor coupled to an address decoder, said multiplexor being coupled to said first and second counters.
15. The image sensor of claim 14, wherein said selection circuit includes a row driver coupled to a latch of said decoder circuit, said latch being coupled to said address decoder.
16. The image sensor of claim 10, further comprising a light reader circuit coupled to said pixel array.
17. The image sensor of claim 13, wherein said selection circuit includes a counter/latch that is coupled to said pulse detector and said address generator.
18. The image sensor of claim 16, wherein said selection circuit includes a phase sequence decoder that is coupled to said light reader circuit and said row driver.
19. An image sensor, comprising:
a pixel array that contains a plurality of rows of pixels;
an address decoder coupled to a row of said pixel array;
a multiplexor coupled to said address decoder;
a first address generator coupled to said multiplexor; and,
a second address generator coupled to said multiplexor.
20. The image sensor of claim 19, further comprising a pulse detector coupled to said first and second address generators.
Description
    CROSS-REFERENCE TO RELATED APPLICATION
  • [0001]
    This application claims priority under 35 U.S.C. § 119(e) to provisional application No. 60/372,902, filed on Apr. 16, 2002.
  • BACKGROUND OF THE INVENTION
  • [0002]
    1. Field of the Invention
  • [0003]
    The subject matter disclosed generally relates to the field of semiconductor image sensors.
  • [0004]
    2. Background Information
  • [0005]
    Photographic equipment such as digital cameras and digital camcorders contain electronic image sensors that capture light for processing into a still or video image, respectively. There are two primary types of electronic image sensors, charge coupled devices (CCDs) and complementary metal-oxide-semiconductor (CMOS) sensors. CCD image sensors have relatively high signal to noise ratios (SNR) that provide quality images. Additionally, CCDs can be fabricated to have pixel arrays that are relatively small while conforming with most camera and video resolution requirements. A pixel is the smallest discrete element of an image. For these reasons, CCDs are used in most commercially available cameras and camcorders.
  • [0006]
    CMOS sensors are faster and consume less power than CCD devices. Additionally, CMOS fabrication processes are used to make many types of integrated circuits. Consequently, there is a greater abundance of manufacturing capacity for CMOS sensors than CCD sensors.
  • [0007]
    To date, no CMOS sensor has been developed that meets the SNR and pixel pitch requirements of commercially available CCD sensors. Pixel pitch is the space between the centers of adjacent pixels. It would be desirable to provide a CMOS sensor that has a relatively high SNR while providing a commercially acceptable pixel pitch.
  • [0008]
    The image sensor is typically connected to an external processor and external memory. The external memory stores data from the image sensor. The processor processes the stored data. The data includes one or more images generated by exposing the pixels for a predetermined time interval. The exposure time of the pixels is typically controlled by an internal clock(s) of the image sensor.
  • [0009]
    The exposure time of a picture frame is established by a word written into an exposure time register. Changing the exposure time requires writing new data into the register and then reading the data. In video and fast successive still photo shots this technique may create confusion regarding the exposure time of incoming pixel data, thereby creating instability in the system. It would be desirable to provide processor control of the exposure time of the pixels that improves stability and does not require an undesirable number of pins and signals.
  • [0010]
    Camera or camcorder products typically have an auto-focus function. To increase the speed of an auto-focus cycle the camera may be designed to process only a “window” of the pixel array. The auto-focus routine may require the window to move around the pixel array of the image sensor. It would be desirable to provide processor control of the window data in a manner that minimizes the pin count and number of signals required for the image sensor.
  • BRIEF SUMMARY OF THE INVENTION
  • [0011]
    An image sensor coupled to a processor that generates a plurality of control signals. The image sensor includes a pixel array that is arranged into a number of rows. The sensor may also contain a logic circuit that selects a row of the pixel array to generate and retrieve pixel data in response to a first edge and a second edge of the control signals. A time interval between a resetting and a reading of the selected row is proportional to an interval between the first and second edges of the control signals.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • [0012]
    FIG. 1 is a schematic of an embodiment of an image sensor;
  • [0013]
    FIG. 2 is a schematic of an embodiment of a pixel of the image sensor;
  • [0014]
    FIG. 3 is a schematic of an embodiment of a light reader circuit of the image sensor;
  • [0015]
    FIG. 4 is a flowchart for a first mode of operation of the image sensor;
  • [0016]
    FIG. 5 is a timing diagram for the first mode of operation of the image sensor;
  • [0017]
    FIG. 6 is a diagram showing the levels of a signal across a photodiode of a pixel;
  • [0018]
    FIG. 7 is a schematic for a logic circuit for generating the timing diagrams of FIG. 5;
  • [0019]
    FIG. 8 is a schematic of a logic circuit for generating a RST signal for a row of pixels;
  • [0020]
    FIG. 9 is a timing diagram for the logic circuit shown in FIG. 8;
  • [0021]
    FIG. 10 is a flowchart showing a second mode of operation of the image sensor;
  • [0022]
    FIG. 11 is a timing diagram for the second mode of operation of the image sensor;
  • [0023]
    FIG. 12 is a schematic of an embodiment of a row decoder of the image sensor;
  • [0024]
    FIG. 13 is a timing diagram for the row decoder shown in FIG. 12;
  • [0025]
    FIG. 14 is a timing diagram showing the transfer of pixel data when the image sensor is in a low noise mode;
  • [0026]
    FIG. 15 is a timing diagram showing the transfer of pixel data when the image sensor is in an extended dynamic range mode;
  • [0027]
    FIG. 16 is an illustration of a window of the pixel array;
  • [0028]
    FIG. 17 is a timing diagram showing an embedded narrow pulse used to determine a start location of the window.
  • DETAILED DESCRIPTION
  • [0029]
    Disclosed is an image sensor that has one or more pixels within a pixel array. The pixels are arranged within a plurality of rows within the array. Each row of the pixel array can be selected by a row decoder in response to an edge of a control signal. The control signal may be one of a plurality of signals generated by a processor coupled to the image sensor. The processor can control the exposure time of the pixels by varying the control signals. The control signals may also have an embedded narrow pulse that is used to determine the location of a “window” in the pixel array.
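The relationship between the processor's control edges and the exposure time can be sketched in a few lines of Python. This is an illustrative model, not the sensor's circuitry: it assumes each of the two row counters (one started by the first edge, one by the second) advances one row per line period, so every row's reset-to-read interval equals the separation between the two edges.

```python
def row_exposures(num_rows, first_edge, second_edge):
    """Model two row counters, one started at each control edge.

    The counter started at the first edge acts as a reset pointer and
    the counter started at the second edge as a read pointer.  Each
    advances one row per line period, so every row's reset-to-read
    interval (its exposure, in line periods) equals the edge separation.
    """
    exposures = {}
    for n in range(num_rows):
        reset_time = first_edge + n   # reset counter reaches row n
        read_time = second_edge + n   # read counter reaches row n
        exposures[n] = read_time - reset_time
    return exposures

# Widening the control interval lengthens the exposure for every row.
print(row_exposures(4, first_edge=0, second_edge=10))
```

Because the exposure depends only on the edge separation, the processor can change it for the next frame simply by re-timing the edges, with no register write.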
  • [0030]
    The pixel may be a three transistor structure that minimizes the pixel pitch of the image sensor. The entire image sensor is preferably constructed with CMOS fabrication processes and circuits. The CMOS image sensor has the characteristics of being high speed, low power consumption, small pixel pitch and a high SNR.
  • [0031]
    Referring to the drawings more particularly by reference numbers, FIG. 1 shows an image sensor 10. The image sensor 10 includes a pixel array 12 that contains a plurality of individual photodetecting pixels 14. The pixels 14 are arranged in a two-dimensional array of rows and columns.
  • [0032]
    The pixel array 12 is coupled to a light reader circuit 16 by a bus 18 and to a row decoder 20 by control lines 22. The row decoder 20 can select an individual row of the pixel array 12. The light reader 16 can then read specific, discrete columns within the selected row. Together, the row decoder 20 and light reader 16 allow for the reading of an individual pixel 14 in the array 12.
  • [0033]
    The light reader 16 may be coupled to an analog-to-digital converter (ADC) 24 by output line(s) 26. The ADC 24 generates a digital bit string that corresponds to the amplitude of the signal provided by the light reader 16 and the selected pixels 14.
  • [0034]
    The ADC 24 may be coupled to line buffers 28 by data lines 30. The line buffers 28 may include separate pairs of buffers for first image data and second image data. The line buffers 28 are coupled to a data interface 32 that transfers data to a processor 34 over bus 36. The processor 34 may be coupled to memory 38 by bus 40. Although the memory 38 is shown coupled to the processor 34, it is to be understood that the system may have other configurations. For example, the processor 34 and memory 38 may be coupled to the interface 32 by separate busses.
  • [0035]
    The data interface 32 may be connected to a control line INTG 42 which provides a control signal from the processor 34. The control signal may contain a series of pulses that control the transfer of data to the processor 34. The pixel data may be transferred to the processor 34 in an interleaving manner. For example, the buffers 28 may store pixel data of a first image and a second image. The data interface 32 may interleave the data by sending a first line of the first image, then a first line of the second image, and so forth.
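The interleaved transfer described above can be sketched as a simple generator. The line contents and names here are hypothetical; the sketch only shows the ordering the data interface produces.

```python
def interleave_lines(first_image, second_image):
    """Alternate lines from the two line-buffer pairs: line 1 of the
    first image, line 1 of the second image, line 2 of the first, ..."""
    for line_a, line_b in zip(first_image, second_image):
        yield line_a
        yield line_b

# Hypothetical line buffers for a first (noise) and second (light) image.
noise_lines = ["n0", "n1"]
light_lines = ["s0", "s1"]
print(list(interleave_lines(noise_lines, light_lines)))  # ['n0', 's0', 'n1', 's1']
```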
  • [0036]
    The image sensor 10 may have registers 44 that store mode and gain values. The values can be provided to the data interface 32, buffers 28, light reader 16 and row decoder 20 over lines 46, 48, 50 and 52, respectively. The values can be loaded into the registers 44 through lines 54, 56 and 58. The image sensor 10 may also have clock circuits 60 that provide CLK timing signals over line 62.
  • [0037]
    The light reader circuit 16 may be coupled to a column decoder 64 by control lines 66. The decoder 64 selects a column within the pixel array 12 to generate and retrieve pixel data from the pixels 14. The decoder 64 is coupled to a counter 68 by a bus 70. The counter 68 provides a count value that causes the decoder 64 to switch the selection of a column in the pixel array 12. Counter 68 is also connected to an input line HD 72 and an output line HDF 74.
  • [0038]
    The row decoder 20 may include a plurality of row drivers 76 that are coupled to the pixel array 12. The row drivers 76 may be coupled to decoders 78 and counters 80. The counters 80 may be coupled to a counter/latch circuit 82.
  • [0039]
    The row decoder 20 may also include a phase sequence decoder 84. The phase sequence decoder 84 may be coupled to the light reader 16, row drivers 76 and decoders 78 by control signals 86. The row decoder 20 may further include a wide pulse detector 88 and a narrow pulse detector 90. The wide pulse detector 88 may be connected to the counters 80 by LEAD 92 and LAG 94 control signals. The narrow pulse detector 90 may be connected to the counter/latch 82 by control signal NP 96. The pulse detectors 88 and 90 may be connected to the INTG control line 42 that is coupled to the processor 34. The counter/latch 82, narrow pulse detector 90 and phase sequence decoder 84 may be connected to the mode line 52 of register 44.
  • [0040]
    FIG. 2 shows an embodiment of a cell structure for a pixel 14 of the pixel array 12. The pixel 14 may contain a photodetector 100. By way of example, the photodetector 100 may be a photodiode. The photodetector 100 may be connected to a reset transistor 112. The photodetector 100 may also be coupled to a select transistor 114 through a level shifting transistor 116. The transistors 112, 114 and 116 may be field effect transistors (FETs).
  • [0041]
    The gate of reset transistor 112 may be connected to a RST line 118. The drain node of the transistor 112 may be connected to IN line 120. The gate of select transistor 114 may be connected to a SEL line 122. The source node of transistor 114 may be connected to an OUT line 124. The RST 118 and SEL lines 122 may be common for an entire row of pixels in the pixel array 12. Likewise, the IN 120 and OUT 124 lines may be common for an entire column of pixels in the pixel array 12. The RST line 118 and SEL line 122 are connected to the row decoder 20 and are part of the control lines 22.
  • [0042]
    FIG. 3 shows an embodiment of a light reader circuit 16. The light reader 16 may include a plurality of double sampling capacitor circuits 150 each connected to an OUT line 124 of the pixel array 12. Each double sampling circuit 150 may include a first capacitor 152 and a second capacitor 154. The first capacitor 152 is coupled to the OUT line 124 and ground GND1 156 by switches 158 and 160, respectively. The second capacitor 154 is coupled to the OUT line 124 and ground GND1 by switches 162 and 164, respectively. Switches 158 and 160 are controlled by a control line SAM1 166. Switches 162 and 164 are controlled by a control line SAM2 168. The capacitors 152 and 154 can be connected together to perform a voltage subtraction by closing switch 170. The switch 170 is controlled by a control line SUB 172.
  • [0043]
    The double sampling circuits 150 are connected to an operational amplifier 180 by a plurality of first switches 182 and a plurality of second switches 184. The amplifier 180 has a negative terminal (−) coupled to the first capacitors 152 by the first switches 182 and a positive terminal (+) coupled to the second capacitors 154 by the second switches 184. The operational amplifier 180 has a positive output (+) connected to an output line OP 188 and a negative output (−) connected to an output line OM 186. The output lines 186 and 188 are connected to the ADC 24 (see FIG. 1).
  • [0044]
    The operational amplifier 180 provides an amplified signal that is the difference between the voltage stored in the first capacitor 152 and the voltage stored in the second capacitor 154 of a sampling circuit 150 connected to the amplifier 180. The gain of the amplifier 180 can be varied by adjusting the variable capacitors 190. The variable capacitors 190 may be discharged by closing a pair of switches 192. The switches 192 may be connected to a corresponding control line (not shown). Although a single amplifier is shown and described, it is to be understood that more than one amplifier can be used in the light reader circuit 16.
  • [0045]
    FIGS. 4 and 5 show an operation of the image sensor 10 in a first mode also referred to as a low noise mode. In process block 300 a reference signal is written into each pixel 14 of the pixel array and then a first reference output signal is stored in the light reader 16. Referring to FIGS. 2 and 5, this can be accomplished by switching the RST 118 and IN 120 lines from a low voltage to a high voltage to turn on transistor 112. The RST line 118 is driven high for an entire row. IN line 120 is driven high for an entire column. In the preferred embodiment, RST line 118 is first driven high while the IN line 120 is initially low.
  • [0046]
    The RST line 118 may be connected to a tri-state buffer (not shown) that is switched to a tri-state when the IN line 120 is switched to a high state. This allows the gate voltage to float to a value that is higher than the voltage on the IN line 120. This causes the transistor 112 to enter the triode region. In the triode region the voltage across the photodiode 100 is approximately the same as the voltage on the IN line 120. Generating a higher gate voltage allows the photodetector to be reset at a level close to Vdd. CMOS sensors of the prior art reset the photodetector to a level of Vdd-Vgs, where Vgs can be up to 1 V.
  • [0047]
    The SEL line 122 is also switched to a high voltage level which turns on transistor 114. The voltage of the photodiode 100 is provided to the OUT line 124 through level shifter transistor 116 and select transistor 114. The SAM1 control line 166 of the light reader 16 (see FIG. 3) is selected so that the voltage on the OUT line 124 is stored in the first capacitor 152.
  • [0048]
    Referring to FIG. 4, in process block 302 the pixels of the pixel array are then reset and reset output signals are then stored in the light reader 16. Referring to FIGS. 2 and 5 this can be accomplished by driving the RST line 118 low to turn off the transistor 112 and reset the pixel 14. Turning off the transistor 112 will create reset noise, charge injection and clock feedthrough voltage that resides across the photodiode 100. As shown in FIG. 6 the noise reduces the voltage at the photodetector 100 when the transistor 112 is reset.
  • [0049]
    The SAM2 line 168 is driven high, the SEL line 122 is driven low and then high again, so that a level shifted voltage of the photodiode 100 is stored as a reset output signal in the second capacitor 154 of the light reader circuit 16. Process blocks 300 and 302 are repeated for each pixel 14 in the array 12.
  • [0050]
    Referring to FIG. 4, in process block 304 the reset output signals are then subtracted from the first reference output signals to create noise output signals that are then converted to digital bit strings by ADC 24. The digital output data can be stored within the line buffers 28 and eventually transferred and stored within the external memory 38. The noise signals may be referred to as a first image. Referring to FIG. 3, the subtraction process can be accomplished by closing switches 182, 184 and 170 of the light reader circuit 16 to subtract the voltage across the second capacitor 154 from the voltage across the first capacitor 152.
  • [0051]
    Referring to FIG. 4, in block 306 light response output signals are sampled from the pixels 14 of the pixel array 12 and stored in the light reader circuit 16. The light response output signals correspond to the optical image that is being detected by the image sensor 10. Referring to FIGS. 2, 3 and 5 this can be accomplished by having the IN 120, SEL 122 and SAM2 lines 168 in a high state and RST 118 in a low state. The second capacitor 154 of the light reader circuit 16 stores a level shifted voltage of the photodiode 100 as the light response output signal.
  • [0052]
    Referring to FIG. 4, in block 308 a second reference output signal is then generated in the pixels 14 and stored in the light reader circuit 16. Referring to FIGS. 2, 3 and 5, this can be accomplished similar to generating and storing the first reference output signal. The RST line 118 is first driven high and then into a tri-state. The IN line 120 is then driven high to cause the transistor 112 to enter the triode region so that the voltage across the photodiode 100 is the voltage on IN line 120. The SEL 122 and SAM1 166 lines are then driven high to store the second reference output voltage in the first capacitor 152 of the light reader circuit 16. Process blocks 306 and 308 are repeated for each pixel 14 in the array 12.
  • [0053]
    Referring to FIG. 4, in block 310 the light response output signal is subtracted from the second reference output signal to create a normalized light response output signal. The normalized light response output signal is converted into a digital bit string to create normalized light output data that is transferred to the processor 34. The normalized light response output signals may be referred to as a second image. Referring to FIGS. 2, 3 and 5 the subtraction process can be accomplished by closing switches 170, 182 and 184 of the light reader 16 to subtract the voltage across the first capacitor 152 from the voltage across the second capacitor 154. The difference is then amplified by amplifier 180 and converted into a digital bit string by ADC 24 as light response data.
  • [0054]
    Referring to FIG. 4, in block 312 the noise data is retrieved from memory 38. In block 314 the noise data (the first image) is subtracted from the normalized light output data (the second image) by the processor 34. The second reference output signal is the same or approximately the same as the first reference output signal, so the subtraction removes the noise due to reset noise, charge injection and clock feedthrough from the normalized light response signal. This improves the signal to noise ratio of the final image data.
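The cancellation can be checked with illustrative numbers. All voltage values below are hypothetical; the point is that the reset-noise term appears in both images and drops out of the difference.

```python
# Hypothetical voltages (volts) for one pixel; values are illustrative only.
v_ref1 = 3.30                    # first reference written to the photodiode
ktc_noise = 0.12                 # reset noise + charge injection + feedthrough
v_reset = v_ref1 - ktc_noise     # photodiode level after reset (block 302)
noise = v_ref1 - v_reset         # first image: blocks 300-304

signal_drop = 1.00               # photo-generated discharge during exposure
v_light = v_reset - signal_drop  # exposure starts from the noisy reset level
v_ref2 = 3.30                    # second reference, ~ equal to the first
normalized = v_ref2 - v_light    # second image: blocks 306-310

final = normalized - noise       # block 314: the reset-noise terms cancel
print(round(final, 6))           # recovers the light-induced drop alone
```

Because the exposure begins from the noisy reset level and the same noise is captured in the first image, the final value equals the light-induced drop regardless of the noise amplitude.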
  • [0055]
    The process described is performed in a sequence across the various rows of the pixels in the pixel array 12. As shown in FIG. 5, the n-th row in the pixel array may be generating noise signals while the (n−l)-th row generates normalized light response signals, where l is the exposure duration in multiples of a line period.
  • [0056]
    The various control signals RST, SEL, IN, SAM1, SAM2 and SUB can be generated in the circuit generally referred to as the phase sequence decoder 84. FIG. 7 shows an embodiment of logic to generate the IN, SEL, SAM1, SAM2 and RST signals in accordance with the timing diagram of FIG. 5. The logic may include a plurality of comparators 350 with one input connected to a counter 68 and another input connected to hardwired signals that contain a lower count value and an upper count value. The counter 68 sequentially generates a count. The comparators 350 compare the present count with the lower and upper count values. If the present count is between the lower and upper count values the comparators 350 output a logical 1.
  • [0057]
    The comparators 350 are connected to a plurality of AND gates 356 and OR gates 358. The OR gates 358 are connected to latches 360. The latches 360 provide the corresponding IN, SEL, SAM1, SAM2 and RST signals. The AND gates 356 are also connected to a mode line 364. To operate in accordance with the timing diagram shown in FIG. 5, the mode line 364 is set at a logic 1.
  • [0058]
    The latches 360 switch between a logic 0 and a logic 1 in accordance with the logic established by the AND gates 356, OR gates 358, comparators 350 and the present count from the counter. For example, the hardwired signals for the comparator coupled to the IN latch may contain a lower count value of 6 and an upper count value of 24. If the count from the counter is greater than or equal to 6 but less than 24, the comparator 350 will provide a logic 1 that will cause the IN latch 360 to output a logic 1. The lower and upper count values establish the sequence and duration of the pulses shown in FIG. 5. The mode line 364 can be switched to a logic 0, which causes the image sensor to function in a second mode.
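The comparator behavior can be modeled directly. The count values 6 and 24 come from the example above; the 28-count sweep is an arbitrary choice for illustration.

```python
def comparator(count, lower, upper):
    """Hardwired window comparator: logic 1 while lower <= count < upper."""
    return 1 if lower <= count < upper else 0

# The IN signal example from the text: lower count 6, upper count 24.
in_signal = [comparator(c, 6, 24) for c in range(28)]

# The signal rises at count 6 and falls at count 24.
assert in_signal[5] == 0 and in_signal[6] == 1
assert in_signal[23] == 1 and in_signal[24] == 0
```

Each control signal (IN, SEL, SAM1, SAM2, RST) would use its own lower/upper pair, which is what fixes the pulse sequence and durations of FIG. 5.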
  • [0059]
    The sensor 10 may have a plurality of reset RST(n) drivers 370, each driver 370 being connected to a row of pixels. FIGS. 8 and 9 show an exemplary driver circuit 370 and the operation of the circuit 370. Each driver 370 may have a pair of NOR gates 372 that are connected to the RST and SAM1 latches shown in FIG. 7. The NOR gates control the state of a tri-state buffer 374. The tri-state buffer 374 is connected to the reset transistors in a row of pixels. The input of the tri-state buffer is connected to an AND gate 376 that is connected to the RST latch and a row enable ROWEN(n) line.
  • [0060]
    FIGS. 10 and 11 show operation of the image sensor in a second mode, also referred to as an extended dynamic range mode. In this mode the image provides sufficient optical energy so that the SNR is adequate even without the noise cancellation technique described in FIGS. 4 and 5, although that technique can also be utilized while the image sensor 10 is in the extended dynamic range mode. The extended dynamic range mode has both a short exposure period and a long exposure period. Referring to FIG. 10, in block 400 each pixel 14 is reset to start a short exposure period. The mode of the image sensor can be set by the processor 34 through register 44 to determine whether the sensor should be in the low noise mode or the extended dynamic range mode.
  • [0061]
    In block 402 a short exposure output signal is generated in the selected pixel and stored in the second capacitor 154 of the light reader circuit 16.
  • [0062]
    In block 404 the selected pixel is then reset. The level shifted reset voltage of the photodiode 100 is stored in the first capacitor 152 of the light reader circuit 16 as a reset output signal. The short exposure output signal is subtracted from the reset output signal in the light reader circuit 16. The difference between the short exposure signal and the reset signal is converted into a binary bit string by ADC 24 and stored in the external memory 38. The short exposure data corresponds to the first image pixel data. Then each pixel is again reset to start a long exposure period.
  • [0063]
    In block 406 the light reader circuit 16 stores a long exposure output signal from the pixel in the second capacitor 154. In block 408 the pixel is reset and the light reader circuit 16 stores the reset output signal in the first capacitor 152. The long exposure output signal is subtracted from the reset output signal, amplified and converted into a binary bit string by ADC 24 as long exposure data.
  • [0064]
    Referring to FIG. 10, in block 410 the short exposure data is retrieved from memory 38. In block 412 the short exposure data is combined with the long exposure data by the processor 34. The data may be combined in a number of different manners. The external processor 34 may first analyze the image with the long exposure data. The photodiodes may be saturated if the image is too bright, which would normally result in a "washed out" image. The processor 34 can process the long exposure data to determine whether the image is washed out; if so, the processor 34 can then use the short exposure image data. The processor 34 can also use both the long and short exposure data to compensate for saturated portions of the detected image.
  • [0065]
    By way of example, the image may be initially set to all zeros. The processor 34 then analyzes the long exposure data. If the long exposure data does not exceed a threshold, then the N least significant bits (LSB) of the image are replaced with all N bits of the long exposure data. If the long exposure data does exceed the threshold, then the N most significant bits (MSB) of the image are replaced by all N bits of the short exposure data. The image data is N+M bits per pixel. This technique increases the dynamic range by M bits, where M is the exponent in the exposure duration ratio of the long and short exposures, defined by the equation l = 2^M. The replaced image may undergo a logarithmic mapping to a final picture of N bits in accordance with the mapping equation Y = 2^N · log2(X)/(N+M).
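The combination and mapping steps can be sketched as follows. The threshold, N = 8 and M = 2 are illustrative assumptions, not values fixed by the description.

```python
import math

def combine(long_val, short_val, n=8, m=2, threshold=255):
    """Build the N+M-bit combined pixel.

    Below the saturation threshold, the long exposure sample fills the
    N LSBs.  At or above it, the short exposure sample (taken over
    1/2**m of the long duration) is shifted into the upper bits.
    """
    if long_val < threshold:
        return long_val
    return short_val << m

def log_map(x, n=8, m=2):
    """Logarithmic mapping Y = 2^N * log2(X) / (N + M), clipped to N bits."""
    if x <= 0:
        return 0
    y = (1 << n) * math.log2(x) / (n + m)
    return min(int(y), (1 << n) - 1)

print(combine(100, 30))   # unsaturated pixel: long data used directly
print(combine(255, 200))  # saturated pixel: short data scaled up by 2^M
print(log_map(800))       # combined value mapped into the N-bit range
```

With N = 8 and M = 2 the combined image spans 10 bits, and the mapping compresses that range back into an 8-bit output picture.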
  • [0066]
    FIG. 11 shows the timing of data generation and retrieval for the long and short exposure data. The reading of output signals from the pixel array 12 overlaps with the retrieval of signals from memory 38. FIG. 11 shows timing of data generation and retrieval wherein the n-th row of pixels starts a short exposure, the (n-k)-th row ends the short exposure period and starts the long exposure period, and the (n-k-l)-th row of pixels ends the long exposure period, where k is the short exposure duration in multiples of the line period and l is the long exposure duration in multiples of the line period.
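    The rolling schedule above can be summarized with a small helper function (illustrative only; the function name and dictionary keys are not from the patent):

```python
def row_events(n, k, l):
    # During line period n: row n is reset to start its short exposure,
    # row n-k is read and reset to start its long exposure, and
    # row n-k-l is read to end its long exposure.
    return {
        "start_short": n,
        "end_short_start_long": n - k,
        "end_long": n - k - l,
    }
```

With k = 2 and l = 4, for example, the row finishing its long exposure in a given line period trails the row starting its short exposure by six line periods.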
  • [0067]
    The processor 34 begins to retrieve short exposure data for the pixels in row (n-k) at the same time as the (n-k-l)-th row in the pixel array is completing the long exposure period. At the beginning of a line period, the light reader circuit 16 retrieves the short exposure output signals from the (n-k)-th row of the pixel array 12, as shown by the enablement of signals SAM1, SAM2, SEL(n-k) and RST(n-k). The light reader circuit 16 then retrieves the long exposure data of the (n-k-l)-th row.
  • [0068]
    The dual modes of the image sensor 10 can compensate for varying brightness in the image. When the image brightness is low the output signals from the pixels are relatively low. This would normally reduce the SNR of the resultant data provided by the sensor, assuming the average noise is relatively constant. The noise compensation scheme shown in FIGS. 4 and 5 improves the SNR of the output data so that the image sensor provides a quality picture even when the subject image is relatively dark. Conversely, when the subject image is too bright the extended dynamic range mode depicted in FIGS. 10 and 11 compensates for such brightness to provide a quality picture. Although a process having a short exposure followed by a long exposure is shown and described, it is to be understood that the short exposure may follow the long exposure.
  • [0069]
    FIG. 12 shows an embodiment of a row driver 76 and a decoder 78 of the row decoder 20. The decoder 20 may contain an address decoder 500 and a latch 502. The input of the latch 502 is connected to input lines CLR 504, D0, D1 506 from the phase decoder circuit 84 (see FIG. 1) and the output line LE 508 of the address decoder 500. Although a phase decoder circuit 84 is shown and described, it is to be understood that any state value generator may be utilized. The input of the driver 76 is connected to output lines Q0, Q1 510 of the latch 502 and input lines RST 512 and SEL 514 from the phase sequence decoder 84. The latches 502 for each row of pixels are all connected to the phase decoder circuit 84 by the same common control lines 504 and 506. The common control lines 504 and 506 minimize the lines, transistors and space required by the row decoder while providing a means for loading the state values with a time division multiplexing process.
  • [0070]
    The address decoder 500 is coupled to a multiplexor 520 by an address bus 522. The address decoder 500 is also connected to control lines PRE# 524 and EVA# 526 from the phase sequence decoder 84. The multiplexor 520 may have three input address busses 528, 530 and 532. The address busses 528, 530 and 532 are connected to a first counter 534, a second counter 536 and a third counter 538, respectively. Although counters 534, 536 and 538 are shown and described, it is to be understood that any address generator may be implemented.
  • [0071]
    The output of the multiplexor 520 is switched between the busses 528, 530 and 532 by a control line PA 540 from the phase sequence decoder 84. There is a corresponding address decoder 500 and latch 502 for each row of the pixel array 12. The multiplexor 520 provides a time division multiplexing means for selecting a row of the pixel array with a reduced number of lines and transistors which minimizes the size of the image sensor.
  • [0072]
    FIGS. 13 and 14 show an operation of the row decoder 20 and transfer of pixel data. As shown in FIG. 14, the integration time and transfer of data are dependent on the control signal INTG from the processor 34. Making the integration time and data transfer dependent on the control signal INTG allows the processor 34 to control and vary these parameters.
  • [0073]
    The INTG control signal contains a plurality of pulses each with a falling edge and a rising edge. Referring to FIGS. 1, 12, 13 and 14, a falling edge is detected by the wide pulse detector 88, which generates an output on the LEAD control line 92. The LEAD control signal starts the first counter 534. The first counter 534 outputs an address that is provided to the multiplexor 520.
  • [0074]
    The PA control signal switches some of the multiplexors 520 to provide the address from the first counter 534 to the corresponding address decoders 500. If the address from the first counter 534 matches a stored address within the address decoder 500, the decoder 500 will enable the latch 502 through line LE 508. The latch 502 loads state values Q0 and Q1 into the row driver 76. The output state values correspond to state values D0 and D1 that were previously loaded into the latch 502 from the phase sequence decoder 84. When in low noise mode the state values allow the RST and SEL signals to pass through the driver 76 to the selected row to generate and retrieve the reference and reset signals of the first image.
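    The match-and-latch behavior of a single row can be modeled as follows. This is a sketch under assumed names (the patent's circuit compares a hard-wired row address against the multiplexed bus address; the class and method names here are illustrative):

```python
class RowLatch:
    # Models one row's address decoder 500 and latch 502: the shared
    # D0/D1 state values are latched only when the bus address matches
    # this row's address (the LE condition).
    def __init__(self, row_address):
        self.row_address = row_address
        self.q = (0, 0)  # latched state values Q0, Q1

    def clock(self, bus_address, d0, d1):
        if bus_address == self.row_address:  # LE asserted on a match
            self.q = (d0, d1)
        return self.q
```

Because every row shares the same D0/D1 lines but only the addressed row latches them, state values reach all rows over a handful of common wires — the time-division multiplexing saving described above.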
  • [0075]
    The first counter 534 continues to output new address values which in turn sequentially select rows of the pixel array 12 to allow for the generation and retrieval of reference and noise signals for each row. The falling edge of the INTG control signal also enables the transfer of the first image to the processor 34 from the data interface 32. The process continues until all of the first image data is transferred to the processor 34, and stored in memory 38.
  • [0076]
    A rising edge of a pulse is detected by the wide pulse detector 88, which generates an output on the LAG control line 94. The LAG signal initiates the second counter 536. The second counter 536 generates addresses that are provided to the multiplexors 520 of each row. The multiplexors 520 mux the addresses to the decoders 500. If the addresses match, the latch 502 is enabled to load state values into the row drivers 76. When in the low noise mode the state values allow for the generation and retrieval of light response and reference signals for the second image. The rising edge also enables the data interface 32 to transfer the second image data to the processor 34. As shown in FIG. 14, the transfer of first and second image data may overlap. The interface 32 can transfer the overlapping data to the processor 34 in an interleaving manner.
  • [0077]
    FIG. 15 shows the transfer of data when the image sensor 10 is in the extended dynamic range mode. In this mode the INTG control signal includes a narrow pulse between wide pulses. Short exposure is initiated by the falling edge of a wide pulse. The narrow pulse is detected by the narrow pulse detector 90, which initiates the third counter 538. The third counter 538 provides addresses which are decoded by matching decoders 500 to enable corresponding latches 502. The enabled latches 502 load state values into the row drivers 76 that allow for the generation and retrieval of long exposure and reference signals of the second image. The narrow pulse also enables the data interface 32 to transfer the short exposure and reference signals of the first image to the processor 34.
  • [0078]
    The processor 34 can change the exposure time by varying the width of the pulses in the control signal. The variation in pulse width is an integer multiple of the line period so that the change in pulse width is in synchronization with the signals generated by the phase sequence decoder 84. When in the extended dynamic range mode the exposure time can also be varied by changing the location of the narrow pulse.
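    Since the pulse width must change in whole line periods, a requested exposure is effectively quantized to the line-period grid. A sketch (the function name and nearest-integer rounding are assumptions):

```python
def quantize_exposure(requested_s, line_period_s):
    # Round the requested exposure to the nearest integer number of
    # line periods, keeping the pulse edges synchronous with row timing.
    lines = max(1, round(requested_s / line_period_s))
    return lines, lines * line_period_s
```

For a 1 ms line period, a 10.1 ms request becomes a 10-line (10 ms) pulse width.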
  • [0079]
    As shown in FIG. 16, the image sensor may generate data within a window 550 of the pixel array 12. The window 550 is an area typically offset from the first row of the pixel array 12. The window information may be provided to the processor 34 to auto-focus the camera. In auto-focus mode the window offset may vary to capture different parts of the image.
  • [0080]
    FIG. 17 shows an INTG control signal with an embedded narrow pulse that is used to determine the offset location of the window 550. When the register 44 sets the image sensor in a window mode, the narrow pulse detector 90 detects the embedded narrow pulse and provides a START control signal to the counter/latch 82 on the NP control line 96. The wide pulse detector 88 detects the rising edge of the next pulse and provides a STOP control signal to the counter/latch 82 on the LAG control line 94. The counter/latch 82 uses the START and STOP control signals to determine the offset for the window. An offset value is loaded into the counters 534, 536 and 538 to provide an initial count value. The processor 34 can control the window offset by varying the location of the embedded narrow pulse within the control signal.
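    The counter/latch behavior can be sketched as a start/stop line counter. This is an illustration only; the class and method names are not from the patent, and the exact counting direction is an assumption:

```python
class OffsetCounter:
    # Models counter/latch 82: START (the embedded narrow pulse) begins
    # counting line periods, and STOP (the next wide pulse's rising edge)
    # latches the count as the window row offset.
    def __init__(self):
        self.count = 0
        self.running = False
        self.offset = None

    def start(self):
        self.count, self.running = 0, True

    def line_tick(self):
        if self.running:
            self.count += 1

    def stop(self):
        self.running = False
        self.offset = self.count  # latched window offset in line periods
```

Moving the narrow pulse earlier or later within the control signal changes the latched count, and hence the row offset of the window 550.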
  • [0081]
    It is the intention of the inventor that only claims which contain the term “means” shall be construed under 35 U.S.C. 112, sixth paragraph.
  • [0082]
    While certain exemplary embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not restrictive on the broad invention, and that this invention not be limited to the specific constructions and arrangements shown and described, since various other modifications may occur to those ordinarily skilled in the art.
Classifications
U.S. Classification348/308
International ClassificationH04N5/353
Cooperative ClassificationH04N5/3575, H04N5/35581, H04N5/3535, H04N5/3454, H04N5/343
Legal Events
DateCodeEventDescription
Mar 6, 2003ASAssignment
Owner name: CANDELA MICROSYSTEMS, INC., CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TAY, HIOK NAM;REEL/FRAME:013866/0418
Effective date: 20030306