WO2014122506A1 - Image processing of sub-images of a plenoptic image - Google Patents

Image processing of sub-images of a plenoptic image

Info

Publication number
WO2014122506A1
Authority
WO
WIPO (PCT)
Prior art keywords
scene
image
data
plenoptic
particular point
Prior art date
Application number
PCT/IB2013/052353
Other languages
French (fr)
Inventor
Mithun Uliyar
Gururaj PUTRAYA
Basavaraja SHANTHAPPA VANDROTTI
Krishna Govindarao
Original Assignee
Nokia Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Corporation
Publication of WO2014122506A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/957Light-field or plenoptic cameras or camera modules


Abstract

A method, an apparatus and a computer program are provided. The method comprises: identifying, in each of a plurality of sub-images of a plenoptic image, a pixel location corresponding with a particular point in a scene; and combining data from the identified pixel locations to form a single pixel for the particular point in the scene.

Description

TITLE
IMAGE PROCESSING OF SUB-IMAGES OF A PLENOPTIC IMAGE
TECHNOLOGICAL FIELD
Embodiments of the present invention relate to image processing. In particular, they relate to processing plenoptic images.
BACKGROUND
A plenoptic camera, alternatively known as a light-field camera, is a camera that captures four dimensional light field information/radiance of a scene.
BRIEF SUMMARY
According to various, but not necessarily all, embodiments of the invention there is provided a method, comprising: identifying, in each of a plurality of sub-images of a plenoptic image, a pixel location corresponding with a particular point in a scene; and combining data from the identified pixel locations to form a single pixel for the particular point in the scene.
According to various, but not necessarily all, embodiments of the invention there is provided an apparatus comprising: processing circuitry; and at least one memory storing computer program code configured, working with the processing circuitry, to cause at least the following to be performed: identifying, in each of a plurality of sub-images of a plenoptic image, a pixel location corresponding with a particular point in a scene; and combining data from the identified pixel locations to form a single pixel for the particular point in the scene.

According to various, but not necessarily all, embodiments of the invention there is provided an apparatus comprising means for identifying, in each of a plurality of sub-images of a plenoptic image, a pixel location corresponding with a particular point in a scene; and means for combining data from the identified pixel locations to form a single pixel for the particular point in the scene.

According to various, but not necessarily all, embodiments of the invention there is provided computer program code that, when performed by at least one processor, causes at least the following to be performed: identifying, in each of a plurality of sub-images of a plenoptic image, a pixel location corresponding with a particular point in a scene; and combining data from the identified pixel locations to form a single pixel for the particular point in the scene.
BRIEF DESCRIPTION
For a better understanding of various examples that are useful for understanding the detailed description, reference will now be made by way of example only to the accompanying drawings in which:
Fig. 1 illustrates a first schematic of a focused plenoptic camera;
Fig. 2 illustrates an apparatus;
Fig. 3 illustrates a further apparatus;
Fig. 4 illustrates an array of micro-lenses;
Fig. 5 illustrates a plenoptic image;
Fig. 6 illustrates a second schematic of a focused plenoptic camera;
Fig. 7 illustrates a flow chart of a first method;
Fig. 8 illustrates a flow chart of a second method; and
Fig. 9 illustrates a flow chart of a third method.
DETAILED DESCRIPTION
Embodiments of the invention relate to processing a plenoptic image to obtain an output image with a high signal to noise ratio (SNR). A higher signal to noise ratio may be obtained by binning pixels in the plenoptic image that correspond with a particular point in a scene. In this regard, the figures illustrate an apparatus 10 comprising: processing circuitry 12; and at least one memory 14 storing computer program code 18 configured, working with the processing circuitry 12, to cause at least the following to be performed: identifying, in each of a plurality of sub-images 32a-32d of a plenoptic image 31, a pixel location corresponding with a particular point in a scene; and combining data from the identified pixel locations to form a single pixel for the particular point in the scene.
Fig. 1 illustrates a schematic of a focused plenoptic camera (otherwise known as Plenoptic Camera 2.0). The focused plenoptic camera comprises a main lens 22 and an array 24 of micro-lenses. The main lens 22 images a real-life scene. The array 24 of micro-lenses is focused on the image plane 23 of the main lens 22. Each micro- lens conveys a portion of the image produced by the main lens 22 onto an image sensor 26, effectively acting as a relaying system. Each micro-lens satisfies the lens equation:
1/b + 1/v = 1/f    (1)
where v is the distance from the micro-lens to the main lens image plane 23, b is the distance from the micro-lens to the image sensor 26 and f is the focal length of the micro-lens.
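As a quick illustrative check of equation (1), with assumed values that are not taken from the original filing: if a micro-lens has focal length f = 0.5 mm and the main-lens image plane lies v = 2.0 mm from that micro-lens, then

$$\frac{1}{b} = \frac{1}{f} - \frac{1}{v} = \frac{1}{0.5\,\mathrm{mm}} - \frac{1}{2.0\,\mathrm{mm}} = 1.5\,\mathrm{mm}^{-1}, \qquad b \approx 0.67\,\mathrm{mm},$$

so in this example the image sensor 26 would sit roughly 0.67 mm behind the micro-lens.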
The focused plenoptic camera illustrated in Fig. 1 is a Keplerian focused plenoptic camera in which the image plane 23 of the main lens 22 is positioned between the main lens 22 and the micro-lens array 24. In a different type of focused plenoptic camera, known as a Galilean focused plenoptic camera, the micro-lens array 24 is placed between the main lens 22 and the image plane 23 of the main-lens 22. Each micro-lens forms a sub-image on the image sensor 26. The sub-images collectively form a plenoptic image (otherwise known as a "light-field image"). The number of times a particular point in a scene appears in the plenoptic image will depend upon its proximity to the plenoptic camera. In a Keplerian focused plenoptic camera, a scene point that is closer to the plenoptic camera will be imaged by fewer of the micro-lenses in the array 24 than a scene point that is further away, and will thus appear less frequently in the plenoptic image. In a Galilean focused plenoptic camera, a scene point that is closer to the plenoptic camera will be imaged by more of the micro-lenses in the array 24 than a scene point that is further away, and will thus appear more frequently in the plenoptic image.
Since each micro-lens has a different position to the others, a disparity exists when comparing the location of a particular scene point in a sub-image formed by one micro-lens with the location of the same scene point in another sub-image formed by another micro-lens. That is, there will be an offset in the location of a particular scene point in one sub-image relative to the location of the same scene point in another sub-image. Furthermore, since each micro-lens conveys only part of the image formed by the main lens 22 onto the image sensor, individual points in the scene will be imaged by some micro-lenses and not others. This means that each point in the scene will be present in only a subset of the sub-images.
Fig. 2 illustrates a first apparatus 10 comprising processing circuitry 12 and a memory 14. The apparatus 10 may, for example, be a chip or a chipset.
The processing circuitry 12 is configured to read from and write to the memory 14. The processing circuitry 12 may comprise an output interface via which data and/or commands are output by the processing circuitry 12 and an input interface via which data and/or commands are input to the processing circuitry 12. The processor 12 may be or comprise one or more processors. The processing circuitry 12 may include an analog to digital converter.
The memory 14 stores a computer program 17 comprising computer program instructions/code 18 that control the operation of the apparatus 10 when loaded into the processing circuitry 12. The computer program code 18 provides the logic and routines that enable the apparatus 10 to perform the methods illustrated in Figs 7, 8 and 9. The processing circuitry 12, by reading the memory 14, is able to load and execute the computer program 17.
Although the memory 14 is illustrated as a single component it may be implemented as one or more separate components some or all of which may be integrated/removable and/or may provide permanent/semi-permanent/dynamic/cached storage.

The computer program 17 may arrive at the apparatus 10 via any suitable delivery mechanism 30. The delivery mechanism 30 may be, for example, a non-transitory computer-readable storage medium such as a compact disc read-only memory (CD-ROM) or digital versatile disc (DVD). The delivery mechanism 30 may be a signal configured to reliably transfer the computer program 17. The apparatus 10 may cause the propagation or transmission of the computer program 17 as a computer data signal.
Fig. 3 illustrates a second apparatus 20. The second apparatus 20 is a plenoptic camera. The second apparatus 20 includes a housing 21 , the first apparatus 10 illustrated in Fig. 2 and the main lens 22, micro-lens array 24 and image sensor 26 of the plenoptic camera illustrated in Fig. 1 . The housing 21 houses the processing circuitry 12, the memory 14, the main lens 22, the micro-lens array 24 and the image sensor 26. In some embodiments, the apparatus 20 may also comprise a display.
The memory 14 is illustrated in Fig. 3 as storing a plenoptic image 31 , a further plenoptic image 33 and a depth map 35. These items will be described in further detail later. The image sensor 26 may be any type of image sensor, including a charge-coupled device (CCD) sensor and a complementary metal-oxide-semiconductor (CMOS) sensor.
The array 24 of micro-lenses may include any number of micro-lenses.
The elements 12, 14, 22, 24, 26 are operationally coupled and any number or combination of intervening elements can exist between them (including no intervening elements).

An aperture 27 is present in the housing 21 that enables light to enter the housing 21. The arrow labeled with the reference numeral 40 in Fig. 3 illustrates light entering the housing 21. The arrow labeled with the reference numeral 41 illustrates light being conveyed from the main lens 22 to the micro-lens array 24. The arrow labeled with the reference numeral 42 illustrates light being conveyed from the micro-lens array 24 to the image sensor 26, which obtains and stores electronic image data. The processing circuitry 12 is configured to read image data from the image sensor. An analog to digital converter of the processing circuitry 12 may convert analog voltage/charge data stored by the image sensor 26 (and forming a plenoptic image) into digital data. The processing circuitry 12 may store one or more digital plenoptic images 31, 33 in the memory 14.
Fig. 4 illustrates an example of an array 24 of micro-lenses. In some implementations, the micro-lenses may have a different shape from that illustrated in Fig. 4. For instance, each micro-lens may be rectangular or hexagonal in shape. The array 24 shown in Fig. 4 for illustrative purposes has a hundred micro-lenses. In practice, the array 24 may have many more micro-lenses, such as hundreds or thousands of micro-lenses. The micro-lens labeled with the reference numeral 25a in Fig. 4 is considered to have four neighboring micro-lenses 25b-25e. There are two vertically neighboring micro-lenses 25b, 25d and two horizontally neighboring micro-lenses 25c, 25e.
Fig. 5 illustrates an example of a plenoptic image 31. The plenoptic image 31 comprises a plurality of sub-images. A sub-image is generated by each micro-lens in the array. The sub-image labeled with the reference numeral 32a in Fig. 5 is formed by the micro-lens labeled with the reference numeral 25a. The sub-images labeled with the reference numerals 32b, 32c, 32d and 32e are formed by the micro-lenses labeled with the reference numerals 25b, 25c, 25d and 25e respectively.
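To make the relationship between the raw sensor data and the sub-images concrete, the following sketch slices a plenoptic image into one tile per micro-lens. It assumes square, axis-aligned micro-lenses on a regular grid with a known pitch in pixels; the function name, variable names and the pitch value are illustrative assumptions and are not taken from the original filing.

```python
import numpy as np

def extract_sub_images(raw, pitch):
    """Split a raw plenoptic image into a grid of per-micro-lens sub-images.

    raw   : 2-D sensor image (rows x cols).
    pitch : micro-lens pitch in pixels (assumed square and axis-aligned).
    Returns an array of shape (n_rows, n_cols, pitch, pitch).
    """
    rows, cols = raw.shape
    n_rows, n_cols = rows // pitch, cols // pitch
    cropped = raw[:n_rows * pitch, :n_cols * pitch]  # drop partial tiles at the edges
    return cropped.reshape(n_rows, pitch, n_cols, pitch).swapaxes(1, 2)

# Example: a 1000 x 1000 sensor with a 100-pixel pitch gives a 10 x 10 grid of
# 100 x 100 sub-images, matching the hundred micro-lenses of Fig. 4.
subs = extract_sub_images(np.zeros((1000, 1000)), pitch=100)
assert subs.shape == (10, 10, 100, 100)
```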
Fig. 6 is a schematic illustrating the main lens 22, the micro-lens array 24 and the image sensor 26 in a Galilean arrangement. In other embodiments, the main lens 22, the micro-lens array 24 and the image sensor 26 might instead be in a Keplerian arrangement. Point C is a point on a (virtual) image plane 23 at which a real-life scene point is imaged by the main lens 22. The point labeled P is a point on the image sensor 26 at which the same point in the real-life scene is imaged by the micro-lens 25a in a sub-image 32a. The point P' is a point at which the same point in a real-life scene is imaged by a neighboring micro-lens 25c in a different sub-image 32c. By assuming that the micro-lenses are equivalent to pin-hole cameras and the triangle formed by points A, B and C in Fig. 6 is a similar triangle to that formed by points P, P' and C in Fig. 6, we can show that:
where v is the distance from the micro-lens array 24 to a virtual image corresponding with the real-life scene point imaged at points P and P' and formed by the main lens 22; B is the distance between the micro-lens array 24 and the image sensor 26; and D is the distance between the micro-lenses 25a, 25c.
It can be shown, using equation (2), that:
[Equation (3), reproduced as an image in the original publication, expresses the location P' on the image sensor 26 in terms of the location P and the distances B, D and v.]
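Equations (2) and (3) appear only as images in the source document (equation (2) would follow "we can show that:" above). Under the stated pin-hole assumption, the similar triangles give a relation of the following form; this is a hedged reconstruction rather than the exact expressions of the original filing, with P and P' treated as coordinates along the line joining the two micro-lens centres. For the Galilean arrangement of Fig. 6 the virtual image plane lies beyond the sensor, giving

$$\frac{\lvert P - P' \rvert}{D} = \frac{v - B}{v} \qquad \text{(cf. equation (2))}$$

$$P' \approx P \pm D\left(1 - \frac{B}{v}\right) \qquad \text{(cf. equation (3))}$$

while in the Keplerian arrangement the corresponding ratio becomes $(v + B)/v$; the sign of the offset and the exact form used in the original equations depend on the geometry and sign conventions of Fig. 6.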
We can use equation (3) to determine the location on the image sensor 26 at which a particular real-life scene point, imaged by one micro-lens at point P, is reproduced by another micro-lens.

A method according to embodiments of the invention will now be described in relation to Fig. 7. At block 701 in Fig. 7, the processing circuitry 12 identifies, in each of a plurality of sub-images of a plenoptic image 31, a pixel location corresponding with a particular real-life point in a scene. Since the real-life scene point may only have been imaged in a subset of the sub-images, the pixel locations may only be identified in a subset of the sub-images.
A depth may be determined for the particular point in the scene and used to identify pixel locations, in multiple sub-images, which correspond with the particular point in the scene. For example, in this regard, equation (3) may be used to determine the corresponding pixel locations that are present in a subset of the sub-images in the plenoptic image 31 . At block 702 in Fig. 7, the processing circuitry 12 combines the data from the identified pixel locations to form a single pixel for the particular point in the scene (for example, by binning the data from the identified pixel locations). The data that is combined may be analog or digital data.
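A minimal sketch of blocks 701 and 702 is given below. It assumes the plenoptic image has already been split into a grid of sub-images, that the per-micro-lens shift (disparity) of the scene point is known in pixels (derived from its depth, for example via a relation such as equation (3)), and that binning is modelled as a simple average; the function and parameter names are illustrative assumptions, not details from the original filing.

```python
import numpy as np

def bin_scene_point(subs, grid_rc, px_yx, disparity, max_offset=2):
    """Combine the samples of one scene point from neighbouring sub-images.

    subs      : array (n_rows, n_cols, h, w) of sub-images.
    grid_rc   : (row, col) of the reference sub-image.
    px_yx     : (y, x) pixel location of the scene point in that sub-image.
    disparity : assumed shift of the point, in pixels, per micro-lens step.
    Returns the binned (averaged) value over every sub-image in which the
    predicted location falls inside the sub-image, or None if there is none.
    """
    n_rows, n_cols, h, w = subs.shape
    r0, c0 = grid_rc
    y0, x0 = px_yx
    samples = []
    for dr in range(-max_offset, max_offset + 1):
        for dc in range(-max_offset, max_offset + 1):
            r, c = r0 + dr, c0 + dc
            if not (0 <= r < n_rows and 0 <= c < n_cols):
                continue
            # Predicted location of the same scene point in the (r, c) sub-image.
            y = int(round(y0 + dr * disparity))
            x = int(round(x0 + dc * disparity))
            if 0 <= y < h and 0 <= x < w:
                samples.append(float(subs[r, c, y, x]))
    return np.mean(samples) if samples else None
```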
The processing circuitry 12 may repeat the process in blocks 701 and 702 for all of the points in the scene that are imaged by the main lens 22 and relayed to the image sensor 26 by the micro-lens array. Advantageously, this enables an output image to be produced that has a high signal to noise ratio, resulting in improved camera performance in low light.
In some implementations of the invention, the processing circuitry 12 may verify that the data from each of the identified pixel locations relate to the same point in the scene. This may be done by comparing the depths of the identified pixel locations with one another. Alternatively or additionally, it may be done by comparing the pixel values of the identified pixel locations with one another.
In some embodiments of the invention, an additional/further plenoptic image may be captured prior to the capture of the plenoptic image that is used for data combination/binning. In these embodiments, the earlier-captured plenoptic image may be used to determine a depth that is used to identify the pixel locations corresponding with a particular point in a scene. The identified pixel locations are then applied to the later-captured plenoptic image in order to combine/bin data and form a single pixel for the particular point in the scene.
Fig. 8 illustrates a more detailed description of some embodiments of the method illustrated in Fig. 7, in which the data that is combined at various pixel locations is digital data. At block 801, a plenoptic image 31 is captured in an analog format by the apparatus 20. At block 802 in Fig. 8, the plenoptic image 31 formed on the image sensor 26 is converted from the analog format to a digital format by the processing circuitry 12.
At block 803 in Fig. 8, the processing circuitry 12 analyses the (digital) plenoptic image 31 to produce a depth map 35 for each pixel in the plenoptic image 31. A depth map 35 for a particular portion of a scene may be produced by comparing a portion of one sub-image with a portion of another sub-image to identify a matching content portion (that is, a matching set of pixels). The processing circuitry 12 may use the offset/disparity in the position of that content portion from one sub-image relative to another to determine the depth of that content portion. This process may be repeated for each and every portion in an imaged scene to generate a depth map 35 for the whole of the plenoptic image 31. The depth map 35 may, for example, include a depth value for each pixel in the plenoptic image 31. Each depth value may be a virtual depth (that is, a distance from the micro-lens array 24 to the image formed by the main lens 22) or a real depth value (that is, a distance from the micro-lens array 24 to the real-life scene point).
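A simplified sketch of the matching step behind block 803 is shown below. It estimates, for a patch in one sub-image, the disparity to the best-matching patch in a horizontally neighbouring sub-image by exhaustive search; converting the disparity to a virtual or real depth via the camera geometry is left out. The patch size, search range and sum-of-absolute-differences cost are assumptions chosen for illustration.

```python
import numpy as np

def patch_disparity(sub_a, sub_b, y, x, patch=7, search=10):
    """Estimate the horizontal disparity of the patch centred at (y, x) in
    sub-image sub_a relative to its horizontal neighbour sub_b, using a
    sum-of-absolute-differences (SAD) search over +/- `search` pixels."""
    half = patch // 2
    ref = sub_a[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    best_d, best_cost = 0, np.inf
    for d in range(-search, search + 1):
        cand = sub_b[y - half:y + half + 1,
                     x + d - half:x + d + half + 1].astype(float)
        if cand.shape != ref.shape:      # candidate window falls outside sub_b
            continue
        cost = np.abs(ref - cand).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```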
At block 804 in Fig. 8, the processing circuitry 12 applies equation (3) to identify, for each point in a scene, pixel locations in different sub-images of the plenoptic image 31 where the scene point has been imaged. As explained above, each scene point may appear in multiple sub-images in the plenoptic image, and the number of sub-images in which a particular scene point appears will depend upon its proximity to the plenoptic camera when it was captured. Let us consider a situation where a particular point in a scene has been imaged at a pixel location P on the image sensor 26, in a sub-image formed by a first micro-lens. Let us assume that the particular point in the scene has been imaged in some of the sub-images of the plenoptic image 31, but not others. If the processing circuitry 12 applies equation (3) to determine where that scene point has been reproduced by a particular micro-lens and that micro-lens has, in fact, reproduced the scene point, equation (3) will yield an accurate result. That is, P' will be a location at which the particular point has been imaged. However, if the processing circuitry 12 applies equation (3) to determine where that scene point has been reproduced by a particular micro-lens and that micro-lens has not imaged the scene point, equation (3) will yield an incorrect result. That is, P' will be a location at which the particular point has not been imaged. In order to verify that equation (3) has yielded an accurate result, the processing circuitry 12 may compare the depth of one pixel location P with the depth of the other pixel location P' using the depth map 35. In the event that the depths of the pixel locations P, P' are the same or similar, the processing circuitry 12 determines that the pixel locations P, P' relate to the same scene point. Alternatively or additionally, the processing circuitry 12 may compare the value of the pixels at each pixel location P, P' with one another. In the event that they are the same or similar, the processing circuitry 12 determines that the pixel locations P, P' relate to the same scene point.

At block 805 in Fig. 8, the processing circuitry 12 combines the data from the pixel locations that were identified in block 804 as relating to the same scene point (for example, by binning the data from the identified pixel locations).
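The consistency check described for block 804 can be sketched as follows. The depth tolerance and intensity tolerance are illustrative thresholds, not values from the original filing, and, as the description notes, either test may also be used on its own.

```python
def same_scene_point(depth_map, image, p, p_prime, depth_tol=0.05, value_tol=10.0):
    """Check whether pixel locations P and P' plausibly image the same scene
    point, by comparing their depths and their pixel values.

    depth_map, image : 2-D arrays indexed as [y, x].
    p, p_prime       : (y, x) tuples for the locations P and P'.
    """
    d1, d2 = float(depth_map[p]), float(depth_map[p_prime])
    v1, v2 = float(image[p]), float(image[p_prime])
    depths_agree = abs(d1 - d2) <= depth_tol * max(abs(d1), abs(d2), 1e-9)
    values_agree = abs(v1 - v2) <= value_tol
    return depths_agree and values_agree
```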
At block 806 in Fig. 8, an output image with a high signal to noise ratio is produced by the processing circuitry 12 and stored in the memory 14. The output image is of a conventional/standard format (that is, as opposed to in a plenoptic format) in which there is a single pixel for each individual point in a real-life scene.
In some embodiments, rather than generating a depth map 35 for each pixel in the plenoptic image 31 as described above in relation to block 803, the processing circuitry 12 may be configured to produce an output image in a standard/conventional format from the plenoptic image 31, and generate a depth map 35 for each pixel in that output image. The processing circuitry 12 may then apply equation (3) to identify, for each scene point imaged in the output image, multiple pixel locations in the plenoptic image 31 where the scene point has been reproduced. In these embodiments, the processing circuitry 12 may compare the value of the pixels at the identified pixel locations with the value of the corresponding pixel in the output image to verify that they relate to the same real-life scene point. Depth values cannot be compared in these embodiments because the depth map 35 only includes depth values for the pixels in the output image.
Faster processing may be possible in the embodiments where a depth map 35 is only produced for each of the pixels in an output image generated from the plenoptic image 31 rather than for each of the pixels in the plenoptic image 31 itself, because the plenoptic image 31 contains more pixels. For example, if a plenoptic image includes 10 megapixels, the output image might only include around 2 megapixels.
Fig. 9 illustrates a more detailed description of some embodiments of the method illustrated in Fig. 7, in which the data that is combined at various pixel locations is analog data. The analog data that is combined may, for example, be voltage or charge values.
In the Fig. 9 embodiments of the invention, the image sensor 26 is a destructive readout image sensor 26. That is, when analog data is read from the sensor 26, it is destroyed and cannot be recovered.
At block 901 in Fig. 9, a first plenoptic image 31 is captured by the apparatus 20. At block 902, the processing circuitry 12 digitizes the first plenoptic image 31 as described above in relation to block 802 of Fig. 8. At block 903, a depth map 35 is generated for each of the pixels in the first plenoptic image 31 as described above in relation to block 803 of Fig. 8.
At block 904 of Fig. 9, the processing circuitry 12 applies equation (3) to identify, for each point in a scene, multiple pixel locations in the first plenoptic image 31 where the scene point has been imaged, as described above in relation to block 804 of Fig. 8.
At block 905 of Fig. 9, the apparatus 20 captures a second plenoptic image 33. At block 906 of Fig. 9, the processing circuitry 12 uses the second plenoptic image 33 to combine analog data from the pixel locations that were identified in block 904 (from the first plenoptic image 31 ) as relating to the same scene point. The processing circuitry 12 may, for example, bin the data from the identified pixel locations. At block 907 in Fig. 9, an output image with a high signal to noise ratio is produced by the processing circuitry 12 and stored in the memory 14. The output image is of a conventional/standard format (as opposed to in a plenoptic format) in which there is a single pixel for each individual point in a real-life scene. Advantageously, combining/binning analog data often produces an output image with a higher signal to noise ratio than combining/binning digital data.
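Because analog charge binning happens at the sensor itself, it cannot be reproduced literally in software; the sketch below instead shows the digital analogue of blocks 904-906: groups of corresponding pixel locations are identified on the first plenoptic image and then applied to data read from the second plenoptic image. The grouping format and the names used are assumptions chosen for illustration.

```python
import numpy as np

def apply_binning_groups(second_image, groups, out_shape):
    """Bin, for each output pixel, the data at the pixel locations that were
    identified on the first plenoptic image as imaging the same scene point,
    reading the actual values from the second plenoptic image.

    second_image : 2-D plenoptic image from the second capture.
    groups       : dict mapping (out_y, out_x) -> list of (y, x) sensor locations.
    out_shape    : (height, width) of the conventional-format output image.
    """
    out = np.zeros(out_shape, dtype=float)
    for (oy, ox), locations in groups.items():
        values = [float(second_image[y, x]) for (y, x) in locations]
        if values:
            out[oy, ox] = np.mean(values)
    return out
```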
The second plenoptic image 33 may be captured before, at the same time, or after generation of the depth map 35 in block 903 and the identification of the pixel locations in block 904.
In circumstances where the scene being imaged is a static scene (with no or few moving objects within it), the first and second plenoptic images 31 , 33 may be full resolution plenoptic images. That is, the first and second plenoptic images 31 , 33 may use the full resolution of the image sensor 26 for a particular image aspect ratio (such as 4:3, 16:9, etc.).
In circumstances where the scene being imaged is a dynamic scene (with moving objects within it), the second plenoptic image 33 may be a full resolution image and the first plenoptic image 31 may be a lower resolution image, such as a viewfinder image. A viewfinder image is an image that is captured and output to a display which enables a user to see a scene on the display in real-time. Use of a viewfinder image may reduce the time between capturing the first and second plenoptic images 31 , 33, advantageously minimizing any differences in content between the two plenoptic images 31 , 33. Another potential benefit is that it may also take less time for the processing circuitry 12 to generate a depth map 35 for a viewfinder image than a full resolution image.
In some of the Fig. 9 embodiments of the invention, the first and second plenoptic images 31, 33 are different frames of a video. They may be consecutive frames of a video. In such embodiments, it may not be necessary to generate a depth map 35 for every frame in the video in the manner described above. Instead, depth tracking may be performed in which the processing circuitry 12 adjusts the depth map 35 from frame to frame by analyzing how the content in the video changes from one frame to the next. New depth values may be determined for "new content" that appears in a particular frame, whereas old depth values (determined in relation to a prior frame) may be maintained for "old content" that was present in one or more prior frames.

In some embodiments of the invention, the image sensor 26 is not a destructive readout sensor. In these embodiments, it is not necessary to capture two plenoptic images 31, 33 in order to combine/bin analog data.

References to 'computer-readable storage medium', 'processing circuitry', 'processor' etc. should be understood to encompass not only computers having different architectures such as single/multi-processor architectures and sequential (Von Neumann)/parallel architectures but also specialized circuits such as field-programmable gate arrays (FPGA), application specific circuits (ASIC), signal processing devices and other processing circuitry.

References to computer program, instructions, code etc. should be understood to encompass software for a programmable processor or firmware such as, for example, the programmable content of a hardware device whether instructions for a processor, or configuration settings for a fixed-function device, gate array or programmable logic device etc.
As used in this application, the term 'circuitry' refers to all of the following:
(a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry) and
(b) to combinations of circuits and software (and/or firmware), such as (as applicable): (i) to a combination of processor(s) or (ii) to portions of processor(s)/software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions) and
(c) to circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.
This definition of 'circuitry' applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term "circuitry" would also cover an implementation of merely a processor (or multiple processors) or portion of a processor and its (or their) accompanying software and/or firmware. The term "circuitry" would also cover, for example and if applicable to the particular claim element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, or other network device.

The blocks illustrated in figures 6, 7, 8 and 9 may represent steps in a method and/or sections of code in the computer program 17. The illustration of a particular order to the blocks does not necessarily imply that there is a required or preferred order for the blocks and the order and arrangement of the blocks may be varied. Furthermore, it may be possible for some blocks to be omitted.
Although embodiments of the present invention have been described in the preceding paragraphs with reference to various examples, it should be appreciated that modifications to the examples given can be made without departing from the scope of the invention as claimed. For example, the apparatus 10 illustrated in Fig. 2 may form part of a computer rather than a plenoptic camera such as that illustrated in Fig. 3. In this regard, the apparatus that performs the image processing to produce an output image having a high signal to noise ratio need not be or form part of the apparatus that was used to capture the original plenoptic image.
Although embodiments of the invention are described above in the context of a Plenoptic 2.0 camera, they can be equally applied to a Plenoptic 1.0 camera. When a Plenoptic 1.0 camera setup is used, the pixels which correspond to a particular scene point are positioned in neighboring sub-images. It is not necessary to produce a depth map 35 prior to binning the pixels.
The method(s) described above may also be applied to a plenoptic image captured by an array of cameras. In such an implementation, generation of the depth map, identification of pixels corresponding with individual points in the scene and binning the identified pixels can be performed as described above in relation to a Plenoptic camera 2.0.
Features described in the preceding description may be used in combinations other than the combinations explicitly described.
Although functions have been described with reference to certain features, those functions may be performable by other features whether described or not.
Although features have been described with reference to certain embodiments, those features may also be present in other embodiments whether described or not.

Whilst endeavoring in the foregoing specification to draw attention to those features of the invention believed to be of particular importance it should be understood that the Applicant claims protection in respect of any patentable feature or combination of features hereinbefore referred to and/or shown in the drawings whether or not particular emphasis has been placed thereon.
I/we claim:

Claims

1. A method, comprising:
identifying, in each of a plurality of sub-images of a plenoptic image, a pixel location corresponding with a particular point in a scene; and
combining data from the identified pixel locations to form a single pixel for the particular point in the scene.
2. The method as claimed in claim 1 , wherein the data from the identified pixel locations is binned to form the single pixel.
3. The method as claimed in claim 1 or 2, further comprising: determining a depth for the particular point in the scene; and using the depth to identify the pixel locations corresponding with the particular point in the scene.
4. The method as claimed in claim 3, wherein the depth for the particular point in the scene is determined using a further plenoptic image, and the data from the identified pixel locations is obtained from the plenoptic image.
5. The method as claimed in claim 4, wherein the further plenoptic image is captured prior to the plenoptic image.
6. The method as claimed in claim 4 or 5, wherein the data being combined is analog data.
7. The method as claimed in claim 3, wherein the depth for the particular point in the scene is determined using the plenoptic image, and the data from the identified pixel locations is obtained from the plenoptic image.
8. The method as claimed in claim 7, wherein the data being combined is digital data.
9. The method as claimed in any of the preceding claims, wherein the plenoptic image is generated using an array of micro-lenses, and each pixel location corresponding with the particular point in the scene is formed by a different micro-lens in the array.
10. The method as claimed in any of the preceding claims, further comprising: verifying that the data from each of the identified pixel locations corresponds with the particular point in the scene prior to combining the data from the identified pixel locations.
11. The method as claimed in claim 10, wherein verifying that the data from each of the identified pixel locations corresponds with the particular point in the scene comprises comparing a depth of an identified pixel location with a depth of another identified pixel location.
12. A method as claimed in claim 10 or 11, wherein the data from the identified pixel locations comprises pixel values, and verifying that data from the identified pixel locations is appropriate for combining comprises comparing pixel values from the identified pixel locations with one another before combining the pixel values to form a single pixel for the particular point in the scene.
13. An apparatus comprising:
processing circuitry; and
at least one memory storing computer program code configured, working with the processing circuitry, to cause the method as claimed in one or more of claims 1 to 12 to be performed.
14. An apparatus comprising means for performing the method as claimed in one or more of claims 1 to 12.
15. Computer program code that, when performed by at least one processor, causes the method as claimed in any of claims 1 to 12 to be performed.
PCT/IB2013/052353 2013-02-07 2013-03-25 Image processing of sub -images of a plenoptic image WO2014122506A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN529CH2013 2013-02-07
IN529/CHE/2013 2013-02-07

Publications (1)

Publication Number Publication Date
WO2014122506A1 true WO2014122506A1 (en) 2014-08-14

Family

ID=48446418

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2013/052353 WO2014122506A1 (en) 2013-02-07 2013-03-25 Image processing of sub -images of a plenoptic image

Country Status (1)

Country Link
WO (1) WO2014122506A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020061131A1 (en) * 2000-10-18 2002-05-23 Sawhney Harpreet Singh Method and apparatus for synthesizing new video and/or still imagery from a collection of real video and/or still imagery
US20100003024A1 (en) * 2007-12-10 2010-01-07 Amit Kumar Agrawal Cameras with Varying Spatio-Angular-Temporal Resolutions
EP2244484A1 (en) * 2009-04-22 2010-10-27 Raytrix GmbH Digital imaging system for synthesizing an image using data recorded with a plenoptic camera
US8345144B1 (en) * 2009-07-15 2013-01-01 Adobe Systems Incorporated Methods and apparatus for rich image capture with focused plenoptic cameras
US20120242855A1 (en) * 2011-03-24 2012-09-27 Casio Computer Co., Ltd. Device and method including function for reconstituting an image, and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ADELSON E H ET AL: "SINGLE LENS STEREO WITH A PLENOPTIC CAMERA", TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, IEEE, PISCATAWAY, USA, vol. 14, no. 2, 28 February 1992 (1992-02-28), pages 99 - 106, XP000248474, ISSN: 0162-8828, DOI: 10.1109/34.121783 *
CHRISTIAN PERWASS ET AL: "Single lens 3D-camera with extended depth-of-field", PROCEEDINGS OF SPIE, vol. 8291, 5 February 2012 (2012-02-05), pages 829108 - 1, XP055072572, ISSN: 0277-786X, DOI: 10.1117/12.909882 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016149438A1 (en) * 2015-03-17 2016-09-22 Cornell University Depth field imaging apparatus, methods, and applications
US10605916B2 (en) 2015-03-17 2020-03-31 Cornell University Depth field imaging apparatus, methods, and applications
US10983216B2 (en) 2015-03-17 2021-04-20 Cornell University Depth field imaging apparatus, methods, and applications

Similar Documents

Publication Publication Date Title
CN110428366B (en) Image processing method and device, electronic equipment and computer readable storage medium
US9524556B2 (en) Method, apparatus and computer program product for depth estimation
CN106412214B (en) Terminal and terminal shooting method
Georgiev et al. Lytro camera technology: theory, algorithms, performance analysis
US8401316B2 (en) Method and apparatus for block-based compression of light-field images
US9390530B2 (en) Image stitching
CN109474780B (en) Method and device for image processing
TWI538512B (en) Method for adjusting focus position and electronic apparatus
TWI615027B (en) High dynamic range image generation method, imaging device, terminal device and imaging method
US20130021504A1 (en) Multiple image processing
EP2786556B1 (en) Controlling image capture and/or controlling image processing
Villalba et al. Smartphone image clustering
US9342875B2 (en) Method for generating image bokeh effect and image capturing device
WO2019056527A1 (en) Capturing method and device
JP2015231220A (en) Image processing apparatus, imaging device, image processing method, imaging method and program
US7796806B2 (en) Removing singlet and couplet defects from images
WO2022160857A1 (en) Image processing method and apparatus, and computer-readable storage medium and electronic device
KR102069269B1 (en) Apparatus and method for stabilizing image
JP2014120122A (en) Area extraction device, area extraction method and computer program
Gao et al. Camera model identification based on the characteristic of CFA and interpolation
Zhou et al. Unmodnet: Learning to unwrap a modulo image for high dynamic range imaging
TW201607296A (en) Method of quickly generating depth map of image and image processing device
WO2014122506A1 (en) Image processing of sub -images of a plenoptic image
CN109995985B (en) Panoramic image shooting method and device based on robot and robot
TW201911853A (en) Dual-camera image pick-up apparatus and image capturing method thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13723218

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13723218

Country of ref document: EP

Kind code of ref document: A1