WO2005027041A1 - Visual processing device, visual processing method, visual processing program, and semiconductor device - Google Patents
- Publication number
- WO2005027041A1 (PCT/JP2004/013601)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- gradation
- image
- processing
- gradation conversion
- visual processing
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration by the use of local operators
- G06T5/40—Image enhancement or restoration by the use of histogram techniques
- G06T5/75—
- G06T5/94—
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/40—Picture signal circuits
- H04N1/407—Control or modification of tonal gradation or of extreme levels, e.g. background level
- H04N1/4072—Control or modification of tonal gradation or of extreme levels, e.g. background level dependent on the contents of the original
- H04N1/4074—Control or modification of tonal gradation or of extreme levels, e.g. background level dependent on the contents of the original using histograms
Definitions
- The present invention relates to a visual processing device, and more particularly to a visual processing device that performs gradation processing of an image signal.
- The present invention also relates to a visual processing method, a visual processing program, and a semiconductor device.
(Background Art)
- As visual processing of the image signal of an original image, spatial processing and gradation processing are known.
- Spatial processing processes a target pixel using the pixels surrounding it.
- Techniques are known that use the spatially processed image signal to perform contrast enhancement, dynamic range (DR) compression, and the like on the original image.
- In contrast enhancement, the difference between the original image and its blur signal (the sharp component of the image) is added to the original image to sharpen the image.
- In DR compression, a part of the blur signal is subtracted from the original image, compressing the dynamic range.
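These two spatial-processing uses can be sketched on a one-dimensional brightness signal. The box-blur radius, the enhancement gain, and the compression amount below are illustrative assumptions, not values taken from the patent.

```python
def box_blur(signal, radius=1):
    """Simple moving-average blur: the low-pass 'blur signal' of the text."""
    out = []
    n = len(signal)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def contrast_enhance(signal, gain=0.5):
    """Add the sharp component (original minus blur) back to the original."""
    blur = box_blur(signal)
    return [s + gain * (s - b) for s, b in zip(signal, blur)]

def dr_compress(signal, amount=0.3):
    """Subtract part of the blur signal to compress the dynamic range."""
    blur = box_blur(signal)
    return [s - amount * b for s, b in zip(signal, blur)]
```

Both operations depend on the surrounding pixels through the blur signal, which is the defining property of spatial processing.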
- Gradation processing converts the pixel value of each target pixel using a look-up table (LUT) or the like, irrespective of the pixels surrounding the target pixel, and is sometimes called gamma correction.
- For example, pixel value conversion is performed using a LUT that emphasizes the gradation levels that appear frequently in the original image.
- In one approach, a single LUT is determined and used for the entire original image (histogram equalization); in another, a LUT is determined for each of the image areas obtained by dividing the original image into multiple parts.
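The whole-image histogram-equalization LUT can be sketched as follows; the function names and the reduced level count in the usage below are illustrative, not from the patent.

```python
def equalization_lut(pixels, levels=256):
    """Cumulative-histogram LUT that stretches frequently occurring levels."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    lut, cum = [], 0
    for count in hist:
        cum += count
        lut.append(round((levels - 1) * cum / len(pixels)))
    return lut

def apply_lut(pixels, lut):
    """Per-pixel conversion, independent of surrounding pixels."""
    return [lut[p] for p in pixels]
```

Because the LUT is applied per pixel, this is gradation processing, not spatial processing.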
- (See Japanese Patent Application Laid-Open No. 2000-57373, page 3, FIGS. 13 to 16.)
- FIG. 33 shows a visual processing device 300 in which a LUT is determined for each of the image areas obtained by dividing the original image into multiple parts.
- The visual processing device 300 includes: an image dividing unit 301 that divides the original image, input as the input signal IS, into a plurality of image areas Sm (1 ≤ m ≤ n, where n is the number of divisions of the original image); a gradation conversion curve deriving unit 310 that derives a gradation conversion curve Cm for each image area Sm; and a gradation processing unit 304 that loads the gradation conversion curve Cm, performs gradation processing on each image area Sm, and outputs the result as the output signal OS.
- The gradation conversion curve deriving unit 310 includes a histogram creation unit 302 that creates a brightness histogram Hm for each image area Sm, and a gradation curve creation unit 303 that creates the gradation conversion curve Cm for each image area Sm from the created brightness histogram Hm.
- The image dividing unit 301 divides the original image, input as the input signal IS, into a plurality (n) of image areas (see FIG. 34(a)).
- The histogram creation unit 302 creates a brightness histogram Hm for each image area Sm (see FIG. 35).
- Each brightness histogram Hm indicates the distribution of the brightness values of all pixels in the image area Sm. That is, in the brightness histograms Hm shown in FIGS. 35(a) to 35(d), the horizontal axis indicates the brightness level of the input signal IS, and the vertical axis indicates the number of pixels.
- The gradation curve creation unit 303 accumulates the "number of pixels" of the brightness histogram Hm in order of brightness and uses the resulting cumulative curve as the gradation conversion curve Cm (see FIG. 36).
- In the gradation conversion curve Cm, the horizontal axis represents the brightness value of the pixels of the image area Sm in the input signal IS, and the vertical axis represents the brightness value of the pixels of the image area Sm in the output signal OS.
- The gradation processing unit 304 loads the gradation conversion curve Cm and converts the brightness values of the pixels of the image area Sm in the input signal IS based on the gradation conversion curve Cm. This steepens the slope of the frequently occurring gradation levels in each block, improving the sense of contrast in each block.
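The prior-art pipeline of device 300 (divide into areas, build a histogram Hm per area, accumulate it into a curve Cm, remap each area with its own curve) can be sketched as follows. Representing each image area as a flat pixel list and using a 4-level brightness scale are illustrative simplifications.

```python
LEVELS = 4  # illustrative brightness resolution

def conversion_curve(area):
    """Cumulative brightness histogram of one area, scaled to LEVELS - 1."""
    hist = [0] * LEVELS
    for v in area:
        hist[v] += 1
    curve, cum = [], 0
    for count in hist:
        cum += count
        curve.append(round((LEVELS - 1) * cum / len(area)))
    return curve

def device_300(areas):
    """Apply each area's own curve Cm to that area only (prior art)."""
    return [[conversion_curve(area)[v] for v in area] for area in areas]
```

Note that each area is remapped in isolation, which is precisely what makes the seams between areas visible in the prior art.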
- In this way, the gradation curve creation unit 303 creates the gradation conversion curve Cm from the brightness histogram Hm of the pixels in the image area Sm.
- To create a gradation conversion curve Cm that is more appropriate for the image area Sm, the brightness histogram must cover the image from the dark part (shadow) to the light part (highlight), which requires referring to more pixels. For this reason, each image area Sm cannot be made very small; that is, the number n of divisions of the original image cannot be made too large.
- The number of divisions n varies depending on the image content, but empirically a number of divisions of 4 to 16 is used.
- Because each image area Sm cannot be made very small, the following problems may occur in the output signal OS after the gradation processing.
- When gradation processing is performed using one gradation conversion curve Cm for each image area Sm, the seams at the boundaries between the image areas Sm may become unnaturally noticeable, and pseudo contours may occur within an image area Sm.
- Moreover, since each image area Sm is large when the number of divisions is only 4 to 16, the change in shading between image areas containing extremely different images is large, and it is difficult to prevent the occurrence of false contours. For example, as shown in FIGS. 34(b) and 34(c), the shading changes extremely depending on the positional relationship between the image (for example, an object in the image) and the image areas Sm.
- In view of this, an object of the present invention is to provide a visual processing device that realizes gradation processing that further improves the visual effect.
- the visual processing device is a visual processing device that performs gradation processing on an input image signal for each image region, and includes a gradation conversion characteristic deriving unit and a gradation processing unit.
- The gradation conversion characteristic deriving means derives the gradation conversion characteristics of the target image region using the peripheral image data of at least one peripheral image region, which is an image region located around the target image region to be subjected to the gradation processing and includes a plurality of pixels.
- the gradation processing means performs gradation processing of the image signal of the target image area based on the derived gradation conversion characteristics.
- The target image region is, for example, a pixel included in the image signal, or an area composed of a plurality of pixels, such as an image block obtained by dividing the image signal into predetermined units.
- The peripheral image area is, for example, an image block obtained by dividing the image signal into predetermined units, or another area composed of a plurality of pixels.
- The peripheral image data is the image data of the peripheral image area or data derived from that image data, and includes, for example, the pixel values of the peripheral image area, gradation characteristics (luminance or brightness of each pixel), or a thumbnail (a reduced image or a thinned image with lowered resolution).
- The peripheral image area need only be located around the target image area; it does not have to surround the target image area.
- the visual processing device of the present invention when judging the gradation conversion characteristics of the target image area, the judgment is made using peripheral image data of the peripheral image area. Therefore, it is possible to add a spatial processing effect to the gradation processing for each target image area, and it is possible to realize gradation processing that further improves the visual effect.
- a visual processing device is the visual processing device according to the first aspect, wherein the peripheral image region is an image block obtained by dividing an image signal into predetermined units.
- the image blocks are respective areas obtained by dividing an image signal into rectangles.
- the visual processing device of the present invention it is possible to process the peripheral image area in units of image blocks. For this reason, it is possible to reduce the processing load required for determining the peripheral image area and deriving the gradation conversion characteristics.
- The visual processing device is the visual processing device according to claim 1 or 2, wherein the gradation conversion characteristic deriving means further uses the target image data of the target image region to derive the gradation conversion characteristics of the target image area.
- The target image data is the image data of the target image area or data derived from that image data, and includes, for example, the pixel values of the target image area, gradation characteristics (luminance or brightness of each pixel), or a thumbnail (a reduced image or a thinned image with lowered resolution).
- the visual processing device of the present invention when judging the gradation conversion characteristics of the target image area, the judgment is performed not only using the target image data of the target image area but also the peripheral image data of the peripheral image area. For this reason, it is possible to add a spatial processing effect to the gradation processing of the target image area, and it is possible to realize gradation processing for further improving the visual effect.
- The visual processing device is the visual processing device according to claim 3, wherein the gradation conversion characteristic deriving means includes characteristic parameter deriving means for deriving, using the target image data and the peripheral image data, a characteristic parameter indicating a feature of the target image area, and gradation conversion characteristic determining means for determining a gradation conversion characteristic based on the characteristic parameter of the target area derived by the characteristic parameter deriving means.
- The characteristic parameter is, for example, an average value of the target image data and the peripheral image data, a representative value, or a histogram.
- Average values include, for example, a simple average, a weighted average, and the like.
- Representative values include, for example, a maximum value, a minimum value, a median value, and the like.
- The histogram is, for example, a distribution of the gradation characteristics of the target image data and the peripheral image data.
- the visual processing device of the present invention derives feature parameters using peripheral image data as well as target image data. For this reason, it is possible to add a spatial processing effect to the gradation processing of the target image area, and it is possible to realize gradation processing for further improving the visual effect. As a more specific effect, it is possible to suppress the generation of a false contour due to the gradation processing. In addition, it is possible to prevent the boundary of the target image region from being unnaturally conspicuous.
- a visual processing device is the visual processing device according to the fourth aspect, wherein the feature parameter is a histogram.
- the gradation conversion characteristic determining means determines, for example, a cumulative curve obtained by accumulating the values of the histograms as the gradation conversion characteristic, or selects a gradation conversion characteristic according to the histogram.
- a histogram is created using not only target image data but also peripheral image data. For this reason, it is possible to suppress the occurrence of the false contour due to the gradation processing. In addition, it is possible to prevent the boundary of the target image region from being unnaturally conspicuous.
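A minimal sketch of this idea: the conversion curve for each target block is the cumulative histogram over the target block and its neighboring blocks, so the curves of adjacent blocks share data and change gradually, suppressing seams and false contours. The 3x3 block neighborhood, grid layout, and 4-level brightness scale are illustrative assumptions.

```python
LEVELS = 4  # illustrative brightness resolution

def curve_from_blocks(blocks):
    """Cumulative histogram over several blocks, scaled to LEVELS - 1."""
    hist, total = [0] * LEVELS, 0
    for block in blocks:
        for v in block:
            hist[v] += 1
        total += len(block)
    curve, cum = [], 0
    for count in hist:
        cum += count
        curve.append(round((LEVELS - 1) * cum / total))
    return curve

def process_with_neighbors(grid):
    """grid[r][c] is a block (flat pixel list); use its 3x3 neighborhood."""
    rows, cols = len(grid), len(grid[0])
    out = []
    for r in range(rows):
        row_out = []
        for c in range(cols):
            neigh = [grid[rr][cc]
                     for rr in range(max(0, r - 1), min(rows, r + 2))
                     for cc in range(max(0, c - 1), min(cols, c + 2))]
            curve = curve_from_blocks(neigh)
            row_out.append([curve[v] for v in grid[r][c]])
        out.append(row_out)
    return out
```

Overlapping neighborhoods mean two adjacent blocks share most of their histogram data, so their curves cannot differ abruptly.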
- The visual processing device is the visual processing device according to claim 4, wherein the gradation conversion characteristic determining means selects a gradation conversion characteristic tabulated in advance, using the characteristic parameter.
- The gradation conversion characteristics are tabulated data; each table stores, for given target image data, the target image data after gradation processing.
- The gradation conversion characteristic determining means selects the table corresponding to the value of the characteristic parameter.
- Gradation processing is performed using the tabulated gradation conversion characteristics, which makes it possible to speed up the gradation processing.
- Since one table is selected from the plurality of tables and the gradation processing is then performed, appropriate gradation processing can be achieved.
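One way to picture this table selection: a characteristic parameter (here, the mean brightness over target plus peripheral data) indexes into a small set of pre-tabulated conversion characteristics. The three candidate tables, their names, and the thresholds are invented purely for illustration; a real device would hold many more candidates.

```python
LEVELS = 4  # illustrative brightness resolution

# Hypothetical pre-tabulated gradation conversion characteristics.
CANDIDATE_LUTS = {
    "dark":   [0, 2, 3, 3],  # lifts shadows for dark regions
    "mid":    [0, 1, 2, 3],  # identity mapping
    "bright": [0, 0, 1, 3],  # protects highlights for bright regions
}

def select_lut(target_pixels, peripheral_pixels):
    """Pick a tabulated characteristic from the feature parameter (mean)."""
    pixels = target_pixels + peripheral_pixels
    mean = sum(pixels) / len(pixels)
    if mean < 1.0:
        return CANDIDATE_LUTS["dark"]
    if mean < 2.0:
        return CANDIDATE_LUTS["mid"]
    return CANDIDATE_LUTS["bright"]
```

Selection is a table lookup rather than a per-region curve computation, which is what makes this variant fast.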
- The visual processing device according to claim 7 is the visual processing device according to claim 6, wherein the gradation conversion characteristics tabulated in advance can be changed.
- the visual processing device of the present invention by changing the gradation conversion characteristics, it is possible to variously change the characteristics of the gradation processing without changing the hardware configuration.
- the visual processing device is the visual processing device according to claim 7, wherein the change of the gradation conversion characteristic is realized by correcting at least a part of the gradation conversion characteristic.
- the gradation conversion characteristic is changed by correcting at least a part of the gradation conversion characteristic. For this reason, it is possible to realize various gradation processes while reducing the storage capacity for gradation conversion characteristics.
- The visual processing device is the visual processing device according to claim 4, wherein the gradation conversion characteristic determining means generates a gradation conversion characteristic by a predetermined operation using the characteristic parameter.
- Here, the gradation conversion characteristic gives the target image data after gradation processing for the target image data.
- The operation for generating the gradation conversion characteristic from the characteristic parameter is determined in advance. More specifically, for example, an operation corresponding to the value of the characteristic parameter is selected, or an operation is generated according to the value of the characteristic parameter.
- the visual processing device of the present invention it is not necessary to store the gradation conversion characteristics in advance, and it is possible to reduce the storage capacity for storing the gradation conversion characteristics.
- a visual processing device is the visual processing device according to the ninth aspect, wherein the predetermined operation can be changed.
- the visual processing device of the present invention it is possible to variously change the characteristics of the gradation processing by changing the calculation.
- The visual processing device is the visual processing device according to claim 10, wherein the change of the operation is realized by correcting at least a part of the operation.
- The gradation conversion characteristic is changed by correcting at least a part of the operation. This makes it possible to realize a greater variety of gradation processing while keeping the storage capacity for storing the operations small.
- The visual processing device is the visual processing device according to claim 4, wherein the gradation conversion characteristic is obtained by interpolating or extrapolating a plurality of gradation conversion characteristics.
- the gradation conversion characteristic is, for example, a characteristic of target image data after gradation processing on the target image data.
- the gradation conversion characteristics are given in, for example, a table format or a calculation format.
- the visual processing device of the present invention it is possible to perform gradation processing using new gradation conversion characteristics obtained by interpolating or extrapolating a plurality of gradation conversion characteristics. For this reason, even if the storage capacity for storing the gradation conversion characteristics is reduced, more various gradation processing can be realized.
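Deriving a new characteristic from stored ones can be sketched as a pointwise blend of two curves; the weight convention (0 to 1 interpolates, values outside that range extrapolate) is an assumption for illustration.

```python
def blend_curves(curve_a, curve_b, weight):
    """Pointwise interpolation (0 <= weight <= 1) or extrapolation
    (weight outside [0, 1]) of two stored gradation conversion curves."""
    return [(1 - weight) * a + weight * b for a, b in zip(curve_a, curve_b)]
```

Storing only a few base curves and blending between them trades a tiny amount of arithmetic for a large reduction in table storage.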
- a visual processing method is a visual processing method for performing gradation processing on an input image signal for each image area, comprising: a gradation conversion characteristic deriving step; and a gradation processing step.
- The gradation conversion characteristic deriving step derives the gradation conversion characteristics of the target image area using the peripheral image data of at least one peripheral image region, which is an image region located around the target image region to be subjected to the gradation processing and includes a plurality of pixels.
- the gradation processing step performs gradation processing on the image signal of the target image area based on the derived gradation conversion characteristics.
- the visual processing method of the present invention when judging the gradation conversion characteristics of the target image area, the judgment is made using peripheral image data of the peripheral image area. Therefore, it is possible to add a spatial processing effect to the gradation processing for each target image area, and it is possible to realize gradation processing that further improves the visual effect.
- The visual processing method according to claim 14 is the visual processing method according to claim 13, wherein the peripheral image area is an image block obtained by dividing the image signal into predetermined units. According to the visual processing method of the present invention, the peripheral image area can be processed on an image-block basis. This reduces the processing load required for determining the peripheral image area and deriving the gradation conversion characteristics.
- The visual processing method is the visual processing method according to claim 13 or 14, wherein the gradation conversion characteristic deriving step further uses the target image data of the target image region to derive the gradation conversion characteristics of the target image area.
- According to the visual processing method of the present invention, when determining the gradation conversion characteristics of the target image area, the determination is made using not only the target image data of the target image area but also the peripheral image data of the peripheral image area. For this reason, it is possible to add a spatial processing effect to the gradation processing of the target image area, and it is possible to realize gradation processing that further improves the visual effect.
- The visual processing method according to claim 16 is the visual processing method according to claim 15, wherein the gradation conversion characteristic deriving step includes a characteristic parameter deriving step of deriving, using the target image data and the peripheral image data, a characteristic parameter indicating a feature of the target image area, and a gradation conversion characteristic determining step of determining a gradation conversion characteristic based on the characteristic parameter of the target area derived in the characteristic parameter deriving step.
- feature parameters are derived using not only target image data but also peripheral image data. For this reason, it is possible to add a spatial processing effect to the gradation processing of the target image area, and it is possible to realize gradation processing for further improving the visual effect. As a more specific effect, it is possible to suppress the generation of a false contour due to the gradation processing. In addition, it is possible to prevent the boundary of the target image region from being unnaturally conspicuous.
- a visual processing program is a visual processing program for performing a visual processing method of performing gradation processing on an input image signal for each image region using a computer.
- the visual processing method includes a gradation conversion characteristic deriving step and a gradation processing step.
- The gradation conversion characteristic deriving step derives the gradation conversion characteristics of the target image area using the peripheral image data of at least one peripheral image region, which is an image region located around the target image region to be subjected to the gradation processing and includes a plurality of pixels.
- the gradation processing step performs gradation processing on the image signal of the target image area based on the derived gradation conversion characteristics.
- a visual processing program according to claim 18 is the visual processing program according to claim 17, wherein the peripheral image area is an image block obtained by dividing an image signal into predetermined units.
- the visual processing program of the present invention it is possible to process the peripheral image area in image block units. For this reason, it becomes possible to reduce the processing load required for determining the peripheral image area and deriving the gradation conversion characteristics.
- the visual processing program according to claim 19 is the visual processing program according to claim 17 or 18, wherein the gradation conversion characteristic deriving step further uses target image data of a target image region, A gradation conversion characteristic of the target image area is derived.
- the determination is performed using not only the target image data of the target image area but also the peripheral image data of the peripheral image area. For this reason, it is possible to add a spatial processing effect to the gradation processing of the target image area, and it is possible to realize gradation processing that further improves the visual effect.
- The visual processing program according to claim 20 is the visual processing program according to claim 19, wherein the gradation conversion characteristic deriving step includes a characteristic parameter deriving step of deriving, using the target image data and the peripheral image data, a characteristic parameter indicating a feature of the target image area, and a gradation conversion characteristic determining step of determining a gradation conversion characteristic based on the characteristic parameter of the target area derived in the characteristic parameter deriving step.
- the visual processing program of the present invention derives feature parameters using peripheral image data as well as target image data. For this reason, it is possible to add a spatial processing effect to the gradation processing of the target image area, and it is possible to realize gradation processing for further improving the visual effect. As a more specific effect, it is possible to suppress the occurrence of false contours due to gradation processing. In addition, it is possible to prevent the boundary of the target image region from being unnaturally conspicuous.
- a semiconductor device is a semiconductor device that performs a gradation process on an input image signal for each image region, and includes a gradation conversion characteristic deriving unit and a gradation processing unit.
- The gradation conversion characteristic deriving unit derives the gradation conversion characteristics of the target image area using the peripheral image data of at least one peripheral image region, which is an image region located around the target image region to be subjected to the gradation processing and includes a plurality of pixels.
- the gradation processing unit performs gradation processing of the image signal of the target image area based on the derived gradation conversion characteristics.
- the semiconductor device of the present invention when judging the gradation conversion characteristics of the target image area, the judgment is made using the peripheral image data of the peripheral image area. For this reason, it is possible to add a spatial processing effect to the gradation processing for each target image area, and it is possible to realize gradation processing that further enhances the visual effect.
- the semiconductor device according to claim 22 is the semiconductor device according to claim 21, wherein the peripheral image area is an image block obtained by dividing an image signal into predetermined units.
- the semiconductor device of the present invention it is possible to process the peripheral image area in image block units. For this reason, it is possible to reduce the processing load required for determining the peripheral image area and deriving the gradation conversion characteristics.
- The gradation conversion characteristic deriving unit further uses the target image data of the target image region to derive the gradation conversion characteristics of the target image region.
- the semiconductor device of the present invention when judging the gradation conversion characteristic of the target image area, the judgment is made not only using the target image data of the target image area but also the peripheral image data of the peripheral image area. For this reason, it is possible to add a spatial processing effect to the gradation processing of the target image area, and it is possible to realize gradation processing for further improving the visual effect.
- The semiconductor device is the semiconductor device according to claim 23, wherein the gradation conversion characteristic deriving unit includes a characteristic parameter deriving unit that derives, using the target image data and the peripheral image data, a characteristic parameter indicating a feature of the target image area, and a gradation conversion characteristic determining unit that determines a gradation conversion characteristic based on the characteristic parameter of the target area derived by the characteristic parameter deriving unit.
- The characteristic parameters are derived using not only the target image data but also the peripheral image data. This makes it possible to add a spatial processing effect to the gradation processing of the target image area and to realize gradation processing that further enhances the visual effect. As a more specific effect, the generation of pseudo contours due to the gradation processing can be suppressed, and the boundaries of target image regions can be prevented from becoming unnaturally conspicuous.
- the visual processing device of the present invention it is possible to realize gradation processing for further improving the visual effect.
- FIG. 1 is a block diagram (first embodiment) for explaining the structure of the visual processing device 1.
- FIG. 2 is an explanatory diagram (first embodiment) for explaining the image area Pm.
- FIG. 3 is an explanatory diagram (first embodiment) for explaining the brightness histogram Hm.
- FIG. 4 is an explanatory diagram (first embodiment) for explaining the gradation conversion curve Cm.
- FIG. 5 is a flowchart (first embodiment) for explaining the visual processing method.
- FIG. 6 is a block diagram (second embodiment) for explaining the structure of the visual processing device 11.
- FIG. 7 is an explanatory diagram (second embodiment) for explaining the gradation conversion curve candidates G1 to Gp.
- FIG. 8 is an explanatory diagram (second embodiment) for explaining the two-dimensional LUT 41.
- FIG. 9 is an explanatory diagram (second embodiment) for explaining the operation of the gradation correction section 15.
- FIG. 10 is a flowchart (second embodiment) for explaining the visual processing method.
- FIG. 11 is an explanatory diagram (second embodiment) for describing a modification of the selection of the gradation conversion curve Cm.
- FIG. 12 is an explanatory diagram (second embodiment) for describing a gradation process as a modification.
- FIG. 13 is a block diagram (second embodiment) for explaining the structure of the gradation processing execution section 44.
- FIG. 14 is an explanatory diagram (second embodiment) illustrating the relationship between the curve parameters P1 and P2 and the gradation conversion curve candidates G1 to Gp.
- FIG. 15 is an explanatory diagram (second embodiment) for explaining the relationship between the curve parameters P1 and P2 and the selection signal Sm.
- FIG. 16 is an explanatory diagram (second embodiment) for explaining the relationship between the curve parameters P1 and P2 and the selection signal Sm.
- FIG. 17 is an explanatory diagram (second embodiment) for explaining the relationship between the curve parameters P1 and P2 and the gradation conversion curve candidates G1 to Gp.
- FIG. 18 is an explanatory diagram (second embodiment) for explaining the relationship between the curve parameters P1 and P2 and the selection signal Sm.
- FIG. 19 is a block diagram (third embodiment) illustrating the structure of the visual processing device 21.
- FIG. 20 is an explanatory diagram (third embodiment) for explaining the operation of the selection signal correction unit 24.
- FIG. 21 is a flowchart (third embodiment) for explaining a visual processing method.
- FIG. 22 is a block diagram (fourth embodiment) illustrating the structure of the visual processing device 61.
- FIG. 23 is an explanatory diagram (fourth embodiment) illustrating the spatial processing of the spatial processing unit 62.
- FIG. 24 is a table (fourth embodiment) for explaining the weight coefficient [W ij].
- FIG. 25 is an explanatory diagram (fourth embodiment) illustrating the effect of visual processing by the visual processing device 61.
- FIG. 26 is a block diagram (fourth embodiment) illustrating the structure of the visual processing device 961.
- FIG. 27 is an explanatory diagram (fourth embodiment) for describing the spatial processing of the spatial processing unit 962.
- FIG. 28 is a table (fourth embodiment) for explaining the weight coefficient [W ij].
- FIG. 29 is a block diagram (sixth embodiment) explaining the overall configuration of the content supply system.
- FIG. 30 is an example (sixth embodiment) of a mobile phone equipped with the visual processing device of the present invention.
- FIG. 31 is a block diagram (sixth embodiment) illustrating the configuration of a mobile phone.
- FIG. 32 shows an example of a digital broadcasting system (sixth embodiment).
- FIG. 33 is a block diagram (background art) for explaining the structure of the visual processing device 300.
- FIG. 34 is an explanatory diagram (background art) for explaining the image area Sm.
- FIG. 35 is an explanatory diagram (background art) for explaining the brightness histogram Hm.
- FIG. 36 is an explanatory diagram (background art) for explaining the gradation conversion curve C m.
- the visual processing device 1 is a device that performs a gradation process on an image by being built in or connected to a device that handles images, such as a computer, a television, a digital camera, a mobile phone, and a PDA.
- the visual processing device 1 is characterized in that gradation processing is performed on each of image regions divided more finely than in the related art.
- FIG. 1 is a block diagram illustrating the structure of the visual processing device 1.
- the visual processing device 1 includes: an image dividing unit 2 that divides an original image input as an input signal IS into a plurality of image areas Pm (1 ≤ m ≤ n, where n is the number of divisions of the original image); a gradation conversion curve deriving unit 10 that derives a gradation conversion curve Cm for each image area Pm; and a gradation processing unit 5 that loads the gradation conversion curve Cm and outputs an output signal OS in which each image area Pm has been subjected to gradation processing.
- the gradation conversion curve deriving unit 10 includes: a histogram creating unit 3 that creates a brightness histogram Hm of the pixels of a wide-area image area Em composed of each image area Pm and the image areas around the image area Pm; and a gradation curve creating unit 4 that creates a gradation conversion curve Cm for each image area Pm from the created brightness histogram Hm.
- the image dividing unit 2 divides the original image input as the input signal IS into a plurality (n) of image areas Pm (see FIG. 2).
- the number of divisions of the original image is greater than the number of divisions (for example, 4 to 16) of the conventional visual processing device 300 shown in FIG. 33; for example, 80 divisions in the horizontal direction and 60 divisions in the vertical direction, i.e., 4800 divisions in total.
- the histogram creating unit 3 creates a brightness histogram Hm of the wide image area Em for each image area Pm.
- the wide-area image area Em is a set of a plurality of image areas including each image area Pm; for example, a set of 25 image areas of 5 blocks in the vertical direction and 5 blocks in the horizontal direction centered on the image area Pm. Note that, depending on the position of the image area Pm, it may not be possible to take a wide-area image area Em of 5 blocks in the vertical direction and 5 blocks in the horizontal direction around the image area Pm. For example, for an image area P1 located at the periphery of the original image, a wide-area image area E1 of 5 blocks in the vertical direction and 5 blocks in the horizontal direction cannot be taken around the image area P1.
- the brightness histogram Hm created by the histogram creating unit 3 indicates the distribution of brightness values of all pixels in the wide-area image area Em. That is, in the brightness histograms Hm shown in FIGS. 3A to 3C, the horizontal axis represents the brightness level of the input signal IS, and the vertical axis represents the number of pixels.
- the gradation curve creating unit 4 accumulates the "number of pixels" in the brightness histogram Hm of the wide-area image area Em in order of brightness, and takes this cumulative curve as the gradation conversion curve Cm of the image area Pm (see FIG. 4).
- the horizontal axis represents the brightness value of the pixel in the image area Pm in the input signal IS
- the vertical axis represents the brightness value of the pixel in the image area Pm in the output signal OS.
- the gradation processing unit 5 loads the gradation conversion curve Cm and converts the brightness value of the pixel of the image area Pm in the input signal IS based on the gradation conversion curve Cm.
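the flow above (division into regions Pm, brightness histogram Hm over the wide-area region Em, cumulative curve Cm, per-region conversion) can be sketched as follows. This is a minimal sketch, not the patented implementation: the function name, the 8-bit brightness values, and the division counts are illustrative assumptions.

```python
import numpy as np

def blockwise_equalize(img, rows=60, cols=80, halo=2):
    """Hypothetical sketch of visual processing device 1: divide the image
    into rows*cols regions Pm, build a brightness histogram Hm over the
    wide-area region Em (the block plus `halo` blocks on each side, i.e.
    5x5 blocks when halo=2), take its cumulative sum as the gradation
    conversion curve Cm, and apply Cm only to the pixels of Pm.
    Assumes 8-bit brightness and dimensions divisible by rows/cols."""
    h, w = img.shape
    bh, bw = h // rows, w // cols
    out = np.empty_like(img)
    for r in range(rows):
        for c in range(cols):
            # wide-area region Em, clipped at the image border
            # (as described for the peripheral region P1)
            top, bottom = max(0, (r - halo) * bh), min(h, (r + halo + 1) * bh)
            left, right = max(0, (c - halo) * bw), min(w, (c + halo + 1) * bw)
            hist = np.bincount(img[top:bottom, left:right].ravel(),
                               minlength=256)                # histogram Hm
            cdf = np.cumsum(hist).astype(np.float64)
            curve = (cdf / cdf[-1] * 255).astype(img.dtype)  # curve Cm
            ys, xs = slice(r * bh, (r + 1) * bh), slice(c * bw, (c + 1) * bw)
            out[ys, xs] = curve[img[ys, xs]]                 # apply Cm to Pm
    return out
```

this is per-block histogram equalization with an enlarged sampling window; the wide window is what lets a small block still gather enough brightness samples.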
- FIG. 5 shows a flowchart illustrating a visual processing method in the visual processing device 1.
- the visual processing method shown in FIG. 5 is realized by hardware in the visual processing device 1 and is a method for performing gradation processing of the input signal IS (see FIG. 1).
- the input signal IS is processed in image units (steps S10 to S16).
- the original image input as the input signal IS is divided into a plurality of image areas Pm (1 ≤ m ≤ n, where n is the number of divisions of the original image) (step S11), and gradation processing is performed for each image area Pm (steps S12 to S15).
- a brightness histogram Hm of the pixels of the wide-area image area Em composed of each image area Pm and the image areas around the image area Pm is created (step S12). Further, a gradation conversion curve Cm for each image area Pm is created based on the brightness histogram Hm (step S13). Here, the description of the brightness histogram Hm and the gradation conversion curve Cm is omitted (see the <Action> section above). Using the created gradation conversion curve Cm, gradation processing is performed on the pixels in the image area Pm (step S14).
- it is then determined whether or not the processing for all the image areas Pm has been completed (step S15), and the processing of steps S12 to S15 is repeated until it is determined that the processing has been completed. This completes the processing in image units (step S16).
- Each step of the visual processing method shown in FIG. 5 may be realized as a visual processing program by a computer or the like.
- the gradation conversion curve Cm is created for each image area Pm. Therefore, it is possible to perform appropriate gradation processing as compared with the case where the same gradation conversion is performed on the entire original image.
- the gradation conversion curve Cm created for each image area Pm is created based on the brightness histogram Hm of the wide-area image area Em. Therefore, even if the size of each image area Pm is small, it is possible to sample a sufficient number of brightness values. As a result, it is possible to create an appropriate gradation conversion curve Cm even for a small image area Pm.
- each image area Pm is smaller than before. For this reason, it is possible to suppress the occurrence of pseudo contours in the image area Pm.
- the number of divisions of the original image was set to 4800 as an example.
- the effect of the present invention is not limited to this case, and similar effects can be obtained with other numbers of divisions. Note that there is a trade-off between the amount of gradation processing and the visual effect with respect to the number of divisions; that is, if the number of divisions is increased, the amount of gradation processing increases, but a better visual effect (for example, suppression of pseudo contours) is obtained.
- the number of image areas constituting the wide area image area is 25, but the effect of the present invention is not limited to this case, and the same effect can be obtained with other numbers. It is possible to obtain.
- the visual processing device 11 is a device that performs a gradation process on an image by being built in or connected to a device that handles images, such as a computer, a television, a digital camera, a mobile phone, and a PDA.
- the visual processing device 11 is characterized in that a plurality of gradation conversion curves stored in advance as a LUT are switched and used.
- FIG. 6 is a block diagram illustrating the structure of the visual processing device 11.
- the visual processing device 11 includes an image dividing unit 12, a selection signal deriving unit 13, and a gradation processing unit 20.
- the image division unit 12 receives the input signal IS as an input, and outputs an image area Pm (1 ⁇ m ⁇ n: n is the number of divisions of the original image) obtained by dividing the original image input as the input signal IS into a plurality of parts.
- the selection signal deriving unit 13 outputs a selection signal Sm for selecting a gradation conversion curve Cm applied to the gradation processing of each image area Pm.
- the gradation processing unit 20 includes a gradation processing execution unit 14 and a gradation correction unit 15.
- the gradation processing execution unit 14 includes a plurality of gradation conversion curve candidates G 1 to G p (p is the number of candidates) as a two-dimensional LUT, receives the input signal IS and the selection signal Sm as inputs, and A gradation processing signal CS obtained by performing gradation processing on the pixels in the image area Pm is output.
- the gradation correction section 15 receives the gradation processing signal CS as an input, and outputs an output signal OS in which the gradation of the gradation processing signal CS is corrected.
- the gradation conversion curve candidates G1 to Gp will be described with reference to FIG.
- the gradation conversion curve candidates G1 to Gp are curves that give the relationship between the brightness value of the pixel of the input signal IS and the brightness value of the pixel of the gradation processing signal CS.
- the horizontal axis represents the brightness value of the pixel in the input signal IS
- the vertical axis represents the brightness value of the pixel in the gradation processing signal CS.
- the gradation conversion curve candidates G1 to Gp are in a monotonically decreasing relationship with respect to the subscript; that is, the relationship G1 ≥ G2 ≥ … ≥ Gp is satisfied for the brightness values of all pixels of the input signal IS.
- the brightness value of the input signal IS is in the range [0.0 to 1.0].
- the gradation processing execution unit 14 holds the gradation conversion curve candidates G1 to Gp as a two-dimensional LUT. That is, the two-dimensional LUT is a lookup table that gives the brightness value of a pixel of the gradation processing signal CS for the brightness value of a pixel of the input signal IS and the selection signal Sm that selects one of the gradation conversion curve candidates G1 to Gp.
- FIG. 8 shows an example of this two-dimensional LUT.
- the two-dimensional LUT 41 shown in FIG. 8 is a matrix with 64 rows and 64 columns, in which the gradation conversion curve candidates G1 to G64 are arranged in the row direction (horizontal direction).
- in the column direction (vertical direction) of the matrix, the pixel values of the gradation processing signal CS are arranged for the value of the upper 6 bits of the pixel value of the input signal IS expressed in 10 bits, that is, for the value of the input signal IS divided into 64 levels.
- the pixel value of the gradation processing signal CS has, for example, a value in the range [0.0 to 1.0] when the gradation conversion curve candidates G1 to Gp are "power functions".
- the image dividing unit 12 operates in substantially the same manner as the image dividing unit 2 in FIG. 1, and divides the original image input as the input signal IS into a plurality (n) of image areas Pm (see FIG. 2).
- the number of divisions of the original image is larger than the number of divisions (for example, 4 to 16) of the conventional visual processing device 300 shown in FIG. 33; for example, 80 divisions in the horizontal direction and 60 divisions in the vertical direction, i.e., 4800 divisions in total.
- the selection signal deriving unit 13 selects the gradation conversion curve Cm applied to each image area Pm from the gradation conversion curve candidates G1 to Gp. Specifically, the selection signal deriving unit 13 calculates the average brightness value of the wide-area image area Em of the image area Pm, and selects one of the gradation conversion curve candidates G1 to Gp according to the calculated average brightness value. That is, the gradation conversion curve candidates G1 to Gp are associated with the average brightness value of the wide-area image area Em, and a gradation conversion curve candidate with a larger subscript is selected as the average brightness value increases.
- the wide-area image area Em is the same as that described in the first embodiment with reference to FIG. 2. That is, the wide-area image area Em is a set of a plurality of image areas including each image area Pm; for example, a set of 25 image areas of 5 blocks in the vertical direction and 5 blocks in the horizontal direction centered on the image area Pm. Note that, depending on the position of the image area Pm, it may not be possible to take a wide-area image area Em of 5 blocks in the vertical direction and 5 blocks in the horizontal direction around the image area Pm. For example, for an image area P1 located at the periphery of the original image, a wide-area image area E1 of 5 blocks in the vertical direction and 5 blocks in the horizontal direction cannot be taken around the image area P1. In this case, the area where the 5-by-5 block region around the image area P1 overlaps the original image is adopted as the wide-area image area E1.
- the selection result of the selection signal deriving unit 13 is output as a selection signal Sm indicating one of the gradation conversion curve candidates G1 to Gp. More specifically, the selection signal Sm is output as the value of the subscript (1 to p) of the gradation conversion curve candidates G1 to Gp.
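the derivation of the selection signal can be sketched as below. The uniform mapping from average brightness (assumed to lie in [0.0, 1.0]) to the subscript is our assumption; the text only states that a larger average selects a larger subscript.

```python
import numpy as np

def select_signal(wide_region_em, p=64):
    """Sketch of the selection signal deriving unit 13: return Sm, the
    subscript (1..p) of the candidate chosen for this region, so that a
    brighter wide-area region Em selects a larger subscript."""
    avg = float(np.mean(wide_region_em))   # average brightness of Em
    return min(p, int(avg * p) + 1)        # selection signal Sm in 1..p
```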
- the gradation processing execution unit 14 receives the brightness values of the pixels in the image area Pm included in the input signal IS and the selection signal Sm as inputs, and, for example, uses the two-dimensional LUT 41 shown in FIG. Outputs the brightness value of CS.
- the gradation correction unit 15 corrects the brightness value of each pixel in the image area Pm included in the gradation processing signal CS, based on the pixel position and the gradation conversion curves selected for the image area Pm and the image areas around the image area Pm. For example, the gradation conversion curve Cm applied to the pixels included in the image area Pm and the gradation conversion curves selected for the image areas around the image area Pm are interpolated according to the internal division ratio of the pixel position, and the corrected brightness value of the pixel is obtained. The operation of the gradation correction unit 15 will be described in more detail with reference to FIG. 9.
- FIG. 9 shows that the gradation conversion curves Co, Cp, Cq, and Cr of the image areas Po, Pp, Pq, and Pr (o, p, q, and r are positive integers equal to or less than the number of divisions n; see FIG. 2) have been selected as the gradation conversion curve candidates Gs, Gt, Gu, and Gv (s, t, u, and v are positive integers equal to or less than the number of gradation conversion curve candidates p), respectively.
- the position of the pixel x (with brightness value [x]) of the image area Po to be subjected to the gradation correction internally divides the segment between the center of the image area Po and the center of the image area Pp at the ratio [i : 1-i], and the segment between the center of the image area Po and the center of the image area Pq at the ratio [j : 1-j].
- [Gs], [Gt], [Gu], and [Gv] denote the brightness values obtained when the gradation conversion curve candidates Gs, Gt, Gu, and Gv are applied to the brightness value [x]. The corrected brightness value is then obtained as (1-j)·{(1-i)·[Gs] + i·[Gt]} + j·{(1-i)·[Gu] + i·[Gv]}.
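this internal-division correction amounts to ordinary bilinear blending of the four candidate outputs; a sketch (the function name is ours):

```python
def corrected_brightness(gs, gt, gu, gv, i, j):
    """Blend the candidate outputs [Gs], [Gt], [Gu], [Gv] by the internal
    division ratios [i : 1-i] (toward Pp) and [j : 1-j] (toward Pq) of the
    pixel position x within the image area Po."""
    return (1 - j) * ((1 - i) * gs + i * gt) + j * ((1 - i) * gu + i * gv)
```

at i = j = 0 the pixel sits at the center of Po and only [Gs] contributes; as the pixel approaches a neighbouring region's center, that region's curve takes over smoothly, which is what suppresses visible block boundaries.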
- FIG. 10 is a flowchart illustrating a visual processing method in the visual processing device 11.
- the visual processing method shown in FIG. 10 is a method that is realized by hardware in the visual processing device 11 and performs gradation processing of the input signal IS (see FIG. 6).
- the input signal IS is processed in image units (steps S20 to S26).
- the original image input as the input signal IS is divided into a plurality of image areas Pm (1 ≤ m ≤ n, where n is the number of divisions of the original image) (step S21), and gradation processing is performed for each image area Pm (steps S22 to S24).
- the gradation conversion curve Cm applied to each image area Pm is selected from the gradation conversion curve candidates G1 to Gp (step S22). Specifically, the average brightness value of the wide-area image area Em of the image area Pm is calculated, and one of the gradation conversion curve candidates G1 to Gp is selected according to the calculated average brightness value.
- the gradation conversion curve candidates G1 to Gp are associated with the average brightness value of the wide-area image area Em, and a gradation conversion curve candidate with a larger subscript is selected as the average brightness value increases. Here, the description of the wide-area image area Em is omitted (see the <Action> section above).
- the brightness value of the gradation processing signal CS is output using the two-dimensional LUT 41 shown in FIG. 8 (step S23). Further, it is determined whether or not the processing has been completed for all the image areas Pm (step S24), and the processing of steps S22 to S24 is repeated until it is determined that the processing has been completed. This completes the processing for each image area.
- the brightness value of each pixel in the image area Pm included in the gradation processing signal CS is corrected based on the pixel position and the gradation conversion curves selected for the image area Pm and the image areas around the image area Pm (step S25). For example, the gradation conversion curve Cm applied to the pixels included in the image area Pm and the gradation conversion curves selected for the image areas around the image area Pm are interpolated according to the internal division ratio of the pixel position, and the corrected brightness value of the pixel is obtained. The details of the correction are omitted here (see the <Action> section above and FIG. 9).
- Each step of the visual processing method shown in FIG. 10 may be realized as a visual processing program by a computer or the like.
- the gradation conversion curve Cm selected for each image area Pm is created based on the average brightness value of the wide area image area Em. For this reason, even if the size of the image area Pm is small, it is possible to sample sufficient brightness values. As a result, an appropriate gradation conversion curve Cm can be selected and applied to a small image area Pm.
- the gradation processing execution unit 14 has a two-dimensional LUT created in advance. Therefore, it is possible to reduce the processing load required for the gradation processing, more specifically, the processing load required for creating the gradation conversion curve Cm. As a result, it is possible to speed up the processing required for the gradation processing of the image area Pm.
- the gradation processing execution unit 14 executes gradation processing using a two-dimensional LUT.
- the two-dimensional LUT is read from a storage device, such as a hard disk or a ROM included in the visual processing device 11, and used for the gradation processing.
- by changing the contents of the two-dimensional LUT to be read, various kinds of gradation processing can be realized without changing the hardware configuration. That is, it is possible to realize gradation processing more suitable for the characteristics of the original image.
- the gradation correction unit 15 corrects the gradation of the pixels in the image area Pm that has been subjected to the gradation processing using one gradation conversion curve Cm. Therefore, it is possible to obtain an output signal OS that has been subjected to more appropriate gradation processing. For example, it is possible to suppress the occurrence of a false contour. Further, in the output signal OS, it is possible to further prevent the joints at the boundaries of the respective image areas P m from being unnaturally conspicuous.
- the number of divisions of the original image was set to 4800 as an example.
- the effect of the present invention is not limited to this case, and similar effects can be obtained with other numbers of divisions.
- the number of image areas constituting the wide area image area is 25, but the effect of the present invention is not limited to this case, and the same effect can be obtained with other numbers. It is possible to obtain.
- the two-dimensional LUT 41 composed of a matrix of 64 rows and 64 columns is an example of the two-dimensional LUT.
- the effect of the present invention is not limited to a two-dimensional LUT of this size.
- a matrix in which more tone conversion curve candidates are arranged in the row direction may be used.
- the pixel values of the gradation processing signal CS for the values obtained by dividing the pixel values of the input signal IS into smaller steps may be arranged in the column direction of the matrix.
- a pixel value of the gradation processing signal CS may be arranged for each pixel value of the input signal IS represented by 10 bits.
- as the size of the two-dimensional LUT increases, more appropriate gradation processing can be performed; if the size is reduced, the memory for storing the two-dimensional LUT can be reduced.
- the gradation processing signal CS may be output as a matrix component linearly interpolated by the gradation processing execution unit 14 using the lower 4 bits of the pixel value of the input signal IS.
- that is, the components of the matrix are arranged for the value of the upper 6 bits of the pixel value of the input signal IS expressed in 10 bits, and the matrix component for the value of the upper 6 bits and the matrix component for that value plus [1] (for example, the component one row below in FIG. 8) are linearly interpolated using the lower 4 bits of the pixel value of the input signal IS and output as the gradation processing signal CS.
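assuming one column of the matrix (one curve candidate) is held as a 64-entry array, this interpolation can be sketched as follows; the clamping at the last row is our assumption, since the text does not say what happens there.

```python
def lut_interp(lut_column, pixel10):
    """Linearly interpolate a 64-entry LUT column using the lower 4 bits of
    a 10-bit pixel value: the upper 6 bits pick the row, and the row below
    (clamped at the last row) supplies the other interpolation endpoint."""
    hi = pixel10 >> 4                        # upper 6 bits: row index
    lo = pixel10 & 0xF                       # lower 4 bits: blend weight 0..15
    nxt = min(hi + 1, len(lut_column) - 1)   # "one row below", clamped
    return lut_column[hi] + (lut_column[nxt] - lut_column[hi]) * lo / 16.0
```

this recovers 10-bit output resolution from a 64-row table, trading a multiply-add per pixel for a 16x smaller LUT.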
- the gradation conversion curve Cm to be applied to the image area Pm is selected based on the average brightness value of the wide area image area Em.
- the method of selecting the gradation conversion curve Cm is not limited to this method.
- the gradation conversion curve Cm to be applied to the image area Pm may be selected based on the maximum lightness value or the minimum lightness value of the wide area image area Em.
- the value [Sm] of the selection signal Sm may be the average brightness value, the maximum brightness value, or the minimum brightness value of the wide area image area Em.
- the gradation conversion curve candidates G1 to G64 are associated with the values obtained by dividing the possible values of the selection signal Sm into 64 steps.
- the gradation conversion curve Cm applied to the image area Pm may be selected as follows. That is, an average brightness value is obtained for each image area Pm, and a provisional selection signal Sm' for each image area Pm is obtained from each average brightness value.
- the provisional selection signal Sm' takes the value of a subscript of the gradation conversion curve candidates G1 to Gp.
- the values of the provisional selection signals Sm' of the image areas constituting the wide-area image area Em are averaged to obtain the value [Sm] of the selection signal Sm of the image area Pm, and the candidate among G1 to Gp whose subscript is the integer closest to the value [Sm] is selected as the gradation conversion curve Cm.
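a sketch of this modification follows; the 5-by-5 averaging window, the uniform brightness-to-subscript mapping, and nearest-integer rounding are assumptions for illustration.

```python
import numpy as np

def select_by_provisional(block_avgs, center, halo=2, p=64):
    """Derive a provisional selection signal Sm' (a candidate subscript)
    for each region from its own average brightness, average the Sm' over
    the wide-area window around `center`, and return the subscript of the
    candidate nearest the averaged value [Sm]."""
    prov = np.clip((block_avgs * p).astype(int) + 1, 1, p)   # Sm' per region
    r, c = center
    window = prov[max(0, r - halo):r + halo + 1,
                  max(0, c - halo):c + halo + 1]
    return int(round(window.mean()))                         # subscript of Cm
```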
- the gradation conversion curve Cm to be applied to the image area Pm is selected based on the average brightness value of the wide area image area Em.
- the gradation conversion curve Cm applied to the image area Pm may be selected based on a weighted average of the wide-area image area Em instead of its simple average.
- more specifically, the average brightness value of each image area constituting the wide-area image area Em is obtained, and image areas whose average brightness values differ greatly from the average brightness value of the image area Pm are given a reduced weight, or excluded, when the average brightness value of the wide-area image area Em is calculated.
- in this way, the influence of the brightness value of a peculiar region within the wide-area image area Em on the selection of the gradation conversion curve Cm is reduced, and more appropriate gradation processing is performed.
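a sketch of this weighted average follows; the deviation threshold `tol` and the all-or-nothing weights are assumptions, since the text only says that deviating regions receive a smaller weight or are excluded.

```python
import numpy as np

def robust_wide_average(block_avgs, center, halo=2, tol=0.25):
    """Average brightness of the wide-area region Em in which regions whose
    average differs from the target region's by more than `tol` get zero
    weight, so an isolated bright or dark patch barely shifts the result."""
    r, c = center
    window = block_avgs[max(0, r - halo):r + halo + 1,
                        max(0, c - halo):c + halo + 1]
    weights = (np.abs(window - block_avgs[r, c]) <= tol).astype(float)
    # the center region always has deviation 0, so weights.sum() >= 1
    return float((window * weights).sum() / weights.sum())
```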
- the gradation correction unit 15 may be omitted. That is, even when the gradation processing signal CS is output as is, the same effects as those described in <Effect> of [First Embodiment] and in (1) and (2) of <Effect> of [Second Embodiment] can be obtained in comparison with the conventional visual processing device 300 (see FIG. 33).
- the gradation conversion curve candidates G1 to Gp were described as being in a monotonically decreasing relationship with respect to the subscript, satisfying G1 ≥ G2 ≥ … ≥ Gp for the brightness values of the pixels of all the input signals IS.
- however, the gradation conversion curve candidates G1 to Gp included in the two-dimensional LUT need not satisfy the relationship G1 ≥ G2 ≥ … ≥ Gp for some of the brightness values of the pixels of the input signal IS. That is, some of the gradation conversion curve candidates G1 to Gp may intersect each other.
- for example, a pixel whose input signal IS value is large while the average brightness value of its wide-area image area Em is small rarely occurs in an actual image, so the value after gradation processing for such a combination has little effect on image quality.
- for this reason, the gradation conversion curve candidates G1 to Gp included in the two-dimensional LUT need not satisfy G1 ≥ G2 ≥ … ≥ Gp for some of the brightness values of the pixels of the input signal IS.
- the value stored in the two-dimensional LUT may be arbitrary where the value after gradation processing has little effect on image quality.
- however, it is desirable that the values stored for the same value of the selection signal Sm maintain a monotonically increasing or monotonically decreasing relationship with respect to the value of the input signal IS.
- the gradation conversion curve candidates G1 to Gp included in the two-dimensional LUT are described as “power functions”.
- the gradation conversion curve candidates G 1 to Gp do not have to be strictly formulated as “power functions”.
- the function may have a shape such as an S-shape or an inverted S-shape.
- the visual processing device 11 may further include a profile data creation unit that creates profile data that is a value stored in the two-dimensional LUT.
- the profile data creation unit includes, for example, the image dividing unit 2 and the gradation conversion curve deriving unit 10 of the visual processing device 1 (see FIG. 1), and stores the gradation conversion curves it creates in the two-dimensional LUT as profile data.
- each of the gradation conversion curves stored in the two-dimensional LUT may be associated with the spatially processed input signal IS.
- the image dividing unit 12 and the selection signal deriving unit 13 may be replaced with a spatial processing unit that spatially processes the input signal IS.
- the brightness value of the pixel of the input signal IS does not have to be a value in the range of [0.0 to 1.0].
- the value in that range may be normalized to a value [0.0 to 1.0].
- each of the gradation conversion curve candidates G1 to Gp may be a gradation conversion curve that performs gradation processing on an input signal IS having a dynamic range wider than the normal dynamic range and outputs a gradation processing signal CS having the normal dynamic range.
- that is, when the input signal IS has a dynamic range wider than the normal dynamic range (for example, wider than the range [0.0 to 1.0]), a gradation conversion curve that outputs a gradation processing signal CS with values in the range [0.0 to 1.0] is used.
- when the gradation conversion curve candidates G1 to Gp are “power functions”, the pixel value of the gradation processing signal CS takes values in the range [0.0 to 1.0].
- the pixel value of the gradation processing signal CS is not limited to this range.
- the gradation conversion curve candidates G "! To Gp for the input signal I S having the value [0.0 to 1.0] may perform dynamic range compression.
- in the above description, the gradation processing execution unit 14 has been described as having the gradation conversion curve candidates G1 to Gp as a two-dimensional LUT.
- FIG. 13 is a block diagram illustrating the structure of a gradation processing execution unit 44 as a modification of the gradation processing execution unit 14.
- the gradation processing execution unit 44 receives the input signal IS and the selection signal Sm as inputs, and outputs a gradation processing signal CS, which is the input signal IS subjected to gradation processing.
- the gradation processing execution unit 44 includes a curve parameter output unit 45 and a calculation unit 48.
- the curve parameter output unit 45 includes a first LUT 46 and a second LUT 47.
- the first LUT 46 and the second LUT 47 each receive the selection signal Sm as input, and respectively output the curve parameters P1 and P2 of the gradation conversion curve candidate Gm specified by the selection signal Sm.
- the operation unit 48 receives the curve parameters P1 and P2 and the input signal IS as inputs, and outputs the gradation processing signal CS.
- the first LUT 46 and the second LUT 47 are one-dimensional LUTs that store the values of the curve parameters P1 and P2 for the selection signal Sm, respectively.
- FIG. 14 shows gradation conversion curve candidates G 1 to Gp.
- the gradation conversion curve candidates G1 to Gp have a monotonically decreasing relationship with respect to their subscripts, satisfying G1 ≥ G2 ≥ … ≥ Gp for the brightness values of all pixels of the input signal IS.
- the above relationship between the gradation conversion curve candidates G1 to Gp need not hold in every case; for example, it need not hold for a candidate with a large subscript when the input signal IS is small, or for a candidate with a small subscript when the input signal IS is large.
- the curve parameters P1 and P2 are output as the values of the gradation processing signal CS for predetermined values of the input signal IS, on the gradation conversion curve candidate Gm selected by the selection signal Sm.
- the value of the curve parameter P1 is output as the value [R1m] of the gradation conversion curve candidate Gm for the predetermined value [X1] of the input signal IS, and the value of the curve parameter P2 is output as the value [R2m] of the gradation conversion curve candidate Gm for the predetermined value [X2] of the input signal IS.
- the value [X2] is larger than the value [X1].
- the first LUT 46 and the second LUT 47 store the values of the curve parameters P1 and P2 for the selection signal Sm, respectively. More specifically, for example, for each selection signal Sm given as a 6-bit signal, the values of the curve parameters P1 and P2 are given in 6 bits, respectively.
- the number of bits secured for the selection signal Sm and the curve parameters P1 and P2 is not limited to this.
- FIG. 15 shows the change in the values of the curve parameters P1 and P2 with respect to the selection signal Sm.
- the first LUT 46 and the second LUT 47 store the values of the curve parameters P1 and P2 for the respective selection signals Sm.
- the value [R1m] is stored as the value of the curve parameter P1 for the selection signal Sm
- the value [R2m] is stored as the value of the curve parameter P2.
- in this way, the curve parameters P1 and P2 are output from the first LUT 46 and the second LUT 47 in response to the input selection signal Sm.
- the arithmetic unit 48 derives the gradation processing signal CS for the input signal IS based on the acquired curve parameters P1 and P2 (the values [R1m] and [R2m]). The specific procedure is described below. Here, it is assumed that the value of the input signal IS is given in the range [0.0 to 1.0].
- it is also assumed that the gradation conversion curve candidates G1 to Gp convert the input signal IS given in the range [0.0 to 1.0] into the range [0.0 to 1.0]. The present invention is also applicable when the input signal IS is not limited to this range.
- the arithmetic unit 48 compares the value of the input signal IS with the predetermined values [X1] and [X2].
- the operation unit 48 derives the gradation processing signal CS for the input signal IS.
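The comparison-then-derivation step can be sketched as follows, under the assumption (not stated verbatim above) that the candidate Gm is approximated by straight segments through (0, 0), ([X1], [R1m]), ([X2], [R2m]) and (1.0, 1.0):

```python
def derive_cs(is_val, x1, x2, r1m, r2m):
    """Sketch of the operation unit 48 for the value-parameter variant.

    Assumes a piecewise-linear candidate through (0, 0), (x1, r1m),
    (x2, r2m) and (1.0, 1.0); this segment layout is an assumption.
    """
    if is_val < x1:
        return is_val * r1m / x1                            # (0,0)-(x1,r1m)
    if is_val < x2:
        return r1m + (r2m - r1m) * (is_val - x1) / (x2 - x1)  # (x1,r1m)-(x2,r2m)
    return r2m + (1.0 - r2m) * (is_val - x2) / (1.0 - x2)     # (x2,r2m)-(1,1)
```

Because only ([X1], [R1m]) and ([X2], [R2m]) vary with the selection signal Sm, two one-dimensional LUTs suffice to describe each candidate under this approximation.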
- the above-described processing may be executed by a computer or the like as a gradation processing program.
- the gradation processing program is a program for causing a computer to execute the gradation processing method described below.
- the gradation processing method is a method of acquiring the input signal IS and the selection signal Sm and outputting the gradation processing signal CS, and is characterized in that the input signal IS is gradation-processed using one-dimensional LUTs.
- the first LUT 46 and the second LUT 47 output the curve parameters P 1 and P 2.
- Detailed descriptions of the first LUT 46, the second LUT 47, and the curve parameters P1 and P2 are omitted.
- gradation processing of the input signal IS is performed based on the curve parameters P1 and P2. The detailed contents of the gradation processing have been described in the description of the calculation unit 48, and thus the description thereof is omitted.
- the gradation processing signal CS with respect to the input signal IS is derived.
- the gradation processing execution unit 44 as a modification of the gradation processing execution unit 14 has two one-dimensional LUTs instead of two-dimensional LUTs. For this reason, the storage capacity for storing the lookup table can be reduced.
- in the above description, the values of the curve parameters P1 and P2 are the values of the gradation conversion curve candidate Gm for predetermined values of the input signal IS.
- the curve parameters P 1 and P 2 may be other curve parameters of the tone conversion curve candidate Gm.
- the curve parameter may be the gradient of the gradation conversion curve candidate Gm. This will be specifically described with reference to FIG.
- the value of the curve parameter P1 is the value [K1m] of the slope of the gradation conversion curve candidate Gm in the predetermined range [0.0 to X1] of the input signal IS.
- the value of the curve parameter P2 is the value [K2m] of the slope of the gradation conversion curve candidate Gm in the predetermined range [X1 to X2] of the input signal IS.
- FIG. 16 shows the change in the values of the curve parameters P1 and P2 with respect to the selection signal Sm.
- the values of the curve parameters P1 and P2 for each selection signal Sm are stored in the first LUT 46 and the second LUT 47.
- the value [K1m] is stored as the value of the curve parameter P1 for the selection signal Sm
- the value [K2m] is stored as the value of the curve parameter P2.
- the curve parameters P1 and P2 are output from the first LUT 46 and the second LUT 47.
- the arithmetic unit 48 derives the gradation processing signal CS for the input signal IS based on the acquired curve parameters P1 and P2. The specific procedure is described below. First, the arithmetic unit 48 compares the value of the input signal IS with the predetermined values [X1] and [X2].
- when the value of the input signal IS is not less than [X1] and less than [X2], the gradation processing signal CS is derived using the straight line connecting the coordinates ([X1], [Y1]) (where [Y1] = [K1m] * [X1]) and the coordinates ([X2], [Y2]) (where [Y2] = [K1m] * [X1] + [K2m] * ([X2] - [X1])).
- the operation unit 48 derives the gradation processing signal CS for the input signal IS.
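A sketch of the same derivation for the slope-parameter variant, assuming (as above) that the curve runs from (0, 0) with slope [K1m] up to [X1], continues with slope [K2m] up to [X2], and closes with a straight segment to (1.0, 1.0); the closing segment is an assumption:

```python
def derive_cs_slopes(is_val, x1, x2, k1m, k2m):
    """Slope-parameter sketch of the operation unit 48 (assumed form)."""
    y1 = k1m * x1                        # value reached at x1
    y2 = y1 + k2m * (x2 - x1)            # value reached at x2
    if is_val < x1:
        return k1m * is_val              # slope K1m on [0, x1)
    if is_val < x2:
        return y1 + k2m * (is_val - x1)  # slope K2m on [x1, x2)
    return y2 + (1.0 - y2) * (is_val - x2) / (1.0 - x2)  # close to (1,1)
```

With [X1] = 0.25, [X2] = 0.75, [K1m] = 2.0 and [K2m] = 0.8, the three branches reproduce the line segment through ([X1], [Y1]) and ([X2], [Y2]) described above.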
- the curve parameter may be a coordinate on the gradation conversion curve candidate Gm. This will be described specifically with reference to FIG.
- the value of the curve parameter P1 is the value [Mm] of one coordinate on the gradation conversion curve candidate Gm, and the value of the curve parameter P2 is the value [Nm] of another coordinate on the gradation conversion curve candidate Gm.
- the gradation conversion curve candidates G 1 to G p are all curves passing through the coordinates (X 1, Y 1).
- FIG. 18 shows the change in the values of the curve parameters P1 and P2 with respect to the selection signal Sm.
- the values of the curve parameters P1 and P2 for each selection signal Sm are stored in the first LUT 46 and the second LUT 47.
- the value [Mm] is stored as the value of the curve parameter P1 for the selection signal Sm
- the value [Nm] is stored as the value of the curve parameter P2.
- the curve parameters P1 and P2 are output from the first LUT 46 and the second LUT 47.
- the arithmetic unit 48 derives the gradation processing signal CS from the input signal IS by the same processing as in the modification described with reference to FIG. Detailed explanation is omitted.
- curve parameters P 1 and P 2 may be other curve parameters of the tone conversion curve candidate Gm.
- the number of curve parameters is not limited to the above. There may be fewer or more.
- in the description of the calculation unit 48, the calculation in the case where the gradation conversion curve candidates G1 to Gp are curves composed of straight line segments has been described.
- when the coordinates on the gradation conversion curve candidates G1 to Gp are given as curve parameters, a smooth curve passing through the given coordinates may be created (curve fitting), and the gradation conversion processing may be performed using the created curve.
- the curve parameter output unit 45 includes the first LUT 46 and the second LUT 47.
- the curve parameter output unit 45 may not include the LUT that stores the values of the curve parameters P1 and P2 with respect to the value of the selection signal Sm.
- in this case, the curve parameter output unit 45 calculates the values of the curve parameters P1 and P2. More specifically, the curve parameter output unit 45 stores parameters representing the graphs of the curve parameters P1 and P2 shown in FIGS. 15, 16, and 18, specifies the graphs of the curve parameters P1 and P2 from the stored parameters, and outputs the values of the curve parameters P1 and P2 for the selection signal Sm using those graphs.
- the parameters for specifying the graphs of the curve parameters P1 and P2 are, for example, coordinates on the graph, the slope of the graph, the curvature, and the like.
- for example, the curve parameter output unit 45 stores the coordinates of two points on each graph of the curve parameters P1 and P2 shown in FIG. 15, and uses the straight line connecting these two points as the graph of the curve parameters P1 and P2.
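A minimal sketch of this LUT-free variant: only two points per parameter graph are stored, and the connecting line is evaluated for any selection signal Sm. The endpoint values below are hypothetical.

```python
def make_param_graph(s_a, p_a, s_b, p_b):
    """Return P(sm): the straight line through the two stored points
    (s_a, p_a) and (s_b, p_b) of a curve-parameter graph."""
    slope = (p_b - p_a) / (s_b - s_a)
    return lambda sm: p_a + slope * (sm - s_a)

# Hypothetical endpoints for a P1 graph over selection signals 1..64:
p1_of = make_param_graph(1, 0.9, 64, 0.27)
```

Two stored coordinates per graph thus replace a full one-dimensional LUT over all values of Sm, which is the storage saving the text describes.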
- the visual processing device 21 is a device that performs gradation processing of an image by being built in or connected to a device that handles images, such as a computer, a television, a digital camera, a mobile phone, and a PDA.
- the visual processing device 21 is characterized in that a plurality of gradation conversion curves stored in advance as LUT are switched and used for each pixel to be subjected to gradation processing.
- FIG. 19 is a block diagram illustrating the structure of the visual processing device 21.
- the visual processing device 21 includes an image dividing unit 22, a selection signal deriving unit 23, and a gradation processing unit 30.
- the image division unit 22 receives the input signal IS as input, and outputs image areas Pm (1 ≤ m ≤ n: n is the number of divisions of the original image) obtained by dividing the original image input as the input signal IS into a plurality of areas.
- the selection signal deriving unit 23 outputs, for each image area Pm, a selection signal Sm for selecting a gradation conversion curve Cm.
- the gradation processing unit 30 includes a selection signal correction unit 24 and a gradation processing execution unit 25.
- the selection signal correction unit 24 receives the selection signal Sm as an input, and outputs a selection signal SS for each pixel which is a signal obtained by correcting the selection signal Sm for each image area Pm.
- the gradation processing execution unit 25 is provided with a plurality of gradation conversion curve candidates G1 to Gp (p is the number of candidates) as a two-dimensional LUT, and receives an input signal IS and a selection signal SS for each pixel as inputs. An output signal OS that has been subjected to gradation processing for the pixel is output.
- the gradation conversion curve candidates G1 to Gp are almost the same as those described in the second embodiment with reference to FIG. 7, and thus description thereof is omitted here.
- the gradation conversion curve candidates G "! To Gp are curves that give the relationship between the brightness value of the pixel of the input signal IS and the brightness value of the pixel of the output signal OS.
- the gradation processing execution unit 25 has gradation conversion curve candidates G 1 to Gp as a two-dimensional LUT.
- the two-dimensional LUT is a look-up table (LUT) that gives the brightness value of a pixel of the output signal OS for the brightness value of a pixel of the input signal IS and the selection signal SS for selecting a gradation conversion curve candidate from G1 to Gp. Since a specific example is almost the same as that described in the second embodiment with reference to FIG. 8, the description is omitted here.
- the pixel values of the output signal OS corresponding to the upper 6 bits of the pixel value of the input signal IS represented by, for example, 10 bits are arranged in the column direction of the matrix.
- the image dividing unit 22 operates in substantially the same manner as the image dividing unit 2 in FIG. 1, and divides the original image input as the input signal IS into a plurality (n) of image areas Pm (see FIG. 2).
- the number of divisions of the original image is larger than the number of divisions (for example, 4 to 16) of the conventional visual processing device 300 shown in FIG. 33; for example, 80 divisions in the horizontal direction and 60 divisions in the vertical direction, i.e., 4800 in total.
- the selection signal deriving unit 23 selects a gradation conversion curve Cm for each image area Pm from the gradation conversion curve candidates G1 to Gp. Specifically, the selection signal deriving unit 23 calculates the average brightness value of the wide-area image area Em of the image area Pm, and selects one of the gradation conversion curve candidates G1 to Gp according to the calculated average brightness value. That is, the gradation conversion curve candidates G1 to Gp are associated with the average brightness value of the wide-area image area Em, and as the average brightness value increases, a gradation conversion curve candidate with a larger subscript is selected.
- the wide-area image area Em is the same as that described in the first embodiment with reference to FIG. 2. That is, the wide-area image area Em is a set of a plurality of image areas including the image area Pm, for example, a set of 25 image areas of 5 blocks in the vertical direction and 5 blocks in the horizontal direction centered on the image area Pm. Note that, depending on the position of the image area Pm, it may not be possible to take a wide-area image area Em of 5 vertical by 5 horizontal blocks around it. For example, for an image area P1 located at the periphery of the original image, it is not possible to take a wide-area image area E1 of 5 vertical by 5 horizontal blocks around the image area P1. In this case, the region where the 5-by-5-block area around the image area P1 overlaps the original image is adopted as the wide-area image area E1.
- the selection result of the selection signal deriving unit 23 is output as a selection signal Sm indicating one of the gradation conversion curve candidates G 1 to Gp. More specifically, the selection signal Sm is output as the value of the subscript (1 to p) of the gradation conversion curve candidates G1 to Gp.
- the selection signal correction unit 24 performs correction using the selection signals Sm output for the respective image areas Pm, and outputs a per-pixel selection signal SS for selecting a gradation conversion curve for each pixel constituting the input signal IS.
- the selection signal SS for a pixel included in the image area Pm is obtained by correcting the values of the selection signals output for the image area Pm and the image areas around it with the internal division ratio of the pixel position.
- FIG. 20 shows a state in which the selection signals So, Sp, Sq, and Sr are output for the image areas Po, Pp, Pq, and Pr (where o, p, q, and r are positive integers not greater than the number of divisions n (see FIG. 2)).
- let the position of the pixel x to be subjected to gradation correction internally divide the segment between the center of the image area Po and the center of the image area Pp in the ratio [i : 1-i], and the segment between the center of the image area Po and the center of the image area Pq in the ratio [j : 1-j].
- [So], [Sp], [Sq], and [Sr] are the values of the selection signals So, Sp, Sq, and Sr.
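One consistent reading of this internal-division correction is plain bilinear interpolation of the four block selection signals; the weighting below follows that reading and is not a verbatim formula from the text.

```python
def per_pixel_ss(so, sp, sq, sr, i, j):
    """Per-pixel selection signal SS from the four surrounding block
    signals So, Sp, Sq, Sr, where the pixel divides Po->Pp horizontally
    in the ratio [i : 1-i] and Po->Pq vertically in [j : 1-j]."""
    top = (1.0 - i) * so + i * sp       # interpolate along the Po-Pp edge
    bottom = (1.0 - i) * sq + i * sr    # interpolate along the Pq-Pr edge
    return (1.0 - j) * top + j * bottom
```

At the center of Po (i = j = 0) this returns [So] unchanged, and the value varies smoothly toward the neighboring block signals, which is what suppresses visible block boundaries.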
- the gradation processing execution unit 25 receives the brightness value of the pixel included in the input signal IS and the selection signal SS as inputs, and outputs the brightness value of the output signal OS using, for example, the two-dimensional LUT 41 shown in FIG. 8.
- when the value [SS] of the selection signal SS is not equal to any of the subscripts (1 to p) of the gradation conversion curve candidates G1 to Gp included in the two-dimensional LUT 41, the gradation conversion curve candidate whose subscript is the integer closest to the value [SS] is used for the gradation processing of the input signal IS.
- FIG. 21 is a flowchart illustrating a visual processing method in the visual processing device 21.
- the visual processing method shown in FIG. 21 is a method that is realized by hardware in the visual processing device 21 and performs gradation processing of the input signal I S (see FIG. 19).
- the input signal IS is processed in image units (steps S30 to S37).
- the original image input as the input signal IS is divided into a plurality of image areas Pm (1 ≤ m ≤ n: n is the number of divisions of the original image) (step S31); a gradation conversion curve Cm is selected for each image area Pm (steps S32 to S33); and a gradation conversion curve is selected for each pixel of the original image and gradation processing is performed pixel by pixel (steps S34 to S36).
- a gradation conversion curve Cm is selected from the gradation conversion curve candidates G1 to Gp (step S32). Specifically, the average brightness value of the wide-area image region Em of the image region Pm is calculated, and one of the gradation conversion curve candidates G1 to Gp is selected according to the calculated average brightness value.
- the gradation conversion curve candidates G "! To Gp are associated with the average brightness value of the wide-area image area Em, and the average brightness value increases. , The gradation conversion curve candidates G1 to Gp having larger suffixes are selected.
- the description of the wide-area image area Em will be omitted (see the section ⁇ Action> above).
- the selection result is output as a selection signal Sm indicating one of the gradation conversion curve candidates G1 to Gp. More specifically, the selection signal Sm is output as the value of the subscript (1 to p) of the gradation conversion curve candidates G1 to Gp. Further, it is determined whether or not the processing for all image areas Pm has been completed (step S33), and the processing of steps S32 to S33 is repeated until it is determined to be completed. This completes the processing for each image area.
- by correction using the selection signals Sm output for the respective image areas Pm, the per-pixel selection signal SS for selecting a gradation conversion curve for each pixel constituting the input signal IS is output.
- the selection signal SS for the pixels included in the image area Pm is obtained by correcting the value of the selection signal output for the image area Pm and the image area around the image area Pm by the internal division ratio of the pixel position. .
- the description of the details of the correction is omitted (refer to the column of ⁇ Operation>, FIG. 20).
- the brightness value of the pixel included in the input signal IS and the selection signal SS are input, and the brightness value of the output signal OS is output using, for example, the two-dimensional LUT 41 shown in FIG. 8 (step S35). Further, it is determined whether or not the processing has been completed for all pixels (step S36), and the processing of steps S34 to S36 is repeated until it is determined to be completed. This completes the processing for each image.
- each step of the visual processing method shown in FIG. 21 may be realized as a visual processing program by a computer or the like.
- the gradation conversion curve Cm selected for each image area Pm is created based on the average brightness value of the wide-area image area Em. Therefore, even if the image area Pm is small, a sufficient number of brightness values can be sampled. As a result, an appropriate gradation conversion curve Cm is selected even for a small image area Pm.
- the selection signal correction unit 24 outputs a selection signal SS for each pixel by performing correction based on the selection signal Sm output for each image area.
- the pixels of the original image constituting the input signal IS are gradation-processed using the gradation conversion curve candidates G1 to Gp specified by the per-pixel selection signal SS, making it possible to obtain an appropriately processed output signal OS. For example, the occurrence of false contours can be suppressed, and it is further possible to prevent the joints of the boundaries of the image areas Pm from standing out unnaturally in the output signal OS.
- the gradation processing execution unit 25 has a two-dimensional LUT created in advance. Therefore, it is possible to reduce the processing load required for the gradation processing, and more specifically, to reduce the processing load required for creating the gradation conversion curve Cm. As a result, it is possible to speed up the gradation processing.
- the gradation processing execution unit 25 executes gradation processing using a two-dimensional LUT.
- the contents of the two-dimensional LUT are read from a storage device such as a hard disk or ROM provided in the visual processing device 21 and used for gradation processing.
- various gradation processing can be realized without changing the hardware configuration. That is, it is possible to realize gradation processing more suitable for the characteristics of the original image.
- the two-dimensional LUT 41 composed of a matrix of 64 rows and 64 columns is an example of a two-dimensional LUT.
- the effect of the present invention is not limited to a two-dimensional LUT of this size.
- a matrix in which more tone conversion curve candidates are arranged in the row direction may be used.
- the pixel value of the output signal OS corresponding to the value obtained by dividing the pixel value of the input signal IS into smaller steps may be arranged in the column direction of the matrix.
- a pixel value of the output signal OS may be arranged for each pixel value of the input signal IS represented by 10 bits.
- when the value [SS] of the selection signal SS is not equal to any of the subscripts (1 to p) of the gradation conversion curve candidates G1 to Gp included in the two-dimensional LUT 41, the pixel values of the input signal IS gradation-processed using both the gradation conversion curve candidate Gk (1 ≤ k ≤ p-1), whose subscript is the largest integer (k) not exceeding [SS], and the gradation conversion curve candidate Gk+1, whose subscript is the smallest integer (k+1) exceeding [SS], may be weighted-averaged (internally divided) using the fractional part of the value [SS], and the result output as the output signal OS.
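The weighted-average variant can be sketched as follows; `candidates` is a list whose element m-1 is the curve Gm, and the fractional part of [SS] internally divides the two bracketing results (the clamping of k at the ends of the range is an assumption).

```python
def blend_candidates(candidates, ss_val, is_val):
    """Gradation-process is_val with Gk and Gk+1 and internally divide
    the two results by the fractional part of ss_val, where k is the
    largest integer not exceeding ss_val (clamped to 1..p-1)."""
    p = len(candidates)
    k = max(1, min(int(ss_val), p - 1))
    frac = ss_val - k
    lo = candidates[k - 1](is_val)   # result of Gk
    hi = candidates[k](is_val)       # result of Gk+1
    return (1.0 - frac) * lo + frac * hi
```

Compared with rounding [SS] to the nearest subscript, this internal division changes the effective curve continuously with [SS], further reducing visible steps between neighboring image areas.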
- the pixel values of the output signal OS with respect to the values of the upper six bits of the pixel value of the input signal IS represented by, for example, 10 bits are arranged in the column direction of the matrix.
- the output signal OS may be output as a matrix component linearly interpolated by the gradation processing execution unit 25 using the lower 4 bits of the pixel value of the input signal IS. That is, matrix components are arranged in the column direction of the matrix with respect to the upper 6 bits of the pixel value of the input signal IS represented by, for example, 10 bits, and the matrix component for the upper 6-bit value and the matrix component for the value obtained by adding [1] to the upper 6-bit value (for example, the component one row below in FIG. 8) are linearly interpolated using the lower 4 bits of the pixel value of the input signal IS and output as the output signal OS.
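As a sketch, for a single selected column of the 64-row matrix, the upper 6 bits of a 10-bit pixel index a row and the lower 4 bits linearly interpolate toward the next row; the clamp at the last row is an assumption.

```python
def lut_interp_10bit(column, pixel):
    """column: 64 output values indexed by the upper 6 bits of a 10-bit
    input pixel; the lower 4 bits interpolate toward the next row."""
    hi = pixel >> 4              # upper 6 bits -> row index 0..63
    lo = pixel & 0x0F            # lower 4 bits -> interpolation weight
    a = column[hi]
    b = column[min(hi + 1, 63)]  # assumed clamp at the final row
    return a + (b - a) * lo / 16.0
```

This recovers near-10-bit output resolution from a 64-row table, which is why the matrix need not store an entry for every 10-bit input value.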
- the selection signal Sm for the image area Pm is output based on the average brightness value of the wide area image area Em.
- the method of outputting the selection signal Sm is not limited to this method.
- the selection signal Sm for the image area Pm may be output based on the maximum brightness value or the minimum brightness value of the wide area image area Em.
- the value [Sm] of the selection signal Sm may be the average brightness value, the maximum brightness value, or the minimum brightness value of the wide area image region Em.
- the selection signal Sm for the image area Pm may be output as follows. That is, an average brightness value is obtained for each image region Pm, and a tentative selection signal Sm 'for each image region Pm is obtained from each average brightness value.
- the provisional selection signal Sm' takes the value of a subscript of the gradation conversion curve candidates G1 to Gp. Then, for the image areas included in the wide-area image area Em, the values of the provisional selection signals Sm' are averaged to obtain the selection signal Sm of the image area Pm.
- the selection signal Sm for the image area Pm is output based on the average brightness value of the wide area image area Em.
- the selection signal Sm for the image area Pm may be output based on a weighted average instead of the simple average of the wide-area image area Em.
- the details are the same as those described with reference to FIG. 11 in [Second Embodiment] above.
- that is, the average brightness value of each image region constituting the wide-area image region Em is obtained, and for image regions such as Ps1 and Ps2 whose average brightness values differ significantly from that of the image region Pm, the weight is reduced when obtaining the average brightness value of the wide-area image region Em. This reduces the influence of the brightness values of such atypical regions on the output of the selection signal Sm, so that a more appropriate selection signal Sm is output.
- the visual processing device 21 may further include a profile data creation unit that creates profile data that is a value stored in the two-dimensional LUT.
- the profile data creation unit is composed of units equivalent to the image division unit 2 and the gradation conversion curve derivation unit 10 of the visual processing device 1 (see FIG. 1), and the gradation conversion curves it creates are stored in the two-dimensional LUT as profile data.
- each of the gradation conversion curves stored in the two-dimensional LUT may be associated with the spatially processed input signal IS.
- the image division unit 22, the selection signal derivation unit 23, and the selection signal correction unit 24 may be replaced with a spatial processing unit that spatially processes the input signal IS.
- a visual processing device 61 as a fourth embodiment of the present invention will be described with reference to FIGS.
- the visual processing device 61 shown in FIG. 22 is a device that performs visual processing such as spatial processing and gradation processing of an image signal.
- the visual processing device 61 constitutes an image processing device together with a device that performs color processing of an image signal in a device that handles images, such as a computer, a television, a digital camera, a mobile phone, a PDA, a printer, and a scanner.
- the visual processing device 61 is a device that performs visual processing using an image signal and a blur signal obtained by spatially processing the image signal (blur filter processing), and is characterized by its spatial processing.
- Japanese Patent Application Laid-Open No. 10-75395 discloses a technique in which a plurality of blur signals with different degrees of blur are generated, and the blur signals are combined or switched to output a suitable blur signal. This aims to change the filter size of the spatial processing and to suppress the influence of pixels with different densities.
- the visual processing device 61 as the fourth embodiment of the present invention aims to output an appropriate blur signal and to reduce the circuit scale or processing load in the device.
- FIG. 22 shows the basic configuration of a visual processing device 61 that performs visual processing on an image signal (input signal I S) and outputs a visual processing image (output signal O S).
- the visual processing device 61 includes a spatial processing unit 62 that spatially processes the brightness value of each pixel of the original image acquired as the input signal IS and outputs an unsharp signal US, and a visual processing unit 63 that performs visual processing of the original image using the input signal IS and the unsharp signal US and outputs the output signal OS.
- the spatial processing of the spatial processing unit 62 will be described with reference to FIG.
- the spatial processing unit 62 obtains, from the input signal IS, the pixel values of the target pixel 65 to be subjected to the spatial processing and the pixels in the peripheral area of the target pixel 65 (hereinafter, peripheral pixels 66).
- the peripheral pixels 66 are pixels located in the peripheral area of the target pixel 65, and are the pixels included in a peripheral area of 9 vertical by 9 horizontal pixels spread around the target pixel 65. Note that the size of the peripheral area is not limited to this case, and may be smaller or larger. The peripheral pixels 66 are divided into first peripheral pixels 67 and second peripheral pixels 68 according to their distance from the target pixel 65. In FIG. 23, the first peripheral pixels 67 are the pixels included in an area of 5 vertical by 5 horizontal pixels centered on the target pixel 65, and the second peripheral pixels 68 are the pixels located around the first peripheral pixels 67.
- the spatial processing unit 62 performs a filter operation on the target pixel 65.
- in the filter operation, the pixel values of the target pixel 65 and the peripheral pixels 66 are weighted and averaged, with weights determined by the difference between the pixel values of the target pixel 65 and each peripheral pixel 66 and by the distance between them.
- specifically, the weighted average F is calculated as F = (Σ Wij · Aij) / (Σ Wij). Here, [Wij] is the weight coefficient of the pixel located at the i-th row and j-th column among the target pixel 65 and the peripheral pixels 66, [Aij] is the pixel value of the pixel located at the i-th row and j-th column, and "Σ" means that the sum over the target pixel 65 and each of the peripheral pixels 66 is calculated.
- the weight coefficient [Wij] is a value determined based on the difference between the pixel values of the target pixel 65 and the peripheral pixel 66 and on the distance between them. More specifically, the larger the absolute value of the difference between the pixel values, the smaller the weight coefficient given. Also, the larger the distance, the smaller the weight coefficient given.
- for example, for the target pixel 65, the weight coefficient [Wij] is the value [1]. Among the first peripheral pixels 67, a pixel whose pixel value differs from that of the target pixel 65 by less than a predetermined threshold in absolute value is given the weight coefficient [1], while a pixel whose absolute difference is larger than the threshold is given the weight coefficient [1/2]. That is, even pixels included in the first peripheral pixels 67 are given different weight coefficients according to their pixel values. Similarly, among the second peripheral pixels 68, a pixel whose absolute difference from the pixel value of the target pixel 65 is smaller than the threshold is given the weight coefficient [1/2], while a pixel whose absolute difference is larger than the threshold is given the weight coefficient [1/4]. That is, even pixels included in the second peripheral pixels 68 are given different weight coefficients according to their pixel values. The second peripheral pixels 68, whose distance from the target pixel 65 is larger than that of the first peripheral pixels 67, are thus given smaller weight coefficients.
- the predetermined threshold is, for example, a value on the order of [20/256 to 60/256] with respect to the pixel value of the target pixel 65, which takes a value in the range [0.0 to 1.0].
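The spatial processing described above can be sketched in Python as follows. This is a minimal illustration, not the claimed implementation: it assumes a float grayscale array with values in [0.0, 1.0] and edge padding at the image borders, which the text does not specify.

```python
import numpy as np

def unsharp_signal(img, threshold=40.0 / 256.0):
    """Weighted average around each target pixel, following the example
    weights in the text: target pixel -> 1; first peripheral pixels
    (inner 5x5 area) -> 1, halved to 1/2 when the pixel-value difference
    exceeds the threshold; second peripheral pixels (rest of the 9x9
    area) -> 1/2, halved to 1/4 above the threshold."""
    h, w = img.shape
    padded = np.pad(img, 4, mode="edge")  # assumed border handling
    # distance-based base weights over the 9x9 neighborhood
    wdist = np.full((9, 9), 0.5)
    wdist[2:7, 2:7] = 1.0
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            block = padded[y:y + 9, x:x + 9]
            center = block[4, 4]  # the target pixel itself
            # halve the weight where the density difference is large
            wdiff = np.where(np.abs(block - center) <= threshold, 1.0, 0.5)
            wgt = wdist * wdiff
            out[y, x] = (wgt * block).sum() / wgt.sum()
    return out
```

Because the weights form a convex combination, the output always stays within the range of the input pixel values.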
- the visual processing unit 63 performs visual processing using the values of the input signal IS and the unsharp signal US for the same pixel. The visual processing performed here is processing such as contrast enhancement or dynamic range compression of the input signal IS.
- in contrast enhancement, a signal obtained by emphasizing the difference or ratio between the input signal IS and the unsharp signal US with an enhancement function is added to the input signal IS to sharpen the image. In dynamic range compression, the unsharp signal US is subtracted from the input signal IS.
- the processing in the visual processing unit 63 may be performed using a two-dimensional LUT that receives the input signal IS and the unsharp signal US as inputs and outputs the output signal OS.
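The two operations of the visual processing unit 63 can be sketched as below. This assumes signals normalized to [0.0, 1.0]; the `gain` and `amount` parameters and the simple scaling used as the enhancement function are illustrative assumptions, not taken from the text.

```python
import numpy as np

def enhance_contrast(IS, US, gain=0.5):
    """Add to IS a signal emphasizing the difference (IS - US) between
    the input signal and the unsharp signal; `gain` plays the role of an
    assumed, very simple enhancement function."""
    return np.clip(IS + gain * (IS - US), 0.0, 1.0)

def compress_dynamic_range(IS, US, amount=1.0):
    """Subtract the unsharp signal US (scaled by an assumed `amount`)
    from the input signal IS."""
    return np.clip(IS - amount * US, 0.0, 1.0)
```

In a hardware implementation, the same mappings could equally be stored in the two-dimensional LUT mentioned above, indexed by (IS, US).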
- the above processing may be executed by a computer or the like as a visual processing program.
- the visual processing program is a program for causing a computer to execute the visual processing method described below.
- the visual processing method includes a spatial processing step of performing spatial processing on the brightness value of each pixel of the original image acquired as the input signal IS and outputting an unsharp signal US, and a visual processing step of performing visual processing of the original image using the input signal IS and the unsharp signal US for the same pixel and outputting an output signal OS.
- in the spatial processing step, the weighted average described in the description of the spatial processing unit 62 is performed for each pixel of the input signal IS, and an unsharp signal US is output. In the visual processing step, the visual processing described in the description of the visual processing unit 63 is performed using the input signal IS and the unsharp signal US for the same pixel, and the output signal OS is output. Details are omitted because they have been described above.
- FIG. 25 (a) shows the processing by a conventional filter.
- FIG. 25 (b) shows the processing by the filter of the present invention.
- FIG. 25A shows a state in which the peripheral pixels 66 include objects 71 having different densities.
- in the conventional processing, a smoothing filter having predetermined filter coefficients is used in the spatial processing of the target pixel 65. Therefore, the target pixel 65, which is not originally a part of the object 71, is affected by the density of the object 71.
- FIG. 25 (b) shows the state of the spatial processing of the present invention.
- in FIG. 25 (b), spatial processing of the target pixel 65 is performed using different weight coefficients for the portion 66a of the peripheral pixels 66 that includes the object 71, the first peripheral pixels 67 not including the object 71, and the second peripheral pixels 68 not including the object 71. For this reason, the influence of pixels having extremely different densities on the spatially processed target pixel 65 can be suppressed, and more appropriate spatial processing becomes possible.
- the visual processing device 61 does not need to create a plurality of blur signals as in Japanese Patent Application Laid-Open No. H10-75395. For this reason, the circuit scale or processing load of the device can be reduced.
- the filter size of the spatial filter and the shape of the image referred to by the filter can be adaptively changed substantially according to the image content. For this reason, it is possible to perform spatial processing suitable for the image content.
- the sizes of the above-described peripheral pixels 66, first peripheral pixels 67, second peripheral pixels 68, and so on are examples, and other sizes may be used.
- weighting coefficients described above are merely examples, and other weighting coefficients may be used.
- the weight coefficient may also be given as the value [0] when the absolute value of the difference exceeds the threshold. This makes it possible to eliminate the influence of pixels having extremely different densities on the spatially processed target pixel 65. In applications aimed at contrast enhancement, this has the effect of not over-emphasizing the contrast in portions where the contrast is originally large to some extent.
- weight coefficient may be given as a function value as shown below.
- the value of the weighting coefficient may be given by a function using the absolute value of the difference between the pixel values as a variable.
- for example, the function is such that the weight coefficient is large (close to 1) when the absolute value of the pixel value difference is small, and small (close to 0) when the absolute value of the pixel value difference is large; that is, it is a function that monotonically decreases with respect to the absolute value of the pixel value difference.
- the value of the weight coefficient may be given by a function using the distance from the target pixel 65 as a variable.
- the function is, for example, such that the weight coefficient is large (close to 1) when the distance from the target pixel 65 is short, and small (close to 0) when the distance is long.
- in this way, the weight coefficient is given more continuously. For this reason, a more appropriate weight coefficient can be given compared to the case of using a threshold, excessive contrast enhancement can be suppressed, the generation of false contours and the like can be suppressed, and processing with a higher visual effect can be performed.
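A continuous weight of the kind described above can be sketched as follows. The text only requires monotonic decrease in both variables; the Gaussian falloffs and the sigma values used here are illustrative assumptions.

```python
import math

def weight_coefficient(diff, dist, sigma_diff=0.15, sigma_dist=3.0):
    """Continuous weight coefficient: close to 1 when both the absolute
    pixel-value difference `diff` and the distance `dist` from the
    target pixel are small, monotonically decreasing toward 0 as either
    grows."""
    w_diff = math.exp(-(diff ** 2) / (2.0 * sigma_diff ** 2))
    w_dist = math.exp(-(dist ** 2) / (2.0 * sigma_dist ** 2))
    return w_diff * w_dist
```

Compared with the stepwise [1], [1/2], [1/4] weights of the earlier example, such a function avoids abrupt weight jumps at the threshold, which is what suppresses false contours.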
- the processing described above for each pixel may be performed in units of blocks including a plurality of pixels. Specifically, first, the average pixel value of a target block to be subjected to spatial processing and the average pixel values of peripheral blocks around the target block are calculated. Then, the average pixel values are weighted and averaged using the same weight coefficients as above. As a result, the average pixel value of the target block is further spatially processed.
- the spatial processing unit 62 can be used as the selection signal deriving unit 13 (see FIG. 6) or the selection signal deriving unit 23 (see FIG. 19). In this case, it is the same as described in [Second Embodiment] ⁇ Modification> (6) or [Third Embodiment] ⁇ Modification> (5). This will be further described with reference to FIGS. 26 to 28.
- FIG. 26 is a block diagram illustrating a configuration of a visual processing device 961 that performs the processing described with reference to FIGS. 22 to 25 in units of blocks including a plurality of pixels.
- the visual processing device 961 includes an image dividing unit 964 that divides an image input as the input signal IS into a plurality of image blocks, a spatial processing unit 962 that performs spatial processing for each of the divided image blocks, and a visual processing unit 963 that performs visual processing using the input signal IS and the spatial processing signal US2 output from the spatial processing unit 962.
- the image dividing unit 964 divides an image input as the input signal IS into a plurality of image blocks, and derives a processing signal US1 including a feature parameter for each of the divided image blocks. The feature parameter is a parameter representing the image characteristics of each divided image block, for example an average value (simple average, weighted average, etc.) or a representative value (maximum value, minimum value, median value, etc.).
- the spatial processing unit 962 acquires the processing signal US1 including the feature parameter for each image block, and performs spatial processing.
- FIG. 27 shows an input signal IS divided into image blocks including a plurality of pixels.
- each image block is an area including nine pixels, three pixels vertically and three pixels horizontally. Note that this division method is an example, and the division is not limited to such a method. Further, in order to sufficiently exhibit the visual processing effect, it is preferable to generate the spatial processing signal US2 for a considerably wide area.
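The division into image blocks and the derivation of one feature parameter per block can be sketched as below. A simple average is used as the feature parameter, though the text also allows a weighted average or a representative value (maximum, minimum, median); the assumption that the image dimensions are exact multiples of the block size is for illustration only.

```python
import numpy as np

def derive_processing_signal(IS, block=3):
    """Divide the input image into block x block image blocks and derive
    a feature parameter (here a simple average) for each block, giving
    the processing signal US1."""
    h, w = IS.shape
    assert h % block == 0 and w % block == 0  # assumed exact tiling
    return IS.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
```

The resulting 2D array of feature parameters is what the spatial processing unit 962 operates on in place of individual pixels.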
- the spatial processing unit 962 acquires, from the processing signal US1, the feature parameters of the target image block 965 to be subjected to spatial processing and of each peripheral image block included in the peripheral area 966 located around the target image block 965.
- the peripheral area 966 is an area located around the target image block 965, an area of 5 vertical blocks and 5 horizontal blocks spread around the target image block 965. Note that the size of the peripheral area 966 is not limited to this case, and may be smaller or larger. In addition, the peripheral area 966 is divided, according to the distance from the target image block 965, into a first peripheral area 967 and a second peripheral area 968.
- the first peripheral area 967 is an area of three vertical blocks and three horizontal blocks centered on the target image block 965. Further, it is assumed that the second peripheral region 968 is a region located around the first peripheral region 967.
- the spatial processing unit 962 performs a filter operation on the feature parameter of the target image block 965.
- in the filter operation, the values of the feature parameters of the target image block 965 and of the peripheral image blocks in the peripheral area 966 are weighted and averaged. The weight of the weighted average is determined based on the distance between the target image block 965 and each peripheral image block and on the difference between their feature parameter values.
- as in the pixel-based case, the weighted average F2 is calculated as F2 = (Σ Wij · Aij) / (Σ Wij). Here, [Wij] is the weight coefficient for the image block located in the i-th row and j-th column among the target image block 965 and the peripheral area 966, [Aij] is the value of the feature parameter of the image block located in the i-th row and j-th column, and "Σ" means that the total calculation is performed over the target image block 965 and each image block of the peripheral area 966.
- the weight coefficient [Wij] is a value determined based on the distance between the target image block 965 and the peripheral image block in the peripheral area 966 and on the difference between their feature parameter values. More specifically, the larger the absolute value of the difference between the feature parameter values, the smaller the weight coefficient given. Also, the larger the distance, the smaller the weight coefficient given. For example, for the target image block 965, the weight coefficient [Wij] is the value [1].
- among the peripheral image blocks included in the first peripheral area 967, a block whose feature parameter value differs from that of the target image block 965 by less than a predetermined threshold in absolute value is given the weight coefficient [1], while a block whose absolute difference is larger than the threshold is given the weight coefficient [1/2]. That is, even peripheral image blocks included in the first peripheral area 967 are given different weight coefficients according to their feature parameter values. Similarly, among the peripheral image blocks included in the second peripheral area 968, a block whose absolute difference is smaller than the threshold is given the weight coefficient [1/2], while a block whose absolute difference is larger than the threshold is given the weight coefficient [1/4]. That is, even peripheral image blocks included in the second peripheral area 968 are given different weight coefficients according to their feature parameter values. The second peripheral area 968, whose distance from the target image block 965 is larger than that of the first peripheral area 967, is given smaller weight coefficients.
- the predetermined threshold is, for example, a value on the order of [20/256 to 60/256] with respect to the feature parameter value of the target image block 965, which takes a value in the range [0.0 to 1.0].
- the weighted average calculated as described above is output as the spatial processing signal US2.
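The block-level weighted average above can be sketched as follows. It applies the weights from the text (target block 1; first peripheral area 1 or 1/2; second peripheral area 1/2 or 1/4); edge blocks are handled by edge padding, which is an assumption not stated in the text.

```python
import numpy as np

def block_spatial_processing(US1, threshold=40.0 / 256.0):
    """US1: 2D array of per-block feature parameters. Returns the
    spatial processing signal US2. The weight of each neighboring block
    starts at 1 (inner 3x3, the first peripheral area plus the target)
    or 1/2 (outer ring of the 5x5, the second peripheral area) and is
    halved when the feature-parameter difference from the target block
    exceeds the threshold."""
    h, w = US1.shape
    padded = np.pad(US1, 2, mode="edge")  # assumed border handling
    # distance-based base weights over the 5x5 block neighborhood
    wdist = np.full((5, 5), 0.5)
    wdist[1:4, 1:4] = 1.0
    US2 = np.empty_like(US1)
    for y in range(h):
        for x in range(w):
            nb = padded[y:y + 5, x:x + 5]
            center = nb[2, 2]  # the target image block
            # halve the weight where the feature difference is large
            wdiff = np.where(np.abs(nb - center) <= threshold, 1.0, 0.5)
            wgt = wdist * wdiff
            US2[y, x] = (wgt * nb).sum() / wgt.sum()
    return US2
```

Compared with the pixel-level filter, the loop here runs over blocks rather than pixels, which is the source of the reduction in processing amount noted below.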
- the visual processing unit 963 performs the same visual processing as the visual processing unit 63 (see FIG. 22). However, it differs from the visual processing unit 63 in that the spatial processing signal US2 of the target image block including the target pixel to be subjected to visual processing is used instead of the unsharp signal US. The processing in the visual processing unit 963 may be performed collectively for each target image block including the target pixel, or may be performed pixel by pixel in the order obtained from the input signal IS while switching the spatial processing signal US2 accordingly. The above processing is performed for all pixels included in the input signal IS.
- in the visual processing device 961, processing is performed in units of image blocks. For this reason, the amount of processing in the spatial processing unit 962 can be reduced, and higher-speed visual processing can be realized. In addition, the hardware scale can be reduced.
- the weight coefficients and thresholds described above are merely examples, and other values may be used.
- some values of the weighting factor may be the value [0]. In this case, this is the same as making the shape of the peripheral region 966 an arbitrary shape.
- the spatial processing unit 962 performs the spatial processing using the feature parameters of the target image block 965 and the peripheral area 966, but the spatial processing may instead be performed using the feature parameters of only the peripheral area 966. That is, in the weights of the weighted average of the spatial processing, the weight of the target image block 965 may be set to the value [0].
- the processing in the visual processing unit 63 is not limited to the above.
- for example, a value C calculated as (A/B), where A is the value of the input signal IS and B is the value of the unsharp signal US for the same pixel, may be output as the value of the output signal OS.
- when such processing is performed in the visual processing unit 63, using the appropriate unsharp signal US output by the spatial processing unit 62 of the present invention makes it possible to compress the dynamic range of the input signal IS while enhancing local contrast.
- if the unsharp signal US is blurred too little, the processing amounts to mere edge enhancement and contrast enhancement cannot be performed properly; if it is blurred too much, contrast can be enhanced but dynamic range compression cannot be performed properly.
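The ratio-based output described above can be sketched as follows. The eps floor on the divisor is an added safeguard against division by zero, not part of the text.

```python
import numpy as np

def ratio_visual_processing(IS, US, eps=1e-6):
    """Output C = A / B, where A is the input signal IS and B is the
    unsharp signal US for the same pixel. Dividing by the unsharp
    (large-area brightness) component compresses the dynamic range it
    carries while preserving local contrast."""
    return IS / np.maximum(US, eps)
```

Where IS and US are equal (flat regions), the output is 1.0; local deviations of IS from US survive as ratios, which is why an appropriately blurred US matters for this operation.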
- the visual processing device is realized, for example, as an integrated circuit such as an LSI that performs gradation processing of images while built in or connected to a device that handles images, such as a computer, a television, a digital camera, a mobile phone, or a PDA.
- each functional block of the above embodiment may be individually made into one chip, or may be made into one chip so as to include a part or all of them.
- the term LSI is used here, but depending on the degree of integration, it may also be called IC, system LSI, super LSI, or ultra LSI.
- the method of circuit integration is not limited to LSI, and may be realized by a dedicated circuit or a general-purpose processor.
- a field programmable gate array (FPGA) that can be programmed after LSI manufacturing, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may also be used.
- the processing of each block in FIGS. 1, 6, 19, 22, and 26 is performed by, for example, a central processing unit (CPU) included in the visual processing device. Programs for performing the respective processes are stored in a storage device such as a hard disk or ROM, and are executed after being read from the ROM or read out to the RAM.
- the two-dimensional LUT referred to in the gradation processing execution units 14 and 25 in FIGS. 6 and 19 is stored in a storage device such as a hard disk and a ROM, and is referred to as necessary.
- the two-dimensional LUT may be provided from a two-dimensional LUT providing device connected directly to a visual processing device or indirectly connected via a network. The same applies to the one-dimensional LUT referred to in the gradation processing execution unit 44 in FIG.
- the visual processing device may be a device that performs gradation processing of an image for each frame (for each field) built in or connected to a device that handles moving images.
- each visual processing device executes the visual processing method described in the first to fourth embodiments.
- the visual processing program is a program, stored in a storage device such as a hard disk or ROM of a device that is built in or connected to a device handling images such as a computer, television, digital camera, mobile phone, or PDA, that causes gradation processing of images to be executed; it is provided, for example, via a recording medium such as a CD-ROM or via a network.
- the processing is performed on the brightness value of each pixel.
- the present invention does not depend on the color space of the input signal IS. That is, the processing in the above embodiments can be applied in the same way to the luminance and lightness components of each color space when the input signal IS is expressed in a YCbCr color space, YUV color space, Lab color space, Luv color space, YIQ color space, XYZ color space, YPbPr color space, RGB color space, or the like.
- the processing in the above embodiment may be performed independently for each component of RGB.
- FIG. 29 is a block diagram showing an overall configuration of a content supply system eX100 realizing a content distribution service.
- the area for providing communication services is divided into cells of a desired size, and base stations ex107 to ex110, which are fixed radio stations, are installed in the respective cells.
- in the content supply system ex100, for example, a computer ex111, a PDA (personal digital assistant) ex112, a camera ex113, a mobile phone ex114, and a camera-equipped mobile phone ex115 are connected to the Internet ex101 via an Internet service provider ex102, a telephone network ex104, and the base stations ex107 to ex110.
- each device may also be connected directly to the telephone network ex104 without going through the base stations ex107 to ex110, which are fixed radio stations, or the devices may be connected directly to each other.
- the camera eX113 is a device capable of shooting moving images such as a digital video camera.
- the mobile phone may be a PDC (Personal Digital Communications) system, CDMA (Code Division Multiple Access) system, W-CDMA (Wideband-Code Division Multiple Access) system, or GSM (Global System for Mobile Communications) system mobile phone, or a PHS (Personal Handyphone System) terminal.
- the streaming server ex103 is connected to the camera ex113 via the base station ex110 and the telephone network ex104, which enables live distribution and the like based on encoded data transmitted by the user using the camera ex113.
- the encoding processing of the captured data may be performed by the camera ex113 or by a server or the like that performs data transmission processing.
- moving image data shot by the camera ex116 may be transmitted to the streaming server ex103 via the computer ex111.
- the camera ex116 is a device such as a digital camera that can shoot still images and moving images. In this case, the moving image data may be encoded by either the camera ex116 or the computer ex111.
- the encoding processing is performed in the LSI ex117 included in the computer ex111 or the camera ex116. Note that software for image encoding and decoding may be incorporated in some kind of storage medium (a CD-ROM, flexible disk, hard disk, etc.) that is a recording medium readable by the computer ex111 or the like.
- the video data may be transmitted by a mobile phone with a camera eX115. The moving image data at this time is data encoded by the LSI included in the mobile phone eX115.
- the content (for example, a video image of a live music performance) shot by the user with the camera ex113, the camera ex116, or the like is encoded and transmitted to the streaming server ex103, while the streaming server ex103 distributes the content data to requesting clients as a stream.
- the content supply system ex100 allows clients to receive and play back the encoded data, and also realizes personal broadcasting by allowing clients to receive, decode, and play back the data in real time.
- for the display of content, the visual processing device, visual processing method, and visual processing program described in the above embodiments may be used. For example, the computer ex111, PDA ex112, camera ex113, mobile phone ex114, and the like may be provided with the visual processing device shown in the above embodiments, and may implement the visual processing method and the visual processing program.
- the streaming server ex103 may provide profile data to the visual processing device via the Internet ex101. Further, a plurality of streaming servers ex103 may be provided, each providing different profile data. Further, the streaming server ex103 may create the profile data. When the visual processing device can acquire profile data via the Internet ex101 in this way, it does not need to store in advance the profile data used for visual processing, and the storage capacity of the device can be reduced. Moreover, since profile data can be acquired from a plurality of servers connected via the Internet ex101, different kinds of visual processing can be realized. A mobile phone will be described below as an example.
- FIG. 30 is a diagram showing a mobile phone eX115 provided with the visual processing device of the above embodiment.
- the mobile phone ex115 includes an antenna ex201 for transmitting and receiving radio waves to and from the base station ex110, a camera unit ex203 such as a CCD camera capable of shooting video and still images, a display unit ex202 such as a liquid crystal display that displays data obtained by decoding the video shot by the camera unit ex203, the video received by the antenna ex201, and the like, a main body composed of a group of operation keys ex204, an audio output unit ex208 such as a speaker for outputting audio, an audio input unit ex205 such as a microphone for inputting audio, and a recording medium ex207 for storing encoded or decoded data such as shot video or still image data, received mail data, and video or still image data.
- the recording medium ex207 stores, in a plastic case such as an SD card, a flash memory element, a kind of EEPROM (Electrically Erasable and Programmable Read Only Memory), which is a nonvolatile memory that can be electrically rewritten and erased.
- in the mobile phone ex115, a power supply circuit unit ex310, an operation input control unit ex304, an image encoding unit ex312, a camera interface unit ex303, an LCD control unit ex302, an image decoding unit ex309, a demultiplexing unit ex308, a recording/reproducing unit ex307, a modulation/demodulation circuit unit ex306, and an audio processing unit ex305 are connected, via a synchronous bus ex313, to a main control unit ex311 that comprehensively controls each part of the main body including the display unit ex202 and the operation keys ex204.
- the power supply circuit unit ex310 supplies power to each unit from a battery pack when a call-end/power key is turned on by a user operation, thereby activating the camera-equipped digital mobile phone into an operable state.
- based on the control of the main control unit ex311, which includes a CPU, ROM, RAM, and the like, the mobile phone ex115 converts the audio signal collected by the audio input unit ex205 in the voice call mode into digital audio data by the audio processing unit ex305, performs spread spectrum processing on this in the modulation/demodulation circuit unit ex306, performs digital-analog conversion processing and frequency conversion processing in the transmission/reception circuit unit ex301, and then transmits the result via the antenna ex201.
- in the voice call mode, the mobile phone ex115 also amplifies the received signal received by the antenna ex201, performs frequency conversion processing and analog-digital conversion processing, performs inverse spread spectrum processing in the modulation/demodulation circuit unit ex306, converts the result into an analog audio signal in the audio processing unit ex305, and then outputs this via the audio output unit ex208.
- when transmitting an e-mail in the data communication mode, the text data of the e-mail input by operating the operation keys ex204 of the main body is sent to the main control unit ex311 via the operation input control unit ex304.
- the main control unit ex311 performs spread spectrum processing on the text data in the modulation/demodulation circuit unit ex306, performs digital-analog conversion processing and frequency conversion processing in the transmission/reception circuit unit ex301, and then transmits the result to the base station ex110 via the antenna ex201.
- when transmitting image data in the data communication mode, the image data shot by the camera unit ex203 is supplied to the image encoding unit ex312 via the camera interface unit ex303.
- when image data is not transmitted, the image data shot by the camera unit ex203 can also be displayed directly on the display unit ex202 via the camera interface unit ex303 and the LCD control unit ex302.
- the image encoding unit ex312 converts the image data supplied from the camera unit ex203 into encoded image data by compression-encoding it, and sends this to the demultiplexing unit ex308.
- at the same time, the mobile phone ex115 sends the audio collected by the audio input unit ex205 while shooting with the camera unit ex203 to the demultiplexing unit ex308 as digital audio data via the audio processing unit ex305.
- the demultiplexing unit ex308 multiplexes the encoded image data supplied from the image encoding unit ex312 and the audio data supplied from the audio processing unit ex305 in a predetermined manner, performs spread spectrum processing on the resulting multiplexed data in the modulation/demodulation circuit unit ex306, performs digital-analog conversion processing and frequency conversion processing in the transmission/reception circuit unit ex301, and then transmits the result via the antenna ex201.
- when receiving data of a moving image file linked to a homepage or the like in the data communication mode, the received signal received from the base station ex110 via the antenna ex201 is subjected to inverse spread spectrum processing in the modulation/demodulation circuit unit ex306, and the resulting multiplexed data is sent to the demultiplexing unit ex308.
- to decode the multiplexed data, the demultiplexing unit ex308 separates the multiplexed data into an encoded bit stream of image data and an encoded bit stream of audio data, supplies the encoded image data to the image decoding unit ex309 via the synchronous bus ex313, and supplies the audio data to the audio processing unit ex305.
- next, the image decoding unit ex309 generates reproduced moving image data by decoding the encoded bit stream of the image data, and supplies this to the display unit ex202 via the LCD control unit ex302; thereby, for example, the moving image data included in a moving image file linked to a homepage is displayed.
- at the same time, the audio processing unit ex305 converts the audio data into an analog audio signal and supplies this to the audio output unit ex208; thereby, for example, the audio data included in a moving image file linked to a homepage is reproduced.
- the image decoding unit eX309 may include the visual processing device of the above embodiment.
- the visual processing device described in the above embodiments can also be applied to a digital broadcasting system, and the visual processing method and visual processing program can likewise be incorporated. Specifically, at the broadcasting station ex409, the encoded bit stream of the video information is transmitted via radio waves to a communication or broadcasting satellite ex410.
- the broadcasting satellite ex410 that receives this transmits radio waves for broadcasting; a home antenna ex406 having satellite broadcast receiving equipment receives the radio waves, and a device such as a television (receiver) ex401 or a set-top box (STB) ex407 decodes the encoded bit stream and reproduces it.
- a device such as a television (receiver) ex401 or a set-top box (STB) eX407 may include the visual processing device described in the above embodiment. Further, the visual processing method of the above embodiment may be used. Further, a visual processing program may be provided.
- the visual processing device described in the above embodiments can also be implemented, together with the visual processing method and visual processing program, in a reproducing device ex403 that reads and decodes an encoded bit stream recorded on a storage medium ex402 such as a CD or DVD. In this case, the reproduced video signal is displayed on the monitor ex404.
- a configuration is also conceivable in which the visual processing device described in the above embodiments is installed in a set-top box ex407 connected to a cable ex405 for cable television or an antenna ex406 for satellite/terrestrial broadcasting, the visual processing method and visual processing program are implemented there, and the video is reproduced on the television monitor ex408.
- the visual processing device described in the above embodiment may be incorporated in a television.
- it is also possible for a car ex412 having an antenna ex411 to receive a signal from the satellite ex410, a base station ex107, or the like, and to play back a moving image on a display device such as the car navigation system ex413 mounted in the car ex412.
- the image signal can be encoded and recorded on a recording medium.
- specifically, examples include a recorder ex420, such as a DVD recorder that records image signals on a DVD disk ex421 or a disk recorder that records them on a hard disk. The signals can also be recorded on an SD card ex422. If the recorder ex420 is equipped with the decoding device of the above embodiments, it can reproduce the image signal recorded on the DVD disk ex421 or the SD card ex422 and display it on the monitor ex408.
- The configuration of the car navigation system eX413 is, for example, the configuration shown in Fig. 31 including the camera section eX203, the camera interface section eX303, and the image encoding section eX313.
- The same configuration is also conceivable for the computer eX111, the TV (receiver) eX401, and the like.
- A terminal such as the mobile phone eX114 may be a transmission terminal having only an encoder or a receiving terminal having only a decoder.
- The visual processing device, visual processing method, and visual processing program described in the above embodiment can be used in any of the above-described devices and systems, providing the effects described in the above embodiment.
- Image region dividing means for dividing an input image signal into a plurality of image regions; gradation conversion characteristic deriving means for deriving a gradation conversion characteristic for each of the image regions, the gradation conversion characteristic of a target image region being derived using the gradation characteristics of the target image region and of its peripheral image regions;
- a gradation processing means for performing gradation processing of the image signal based on the derived gradation conversion characteristics
- a visual processing device comprising:
- the tone conversion characteristic is a tone conversion curve.
- the tone conversion characteristic deriving means includes histogram creating means for creating a histogram using the tone characteristics, and tone curve creating means for creating the tone conversion curve based on the created histogram,
- the visual processing device according to attachment 1.
- the gradation conversion characteristic is a selection signal for selecting one gradation conversion table from among a plurality of gradation conversion tables for gradation processing the image signal,
- the gradation processing means has the plurality of gradation conversion tables as a two-dimensional LUT.
- the visual processing device according to attachment 1.
- the two-dimensional LUT stores the plurality of gradation conversion tables in an order such that, for all values of the image signal, the value of the gradation-processed image signal monotonically increases or monotonically decreases with the value of the selection signal,
- the value of the selection signal is derived as a feature amount of individual selection signals, which are selection signals derived for each of the target image region and the peripheral image regions.
- the selection signal is derived based on a gradation characteristic feature amount that is a feature amount derived using gradation characteristics of the target image region and the peripheral image region.
- the visual processing device according to any one of supplementary notes 3 to 5.
- the gradation processing means includes: gradation processing execution means for performing gradation processing of the target image area using the gradation conversion table selected by the selection signal; and correction means for correcting the gradation of the gradation-processed image signal, the correction means correcting the gradation of a target pixel based on the gradation conversion tables selected for the image region including the target pixel to be corrected and for the image regions adjacent to that image region.
- the visual processing device according to any one of appendices 3 to 7.
- the gradation processing means includes correction means for correcting the selection signal and deriving a correction selection signal that selects a gradation conversion table for each pixel of the image signal, and gradation processing execution means for performing gradation processing of the image signal using the gradation conversion table selected by the correction selection signal.
- the visual processing device according to any one of appendices 3 to 7.
- an image region dividing step of dividing an input image signal into a plurality of image regions; a gradation conversion characteristic deriving step of deriving a gradation conversion characteristic for each of the image regions, the gradation conversion characteristic of a target image region being derived using the gradation characteristics of the target image region and of its peripheral image regions; and a gradation processing step of performing gradation processing of the image signal based on the derived gradation conversion characteristics;
- a visual processing method comprising:
- the tone conversion characteristic is a tone conversion curve
- the tone conversion characteristic deriving step includes a histogram creation step of creating a histogram using the tone characteristics, and a tone curve creation step of creating the tone conversion curve based on the created histogram,
- The visual processing method according to appendix 10.
- the gradation conversion characteristic is a selection signal for selecting one gradation conversion table from among a plurality of gradation conversion tables for gradation processing the image signal,
- the gradation processing step includes a gradation processing execution step of performing gradation processing of the target image area using the gradation conversion table selected by the selection signal, and a correction step of correcting the gradation of the gradation-processed image signal, the correction step correcting the gradation of the target pixel based on the gradation conversion tables selected for the image area including the target pixel to be corrected and for the image areas adjacent to that image area,
- The visual processing method according to appendix 10.
- the gradation conversion characteristic is a selection signal for selecting one gradation conversion table from among a plurality of gradation conversion tables for gradation processing the image signal,
- the gradation processing step includes a correction step of correcting the selection signal and deriving a correction selection signal for selecting a gradation conversion table for each pixel of the image signal, and a gradation processing execution step of performing gradation processing of the image signal using the gradation conversion table selected by the correction selection signal,
- The visual processing method according to appendix 10.
- A visual processing program for causing a computer to perform a visual processing method, the visual processing method comprising: an image region dividing step of dividing an input image signal into a plurality of image regions; a gradation conversion characteristic deriving step of deriving a gradation conversion characteristic for each of the image regions, the gradation conversion characteristic of a target image region being derived using the gradation characteristics of the target image region and of its peripheral image regions; and a gradation processing step of performing gradation processing of the image signal based on the derived gradation conversion characteristics.
- the gradation conversion characteristic is a gradation conversion curve
- the tone conversion characteristic deriving step includes a histogram creation step of creating a histogram using the tone characteristics, and a tone curve creation step of creating the tone conversion curve based on the created histogram,
- The visual processing program according to appendix 14.
- the gradation conversion characteristic is a selection signal for selecting one gradation conversion table from among a plurality of gradation conversion tables for gradation processing the image signal,
- the gradation processing step includes a gradation processing execution step of performing gradation processing of the target image area using the gradation conversion table selected by the selection signal, and a correction step of correcting the gradation of the gradation-processed image signal, the correction step correcting the gradation of the target pixel based on the gradation conversion tables selected for the image area including the target pixel to be corrected and for the image areas adjacent to that image area.
- The visual processing program according to appendix 14.
- the gradation conversion characteristic is a selection signal for selecting one gradation conversion table from among a plurality of gradation conversion tables for gradation processing the image signal,
- the gradation processing step includes a correction step of correcting the selection signal and deriving a correction selection signal for selecting a gradation conversion table for each pixel of the image signal, and a gradation processing execution step of performing gradation processing of the image signal using the gradation conversion table selected by the correction selection signal.
- The visual processing program according to appendix 14.
- the gradation processing means includes parameter output means for outputting curve parameters of a gradation conversion curve for gradation processing of the image signal based on the gradation conversion characteristic, and gradation processing execution means for performing gradation processing of the image signal using the gradation conversion curve specified by the curve parameters,
- the parameter output means is a look-up table for storing a relationship between the gradation conversion characteristic and the curve parameter.
- The visual processing device according to appendix 18.
- the curve parameter includes a value of the gradation-processed image signal with respect to a predetermined value of the image signal.
- The visual processing device according to appendix 18 or 19.
- the curve parameter includes a gradient of the gradation conversion curve in a predetermined section of the image signal.
- The visual processing device according to any one of appendices 18 to 20.
- the visual processing device according to any one of supplementary notes 18 to 21, wherein the curve parameter includes coordinates of at least one point through which the gradation conversion curve passes.
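By way of illustration only, a gradation conversion curve specified by such curve parameters could be reconstructed as a piecewise-linear curve through given points, with the gradient of each section implied by the interpolation; the function name, the point coordinates, and the level count below are assumptions, not values from the embodiment:

```python
import numpy as np

def curve_from_points(points, levels=256):
    """Build a gradation conversion curve that passes through the given
    (input, output) coordinate pairs; values between the points are
    linearly interpolated, which also fixes the gradient per section."""
    xs, ys = zip(*sorted(points))
    return np.interp(np.arange(levels), xs, ys)
```

For example, `curve_from_points([(0, 0), (128, 200), (255, 255)])` brightens the mid-tones while pinning black and white.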
- spatial processing means for deriving a spatial processing signal by performing a weighted average of the gradation characteristics of a target image area and its surrounding image areas, using weights based on the difference in gradation characteristics between the target image area and each surrounding image area;
- Visual processing means for performing visual processing of the target image region based on the gradation characteristics of the target image region and the spatial processing signal;
- a visual processing device comprising:
- the visual processing device wherein the weighting decreases as the absolute value of the difference in gradation characteristics increases.
- the weighting decreases as the distance between the target image area and the surrounding image area increases.
- The visual processing device according to appendix 23 or 24.
- the image area is composed of a plurality of pixels
- the gradation characteristics of the target image area and the peripheral image area are determined as feature values of pixel values constituting each image area.
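The weighting described in appendices 23 to 26 can be sketched with weights that fall off in both the characteristic difference and the spatial distance; the Gaussian form, the sigma values, and the use of block means as gradation characteristics are assumptions chosen only to satisfy the stated monotonicity:

```python
import numpy as np

def spatial_process(block_means, i, j, sigma_v=30.0, sigma_d=2.0):
    """Weighted average of surrounding block gradation characteristics;
    the weight shrinks as the characteristic difference and the spatial
    distance from the target block (i, j) grow."""
    h, w = block_means.shape
    num = den = 0.0
    for y in range(h):
        for x in range(w):
            dv = block_means[y, x] - block_means[i, j]   # characteristic difference
            dd = np.hypot(y - i, x - j)                  # block distance
            wgt = np.exp(-(dv * dv) / (2 * sigma_v ** 2)) * \
                  np.exp(-(dd * dd) / (2 * sigma_d ** 2))
            num += wgt * block_means[y, x]
            den += wgt
    return num / den
```

Because an outlying neighbor with a very different characteristic receives a tiny weight, the averaged signal stays close to the target block's own characteristic, which is what keeps region boundaries from standing out.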
- Target image area determining means for determining a target image area for which a gradation conversion characteristic is to be derived from an input image signal
- Peripheral image region determining means for determining at least one peripheral image region including a plurality of pixels located around the target image region;
- Tone conversion characteristic deriving means for deriving the tone conversion characteristic of the target image region using peripheral image data of the peripheral image region
- Tone processing means for performing tone processing of an image signal of the target image area based on the derived tone conversion characteristic
- a visual processing device comprising:
- A visual processing method comprising: a target image region determining step of determining, from an input image signal, a target image region for which a gradation conversion characteristic is to be derived; a peripheral image region determining step of determining at least one peripheral image region that includes a plurality of pixels and is located around the target image region; a gradation conversion characteristic deriving step of deriving the gradation conversion characteristic of the target image region using peripheral image data of the peripheral image region; and a gradation processing step of performing gradation processing of an image signal of the target image region based on the derived gradation conversion characteristic.
- A visual processing program for causing a computer to perform a visual processing method that performs visual processing of an input image signal, the visual processing method comprising:
- a target image region determining unit that determines a target image region for which a gradation conversion characteristic is to be derived from an input image signal
- a peripheral image region determining unit that determines at least one peripheral image region including a plurality of pixels and located around the target image region
- a gradation conversion characteristic deriving unit that derives the gradation conversion characteristic of the target image region using peripheral image data of the peripheral image region;
- a gradation processing unit that performs gradation processing of an image signal of the target image area based on the derived gradation conversion characteristic
- a semiconductor device comprising:
- the visual processing device includes an image area dividing unit, a gradation conversion characteristic deriving unit, and a gradation processing unit.
- the image area dividing means divides the input image signal into a plurality of image areas.
- The gradation conversion characteristic deriving means derives the gradation conversion characteristic for each image area; the gradation characteristics of the target image area from which the gradation conversion characteristic is derived and of its surrounding image areas are used to derive the gradation conversion characteristic of the target image area.
- the gradation processing means performs gradation processing on the image signal based on the derived gradation conversion characteristics.
- the gradation conversion characteristic is a characteristic of gradation processing for each image area.
- the gradation characteristics are, for example, pixel values such as luminance and brightness for each pixel.
- In the visual processing device of the present invention, when determining the gradation conversion characteristic for each image area, not only the gradation characteristics of that image area but also the gradation characteristics of a wide image area including the surrounding image areas are used. Therefore, a spatial processing effect can be added to the gradation processing for each image area, and gradation processing with a higher visual effect can be realized.
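As a rough sketch of this wide-area derivation (the block size and the use of mean luminance as the gradation characteristic are assumptions, not the embodiment's choices):

```python
import numpy as np

def derive_block_characteristics(image, block=4):
    """For each block of the divided image, derive a gradation
    characteristic (here: mean luminance) from a wide area covering
    the block and its neighboring blocks."""
    h, w = image.shape
    bh, bw = h // block, w // block
    # per-block mean luminance
    means = image[:bh * block, :bw * block].reshape(bh, block, bw, block).mean(axis=(1, 3))
    out = np.empty_like(means)
    for i in range(bh):
        for j in range(bw):
            # wide area = target block plus up to 8 surrounding blocks
            sl = means[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            out[i, j] = sl.mean()
    return out
```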
- the visual processing device is the visual processing device according to attachment 1, wherein the gradation conversion characteristic is a gradation conversion curve.
- the tone conversion characteristic deriving means includes histogram creating means for creating a histogram using the tone characteristics, and tone curve creating means for creating a tone conversion curve based on the created histogram.
- the histogram is, for example, a distribution with respect to gradation characteristics of pixels included in the target image region and the peripheral image region.
- the gradation curve creating means uses a cumulative curve, obtained by accumulating the values of the histogram, as the gradation conversion curve.
- a histogram is created using not only the gradation characteristics for each image area but also the wide-range gradation characteristics including the surrounding image areas. For this reason, it is possible to increase the number of divisions of the image signal and reduce the size of the image area, and it is possible to suppress the generation of the pseudo contour due to the gradation processing. In addition, it is possible to prevent the boundary of the image area from being noticeable unnaturally.
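A minimal sketch of this histogram-based derivation, assuming 8-bit levels and a simple normalized cumulative histogram:

```python
import numpy as np

def tone_curve_from_histogram(wide_area, levels=256):
    """Build a gradation conversion curve as the cumulative histogram of
    the target region plus its peripheral regions (the wide area)."""
    hist = np.bincount(wide_area.ravel(), minlength=levels).astype(float)
    cum = np.cumsum(hist)
    # normalize the cumulative curve to the output range [0, levels-1]
    return np.round(cum / cum[-1] * (levels - 1)).astype(int)

def apply_curve(region, curve):
    """Gradation-process a region by table lookup into the curve."""
    return curve[region]
```

Because the curve is a cumulative sum of non-negative counts, it is monotonically non-decreasing, as a gradation conversion curve should be.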
- The visual processing device is the visual processing device according to attachment 1, wherein the gradation conversion characteristic is a selection signal for selecting one gradation conversion table from among a plurality of gradation conversion tables for gradation processing of an image signal.
- the gradation processing means has the plurality of gradation conversion tables as a two-dimensional LUT.
- the gradation conversion table is, for example, a look-up table (LUT) that stores the pixel values of the gradation-processed image signal against the pixel values of the input image signal.
- the selection signal has a value assigned to one gradation conversion table selected from values assigned to each of the plurality of gradation conversion tables.
- the gradation processing means outputs the pixel value of the gradation-processed image signal by referring to the two-dimensional LUT based on the value of the selection signal and the pixel value of the image signal.
- gradation processing is performed with reference to the two-dimensional LUT. For this reason, it is possible to speed up the gradation processing.
- gradation processing is performed by selecting one gradation conversion table from a plurality of gradation conversion tables, appropriate gradation processing can be performed.
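The two-dimensional LUT lookup can be sketched as below; the table contents here are placeholder gamma curves chosen for illustration, not the tables of the embodiment:

```python
import numpy as np

# 2D LUT: rows indexed by the selection-signal value, columns by the
# input pixel value. Each row is one gradation conversion table
# (here: assumed gamma curves).
levels = 256
gammas = [0.5, 0.8, 1.0, 1.5, 2.0]
lut2d = np.stack([
    np.round(((np.arange(levels) / (levels - 1)) ** g) * (levels - 1)).astype(int)
    for g in gammas
])

def gradation_process(pixels, selection):
    """Look up the gradation-processed value from the selection-signal
    value and the input pixel value."""
    return lut2d[selection, pixels]
```

Stacking the rows so that the output at any fixed pixel value changes monotonically with the row index matches the ordering requirement of appendix 4.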
- The visual processing device is the visual processing device according to appendix 3, wherein the two-dimensional LUT stores the plurality of gradation conversion tables in an order such that, for all values of the image signal, the value of the gradation-processed image signal monotonically increases or monotonically decreases with the value of the selection signal.
- the value of the selection signal indicates the degree of gradation conversion.
- the visual processing device according to attachment 5 is the visual processing device according to attachment 3 or 4, wherein the two-dimensional LUT can be changed by registering profile data.
- the profile data is data stored in the two-dimensional LUT, and has, for example, pixel values of the image signal subjected to gradation processing as elements.
- the visual processing device of the present invention by changing the two-dimensional LUT, it is possible to variously change the characteristics of the gradation processing without changing the hardware configuration.
- The visual processing device is the visual processing device according to any one of attachments 3 to 5, wherein the value of the selection signal is derived as a feature amount of the individual selection signals, which are selection signals derived for each of the target image region and the surrounding image regions.
- the feature amount of the individual selection signal is, for example, an average value (simple average or weighted average), a maximum value, or a minimum value of the selection signals derived for each image region.
- the selection signal for the target image region is derived as a feature amount of the selection signals for a wide image region including the peripheral image regions. For this reason, a spatial processing effect can be added to the selection signal, and the boundaries of the image regions can be prevented from standing out unnaturally.
- The visual processing device is the visual processing device according to any one of appendices 3 to 5, wherein the selection signal is derived based on a gradation characteristic feature amount, which is a feature amount derived using the gradation characteristics of the target image region and the peripheral image regions.
- the gradation characteristic feature amount is, for example, an average value (simple average or weighted average), a maximum value, or a minimum value of a wide range of gradation characteristics between the target image region and the peripheral image region.
- the selection signal for the target image area is derived based on the gradation characteristic feature amount for a wide image area including the peripheral image area. For this reason, it is possible to add a spatial processing effect to the selection signal, and it is possible to prevent the boundary of the image area from being noticeable unnaturally.
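For illustration, a selection signal might be derived from a wide-area feature amount such as the average gradation; the mapping from mean luminance to a table index used here is an assumption:

```python
import numpy as np

def select_table(target_block, neighbor_blocks, n_tables=5):
    """Derive a selection signal from the average luminance of the
    target block and its peripheral blocks (the wide area)."""
    wide = np.concatenate([target_block.ravel()] +
                          [b.ravel() for b in neighbor_blocks])
    mean = wide.mean()
    # map mean luminance in [0, 255] to a table index in [0, n_tables-1]
    return int(mean * n_tables / 256)
```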
- the visual processing device is the visual processing device according to any of Supplementary Notes 3 to 7, wherein the gradation processing unit includes a gradation processing execution unit and a correction unit.
- the gradation processing execution means executes gradation processing of the target image region using the gradation conversion table selected by the selection signal.
- The correction means corrects the gradation of the gradation-processed image signal: the gradation of the target pixel is corrected based on the gradation conversion tables selected for the image area including the target pixel to be corrected and for the image areas adjacent to that image area.
- the adjacent image region may be the same image region as the peripheral image region when the gradation conversion characteristics are derived, or may be a different image region.
- the adjacent image areas are, for example, the three image areas closest to the target pixel among the image areas adjacent to the image area including the target pixel.
- the correction unit corrects the gradation of the image signal subjected to the gradation processing using, for example, the same gradation conversion table for each target image area.
- the correction of the target pixel is performed so that, for example, the influence of each gradation conversion table selected for the adjacent image area appears according to the position of the target pixel.
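Such position-dependent correction could, for instance, blend the outputs of the tables selected for the pixel's own region and an adjacent region; the linear blend over two horizontally adjacent regions below is an assumed simplification of the multi-neighbor case:

```python
import numpy as np

def corrected_value(pixel, x_frac, table_own, table_right):
    """Blend the outputs of the gradation conversion tables selected for
    the pixel's own region and the horizontally adjacent region, weighted
    by the pixel's horizontal position within its region (x_frac in [0, 1])."""
    return (1.0 - x_frac) * table_own[pixel] + x_frac * table_right[pixel]
```

A pixel at the left edge of its region (x_frac near 0) is governed almost entirely by its own table, while a pixel near the boundary (x_frac near 1) is pulled toward the neighbor's table, which is what suppresses visible seams between regions.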
- the visual processing device is the visual processing device according to any one of attachments 3 to 7, wherein the gradation processing means includes correction means and gradation processing execution means.
- the correction means corrects the selection signal and derives a correction selection signal for selecting a gradation processing table for each pixel of the image signal.
- the gradation processing execution means executes gradation processing of the image signal using the gradation conversion table selected by the correction selection signal.
- the correcting unit corrects the selection signal derived for each target image region based on the pixel position and the selection signal derived for the image region adjacent to the target image region, and derives a selection signal for each pixel.
- a selection signal can be derived for each pixel. For this reason, the boundaries of the image areas are further prevented from standing out unnaturally, and the visual effect can be improved.
- the visual processing method includes an image area dividing step, a gradation conversion characteristic deriving step, and a gradation processing step.
- the image region dividing step divides the input image signal into a plurality of image regions.
- The gradation conversion characteristic deriving step derives the gradation conversion characteristic for each image area; the gradation characteristics of the target image area from which the gradation conversion characteristic is derived and of its surrounding image areas are used to derive the gradation conversion characteristic of the target image area.
- In the gradation processing step, gradation processing of the image signal is performed based on the derived gradation conversion characteristics.
- the gradation conversion characteristic is a characteristic of gradation processing for each image area.
- the gradation characteristics are, for example, pixel values such as luminance and brightness for each pixel.
- In the visual processing method of the present invention, when determining the gradation conversion characteristic for each image area, not only the gradation characteristics of that image area but also the gradation characteristics of a wide image area including the peripheral image areas are used. For this reason, a spatial processing effect can be added to the gradation processing for each image area, and gradation processing with a higher visual effect can be realized.
- The visual processing method according to appendix 11 is the visual processing method according to appendix 10, wherein the gradation conversion characteristic is a gradation conversion curve.
- the tone conversion characteristic deriving step includes a histogram creation step of creating a histogram using the tone characteristics, and a tone curve creation step of creating a tone conversion curve based on the created histogram.
- the histogram is, for example, a distribution with respect to gradation characteristics of pixels included in the target image region and the peripheral image region.
- in the tone curve creation step, a cumulative curve obtained by accumulating the values of the histogram is used as the gradation conversion curve.
- a histogram is created using not only the tone characteristics for each image region but also a wide range of tone characteristics including peripheral image regions. For this reason, it is possible to increase the number of divisions of the image signal and reduce the size of the image area, and it is possible to suppress the generation of pseudo contours due to gradation processing. In addition, it is possible to prevent the boundary of the image area from being noticeable unnaturally.
- The visual processing method according to appendix 12 is the visual processing method according to appendix 10, wherein the gradation conversion characteristic is a selection signal for selecting one gradation conversion table from among a plurality of gradation conversion tables for gradation processing of the image signal.
- the gradation processing step has a gradation processing execution step and a correction step. In the gradation processing execution step, the gradation processing of the target image area is executed using the gradation conversion table selected by the selection signal.
- The correction step corrects the gradation of the gradation-processed image signal: the gradation of the target pixel is corrected based on the gradation conversion tables selected for the image area including the target pixel to be corrected and for the image areas adjacent to that image area.
- the gradation conversion table is, for example, a look-up table (LUT) that stores the pixel values of the gradation-processed image signal against the pixel values of the input image signal.
- the adjacent image region may be the same image region as the peripheral image region when the gradation conversion characteristics are derived, or may be a different image region.
- the adjacent image areas are, for example, the three image areas closest to the target pixel among the image areas adjacent to the image area including the target pixel.
- the selection signal has, for example, a value assigned to one tone conversion table selected from values assigned to each of the plurality of tone conversion tables.
- the pixel value of the gradation-processed image signal is output by referring to the LUT based on the value of the selection signal and the pixel value of the image signal.
- In the correction step, for example, the gradation of the image signal that was gradation-processed using the same gradation conversion table for each target image area is corrected.
- the correction of the target pixel is performed so that, for example, the influence of each gradation conversion table selected for the adjacent image region appears according to the position of the target pixel.
- gradation processing is performed with reference to the LUT. For this reason, it is possible to speed up the gradation processing.
- Since gradation processing is performed by selecting one gradation conversion table from a plurality of gradation conversion tables, appropriate gradation processing can be performed. Furthermore, the gradation of the image signal can be corrected for each pixel. For this reason, the boundaries of the image areas are further prevented from standing out unnaturally, and the visual effect can be improved.
- The visual processing method according to appendix 13 is the visual processing method according to appendix 10, wherein the gradation conversion characteristic is a selection signal for selecting one gradation conversion table from among a plurality of gradation conversion tables for gradation processing of the image signal.
- the gradation processing step has a correction step and a gradation processing execution step.
- the correction step corrects the selection signal and derives a correction selection signal for selecting a gradation processing table for each pixel of the image signal.
- In the gradation processing execution step, gradation processing of the image signal is executed using the gradation conversion table selected by the correction selection signal.
- the gradation conversion table is, for example, a look-up table (LUT) that stores the pixel values of the gradation-processed image signal against the pixel values of the input image signal.
- the selection signal has a value assigned to one gradation conversion table selected from values assigned to each of the plurality of gradation conversion tables.
- in the gradation processing step, the pixel value of the gradation-processed image signal is output by referring to the two-dimensional LUT based on the value of the selection signal and the pixel value of the image signal.
- In the correction step, for example, the selection signal derived for each target image region is corrected based on the pixel position and the selection signals derived for the image regions adjacent to the target image region, and a selection signal for each pixel is derived.
- gradation processing is performed with reference to the LUT. For this reason, it is possible to speed up the gradation processing.
- Since one gradation conversion table is selected from the plurality of gradation conversion tables and gradation processing is performed, appropriate gradation processing can be performed.
- a selection signal can be derived for each pixel. For this reason, the boundaries of the image areas are further prevented from standing out unnaturally, and the visual effect can be improved.
- the visual processing program described in appendix 14 is a visual processing program that causes a computer to execute a visual processing method including an image region dividing step, a gradation conversion characteristic deriving step, and a gradation processing step.
- the image area dividing step divides the input image signal into a plurality of image areas.
- The gradation conversion characteristic deriving step derives the gradation conversion characteristic for each image region; the gradation characteristics of the target image region from which the gradation conversion characteristic is derived and of its surrounding image regions are used to derive the gradation conversion characteristic of the target image region.
- In the gradation processing step, gradation processing of the image signal is performed based on the derived gradation conversion characteristics.
- the gradation conversion characteristic is a characteristic of gradation processing for each image area.
- the gradation characteristic is, for example, a pixel value such as luminance and brightness for each pixel.
- In the visual processing program of the present invention, when determining the gradation conversion characteristic for each image area, not only the gradation characteristics of that image area but also the gradation characteristics of a wide image area including the peripheral image areas are used. For this reason, spatial processing effects can be added to the gradation processing for each image area, and gradation processing with higher visual effects can be realized.
- the visual processing program according to attachment 15 is the visual processing program according to attachment 14 in which the gradation conversion characteristic is a gradation conversion curve.
- the tone conversion characteristic deriving step includes a histogram creation step of creating a histogram using the tone characteristics, and a tone curve creation step of creating a tone conversion curve based on the created histogram.
- the histogram is, for example, a distribution with respect to gradation characteristics of pixels included in the target image region and the peripheral image region.
- a cumulative curve obtained by accumulating the values of the histograms is used as a gradation conversion curve.
- In the visual processing program of the present invention, when creating the histogram, not only the gradation characteristics of each image region but also a wide range of gradation characteristics including the peripheral image regions are used. For this reason, it is possible to increase the number of divisions of the image signal and reduce the size of the image areas, and the occurrence of false contours due to gradation processing can be suppressed. In addition, the boundaries of the image areas can be prevented from standing out unnaturally.
- The visual processing program according to appendix 16 is the visual processing program according to appendix 14, wherein the gradation conversion characteristic is a selection signal for selecting one gradation conversion table from among a plurality of gradation conversion tables for gradation processing of the image signal.
- the gradation processing step has a gradation processing execution step and a correction step.
- In the gradation processing execution step, gradation processing of the target image area is performed using the gradation conversion table selected by the selection signal.
- the correction step is a step of correcting the gradation of the gradation-processed image signal.
- In the correction step, the gradation of the target pixel is corrected based on the gradation conversion tables selected for the image region including the target pixel to be corrected and for the image regions adjacent to that image region.
- The gradation conversion table is, for example, a lookup table (LUT) that stores, for each pixel value of the image signal, the corresponding gradation-processed pixel value.
- the adjacent image region may be the same image region as the peripheral image region when the gradation conversion characteristics are derived, or may be a different image region.
- For example, the adjacent image areas are the three image areas, among those adjacent to the image area including the target pixel, that are closest to the target pixel.
- the selection signal has, for example, a value assigned to one tone conversion table selected from values assigned to each of the plurality of tone conversion tables.
- the pixel value of the gradation-processed image signal is output by referring to the LUT based on the value of the selection signal and the pixel value of the image signal.
- In the correction step, for example, the gradation of the image signal that was gradation-processed using a single gradation conversion table for each target image area is corrected.
- the correction of the target pixel is performed so that, for example, the influence of each gradation conversion table selected for the adjacent image region appears according to the position of the target pixel.
- With the visual processing program of the present invention, gradation processing is performed with reference to a LUT. For this reason, the gradation processing can be sped up.
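A minimal sketch of the correction step above, under stated assumptions: the target pixel's gradation-processed value is blended with the outputs of the tables selected for the three nearest adjacent image areas, weighted bilinearly by the pixel's position so that each table's influence appears according to position. The function and variable names, the bilinear weighting, and the toy tables are illustrative, not from the original text.

```python
import numpy as np

def corrected_value(pixel_value, fx, fy, tables, sel):
    """Correct one gradation-processed pixel by blending the outputs of the
    gradation conversion tables selected for the target block and its three
    nearest adjacent blocks (bilinear weights from the pixel position).

    fx, fy in [0, 1): position inside the target block, measured toward the
    three nearest neighbours. 'tables' maps a selection-signal value to a
    256-entry LUT. All names here are illustrative assumptions.
    """
    s00, s10, s01, s11 = sel  # selection signals: target + 3 nearest blocks
    w00 = (1 - fx) * (1 - fy)
    w10 = fx * (1 - fy)
    w01 = (1 - fx) * fy
    w11 = fx * fy
    v = (w00 * tables[s00][pixel_value] + w10 * tables[s10][pixel_value]
         + w01 * tables[s01][pixel_value] + w11 * tables[s11][pixel_value])
    return int(round(v))

# Two toy conversion tables: identity and a brightening square-root curve
ident = np.arange(256, dtype=np.float64)
bright = 255.0 * np.sqrt(ident / 255.0)
tables = {0: ident, 1: bright}

# Halfway between a block using table 0 and a neighbour using table 1:
mid = corrected_value(64, 0.5, 0.0, tables, (0, 1, 0, 1))
```

At the block centre (fx = fy = 0) only the target block's table acts; toward a boundary the neighbour's table contributes more, which smooths the transition between blocks.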
- The visual processing program according to Supplementary Note 17 is the visual processing program according to Supplementary Note 14, wherein the gradation conversion characteristic is a selection signal for selecting one gradation conversion table from among a plurality of gradation conversion tables for performing gradation processing on the image signal. Further, the gradation processing step has a correction step and a gradation processing execution step. In the correction step, the selection signal is corrected to derive a correction selection signal that selects a gradation conversion table for each pixel of the image signal. In the gradation processing execution step, gradation processing of the image signal is executed using the gradation conversion table selected by the correction selection signal.
- The gradation conversion table is, for example, a lookup table (LUT) that stores, for each pixel value of the image signal, the corresponding gradation-processed pixel value.
- the selection signal has a value assigned to one gradation conversion table selected from values assigned to each of the plurality of gradation conversion tables.
- the gradation processing step outputs a pixel value of the gradation-processed image signal by referring to the two-dimensional LUT from the value of the selection signal and the pixel value of the image signal.
- In the correction step, for example, the selection signal derived for each target image region is corrected based on the pixel position and the selection signals derived for the image regions adjacent to the target image region, and a selection signal for each pixel is derived.
- With the visual processing program of the present invention, gradation processing is performed with reference to a LUT. For this reason, the gradation processing can be sped up.
- Further, since gradation processing is performed by selecting one gradation conversion table from among a plurality of gradation conversion tables, appropriate gradation processing can be performed.
- Moreover, a selection signal can be derived for each pixel. For this reason, the boundaries of the image areas are further prevented from standing out unnaturally, and the visual effect can be improved.
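The per-pixel variant above can be sketched as follows: the block-wise selection signals are interpolated to give a (continuous) selection value for each pixel, and a two-dimensional LUT is then referenced from that value and the pixel value. The bilinear interpolation over block centres, the 8×8 block size, and the toy two-row LUT are assumptions made for this example.

```python
import numpy as np

def pixel_selection(block_sel, y, x, bh=8, bw=8):
    """Derive a per-pixel selection signal by bilinearly interpolating the
    block-wise selection signals at the four surrounding block centres.
    block_sel : 2D float array, one selection signal per block.
    Block size and interpolation scheme are illustrative assumptions."""
    gy = (y + 0.5) / bh - 0.5   # pixel position in block-centre coordinates
    gx = (x + 0.5) / bw - 0.5
    y0 = int(np.clip(np.floor(gy), 0, block_sel.shape[0] - 1))
    x0 = int(np.clip(np.floor(gx), 0, block_sel.shape[1] - 1))
    y1 = min(y0 + 1, block_sel.shape[0] - 1)
    x1 = min(x0 + 1, block_sel.shape[1] - 1)
    fy = np.clip(gy - y0, 0.0, 1.0)
    fx = np.clip(gx - x0, 0.0, 1.0)
    return ((1 - fy) * (1 - fx) * block_sel[y0, x0]
            + (1 - fy) * fx * block_sel[y0, x1]
            + fy * (1 - fx) * block_sel[y1, x0]
            + fy * fx * block_sel[y1, x1])

def apply_2d_lut(lut2d, sel, value):
    """Reference a two-dimensional LUT from the corrected selection signal
    and the pixel value, interpolating between the two nearest LUT rows."""
    s0 = int(np.floor(sel))
    s1 = min(s0 + 1, lut2d.shape[0] - 1)
    f = sel - s0
    return (1 - f) * lut2d[s0, value] + f * lut2d[s1, value]

# Toy 2D LUT with two rows: identity and a doubling curve (clipped at 255)
lut2d = np.stack([np.arange(256, dtype=np.float64),
                  np.minimum(2.0 * np.arange(256), 255.0)])
sel = pixel_selection(np.array([[0.0, 1.0]]), y=4, x=8)
out = apply_2d_lut(lut2d, sel, 50)
```

Because the selection value varies smoothly from pixel to pixel, the processing result varies smoothly across block boundaries as well.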
- The visual processing device according to Supplementary Note 18 is the visual processing device according to Supplementary Note 1, wherein the gradation processing means has parameter output means for outputting a curve parameter of a gradation conversion curve for performing gradation processing on the image signal, based on the gradation conversion characteristics.
- Further, the gradation processing means performs gradation processing on the image signal using a gradation conversion curve specified based on the gradation conversion characteristics and the curve parameter.
- the gradation conversion curve includes a curve that is at least partially linear.
- The curve parameter is a parameter that distinguishes the gradation conversion curve from other gradation conversion curves, such as the coordinates of points on the gradation conversion curve, or the gradient and curvature of the gradation conversion curve.
- The parameter output means is, for example, a lookup table that stores curve parameters for the gradation conversion characteristics, or calculation means that calculates curve parameters by performing calculations such as curve approximation using curve parameters for predetermined gradation conversion characteristics.
- the visual processing device of the present invention performs gradation processing on an image signal according to gradation conversion characteristics. For this reason, it is possible to perform gradation processing more appropriately. In addition, it is not necessary to store in advance the values of all tone conversion curves used for tone processing, and tone processing is performed by specifying the tone conversion curve from the output curve parameters. For this reason, it is possible to reduce the storage capacity for storing the gradation conversion curve.
- The visual processing device according to Supplementary Note 19 is the visual processing device according to Supplementary Note 18, wherein the parameter output means is a lookup table that stores the relationship between the gradation conversion characteristics and the curve parameters.
- the look-up table stores the relationship between tone conversion characteristics and curve parameters.
- the gradation processing means performs gradation processing on the image signal using the specified gradation conversion curve.
- With the visual processing device of the present invention, gradation processing is performed on the image signal according to the gradation conversion characteristics, so more appropriate gradation processing can be performed. Furthermore, it is not necessary to store in advance the values of all the gradation conversion curves that are used; only the curve parameters are stored. For this reason, the storage capacity for storing the gradation conversion curves can be reduced.
- the visual processing device according to attachment 20 is the visual processing device according to attachment 18 or 19, wherein the curve parameter includes a value of the gradation-processed image signal with respect to a predetermined value of the image signal.
- The gradation processing means derives the value of the gradation-processed image signal by nonlinearly or linearly internally dividing the values of the gradation-processed image signal included in the curve parameters, according to the relationship between the predetermined values of the image signal and the value of the image signal to be visually processed.
- With the visual processing device of the present invention, a gradation conversion curve can be specified from the values of the gradation-processed image signal at predetermined values of the image signal, and gradation processing can be performed.
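The linear internal division described above can be written out in a few lines. This sketch assumes the curve parameters are stored as a sorted list of (input, output) pairs; the knot positions and the function name are illustrative, not from the original text.

```python
def graded_value(x, knots):
    """Specify a gradation conversion curve from curve parameters that give
    the gradation-processed value at predetermined input values, and derive
    the output for any input by linear internal division between the two
    enclosing stored points. 'knots' is a sorted list of (input, output)
    pairs; the knot positions used below are illustrative assumptions."""
    for (x0, y0), (x1, y1) in zip(knots, knots[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)       # internal-division ratio
            return (1 - t) * y0 + t * y1   # linear internal division
    raise ValueError("input outside the stored sections")

# Curve parameters: outputs stored only at inputs 0, 128, and 255
knots = [(0, 0), (128, 200), (255, 255)]
y = graded_value(64, knots)
```

Only three value pairs are stored here instead of a full 256-entry curve, which is the storage saving the text refers to; a nonlinear internal division (e.g. along a spline) could replace the linear one.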
- The visual processing device according to Supplementary Note 21 is the visual processing device according to any one of Supplementary Notes 18 to 20, wherein the curve parameter includes the slope of the gradation conversion curve in a predetermined section of the image signal.
- the gradation conversion curve is specified by the gradient of the gradation conversion curve in a predetermined section of the image signal. Further, using the specified gradation conversion curve, the value of the image signal subjected to gradation processing with respect to the value of the image signal is derived.
- With the visual processing device of the present invention, a gradation conversion curve can be specified from its slope in a predetermined section of the image signal, and gradation processing can be performed.
- the visual processing device according to attachment 22 is the visual processing device according to any of attachments 18 to 21, wherein the curve parameter includes coordinates of at least one point through which the gradation conversion curve passes.
- The coordinates of at least one point through which the gradation conversion curve passes are specified. That is, the value of the gradation-processed image signal is specified for at least one value of the image signal.
- The gradation processing means derives the value of the gradation-processed image signal by nonlinearly or linearly internally dividing the specified gradation-processed values, using the relationship between the specified values of the image signal and the value of the image signal to be visually processed.
- With the visual processing device of the present invention, a gradation conversion curve can be specified from the coordinates of at least one point through which it passes, and gradation processing can be performed.
- the visual processing device includes spatial processing means and visual processing means.
- the spatial processing means is means for performing spatial processing for each of a plurality of image regions in the input image signal to derive a spatial processing signal.
- The spatial processing takes a weighted average of the gradation characteristics of the target image area being spatially processed and those of its peripheral image areas, using weighting based on the difference in gradation characteristics between the target image area and the peripheral image areas.
- the visual processing means performs visual processing of the target image area based on the gradation characteristics of the target image area and the spatial processing signal.
- the image area is an area including a plurality of pixels in an image, or a pixel.
- the gradation characteristics are values based on pixel values such as luminance and brightness for each pixel.
- the gradation characteristics of an image area include an average value (simple average or weighted average), a maximum value, or a minimum value of pixel values of pixels included in the image area.
- the spatial processing means performs spatial processing of the target image area using the gradation characteristics of the peripheral image area.
- the gradation characteristics of the target image area and the surrounding image area are weighted averaged.
- the weight in the weighted average is set based on the difference in gradation characteristics between the target image area and the peripheral image area.
- With the visual processing device of the present invention, the influence that the spatial processing signal receives from image regions with greatly different gradation characteristics can be suppressed. For example, an appropriate spatial processing signal can be derived even when a peripheral image area includes an object boundary or the like and its gradation characteristics differ greatly from those of the target image area. As a result, the occurrence of pseudo contours can be suppressed even in visual processing that uses the spatially processed signal, so visual processing with an improved visual effect can be realized.
- the visual processing device according to attachment 24 is the visual processing device according to attachment 23, wherein the weighting decreases as the absolute value of the difference in gradation characteristics increases.
- The weight may be given as a value that monotonically decreases in accordance with the difference in gradation characteristics, or may be set to a predetermined value by comparing the difference in gradation characteristics with a predetermined threshold.
- With the visual processing device of the present invention, the influence that the spatial processing signal receives from image regions with greatly different gradation characteristics can be suppressed. For example, an appropriate spatial processing signal can be derived even when a peripheral image area includes an object boundary or the like and its gradation characteristics differ greatly from those of the target image area. As a result, the occurrence of pseudo contours can be suppressed even in visual processing that uses the spatially processed signal, so visual processing with an improved visual effect can be realized.
- The visual processing device according to Supplementary Note 25 is the visual processing device according to Supplementary Note 23 or 24, wherein the weighting is smaller as the distance between the target image region and the peripheral image region increases.
- The weight may be given as a value that monotonically decreases in accordance with the distance between the target image area and the peripheral image area, or may be set to a predetermined value by comparing the distance with a predetermined threshold.
- With the visual processing device of the present invention, the influence that the spatial processing signal receives from peripheral image regions distant from the target image region can be suppressed. For this reason, even when a peripheral image area includes an object boundary or the like and its gradation characteristics differ greatly from those of the target image area, if that peripheral image area is distant from the target image area, its influence can be suppressed and a more appropriate spatial processing signal can be derived.
- the visual processing device according to attachment 26 is the visual processing device according to any one of attachments 23 to 25, wherein the image area includes a plurality of pixels.
- the gradation characteristics of the target image area and the peripheral image area are determined as feature values of pixel values constituting each image area.
- With the visual processing device of the present invention, spatial processing for each image area uses the gradation characteristics not only of the pixels included in that image area but also of the pixels included in a wide image area including the peripheral image areas. This makes it possible to perform more appropriate spatial processing. As a result, the occurrence of pseudo contours can be suppressed even in visual processing that uses the spatially processed signal, so visual processing with an improved visual effect can be realized.
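The weighted average of Supplementary Notes 23 to 25 can be sketched as below: each peripheral region's weight decreases monotonically both with the absolute difference of its gradation characteristic from the target's and with its distance. The Gaussian weight shapes, the parameter values, and the function name are illustrative assumptions.

```python
import numpy as np

def spatial_signal(grad, ty, tx, radius=1, sigma_v=32.0, sigma_d=1.0):
    """Spatially process one target image region: weighted average of the
    gradation characteristics of the target and peripheral regions.

    A peripheral region's weight shrinks as the absolute difference of its
    gradation characteristic from the target's grows, and as its distance
    from the target grows. 'grad' holds one gradation characteristic (e.g.
    mean luminance) per region; the Gaussian weights are assumptions."""
    total_w, total_v = 0.0, 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = ty + dy, tx + dx
            if not (0 <= y < grad.shape[0] and 0 <= x < grad.shape[1]):
                continue
            diff = grad[y, x] - grad[ty, tx]
            # Weight: monotonically decreasing in |difference| and distance
            w = (np.exp(-(diff / sigma_v) ** 2)
                 * np.exp(-(dy * dy + dx * dx) / (2 * sigma_d ** 2)))
            total_w += w
            total_v += w * grad[y, x]
    return total_v / total_w

# A bright object boundary in the periphery barely shifts the result,
# whereas a plain (unweighted) mean would be pulled far above 100:
grad = np.array([[40.0, 40.0, 255.0],
                 [40.0, 40.0, 255.0],
                 [40.0, 40.0, 255.0]])
s = spatial_signal(grad, 1, 1)
```

This is the suppression the text describes: regions whose gradation characteristics differ greatly (here, the 255-valued column) receive nearly zero weight, so the spatial processing signal stays close to the target's own characteristic and pseudo contours are avoided.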
- the visual processing device includes a target image region determining unit, a peripheral image region determining unit, a gradation conversion characteristic deriving unit, and a gradation processing unit.
- the target image area determining means determines a target image area for which the gradation conversion characteristic is to be derived from the input image signal.
- the peripheral image area determining means determines at least one peripheral image area located around the target image area and including a plurality of pixels.
- the gradation conversion characteristic deriving means derives a gradation conversion characteristic of the target image region using the peripheral image data of the peripheral image region.
- the gradation processing means performs gradation processing on the image signal of the target image area based on the derived gradation conversion characteristics.
- The target image area is, for example, a pixel included in the image signal, an image block obtained by dividing the image signal into predetermined units, or an area composed of a plurality of pixels.
- The peripheral image area is, for example, an image block obtained by dividing the image signal into predetermined units, or some other area composed of a plurality of pixels.
- The peripheral image data is the image data of the peripheral image area or data derived from it, and includes, for example, the pixel values of the peripheral image area, its gradation characteristics (luminance or brightness for each pixel), and a thumbnail (a reduced image, or a thinned image with reduced resolution). Further, the peripheral image area need only be located around the target image area; it need not be an area surrounding the target image area.
- With the visual processing device of the present invention, the gradation conversion characteristics of the target image area are determined using the peripheral image data of the peripheral image area. Therefore, a spatial processing effect can be added to the gradation processing for each target image area, and gradation processing that further improves the visual effect can be realized.
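One of the peripheral-image-data forms mentioned above is a thinned (reduced-resolution) image. A minimal sketch of deriving a gradation conversion characteristic from such data follows; the thinning step, the mean-based characteristic, and the gamma mapping are all illustrative assumptions, not the patent's method.

```python
import numpy as np

def characteristic_from_thumbnail(image, r0, r1, c0, c1, step=4):
    """Derive a gradation conversion characteristic for a target image area
    from peripheral image data given as a thinned image of the peripheral
    area (one of the data forms the text mentions).

    The thinning step, the mean-luminance characteristic, and the gamma
    curve below are illustrative assumptions for this sketch."""
    thumb = image[r0:r1:step, c0:c1:step]   # thinned peripheral image data
    mean = float(thumb.mean())              # wide-area gradation character
    # One simple characteristic: a gamma that brightens dark surroundings
    gamma = np.interp(mean, [0, 255], [0.5, 1.5])
    x = np.arange(256) / 255.0
    return np.round(255.0 * x ** gamma).astype(np.uint8)

# Dark peripheral area -> gamma below 1 -> brightening conversion curve
curve = characteristic_from_thumbnail(
    np.full((64, 64), 32, dtype=np.uint8), 0, 64, 0, 64)
```

Working from a thumbnail rather than full-resolution peripheral data reduces the amount of data the characteristic derivation has to read, which suits hardware implementations such as the semiconductor device described later.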
- the visual processing method includes a target image region determining step, a peripheral image region determining step, a gradation conversion characteristic deriving step, and a gradation processing step.
- the target image area determining step determines a target image area for which the gradation conversion characteristic is to be derived from the input image signal.
- the peripheral image region determining step determines at least one peripheral image region that is located around the target image region and includes a plurality of pixels.
- In the gradation conversion characteristic deriving step, the gradation conversion characteristic of the target image area is derived using the peripheral image data of the peripheral image area.
- the gradation processing step performs gradation processing of the image signal of the target image area based on the derived gradation conversion characteristics.
- With the visual processing method of the present invention, the gradation conversion characteristics of the target image area are determined using the peripheral image data of the peripheral image area. Therefore, a spatial processing effect can be added to the gradation processing for each target image area, and gradation processing that further improves the visual effect can be realized.
- the visual processing program according to Supplementary Note 29 is a visual processing program for performing a visual processing method of performing visual processing on an input image signal using a computer.
- the visual processing method includes a target image region determining step, a peripheral image region determining step, a gradation conversion characteristic deriving step, and a gradation processing step.
- the target image area determining step determines a target image area from which the gradation conversion characteristic is to be derived from the input image signal.
- In the peripheral image area determining step, at least one peripheral image area that is located around the target image area and includes a plurality of pixels is determined.
- the gradation conversion characteristic deriving step derives a gradation conversion characteristic of the target image region using the peripheral image data of the peripheral image region.
- the gradation processing step performs gradation processing on the image signal of the target image area based on the derived gradation conversion characteristics.
- With the visual processing program of the present invention, the gradation conversion characteristics of the target image area are determined using the peripheral image data of the peripheral image area. For this reason, a spatial processing effect can be added to the gradation processing for each target image area, and gradation processing that further enhances the visual effect can be realized.
- the semiconductor device includes a target image region determining unit, a peripheral image region determining unit, a gradation conversion characteristic deriving unit, and a gradation processing unit.
- the target image area determination unit determines a target image area from which the gradation conversion characteristic is to be derived from the input image signal.
- the peripheral image region determining unit determines at least one peripheral image region located around the target image region and including a plurality of pixels.
- the gradation conversion characteristic deriving unit derives a gradation conversion characteristic of the target image region using peripheral image data of the peripheral image region.
- the gradation processing unit performs gradation processing on the image signal of the target image area based on the derived gradation conversion characteristics.
- With the semiconductor device of the present invention, the gradation conversion characteristics of the target image area are determined using the peripheral image data of the peripheral image area. For this reason, a spatial processing effect can be added to the gradation processing for each target image area, and gradation processing that further enhances the visual effect can be realized.
- The visual processing device of the present invention is applicable to applications such as a visual processing device that performs gradation processing on an image signal and that needs to realize gradation processing with a further improved visual effect.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP04773245A EP1667064A4 (en) | 2003-09-11 | 2004-09-10 | IMAGE PROCESSING DEVICE, METHOD AND PROGRAM AND SEMICONDUCTOR ELEMENT |
US10/571,120 US7783126B2 (en) | 2003-09-11 | 2004-09-10 | Visual processing device, visual processing method, visual processing program, and semiconductor device |
KR1020107029130A KR101089394B1 (ko) | 2003-09-11 | 2004-09-10 | Visual processing device, visual processing method, visual processing program, and semiconductor device |
US12/838,689 US7945115B2 (en) | 2003-09-11 | 2010-07-19 | Visual processing device, visual processing method, visual processing program, and semiconductor device |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2003-320059 | 2003-09-11 | ||
JP2003320059 | 2003-09-11 | ||
JP2004115166 | 2004-04-09 | ||
JP2004-115166 | 2004-04-09 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/571,120 A-371-Of-International US7783126B2 (en) | 2003-09-11 | 2004-09-10 | Visual processing device, visual processing method, visual processing program, and semiconductor device |
US12/838,689 Continuation US7945115B2 (en) | 2003-09-11 | 2010-07-19 | Visual processing device, visual processing method, visual processing program, and semiconductor device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005027041A1 true WO2005027041A1 (ja) | 2005-03-24 |
Family
ID=34315661
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2004/013601 WO2005027041A1 (ja) | 2003-09-11 | 2004-09-10 | Visual processing device, visual processing method, visual processing program, and semiconductor device |
Country Status (5)
Country | Link |
---|---|
US (2) | US7783126B2 (ja) |
EP (1) | EP1667064A4 (ja) |
JP (1) | JP2009211708A (ja) |
KR (2) | KR101089394B1 (ja) |
WO (1) | WO2005027041A1 (ja) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2026561A1 (en) * | 2006-06-02 | 2009-02-18 | Rohm Co., Ltd. | Image processing circuit, semiconductor device, and image processing device |
US7990465B2 (en) | 2007-09-13 | 2011-08-02 | Panasonic Corporation | Imaging apparatus, imaging method, storage medium, and integrated circuit |
US8144214B2 (en) | 2007-04-18 | 2012-03-27 | Panasonic Corporation | Imaging apparatus, imaging method, integrated circuit, and storage medium |
Families Citing this family (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101010937A (zh) * | 2004-09-01 | 2007-08-01 | 日本电气株式会社 | 图像修正处理系统以及图像修正处理方法 |
US7551234B2 (en) * | 2005-07-28 | 2009-06-23 | Seiko Epson Corporation | Method and apparatus for estimating shot boundaries in a digital video sequence |
JP4345757B2 (ja) * | 2006-02-22 | 2009-10-14 | セイコーエプソン株式会社 | 画像データの色の補正 |
US7809210B2 (en) * | 2006-12-12 | 2010-10-05 | Mitsubishi Digital Electronics America, Inc. | Smart grey level magnifier for digital display |
JP4525719B2 (ja) * | 2007-08-31 | 2010-08-18 | カシオ計算機株式会社 | 階調補正装置、階調補正方法、及び、プログラム |
JP4462359B2 (ja) * | 2008-02-27 | 2010-05-12 | コニカミノルタビジネステクノロジーズ株式会社 | 画像圧縮装置 |
ITTO20090161A1 (it) * | 2009-03-03 | 2010-09-04 | Galileo Avionica Spa | Equalizzazione ed elaborazione di immagini ir |
JP2010278724A (ja) * | 2009-05-28 | 2010-12-09 | Olympus Corp | 画像処理装置、画像処理方法及び画像処理プログラム |
JP2011049650A (ja) * | 2009-08-25 | 2011-03-10 | Canon Inc | 画像処理装置及び画像処理方法 |
JP5493717B2 (ja) * | 2009-10-30 | 2014-05-14 | 大日本印刷株式会社 | 画像処理装置、画像処理方法、および、画像処理用プログラム |
JP5140206B2 (ja) * | 2010-10-12 | 2013-02-06 | パナソニック株式会社 | 色信号処理装置 |
US9245495B2 (en) | 2011-12-29 | 2016-01-26 | Intel Corporation | Simplification of local contrast compensation by using weighted look-up table |
KR20130106642A (ko) * | 2012-03-20 | 2013-09-30 | 삼성디스플레이 주식회사 | 휘도 보정 시스템 및 그 방법 |
US11113821B2 (en) | 2017-12-20 | 2021-09-07 | Duelight Llc | System, method, and computer program for adjusting image contrast using parameterized cumulative distribution functions |
TWI473039B (zh) * | 2013-03-05 | 2015-02-11 | Univ Tamkang | 影像的動態範圍壓縮與局部對比增強方法及影像處理裝置 |
JP6904838B2 (ja) * | 2017-07-31 | 2021-07-21 | キヤノン株式会社 | 画像処理装置、その制御方法およびプログラム |
KR102349376B1 (ko) * | 2017-11-03 | 2022-01-11 | 삼성전자주식회사 | 전자 장치 및 그의 영상 보정 방법 |
KR102368229B1 (ko) * | 2018-02-06 | 2022-03-03 | 한화테크윈 주식회사 | 영상처리장치 및 방법 |
JP7104575B2 (ja) * | 2018-06-29 | 2022-07-21 | キヤノン株式会社 | 画像処理装置、制御方法、及びプログラム |
JP7212466B2 (ja) * | 2018-07-06 | 2023-01-25 | キヤノン株式会社 | 画像処理装置、画像処理方法、プログラム、記憶媒体 |
CN108877673B (zh) * | 2018-07-27 | 2020-12-25 | 京东方科技集团股份有限公司 | 控制显示面板驱动电流的方法及装置、电子设备、存储介质 |
CN110120021B (zh) * | 2019-05-05 | 2021-04-09 | 腾讯科技(深圳)有限公司 | 图像亮度的调整方法、装置、存储介质及电子装置 |
CN112950515A (zh) * | 2021-01-29 | 2021-06-11 | Oppo广东移动通信有限公司 | 图像处理方法及装置、计算机可读存储介质和电子设备 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5524070A (en) | 1992-10-07 | 1996-06-04 | The Research Foundation Of State University Of New York | Local adaptive contrast enhancement |
JPH11122488A (ja) * | 1997-10-17 | 1999-04-30 | Sharp Corp | 画像処理装置 |
JP2000004379A (ja) * | 1995-09-25 | 2000-01-07 | Matsushita Electric Ind Co Ltd | 画像表示方法及びその装置 |
WO2001026054A2 (en) | 1999-10-01 | 2001-04-12 | Microsoft Corporation | Locally adapted histogram equalization |
JP2001243463A (ja) * | 2000-02-28 | 2001-09-07 | Minolta Co Ltd | 記録媒体、並びに、画像処理装置および画像処理方法 |
Family Cites Families (74)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE3024459A1 (de) | 1979-07-03 | 1981-01-08 | Crosfield Electronics Ltd | Pyramideninterpolation |
JPS5827145A (ja) | 1981-08-10 | 1983-02-17 | Fuji Photo Film Co Ltd | カラ−画像信号の輪郭強調処理方法及びカラ−画像走査装置 |
US4837722A (en) * | 1986-05-14 | 1989-06-06 | Massachusetts Institute Of Technology | Digital high speed 3-dimensional interpolation machine |
JP2718448B2 (ja) | 1987-10-19 | 1998-02-25 | 富士写真フイルム株式会社 | 画像処理データの設定方法 |
JP2655602B2 (ja) | 1987-11-04 | 1997-09-24 | 日本電気株式会社 | 画像強調回路 |
JP2832954B2 (ja) | 1988-09-09 | 1998-12-09 | 日本電気株式会社 | 画像強調回路 |
JPH0348980A (ja) | 1989-07-18 | 1991-03-01 | Fujitsu Ltd | 輪郭強調処理方式 |
JP2663189B2 (ja) * | 1990-01-29 | 1997-10-15 | 富士写真フイルム株式会社 | 画像のダイナミックレンジ圧縮処理方法 |
JP2961953B2 (ja) | 1991-06-14 | 1999-10-12 | 松下電器産業株式会社 | 色補正方法および装置 |
JP3303427B2 (ja) | 1992-05-19 | 2002-07-22 | ミノルタ株式会社 | デジタル画像形成装置 |
JP3222577B2 (ja) | 1992-09-16 | 2001-10-29 | 古河電気工業株式会社 | 熱交換器用アルミニウム合金フィン材 |
JPH06259543A (ja) | 1993-03-05 | 1994-09-16 | Hitachi Medical Corp | 画像処理装置 |
JP3196864B2 (ja) * | 1993-04-19 | 2001-08-06 | 富士写真フイルム株式会社 | 画像のダイナミックレンジ圧縮処理方法 |
JPH07220066A (ja) | 1994-01-28 | 1995-08-18 | Matsushita Electric Ind Co Ltd | 画像処理装置 |
US5483360A (en) | 1994-06-06 | 1996-01-09 | Xerox Corporation | Color printer calibration with blended look up tables |
JP3518913B2 (ja) | 1994-12-22 | 2004-04-12 | 株式会社リコー | 階調変換曲線生成装置 |
US5479926A (en) | 1995-03-10 | 1996-01-02 | Acuson Corporation | Imaging system display processor |
US5774599A (en) | 1995-03-14 | 1998-06-30 | Eastman Kodak Company | Method for precompensation of digital images for enhanced presentation on digital displays with limited capabilities |
JP3501252B2 (ja) * | 1995-06-16 | 2004-03-02 | 三菱電機株式会社 | 階調補正装置 |
US6094185A (en) * | 1995-07-05 | 2000-07-25 | Sun Microsystems, Inc. | Apparatus and method for automatically adjusting computer display parameters in response to ambient light and user preferences |
JP3003561B2 (ja) * | 1995-09-25 | 2000-01-31 | 松下電器産業株式会社 | 階調変換方法及びその回路と画像表示方法及びその装置と画像信号変換装置 |
JP4014671B2 (ja) | 1995-09-29 | 2007-11-28 | 富士フイルム株式会社 | 多重解像度変換方法および装置 |
EP0766202B1 (en) * | 1995-09-29 | 2002-12-18 | Fuji Photo Film Co., Ltd. | Image processing method and apparatus |
JP3816151B2 (ja) | 1995-09-29 | 2006-08-30 | 富士写真フイルム株式会社 | 画像処理方法および装置 |
JP3171081B2 (ja) | 1995-12-18 | 2001-05-28 | 富士ゼロックス株式会社 | 画像処理装置 |
JPH09231353A (ja) | 1996-02-23 | 1997-09-05 | Toshiba Corp | カラー画像処理システム |
JPH09275496A (ja) | 1996-04-04 | 1997-10-21 | Dainippon Screen Mfg Co Ltd | 画像の輪郭強調処理装置および方法 |
JP3797442B2 (ja) | 1996-06-18 | 2006-07-19 | 富士写真フイルム株式会社 | 画像再生方法および装置 |
JPH1065930A (ja) | 1996-08-19 | 1998-03-06 | Fuji Xerox Co Ltd | カラー画像処理方法およびカラー画像処理装置 |
US6351558B1 (en) * | 1996-11-13 | 2002-02-26 | Seiko Epson Corporation | Image processing system, image processing method, and medium having an image processing control program recorded thereon |
US6453069B1 (en) * | 1996-11-20 | 2002-09-17 | Canon Kabushiki Kaisha | Method of extracting image from input image using reference image |
JPH10154223A (ja) | 1996-11-25 | 1998-06-09 | Ricoh Co Ltd | データ変換装置 |
KR100261214B1 (ko) * | 1997-02-27 | 2000-07-01 | 윤종용 | 영상처리 시스템의 콘트라스트 확장장치에서 히스토그램 등화방법 및 장치 |
JP2951909B2 (ja) * | 1997-03-17 | 1999-09-20 | 松下電器産業株式会社 | 撮像装置の階調補正装置及び階調補正方法 |
JPH10334218A (ja) | 1997-06-02 | 1998-12-18 | Canon Inc | 画像処理装置およびその方法、並びに、記録媒体 |
JP3585703B2 (ja) * | 1997-06-27 | 2004-11-04 | シャープ株式会社 | 画像処理装置 |
JP3671616B2 (ja) | 1997-08-21 | 2005-07-13 | 富士ゼロックス株式会社 | 画像処理装置 |
US6147664A (en) * | 1997-08-29 | 2000-11-14 | Candescent Technologies Corporation | Controlling the brightness of an FED device using PWM on the row side and AM on the column side |
US6069597A (en) | 1997-08-29 | 2000-05-30 | Candescent Technologies Corporation | Circuit and method for controlling the brightness of an FED device |
JP4834896B2 (ja) * | 1997-10-31 | 2011-12-14 | ソニー株式会社 | 画像処理装置及び方法、画像送受信システム及び方法、並びに記録媒体 |
US6411306B1 (en) * | 1997-11-14 | 2002-06-25 | Eastman Kodak Company | Automatic luminance and contrast adustment for display device |
JP3907810B2 (ja) | 1998-01-07 | 2007-04-18 | 富士フイルム株式会社 | 3次元ルックアップテーブルの補正法およびこれを行う画像処理装置ならびにこれを備えたデジタルカラープリンタ |
US6323869B1 (en) * | 1998-01-09 | 2001-11-27 | Eastman Kodak Company | Method and system for modality dependent tone scale adjustment |
JP3809298B2 (ja) | 1998-05-26 | 2006-08-16 | キヤノン株式会社 | 画像処理方法、装置および記録媒体 |
JP2000032281A (ja) | 1998-07-07 | 2000-01-28 | Ricoh Co Ltd | 画像処理方法、装置および記録媒体 |
US6161311A (en) * | 1998-07-10 | 2000-12-19 | Asm America, Inc. | System and method for reducing particles in epitaxial reactors |
JP3791199B2 (ja) | 1998-08-05 | 2006-06-28 | Konica Minolta Business Technologies Inc | Image processing apparatus, image processing method, and recording medium storing an image processing program |
US6643398B2 (en) * | 1998-08-05 | 2003-11-04 | Minolta Co., Ltd. | Image correction device, image correction method and computer program product in memory for image correction |
US6275605B1 (en) * | 1999-01-18 | 2001-08-14 | Eastman Kodak Company | Method for adjusting the tone scale of a digital image |
US6674436B1 (en) | 1999-02-01 | 2004-01-06 | Microsoft Corporation | Methods and apparatus for improving the quality of displayed images through the use of display device and display condition information |
JP2000278522A (ja) | 1999-03-23 | 2000-10-06 | Minolta Co Ltd | Image processing apparatus |
US6580835B1 (en) | 1999-06-02 | 2003-06-17 | Eastman Kodak Company | Method for enhancing the edge contrast of a digital image |
JP2001111858A (ja) | 1999-08-03 | 2001-04-20 | Fuji Photo Film Co Ltd | Color correction definition creation method, color correction definition creation apparatus, and storage medium storing a color correction definition creation program |
JP2001069352A (ja) | 1999-08-27 | 2001-03-16 | Canon Inc | Image processing apparatus and method |
JP2001078047A (ja) | 1999-09-02 | 2001-03-23 | Seiko Epson Corp | Profile synthesis method, profile synthesis apparatus, medium storing a profile synthesis program, and data conversion apparatus |
US7006668B2 (en) | 1999-12-28 | 2006-02-28 | Canon Kabushiki Kaisha | Image processing method and image processing apparatus |
US6618045B1 (en) * | 2000-02-04 | 2003-09-09 | Microsoft Corporation | Display device with self-adjusting control parameters |
US6822762B2 (en) * | 2000-03-31 | 2004-11-23 | Hewlett-Packard Development Company, L.P. | Local color correction |
US6813041B1 (en) * | 2000-03-31 | 2004-11-02 | Hewlett-Packard Development Company, L.P. | Method and apparatus for performing local color correction |
JP4081219B2 (ja) * | 2000-04-17 | 2008-04-23 | Fujifilm Corp | Image processing method and image processing apparatus |
JP2002044451A (ja) | 2000-07-19 | 2002-02-08 | Canon Inc | Image processing apparatus and method |
DE60110399T2 (de) | 2000-08-28 | 2006-01-19 | Seiko Epson Corp. | Environment-adaptive display system, image processing method, and image storage medium |
US6856704B1 (en) | 2000-09-13 | 2005-02-15 | Eastman Kodak Company | Method for enhancing a digital image based upon pixel color |
JP3793987B2 (ja) | 2000-09-13 | 2006-07-05 | Seiko Epson Corp | Correction curve generation method, image processing method, image display apparatus, and recording medium |
US6915024B1 (en) * | 2000-09-29 | 2005-07-05 | Hewlett-Packard Development Company, L.P. | Image sharpening by variable contrast mapping |
JP2002204372A (ja) * | 2000-12-28 | 2002-07-19 | Canon Inc | Image processing apparatus and method |
JP2002281333A (ja) | 2001-03-16 | 2002-09-27 | Canon Inc | Image processing apparatus and method |
US7023580B2 (en) * | 2001-04-20 | 2006-04-04 | Agilent Technologies, Inc. | System and method for digital image tone mapping using an adaptive sigmoidal function based on perceptual preference guidelines |
US6826310B2 (en) * | 2001-07-06 | 2004-11-30 | Jasc Software, Inc. | Automatic contrast enhancement |
JP3705180B2 (ja) * | 2001-09-27 | 2005-10-12 | Seiko Epson Corp | Image display system, program, information storage medium, and image processing method |
JP3752448B2 (ja) | 2001-12-05 | 2006-03-08 | Olympus Corp | Image display system |
JP2003242498A (ja) | 2002-02-18 | 2003-08-29 | Konica Corp | Image processing method, image processing apparatus, image output method, and image output apparatus |
JP4367162B2 (ja) | 2003-02-19 | 2009-11-18 | Panasonic Corp | Plasma display panel and aging method therefor |
JP4089483B2 (ja) * | 2003-03-27 | 2008-05-28 | Seiko Epson Corp | Control of gradation characteristics of an image signal representing an image containing images with different features |
2004
- 2004-09-10 KR KR1020107029130A patent/KR101089394B1/ko active IP Right Grant
- 2004-09-10 US US10/571,120 patent/US7783126B2/en active Active
- 2004-09-10 WO PCT/JP2004/013601 patent/WO2005027041A1/ja active Application Filing
- 2004-09-10 KR KR1020067005005A patent/KR101027825B1/ko active IP Right Grant
- 2004-09-10 EP EP04773245A patent/EP1667064A4/en not_active Ceased
2009
- 2009-05-08 JP JP2009113313A patent/JP2009211708A/ja active Pending
2010
- 2010-07-19 US US12/838,689 patent/US7945115B2/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5524070A (en) | 1992-10-07 | 1996-06-04 | The Research Foundation Of State University Of New York | Local adaptive contrast enhancement |
JP2000004379A (ja) * | 1995-09-25 | 2000-01-07 | Matsushita Electric Ind Co Ltd | Image display method and apparatus |
JPH11122488A (ja) * | 1997-10-17 | 1999-04-30 | Sharp Corp | Image processing apparatus |
WO2001026054A2 (en) | 1999-10-01 | 2001-04-12 | Microsoft Corporation | Locally adapted histogram equalization |
JP2001243463A (ja) * | 2000-02-28 | 2001-09-07 | Minolta Co Ltd | Recording medium, image processing apparatus, and image processing method |
Non-Patent Citations (2)
Title |
---|
See also references of EP1667064A4 |
Vossepoel, A.M.; Stoel, B.C.; Meershoek, A.P., 9th International Conference on Pattern Recognition (14-17 November 1988), vol. 1, pp. 351-353 |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2026561A1 (en) * | 2006-06-02 | 2009-02-18 | Rohm Co., Ltd. | Image processing circuit, semiconductor device, and image processing device |
EP2026561A4 (en) * | 2006-06-02 | 2010-06-02 | Rohm Co Ltd | IMAGE PROCESSING CIRCUIT, SEMICONDUCTOR ARRANGEMENT AND IMAGE PROCESSING ARRANGEMENT |
US8159558B2 (en) | 2006-06-02 | 2012-04-17 | Rohm Co., Ltd. | Image processing circuit, semiconductor device and image processing device |
US8144214B2 (en) | 2007-04-18 | 2012-03-27 | Panasonic Corporation | Imaging apparatus, imaging method, integrated circuit, and storage medium |
US8488029B2 (en) | 2007-04-18 | 2013-07-16 | Panasonic Corporation | Imaging apparatus, imaging method, integrated circuit, and storage medium |
US8711255B2 (en) | 2007-04-18 | 2014-04-29 | Panasonic Corporation | Visual processing apparatus and visual processing method |
US7990465B2 (en) | 2007-09-13 | 2011-08-02 | Panasonic Corporation | Imaging apparatus, imaging method, storage medium, and integrated circuit |
US8786764B2 (en) | 2007-09-13 | 2014-07-22 | Panasonic Intellectual Property Corporation Of America | Imaging apparatus, imaging method, and non-transitory storage medium which perform backlight correction |
Also Published As
Publication number | Publication date |
---|---|
US7945115B2 (en) | 2011-05-17 |
EP1667064A1 (en) | 2006-06-07 |
US20070071318A1 (en) | 2007-03-29 |
KR101027825B1 (ko) | 2011-04-07 |
US7783126B2 (en) | 2010-08-24 |
JP2009211708A (ja) | 2009-09-17 |
KR20110003598A (ko) | 2011-01-12 |
KR20060121876A (ko) | 2006-11-29 |
US20100309216A1 (en) | 2010-12-09 |
EP1667064A4 (en) | 2009-06-10 |
KR101089394B1 (ko) | 2011-12-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2005027041A1 (ja) | Visual processing device, visual processing method, visual processing program, and semiconductor device | |
JP4157592B2 (ja) | Visual processing device, display device, visual processing method, program, and integrated circuit |
JP4688945B2 (ja) | Visual processing device, visual processing method, television, portable information terminal, camera, and processor |
JP5259785B2 (ja) | Visual processing device, visual processing method, television, portable information terminal, camera, and processor |
JP4481333B2 (ja) | Visual processing device, visual processing method, image display device, television, portable information terminal, camera, and processor |
JP4437150B2 (ja) | Visual processing device, display device, visual processing method, program, and integrated circuit |
JP4641784B2 (ja) | Gradation conversion processing device, gradation conversion processing method, image display device, television, portable information terminal, camera, integrated circuit, and image processing program |
JP2008159069A5 (ja) | ||
KR20090009232A (ko) | Visual processing apparatus, visual processing method, program, recording medium, display device, and integrated circuit |
JP4126297B2 (ja) | Visual processing device, visual processing method, visual processing program, integrated circuit, display device, imaging device, and portable information terminal |
JP4414307B2 (ja) | Visual processing device, visual processing method, visual processing program, and semiconductor device |
JP4094652B2 (ja) | Visual processing device, visual processing method, program, recording medium, display device, and integrated circuit |
JP4414464B2 (ja) | Visual processing device, visual processing method, visual processing program, and semiconductor device |
JP4126298B2 (ja) | Visual processing device, visual processing method, visual processing program, and semiconductor device |
JP4437149B2 (ja) | Visual processing device, visual processing method, program, recording medium, display device, and integrated circuit |
CN101453544B (zh) | Gradation conversion processing device |
JP2005322205A5 (ja) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 200480026254.5 Country of ref document: CN |
|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BW BY BZ CA CH CN CO CR CU CZ DK DM DZ EC EE EG ES FI GB GD GE GM HR HU ID IL IN IS KE KG KP KR LC LK LR LS LT LU LV MA MD MG MN MW MX MZ NA NI NO NZ OM PG PL PT RO RU SC SD SE SG SK SL SY TM TN TR TT TZ UA UG US UZ VC YU ZA ZM |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GM KE LS MW MZ NA SD SZ TZ UG ZM ZW AM AZ BY KG MD RU TJ TM AT BE BG CH CY DE DK EE ES FI FR GB GR HU IE IT MC NL PL PT RO SE SI SK TR BF CF CG CI CM GA GN GQ GW ML MR SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 1020067005005 Country of ref document: KR |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2004773245 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 2004773245 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2007071318 Country of ref document: US Ref document number: 10571120 Country of ref document: US |
|
WWP | Wipo information: published in national office |
Ref document number: 1020067005005 Country of ref document: KR |
|
WWP | Wipo information: published in national office |
Ref document number: 10571120 Country of ref document: US |