|Publication number||US7646400 B2|
|Application number||US 11/056,597|
|Publication date||Jan 12, 2010|
|Priority date||Feb 11, 2005|
|Also published as||CN1854887A, CN1854887B, DE112006000358T5, US20060181619, WO2006085827A1|
|Inventors||Yuen Khim Liow, Siang Thia Goh, Guoran Liu|
|Original Assignee||Creative Technology Ltd|
This invention relates to a method and apparatus for forming a panoramic image and refers particularly, though not exclusively, to such a method and apparatus that facilitates the formation of a panoramic image with no portion of the panoramic image missing.
There are three known methods for forming a panoramic image. In the first, a first frame is taken and displayed. The next frame is displayed live on the screen as the user moves the camera. The user moves the camera so that there is a slight overlap of photograph information in the new frame to be taken. This is tedious, and sometimes confusing, as the user must watch the edge of the frame and there is no clear indication that the position of the shot has sufficient overlap for subsequent stitching operations.
In the second, a video clip is taken and analyzed, and a panoramic photograph is created from it. This becomes difficult when the size of the photographs (e.g. 4 to 8 megapixels) and the number of photographs become large. Video takes more storage than multiple still images, and requires more processing.
The third and final known method is to automatically perform the stitching each time a photograph is taken. If there is insufficient or no overlap, the stitching process fails and the user is prompted to take another photograph. Again, the processing takes power and time.
In accordance with a first aspect there is provided a method for forming a panoramic image using a camera. The method comprises placing the camera into a panoramic image mode and taking a first photograph using the camera. On a display of the camera a blending area is formed. The blending area includes a part of a first image of the first photograph displayed on the display, and is at a side of the display. The camera is then moved and prepared to take a second photograph. In the blending area only, a pixel matching process is performed to determine an alignment of the part of the first image in the blending area with a part of a second image of the second photograph in the blending area.
According to a second preferred aspect there is provided a computer useable medium comprising a computer program code that is configured to cause a processor in a camera to execute one or more functions to enable the performance of the above method.
According to a third preferred aspect there is provided a camera for taking a panoramic image. The camera comprises a body, a lens, a display, an image capturing device, a processor, a controller, and a memory. The display is for displaying images of photographs. The display comprises a blending area for determining an alignment of a portion of a first image in the blending area with a part of an image of a yet-to-be-taken second photograph.
For all aspects the blending area may be at a side of the display and may comprise a predetermined percentage of the first image, the predetermined percentage being in the range of 5% to 25%. The red, green and blue values of all pixels in the blending area may be reduced to a fixed amount, the fixed amount being in the range 40% to 60%; it is preferably 45%. The red, green and blue values of the pixels of the portion of the first image in the blending area and the red, green and blue values of the pixels of the part of the second image in the blending area may be summed for display on the display prior to being reduced. After summing and reduction, pixels in the blending area may be further reduced by a set amount. The set amount may be 10%. In this way the red, green and blue values in the blending area are at 90%: 45% from the first image and 45% from the second image.
The blending area may extend for the full length of the side of the display, the side being selected from: top, bottom, left side, and right side. The side may be determined by a direction of the movement of the camera.
A preset percentage of the first image may be removed from the display to leave the predetermined percentage in the display, the preset percentage being in the range 75% to 95%. The preset percentage may be 85% and the predetermined percentage may be 15%.
All pixels in the blending area may be given a weight value representing visual importance. Pixels that form the boundary edges of the first and second images may be given higher weight values. A boundary edge may be a collection or group of pixels that defines a clear separation of visual contrast within an image, such as the outline of an object against a differently contrasted background.
A match status indicator may be displayed on the display, the match status indicator being variable in consequence of the result of the pixel matching process.
A determination of the side on which the blending area is to be displayed may be made after the pixel matching process. A subsequent preview image of a yet-to-be-taken subsequent photograph may have a side in common with a previous photograph. The previous photograph may be the immediately preceding photograph.
According to a penultimate aspect there is provided a method of forming a panoramic image using a camera. The method comprises taking a first photograph and displaying at least a portion of an image of the first photograph on a display; moving the camera for a second photograph and displaying at least a part of a preview image of the second photograph on the display; conducting a pixel match for the portion and the part; and using a match status indicator to indicate a result of the pixel match.
The match status indicator may be variable in consequence of the pixel matching process, and may be of a first colour for a poor pixel match, a second colour for a good pixel match, and a third colour for a perfect pixel match; the first, second and third colours being different.
According to a final preferred aspect there is provided a computer useable medium comprising a computer program code that is configured to cause a processor in a camera to execute one or more functions to enable the performance of the above method.
In order that the invention may be fully understood and readily put into practical effect, there shall now be described by way of non-limitative example only a preferred embodiment of the present invention, the description being with reference to the accompanying illustrative drawings.
Referring to the drawings, there is shown a camera 10.
Although a simple form of digital still camera is shown, the present invention is also applicable to all forms of digital still cameras including single lens reflex cameras, and to digital motion picture cameras in a still camera mode. The term “camera” is to be interpreted accordingly.
The camera 10 has an imaging system generally indicated as 12 and comprising a lens 14, view finder 16, shutter 18, built-in flash 20, shutter release 22, and other controls 24. Within the camera 10 are an image capturing device 36 such as, for example, a charge-coupled device; a processor 26 for processing the image data received in a known manner; memory 28 for storing each image as image data; and a controller 30 for controlling data sent for display on display 32. Processor 26 performs conventional digital photographic image processing such as, for example, compressing and formatting a captured photographic image. The imaging system 12, including the image capturing device 36, is able to take and capture photographic images of everyday scenes. The imaging system 12 may have a fixed or variable focus, zoom, and other functions found in digital still cameras.
When the camera 10 is set to a panoramic image or stitch assist mode to create a panoramic image, the first shot 30 is taken and its image is displayed on display 32.
The blending area 32 will normally be at a side of display 32, but it may be adjacent a side, or even remote from a side. Depending on the direction of the panoramic series capture sequence, the blending area 32 could reside at any of the four sides of the LCD display—top, bottom, left side or right side.
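By way of illustration only, the selection of the side might be expressed as in the following minimal C sketch; the type and function names are assumptions for illustration, not taken from the patent:

/* Illustrative sketch: choosing the blending-area side from the pan
 * direction. The blending area sits on the side where the new frame
 * overlaps the previous one: panning left-to-right leaves the overlap
 * at the left of the display, and so on. */
typedef enum { PAN_LEFT_TO_RIGHT, PAN_RIGHT_TO_LEFT,
               PAN_TOP_TO_BOTTOM, PAN_BOTTOM_TO_TOP } PanDirection;
typedef enum { SIDE_LEFT, SIDE_RIGHT, SIDE_TOP, SIDE_BOTTOM } DisplaySide;

DisplaySide blending_side(PanDirection d) {
    switch (d) {
    case PAN_LEFT_TO_RIGHT: return SIDE_LEFT;
    case PAN_RIGHT_TO_LEFT: return SIDE_RIGHT;
    case PAN_TOP_TO_BOTTOM: return SIDE_TOP;
    default:                return SIDE_BOTTOM; /* PAN_BOTTOM_TO_TOP */
    }
}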
As shown in the accompanying drawings, within the blending area 32 the part of the previously captured image and the corresponding part of the live preview image are blended for display.
The following is the display algorithm for the blending area:

if (pixel P belongs to the blending area) then
    RGB[LCD](P) = (RGB[image(previous)](P) × 0.5 + RGB[preview](P) × 0.5) × 0.9
    // the multiplication by 0.9 makes the blending area 10 percent darker
else
    RGB[LCD](P) = RGB[preview](P)
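The algorithm translates directly into code. The following is a minimal C sketch, assuming 8-bit RGB pixels in flat buffers of equal size and a caller-supplied membership test for the blending area; these representation choices are assumptions for illustration, not taken from the patent:

#include <stdbool.h>
#include <stdint.h>

typedef struct { uint8_t r, g, b; } Pixel;

/* Compose the LCD image: inside the blending area the previous image and
 * the live preview each contribute 50%, then the sum is scaled by 0.9 so
 * the blending area appears 10 percent darker; elsewhere the preview is
 * shown unchanged. */
void blend_for_display(const Pixel *prev, const Pixel *preview, Pixel *lcd,
                       int n_pixels, bool (*in_blending_area)(int idx)) {
    for (int i = 0; i < n_pixels; i++) {
        if (in_blending_area(i)) {
            lcd[i].r = (uint8_t)((prev[i].r * 0.5 + preview[i].r * 0.5) * 0.9);
            lcd[i].g = (uint8_t)((prev[i].g * 0.5 + preview[i].g * 0.5) * 0.9);
            lcd[i].b = (uint8_t)((prev[i].b * 0.5 + preview[i].b * 0.5) * 0.9);
        } else {
            lcd[i] = preview[i];
        }
    }
}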
For example, a 2″ LCD may have 206,000 pixels. In such a case the blending area will consist of 30,900 pixels (15% of 206,000). The blending and matching processes apply only to the pixels within the blending area. Therefore, in the stitch assist mode, both the blending for display and the calculation of the match status involve only 30,900 pixels.
A status indicator 34 is provided on the image 30 and the color of the status indicator 34 provides an indication of how well the previously captured image 30 and the current preview image 36 match within the blending area 32. For example, a red color may represent poor matching, a yellow color may represent partial matching, and a green color may represent perfect matching.
Depending on the accuracy desired, the number of pixels within the blending area used in determining the matching status could be varied. The highest accuracy is obtained when all the pixels within the blending area are considered during the match process.
Each pixel used in the matching process is given a weight value to represent its visual importance. It is preferable not to use equal weighting for all the pixels because certain pixels that make up boundary edges are visually more important. The following is how each pixel's weight value (W) and matching value (M) can be computed:
The subset of pixels in the blending area that is used is denoted MP. M is the matching value used to evaluate the result of the matching; a small value represents a good match. This value is used to determine the colour of the match status indicator.
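For illustration, the mapping from M to the indicator colour might look like the following C sketch; the patent gives no numeric thresholds, so the values below are purely assumed:

typedef enum { INDICATOR_GREEN, INDICATOR_YELLOW, INDICATOR_RED } IndicatorColour;

/* Smaller M means a better match. The thresholds are assumed values for
 * illustration only; the patent does not specify them. */
IndicatorColour match_status(double m) {
    const double PERFECT_MAX = 4.0;  /* assumed */
    const double GOOD_MAX    = 16.0; /* assumed */
    if (m <= PERFECT_MAX) return INDICATOR_GREEN;  /* perfect match */
    if (m <= GOOD_MAX)    return INDICATOR_YELLOW; /* good/partial match */
    return INDICATOR_RED;                          /* poor match */
}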
Denote by P(i, j) the pixel at row i and column j of the blending area, and by W(i, j) the weight of pixel P(i, j).
To compute a pixel's weight value:

s = 1; // the pixel step for calculating the weight
for ((i,j) ∈ MP)
{
    W(i,j) = 0;
    for (int u = -1; u < 2; u++)
        for (int v = -1; v < 2; v++)
            W(i,j) = W(i,j) + abs(R[image(previous)]p(i+s*u, j+s*v) - R[image(previous)]p(i,j))
                            + abs(G[image(previous)]p(i+s*u, j+s*v) - G[image(previous)]p(i,j))
                            + abs(B[image(previous)]p(i+s*u, j+s*v) - B[image(previous)]p(i,j));
}
W_max = max(W(i,j), (i,j) ∈ MP);
for ((i,j) ∈ MP)
    W(i,j) = W(i,j) / W_max; // pixel P(i,j)'s normalised weight value W
To compute the matching value (method 1):

M = 0; // M is the matching value
for ((i,j) ∈ MP)
    M = M + W(i,j) * (abs(R[image(preview)]p(i,j) - R[image(previous)]p(i,j))
                    + abs(G[image(preview)]p(i,j) - G[image(previous)]p(i,j))
                    + abs(B[image(preview)]p(i,j) - B[image(previous)]p(i,j)));
M = M / nPixel;
where:
R[image(previous)]p(i,j) denotes the red value of the previous image at pixel P(i,j);
G[image(previous)]p(i,j) denotes the green value of the previous image at pixel P(i,j);
B[image(previous)]p(i,j) denotes the blue value of the previous image at pixel P(i,j);
R[image(preview)]p(i,j) denotes the red value of the preview image at pixel P(i,j);
G[image(preview)]p(i,j) denotes the green value of the preview image at pixel P(i,j);
B[image(preview)]p(i,j) denotes the blue value of the preview image at pixel P(i,j); and
nPixel denotes the number of pixels in MP.
To compute the matching value (method 2), the normalised weight values of both images are compared:

M = 0; // M is the matching value
for ((i,j) ∈ MP)
    M = M + abs(W[image(preview)]p(i,j) - W[image(previous)]p(i,j));
M = M / nPixel;

where W[image(previous)]p(i,j) denotes the weight of the previous image at pixel P(i,j), W[image(preview)]p(i,j) denotes the weight of the preview image at pixel P(i,j), and nPixel again denotes the number of pixels in MP.
Method 1 is faster than method 2 because it does not need to calculate the weights of the preview image's pixels, but method 2 is more accurate because it uses the boundary edge information for matching. Therefore, method 2 may achieve a good result even under changing lighting conditions, whereas method 1 may not.
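Both methods can be sketched in C as follows. This is a minimal, self-contained illustration assuming 8-bit RGB images in row-major buffers and a precomputed list MP of blending-area coordinates that excludes image borders (so the 3×3 neighbourhood is always in range); the names and data layout are assumptions, not taken from the patent:

#include <math.h>
#include <stdint.h>

typedef struct { uint8_t r, g, b; } Pixel;
typedef struct { const Pixel *data; int width, height; } Image;
typedef struct { int i, j; } Coord; /* row i, column j */

static int absdiff(int a, int b) { return a > b ? a - b : b - a; }

static const Pixel *at(const Image *im, int i, int j) {
    return &im->data[i * im->width + j];
}

/* Unnormalised weight of pixel (i,j): the sum of absolute RGB differences
 * against its 3x3 neighbourhood with step s = 1. */
static double raw_weight(const Image *im, int i, int j) {
    const int s = 1;
    const Pixel *p = at(im, i, j);
    double w = 0.0;
    for (int u = -1; u < 2; u++)
        for (int v = -1; v < 2; v++) {
            const Pixel *q = at(im, i + s * u, j + s * v);
            w += absdiff(q->r, p->r) + absdiff(q->g, p->g) + absdiff(q->b, p->b);
        }
    return w;
}

/* Weights over the pixel set MP, normalised by the maximum weight. */
void compute_weights(const Image *im, const Coord *mp, int n, double *w) {
    double w_max = 0.0;
    for (int k = 0; k < n; k++) {
        w[k] = raw_weight(im, mp[k].i, mp[k].j);
        if (w[k] > w_max) w_max = w[k];
    }
    if (w_max > 0.0)
        for (int k = 0; k < n; k++)
            w[k] /= w_max;
}

/* Method 1: weighted sum of RGB differences, using the weights of the
 * previous image only. */
double match_method1(const Image *prev, const Image *preview,
                     const Coord *mp, int n, const double *w_prev) {
    double m = 0.0;
    for (int k = 0; k < n; k++) {
        const Pixel *a = at(preview, mp[k].i, mp[k].j);
        const Pixel *b = at(prev, mp[k].i, mp[k].j);
        m += w_prev[k] * (absdiff(a->r, b->r) + absdiff(a->g, b->g)
                          + absdiff(a->b, b->b));
    }
    return m / n;
}

/* Method 2: compare the normalised weight (edge) maps of the two images,
 * which also requires compute_weights() on the preview image. */
double match_method2(const double *w_prev, const double *w_preview, int n) {
    double m = 0.0;
    for (int k = 0; k < n; k++)
        m += fabs(w_preview[k] - w_prev[k]);
    return m / n;
}

Because method 2 compares edge maps rather than raw intensities, a global brightness change between shots largely cancels out, which is consistent with the lighting-robustness observation above.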
The use of automated pixel-matching together with an on-screen blending area as a visual guidance minimizes human errors while allowing manual intervention when the automated pixel-matching does not provide a satisfactory result. For example, it may not be possible for the pixel-matching algorithm to provide a good result if there are moving objects across successive shots. A moving car may be present in the previous shot but may not be present in the current shot. In this situation, it would be impossible for the automated pixel-matching to provide a satisfactory result. However, the user will be able to use the visual guidance to provide a rough match between the images and ignore the result from the automated pixel-matching.
On the other hand, it is usually very difficult for the user to properly match the images when the objects in the scene are small and/or not distinct (e.g. forest, clouds). In such a situation, the automated pixel-matching will be able to provide a more accurate guide as to whether a photograph can be taken.
To now refer to the flowchart of the process: the camera 10 is placed into the panoramic mode, the first photograph is taken, and the blending area 32 is formed (801 to 804).
The number of pixels in the blending area 32 is also reduced by 50% (805).
At step (802) what is displayed is as shown in the accompanying drawings.
The pixel matching process described above is then performed (806), only within the blending area 32. For the purposes of the status indicator 34, queries (807), (809) and (811) are raised to determine the match status. If the answer is NO at (807) and (809) (808 and 810 respectively) and YES (812) at (811), the status indicator 34 is displayed as red (813) to indicate a poor or bad match. The camera is then moved (814) until there is a visual matching of features in the blending area 32. The process then reverts to (806).
If at (809) the answer is YES, the status indicator 34 is displayed as yellow to indicate a good but not perfect match; if at (807) the answer is YES, the status indicator 34 is displayed as green to indicate a perfect match. In either case the match is sufficient and the next photograph may be taken (816).
If there are no more photographs to be taken to form the panoramic image (820, 822), the process ends (823). If there are more photographs (821), the process reverts to (803).
At (816), it may be possible to take the next photograph as there is a sufficiently good match for stitching to take place.
If desired, there may be included a shutter release lockout so that, when camera 10 is in the panoramic mode and the second photograph is to be taken, the second photograph cannot be taken while the status indicator 34 is red. This would also be relevant for subsequent photographs.
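A sketch of such a lockout, reusing the IndicatorColour type from the earlier sketch (again an assumption, not the patent's firmware interface):

#include <stdbool.h>

/* In panoramic mode the shutter release is ignored while the match
 * status is red; in normal mode it is always honoured. */
bool shutter_release_allowed(bool panoramic_mode, IndicatorColour status) {
    if (!panoramic_mode)
        return true;
    return status != INDICATOR_RED;
}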
A different situation arises when the panoramic image is not formed from a single-line sequence of photographs in which the camera moves in one direction only such as, for example, left to right, right to left, top to bottom, or bottom to top. In those single-line instances the blending area 32 is always in the one location within display 32: at the left side, right side, top side, or bottom side respectively. If, however, the camera is moved in different directions while taking the sequence of photographs, stitching will need to take place on different sides of the images. For example, a panoramic image of a large mountain may require several photographs in a grid, as shown in Table 1:

Table 1
| 1 | 2 | 3 |
| 6 | 5 | 4 |
| 7 | 8 | 9 |
| 12 | 11 | 10 |
The order or sequence in which the photographs are taken may be different, and may be random. Normal stitching systems cannot cope with such an arrangement.
To enable the stitching to take place, a modified process is followed in which, after the first photograph is taken and the camera is moved for the second photograph, a pixel match is attempted between the preview image and the first image.
In consequence of the pixel match, the blending area 32 is moved to the correct side of the display (1507) so that blending can take place as described above in relation to the first embodiment (1508). The second photograph is then taken. A query is raised (150) to determine whether more photographs are required. If yes (1512), the process continues to (1514). If no (1511), the process ends.
The camera is then moved (1514) and a pixel match attempted (1515). In doing so, a common edge between a previous image and the image to be taken must be found (1516). If there is no common or overlapping edge with a previous photograph (1517), an error message is displayed (1518) and the process reverts to (1514). The previous photograph may be any previous photograph taken as part of the image sequence to form the panoramic image; or it may be limited to the immediately previous image, particularly if memory and/or processing power is limited.
Using the above Table 1 as an example, the order of photographs may be in any sequence. Each subsequent photograph must have a common side with a photograph taken prior to it for stitching to take place. For example, if photograph 6 were the first photograph, only photographs 1, 5 and 7 could be used for the second photograph. If photograph 5 is the second photograph, any one of photographs 1, 2, 4, 7 or 8 could be the third photograph, as each has an edge in common with photograph 6 or photograph 5. However, photographs 3 and 9 could not be the third photograph as they have no common edge with photograph 6 or photograph 5. Therefore, the photographs can be taken in any order provided there is a common edge with a previous photograph.
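The common-edge rule is easy to state in code. The following hypothetical helper, assuming each photograph is tagged with its (row, column) position in the grid, is an illustration rather than anything from the patent:

#include <stdbool.h>
#include <stdlib.h>

typedef struct { int row, col; } GridPos;

/* Two photographs share an edge exactly when their grid positions differ
 * by one step along a single axis. */
bool shares_edge(GridPos a, GridPos b) {
    int dr = abs(a.row - b.row);
    int dc = abs(a.col - b.col);
    return dr + dc == 1;
}

With Table 1, for example, photograph 6 at row 2, column 1 shares an edge with photograph 1 at row 1, column 1, whereas photograph 3 at row 1, column 3 shares no edge with photograph 5 or photograph 6.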
If at (1516) the answer is yes (1519), the blending area 32 is located on the correct side as described above (1520), the blending process is followed as described in relation to the first embodiment (1521), and the photograph is taken (1522). A query is raised (1523) to determine whether more photographs are to be taken. If yes (1525), the process reverts to (1514); if no (1524), the process ends (1526).
The first photograph may be taken from a library or card of precedent photographs. Such precedent photographs may have a very high resolution and pixel count, so a low-megapixel camera may be used to form a high-megapixel image.
This embodiment is illustrated in the accompanying drawings.
First, a precedent photograph is selected and displayed as the first image; subsequent photographs are then taken in the manner described above.
After a certain number of photographs has been taken, photograph-stitching software is used to stitch them together on a computer to create a large, high-quality photograph.
For all embodiments, the status indicator 34 may be accompanied by an audible indication such as, for example, a “beep” at a low repetition frequency corresponding to the red colour and a poor match; a “beep” at a middle repetition frequency corresponding to the yellow colour and a good match; and a “beep” at a high repetition frequency corresponding to the green colour and a perfect match.
Whilst there has been described in the foregoing description preferred embodiments of the present invention, it will be understood by those skilled in the technology that many variations or modifications in details of design or construction or operation may be made without departing from the present invention.
|U.S. Classification||348/36, 348/333.11, 348/239|
|International Classification||H04N5/262, H04N5/222, H04N7/00|
|Cooperative Classification||H04N5/23293, H04N5/23238, G03B17/20, G03B37/04, G06T3/4038, G03B17/02|
|European Classification||H04N5/232M, G03B17/02, G03B17/20, H04N5/232V, G03B37/04, G06T3/40M|