
Publication number: US 20030062675 A1
Publication type: Application
Application number: US 10/254,789
Publication date: Apr 3, 2003
Filing date: Sep 26, 2002
Priority date: Sep 28, 2001
Inventors: Hideo Noro, Hiroaki Sato, Taichi Matsui
Original Assignee: Canon Kabushiki Kaisha
Image experiencing system and information processing method
US 20030062675 A1
Abstract
The invention intends to increase the feeling of reality in a board game, in addition to the interestingness of the game itself, and to facilitate understanding of the proceeding situation of the game.
The invention provides an image experiencing system for a game which proceeds by placing items on a game board, comprising a player position/attitude determining unit for determining the position/attitude information of the view of the player, a generation unit for generating computer graphics corresponding to the items on the game board, according to the position/attitude information of the view of the player, and a head-mounted display for displaying the generated computer graphics in superposition with an image of the real world.
Claims(29)
What is claimed is:
1. An image experiencing system for a game which proceeds by placing an item on a game board, the system comprising:
player position and attitude determining means for determining the position and attitude information of a view point of a player;
generation means for generating computer graphics corresponding to the item on the game board, based on the position and attitude information of the view point of said player; and
a head mounted display for displaying said generated computer graphics in superposition with an image of the real world.
2. A system according to claim 1, wherein the position and attitude information of said player is information indicating a relative position of the view point of said player relative to said game board.
3. A system according to claim 1, further comprising:
measurement means for measuring attitude information of said player;
wherein said player position and attitude determining means determines the position and attitude information of the view point of said player from the attitude information and pre-calibrated position information of said player.
4. A system according to claim 1, further comprising:
a camera fixed to said head mounted display;
wherein said player position and attitude determining means analyzes the image of said camera and executes image recognition thereon, thereby obtaining the position and attitude information of the view point of said player.
5. A system according to claim 1, further comprising:
a position and attitude sensor for measuring the position and attitude of the player; and
a camera fixed to said head mounted display;
wherein said player position and attitude determining means includes a first position and attitude determining unit for determining the position and attitude information of the view point of the player from the output of said position and attitude sensor, and a second position and attitude determining unit for determining the position and attitude information of the view point of the player from an image taken by said camera, and the position and attitude information of the view point of said player is determined according to the reliability of the respective output values of said first and second position and attitude determining units.
6. A system according to claim 1, further comprising:
an attitude sensor for measuring the attitude of the player; and
a camera fixed to said head mounted display;
wherein said player position and attitude determining means includes a first position and attitude determining unit for determining the position and attitude information of the view point of the player from the output of said attitude sensor, and a second position and attitude determining unit for determining the position and attitude information of the view point of the player from an image taken by said camera, and the position and attitude information of the view point of said player is determined according to the reliability of the respective output values of said first and second position and attitude determining units.
7. A system according to claim 5 or 6, wherein a correction value is determined from the output value of said first or second position and attitude determining unit, and said correction value is used for correcting the output value of said first or second position and attitude determining unit.
8. A system according to claim 1, further comprising item operation recognition means for recognizing a change in the item on said game board.
9. A system according to claim 8, wherein said item operation recognition means recognizes a special mark identifier attached to the item.
10. A system according to claim 9, wherein a visible or invisible bar code is used as the special mark identifier.
11. A system according to claim 9, wherein an RFID transponder is used as the special mark identifier.
12. A system according to claim 8, wherein said item operation recognition means recognizes a shape of the item and/or a pattern on the item by image recognition.
13. A system according to claim 12, wherein an image taken by a camera fixed to said head mounted display is used.
14. A system according to claim 12, wherein said item operation recognition means recognizes a placement of an item in a specified position of a specified camera, thereby recognizing an item to be changed.
15. A system according to claim 14, wherein, in placing an item at said specified position of said specified camera, a guide facilitating the placement of the item is prepared by computer graphics and is displayed in the head mounted display of said player.
16. A system according to claim 1, wherein plural players play on a single game board, and the result of complex operations by the plural players is displayed in each head mounted display in the view point of each player.
17. An information processing method for a game which proceeds by placing an item on a game board, the method comprising steps of:
entering position and attitude information of a view point of a player;
generating computer graphics corresponding to the item on the game board, based on the position and attitude information of the view point of said player; and
displaying said generated computer graphics in superposition with an image of the real world in a head mounted display worn by the player.
18. A method according to claim 17, wherein the position and attitude information of said player is information indicating a relative position of the view point of said player relative to said game board.
19. A program for controlling an information processing apparatus thereby executing the information processing according to claim 17.
20. A recording medium storing the program according to claim 19.
21. An image experiencing system for a game which proceeds by placing an item on a game board bearing plural marks, the system comprising:
a position and attitude grasp unit for recognizing the kind of plural items placed on said game board and the position of said items;
game management means for managing the proceeding of the game, based on the kinds of said plural items and the positions thereof; and
generation means for generating computer graphics of a game scene corresponding to the kinds of the items of said plural items and the positions thereof.
22. A system according to claim 21, wherein:
said game is a battle type game;
said system further comprises determination means for determining the characteristics of a character of an own side based on the combination of the own side; and
said generation means generates said computer graphics based on said determined characteristics of the character of the own side.
23. A system according to claim 21, wherein said game management means manages the proceeding of the game by a combination of the characteristics of the characters of the own side and the opponent side.
24. A system according to claim 21, wherein said game board has a three-dimensional structure, and said computer graphics are displayed in the head mounted display worn by the player, based on a model of the three-dimensional structure of said game board.
25. An information processing method for a game which proceeds by placing an item on a game board bearing plural marks, the method comprising steps of:
entering an image from an image pickup unit in a head mounted display worn by a player;
detecting a mark in said image thereby determining the position and attitude of a view point of said player on said game board;
identifying the item on said game board and generating computer graphics indicating a game scene based on the result of said identification; and
displaying said computer graphics, based on the position and attitude of said view point in said head mounted display.
26. A method according to claim 25, further comprising a step of:
entering position and attitude information of the player from a three-dimensional position and attitude measuring unit for measuring the position and attitude of said player;
wherein the position and attitude of said view point is obtained from said position and attitude information and from said detected mark.
27. A method according to claim 25, wherein the item on said game board is identified from an image from an image pickup unit for taking the image on said game board.
28. A method according to claim 25, further comprising a step of recognizing an instruction from the player for advancing the game.
29. A program for realizing an information processing method for a game which proceeds by placing an item on a game board bearing plural marks, the method comprising steps of:
entering an image from an image pickup unit in a head mounted display worn by a player;
detecting a mark in said image thereby determining the position and attitude of a view point of said player on said game board;
identifying the item on said game board and generating computer graphics indicating a game scene based on the result of said identification; and
displaying said computer graphics, based on the position and attitude of said view point in said head mounted display.
Description
BACKGROUND OF THE INVENTION

[0001] 1. Field of the Invention

[0002] The present invention relates to a mixed reality technology for providing a game which proceeds by positioning items on a game board.

[0003] 2. Related Background Art

[0004] Among various games, there is already known a board game, which utilizes a board divided into areas and proceeds by placing, removing or displacing pieces on such areas. Among the games utilizing objects of certain three-dimensional shapes as such pieces, there are well known, for example, chess, checkers, backgammon, igo, Japanese chess, Japanese backgammon, etc. Also there are known games utilizing cards as such pieces, called card games. The games utilizing playing cards are known in numerous kinds, such as bridge, stud poker, draw poker, blackjack, etc.

[0005] Also among the card games, there is known so-called card battle, which utilizes cards specific to the game and in which each card is given a specific function. Also many games utilizing the playing cards do not use a particular board because the divided areas are quite simple, but, in such games, it can be considered that a board including invisible divided areas is present and its existence is recognized and shared by the players.

[0006] Such board game or card game itself often assumes a certain event, and the item (for example piece) often assumes a particular animal or a particular person. The board or piece has a shape determined in advance, and the pattern thereof does not change according to the proceeding of the game.

[0007] On the other hand, there is also known a game utilizing the MR (mixed reality) technology. In such game, the environment of the game is constructed with a real setting with scene settings and stage properties, and the players execute the game by actually entering such environment. In most cases, each player wears a see-through HMD (head mounted display), which displays a CG (computer graphics) image matching the proceeding of the game, in superposition with an image that can be seen when the HMD is not worn.

[0008] In the conventional board games mentioned above, the shape or pattern of the pieces does not change according to the situation of the game. For example, in a battle scene, no actual battle takes place in front of the player, and, in case the player draws a card indicating “the angel gives an instruction”, no angel actually speaks up.

[0009] Therefore, even though the game itself assumes a certain scene, the game lacks the feeling of reality because of the lack of corresponding display. For a similar reason, it is also difficult to grasp the situation of the game at a glance.

[0010] On the other hand, the conventional MR game mentioned in the foregoing provides sufficient feeling of reality but involves a very tedious setting of the game environment. There is often required a large-scale work for preparing the scene setting, and the positions of the objects in the setting have to be measured for each setting. It is also difficult to alter the content of the game.

SUMMARY OF THE INVENTION

[0011] In consideration of the foregoing, an object of the present invention is to improve the feeling of reality of a board game in addition to the interestingness thereof, and to facilitate understanding of the situation of proceeding of the game.

[0012] Another object of the present invention is, in comparison with the conventional games utilizing the MR technology, to facilitate installation of the setting and to enable relatively flexible alteration of the content of the game.

[0013] The above-mentioned objects can be attained, according to the present invention, by an image experiencing system for a game which proceeds by placing items on a game board, the system comprising:

[0014] player position and attitude determining means for obtaining position/attitude information of the view of a player;

[0015] generation means for generating computer graphics corresponding to the items on the game board, based on the position/attitude information of the view of the aforementioned player; and

[0016] a head mounted display capable of displaying thus generated computer graphics in superposition with the image of a real world.

BRIEF DESCRIPTION OF THE DRAWINGS

[0017]FIG. 1 is a view showing an example of the configuration of an image experiencing system in a first embodiment;

[0018]FIG. 2 is a view showing a board and the appearance thereof in calculating the height of view point of a player;

[0019]FIG. 3 is a view showing the internal configuration of a see-through HMD;

[0020]FIG. 4 is a view showing an example of the configuration of an image experiencing system in a first embodiment;

[0021]FIG. 5 is a view showing an example of the configuration of an image experiencing system in a second embodiment;

[0022]FIG. 6 is a view showing an example of the configuration of an image experiencing system in a third embodiment;

[0023]FIG. 7 is a UML activity chart showing the process of a maximum-likelihood position and attitude determining unit constituting a component of the image experiencing system of the third embodiment;

[0024]FIG. 8 is a chart indicating the difference between a value, measured with a position and attitude sensor constituting a component of the image experiencing system of a fourth embodiment, and a real value as a function of elapsed time;

[0025]FIG. 9 is a view showing an example of the configuration of an image experiencing system in a fourth embodiment;

[0026]FIG. 10 is a UML activity chart showing the process of a position-and-attitude sensor information processing unit constituting a component of the image experiencing system of the fourth embodiment;

[0027]FIG. 11 is a view showing an example of the configuration of an image experiencing system in a fifth embodiment;

[0028]FIG. 12 is a UML activity chart showing the process of a maximum-likelihood position and attitude determining unit constituting a component of the image experiencing system of the fifth embodiment;

[0029]FIG. 13 is a view showing an example of the configuration of an image experiencing system in a sixth embodiment;

[0030]FIG. 14 is a UML activity chart showing the process of an attitude sensor information processing unit constituting a component of the image experiencing system of the sixth embodiment;

[0031]FIG. 15 is a view showing an example of the configuration of an image experiencing system in a seventh embodiment;

[0032]FIG. 16 is a UML activity chart showing the process of a piece operation recognition unit constituting a component of the image experiencing system of the seventh embodiment;

[0033]FIG. 17 is a view showing an example of the configuration of an image experiencing system in a tenth embodiment;

[0034]FIG. 18 is a view showing an example of card patterns to be used for explaining the function of the image experiencing system of the tenth embodiment;

[0035]FIG. 19 is a view showing recognition areas on a card, to be used for explaining the function of the image experiencing system of the tenth embodiment;

[0036]FIG. 20 is a UML activity chart showing the process of a piece image recognition unit constituting a component of the image experiencing system of the tenth embodiment;

[0037]FIG. 21 is a view showing an example of the configuration of an image experiencing system in an eleventh embodiment;

[0038]FIG. 22 is a UML activity chart showing the process of an on-board piece image recognition unit constituting a component of the image experiencing system of the eleventh embodiment;

[0039]FIG. 23 is a view showing an example of the configuration of an image experiencing system in a twelfth embodiment;

[0040]FIGS. 24 and 25 are UML activity charts showing the process of a piece operation recognition unit constituting a component of the image experiencing system of the twelfth embodiment;

[0041]FIG. 26 is a view showing an example of the configuration of an image experiencing system in a thirteenth embodiment;

[0042]FIG. 27 is a view showing the difference in the output from the piece image recognition unit, for explaining a piece image recognition-guide display instruction unit constituting a component of the image experiencing system of the thirteenth embodiment;

[0043]FIG. 28 is a UML activity chart showing the process of the piece image recognition-guide display instruction unit constituting a component of the image experiencing system of the thirteenth embodiment;

[0044]FIG. 29 is a view showing an example of the guide to be displayed in the display unit of the HMD by the image experiencing system of the thirteenth embodiment;

[0045]FIG. 30, composed of FIGS. 30A and 30B, and FIG. 31, composed of FIGS. 31A, 31B and 31C, are views showing examples of the configuration of an image experiencing system in a fourteenth embodiment;

[0046]FIG. 32 is a view showing a fifteenth embodiment in a conceptual image;

[0047]FIG. 33 is a schematic view showing the configuration of the fifteenth embodiment;

[0048]FIG. 34 is a view showing markers;

[0049]FIG. 35 is a view showing guides;

[0050]FIG. 36 is a view showing card identification;

[0051]FIG. 37 is a view showing a phase proceeding by voice;

[0052]FIG. 38 is a flow chart of a card reading unit;

[0053]FIG. 39 is a flow chart of a position and attitude grasp unit; and

[0054]FIG. 40 is a flow chart of a game management unit.

DESCRIPTION OF THE PREFERRED EMBODIMENTS

[0055] Now the present invention will be clarified in detail by the following description, which is to be taken in conjunction with the accompanying drawings, in which equivalent configurations are represented by same numbers.

[0056] In the image experiencing system to be explained in the following, each player wears a see-through HMD (head mounted display) for displaying CG (computer graphics) in superposition with the actual situation of the board game or card game, executed in a limited field of a game board. The CG changes according to the proceeding of the game. For example, in a chess game, a knight piece is represented by CG of a knight on horseback, and at the displacement of the piece, the CG shows a running horse. Also, when a piece captures an opponent piece, the CG shows it fighting and defeating the CG corresponding to the opponent piece.

[0057] Such image experiencing system provides, in addition to the interestingness of the actual board game itself, an improved feeling of reality and facilitates grasping the situation of proceeding of the game.

[0058] Also in comparison with the conventional game utilizing the MR technology, the image experiencing system of the present invention is easier to set up and allows the content of the game to be altered relatively easily.

[0059] (First Embodiment)

[0060]FIG. 1 is a view showing an example of the configuration of the image experiencing system of a first embodiment.

[0061] There is provided a game board 101 constituting the field of game, and players execute the game by placing, removing or moving pieces on the board 101. The player wears a see-through HMD 103 in playing the game. A position and attitude sensor 104 is fixed to the HMD 103 and detects the position and attitude of the view of the player.

[0062] In the following there will be given definitions for the terms “position/attitude (posture)”, “position” and “attitude” to be used in the present specification. “Position/attitude” includes both the “position” and “attitude”. Thus, “position/attitude information” means both the “position information” and “attitude information”.

[0063] “Position” means information indicating a point in a specified spatial coordinate system, and is represented, in case of an XYZ orthogonal coordinate system, by a set of three values (x, y, z). Also in case of representing an object on the earth, there can be employed a set of three values of a latitude, a longitude and a height (or a depth). “Attitude” means a direction from the point represented by the “position”, and can be represented by the position of an arbitrary point on such direction, or, in case of the XYZ orthogonal coordinate system, by the angles of the viewing line with the axes of the coordinate system or by specifying a direction of the viewing line (for example -Z direction) and indicating the amounts of rotation from such specified direction about the axes of the coordinate system.

[0064] In the absence of other limiting conditions, the “position” has 3 degrees of freedom, and the “attitude” also has 3 degrees of freedom.
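As an illustration of these definitions, a 6-degree-of-freedom pose can be sketched as follows. This is a hypothetical Python representation, not part of the patent; the attitude is taken as rotations about the coordinate axes applied to the reference viewing direction −Z, as described above.

```python
import math
from dataclasses import dataclass

# Hypothetical sketch of a 6-degree-of-freedom pose: "position" as a
# point (x, y, z) and "attitude" as rotations (in radians) about the
# coordinate axes, applied to the reference viewing direction -Z.
@dataclass
class Pose:
    x: float
    y: float
    z: float
    pitch: float  # rotation about X
    yaw: float    # rotation about Y
    roll: float   # rotation about the viewing axis (does not move it)

    def view_direction(self):
        """Rotate (0, 0, -1) by pitch about X, then by yaw about Y."""
        vy = math.sin(self.pitch)
        vz = -math.cos(self.pitch)
        vx = vz * math.sin(self.yaw)
        vz = vz * math.cos(self.yaw)
        return (vx, vy, vz)
```

With zero attitude the viewing direction is (0, 0, −1); pitching up by π/2 turns it toward (0, 1, 0).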

[0065] The position and attitude sensor is capable of measuring values of 6 degrees of freedom, covering both “position” and “attitude”.

[0066] Such position and attitude sensor is commercially available in various forms, for example one utilizing a magnetic field, one based on image processing of a marker photographed by an external camera, or one based on the combination of a gyro sensor and an acceleration sensor.

[0067] Because of the nature of the board game, the head position of the player scarcely changes during play. It is therefore also possible to use position information calibrated at the start of the game together with an attitude sensor which measures the attitude information only. In such case, only the attitude information of 3 degrees of freedom is measured during the game, but the system can process values of 6 degrees of freedom by including the initially calibrated position information. In other words, the attitude sensor and the calibrated position data together can be considered to constitute a position and attitude sensor.

[0068]FIG. 2 shows a method of calibration before the game. The game board 101 is provided with markers 201 for identification at its four corners. For the purpose of simplicity, it is assumed that the markers 201 are provided in a square arrangement, with a side length of unity (1). The player is positioned in front of the board, with the view point 202 at the center position of the board and at a distance d from the front side of the board. When the player observes the board in this state, the rear side of the board appears shorter than the front side. Based on an observed length m1 of the front side and an observed length m2 of the rear side, the height h of the view point 202 above the plane of the board 101, though dependent on the projection method, can be determined as:

h = ((m2^2 (d+1)^2 − m1^2 d^2) / (m1^2 − m2^2))^0.5
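This height calculation can be sketched as follows. The helper below is a hypothetical illustration assuming a simple perspective model in which an observed length is inversely proportional to the distance from the view point; it is not taken from the patent.

```python
import math

# Sketch of the view-point height calculation above. The board side has
# unit length, d is the horizontal distance of the view point from the
# front side, and m1, m2 are the observed lengths of the front and rear
# sides under a simple perspective assumption:
#   m1 proportional to 1/sqrt(d^2 + h^2)
#   m2 proportional to 1/sqrt((d+1)^2 + h^2)
def viewpoint_height(m1, m2, d):
    numerator = m2**2 * (d + 1)**2 - m1**2 * d**2
    denominator = m1**2 - m2**2
    return math.sqrt(numerator / denominator)

# Consistency check: an eye at height 1 and distance 1 sees the front
# side at distance sqrt(2) and the rear side at distance sqrt(5), so the
# formula should recover a height close to 1.
h = viewpoint_height(1 / math.sqrt(2), 1 / math.sqrt(5), 1.0)
```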

[0069] The input/output of the image to or from the HMD 103 and the position/attitude information from the position/attitude sensor 104 are processed by a game console or a PC 102.

[0070]FIG. 3 is a view showing the internal configuration of the see-through HMD 103, which includes a video see-through type and an optical see-through type.

[0071] In case of the video see-through type, the light from the external field does not directly reach the eyes of the player. The light from the external field is deflected by a two-sided mirror 301 and enters an image pickup device 302. An image presented to the player is displayed on a display device (display unit) 303 and enters the eyes of the player via the two-sided mirror 301. If the output image of the image pickup device (image pickup unit) 302 were directly supplied to the display device 303, the HMD would act as a mere see-through glass; instead, the output image is processed by the game console or PC 102 in the course of such supply to display the generated CG in superposition.

[0072] In case of the optical see-through type, the light directly reaches the eyes of the player, and the separately generated CG are simultaneously displayed and appear in superposition to the eyes of the player.

[0073] The light from the external field passes through a half mirror and enters the eyes of the player. At the same time, an image displayed on a display device is reflected by the half mirror and enters the eyes of the player. The image pickup device is unnecessary in this case, but is required if an image at the view point of the player is used in image processing. Instead of the image pickup device 302, a separate camera for image processing may be fixed on the HMD 103.

[0074] The game console or PC 102 manages the proceeding of the game as in the ordinary game.

[0075] In case of a board game, the required position/attitude information is limited to the position/attitude relationship between the board 101 and the HMD 103, and there is not required setting of the scene or stage properties at each installation or the calibration of the sensors to be mounted on the players.

[0076] The present embodiment requires only a compact setup in comparison with the MR game, and the installation work, including calibration, is easy. The details of the MR game are described, for example, in “Design and Implementation for MR Amusement Systems”, Session 22A: Mixed Reality, in the papers of the fourth Convention of the Japanese Virtual Reality Society.

[0077] The present embodiment can also improve the feeling of reality in comparison with the conventional board games.

[0078]FIG. 4 shows an example of the configuration of an image experiencing system in which the present embodiment is applied.

[0079] The game console or PC 102 manages the proceeding of the game by a game management unit 401. A CG generation unit 405 generates CG (computer graphics) corresponding to each scene. For generating a CG image seen from the view point of the player, the CG generation unit 405 acquires the position and attitude information of the view of the player from a player position and attitude determining means 402, which includes, for example, a position and attitude sensor 104, and a position/attitude sensor information processing unit 403 for analyzing such information thereby determining the position and attitude information of the view of the player.

[0080] The position and attitude sensor information processing unit 403 executes, for example, a format conversion of the data obtained from the position and attitude sensor 104, a transformation into a coordinate system employed in the present system, and a correction for the difference between the mounting position of the position and attitude sensor and the view point of the HMD 103.
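The last of these corrections can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: the offset vector, the function name, and the yaw-only rotation are all illustrative assumptions, and the format/coordinate conversions are omitted.

```python
import math

# Hypothetical sketch of the sensor post-processing described above:
# shift a measured position from the sensor's mounting point on the HMD
# to the player's view point. The offset vector and the yaw-only
# rotation are illustrative assumptions, not taken from the patent.
SENSOR_TO_VIEWPOINT = (0.0, -0.08, 0.05)  # offset in the head frame (assumed, metres)

def sensor_to_viewpoint(position, yaw):
    """Rotate the mounting offset by the head yaw (about the vertical
    Y axis) and add it to the measured sensor position."""
    ox, oy, oz = SENSOR_TO_VIEWPOINT
    rx = ox * math.cos(yaw) + oz * math.sin(yaw)
    rz = -ox * math.sin(yaw) + oz * math.cos(yaw)
    x, y, z = position
    return (x + rx, y + oy, z + rz)
```

A full implementation would also apply the sensor's attitude (all three rotations) to the offset; the single-axis case keeps the arithmetic visible.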

[0081] In case of the HMD of video see-through type, the CG generated in the CG generation unit 405, corresponding to an image seen from the view point of the player, are superimposed in an image composition unit 404 with an image obtained from the image pickup unit 302 of the HMD 103, for display on the image display unit 303.

[0082] In case of the HMD of optical see-through type, the image pickup unit 302 and the image composition unit 404 can be dispensed with since the image synthesis is unnecessary, and the output of the CG generation unit 405 is directly displayed on the display unit 303.

[0083] The game management unit 401 stores information relating to the game itself, or the rules of the game, and, in the course of a game, retains the current status or scene and determines and manages a next state to which the game is to proceed. Also for presenting a scene by CG to the player, it issues a drawing instruction for CG to the CG generation unit 405.

[0084] According to the instruction from the game management unit 401, the CG generation unit 405 places model data, which are an internal representation corresponding to each character, in a world, which is an internal representation of the virtual world in which the players are playing. The model data and the world are internally represented by a method called a scene graph, and, after the generation of the scene graph of the world, the scene graph is rendered. In this operation, the rendering is executed on a scene seen from the position and attitude given by the player position and attitude determining means 402.

[0085] The rendering may be executed on an internal memory (not shown) or on a display memory called a frame buffer. For the purpose of simplicity, the rendering is assumed to be executed on the internal memory.

[0086] The image composition unit 404 superimposes the CG, generated by the CG generation unit 405, on an image obtained by the image pickup unit 302 in the HMD 103. For superimposed display of images, a method called alpha blending can be utilized. In case the image composition unit 404 has a pixel output format RGBA, including an opacity A (alpha value; 0 ≤ A ≤ 1) in addition to the intensities of the three primary colors RGB, the image synthesis can be executed utilizing such opacity value A.

[0087] As an example, let us consider a case where a pixel on the output from the image pickup unit 302 has RGB values (R1, G1, B1), while a corresponding pixel of the image composition unit 404 has RGB values (R2, G2, B2) and an opacity value A.

[0088] In such case, the corresponding pixel values outputted to the display unit 303 are given by:

(R1*(1−A)+R2*A, G1*(1−A)+G2*A, B1*(1−A)+B2*A).
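The blending formula can be sketched per pixel as follows (a hypothetical Python helper; the function name is illustrative):

```python
# Per-pixel alpha blending as in the formula above: the camera pixel
# (R1, G1, B1) and the CG pixel (R2, G2, B2) are combined using the CG
# opacity A, with 0 <= A <= 1.
def blend_pixel(camera_rgb, cg_rgb, a):
    return tuple(c1 * (1 - a) + c2 * a for c1, c2 in zip(camera_rgb, cg_rgb))

# A = 0 leaves the camera image untouched; A = 1 shows only the CG.
half = blend_pixel((100, 50, 0), (0, 0, 255), 0.5)
```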

[0089] Such alpha blending process can also be executed in the CG generation unit 405, but separate components are illustrated for ease of explanation of the functions.

[0090] The above-described configuration allows the player to experience a heightened feeling of reality in addition to the interestingness of the game itself. Also the player can easily grasp the proceeding situation of the game, as the superimposed CG are synchronized with the proceeding of the game.

[0091] As the position and attitude information, there is only required the relative relationship between the board 101 and the HMD 103 in position and attitude, and there is not required the scene setting or the setting of stage properties or the calibration of the sensor to be mounted on the player, at each installation of the game.

[0092] The present invention is not limited to games but is also applicable to various fields such as education, presentation, simulation or visualization.

[0093] (Second Embodiment)

[0094]FIG. 5 shows an example of the configuration of the image experiencing system of a second embodiment, which is different from the embodiment shown in FIG. 4 only in the configuration of the player position and attitude determining means 402.

[0095] The player position and attitude determining means 402 is composed of a camera 501 fixed to the HMD 103 and a board image recognition unit 502. The image of the board 101, in the image taken by the camera 501, varies depending on the position of the view point 202 of the player. Therefore the image taken by the camera 501 is analyzed by the board image recognition unit 502, to determine the position and attitude of the view point of the player.

[0096] The board image recognition unit recognizes the image of the board 101. The markers 201 attached to the game board 101 appear distorted in the image obtained by the camera 501, depending on the position and attitude of the camera, and the position and attitude of the camera 501 can be determined based on such distortion. It is known that the position and attitude of the camera 501 can be determined if at least four of the markers 201 are correlated between the image and the board. The position and attitude of the camera 501 thus determined are corrected based on the difference between the camera position and the position of the view point of the player, to output the position and attitude of the view point 202 of the player.
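
The four-marker correlation mentioned above can be sketched with the standard direct linear transform (DLT): four board-plane/image point pairs determine the planar homography, from which the camera position and attitude can subsequently be decomposed given the camera intrinsics (that decomposition is omitted here). This is a generic NumPy sketch, not the actual implementation of the board image recognition unit 502.

```python
import numpy as np

def homography_from_markers(board_pts, image_pts):
    """Estimate the 3x3 homography H mapping board-plane points (x, y) to
    image points (u, v) from at least four marker correspondences (DLT).
    Each pair contributes two linear constraints on the 9 entries of H;
    the smallest singular vector of the stacked system gives H."""
    rows = []
    for (x, y), (u, v) in zip(board_pts, image_pts):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]  # scale so that H[2, 2] == 1
```

With the four board markers and their observed image positions, H encodes exactly the distortion described above; decomposing H with the camera's intrinsic matrix then yields the pose of the camera relative to the board.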

[0097] In case the image pickup unit 302 is attached to the HMD 103, such image pickup unit 302 may be used instead of the camera 501 for similar effects.

[0098] Thus, in the present embodiment, the player position and attitude determining means is composed of the camera fixed to the see-through HMD and the board image recognition unit, and the board image recognition unit determines the relative position and attitude between the board and the view point of the player based on the image of the board taken by the camera, so that the position and attitude information of the view point of the player can be determined without the position and attitude sensor, whereby the configuration can be simplified.

[0099] (Third Embodiment)

[0100]FIG. 6 shows an example of the configuration of the image experiencing system of a third embodiment, which is different from the embodiments shown in FIGS. 4 and 5 only in the configuration of the player position and attitude determining means 402.

[0101] The player position and attitude determining means 402 is provided with both components of the means 402 shown in FIGS. 4 and 5, and additionally with a most likely position and attitude determining unit 601.

[0102] In general, the output of the position and attitude sensor 104 is susceptible to external perturbations and is rather unstable. Therefore, it is conceivable to use the position and attitude information from the board image recognition unit 502 as the output of the player position and attitude determining means, but the board 101 is not necessarily always included in the image taking range of the camera, and there may also be an element hindering the recognition, such as a hand of the player. Therefore, the reliability of the position and attitude information is deteriorated in such situation.

[0103] Therefore, the information from the position and attitude sensor 104 is utilized only in such a situation. Such a configuration allows the output of the position and attitude information to be obtained without interruption, and more precise position and attitude information to be obtained while the board 101 is recognized.

[0104]FIG. 7 is a UML activity chart showing the process of the most likely position and attitude determining unit.

[0105] At first, the unit awaits the position and attitude information from the position and attitude sensor information processing unit 403 and from the board image recognition unit 502. When both data become available, there is discriminated whether the position and attitude information data from the board image recognition unit 502 are suitable. If suitable, the information from the board image recognition unit 502 is used as the output of the player position and attitude determining means 402, but, if not suitable, there is used the information from the position and attitude sensor information processing unit 403.

[0106] In the foregoing it is assumed that highly precise values are obtained from the board image recognition unit 502 during a proper recognition but unsuitable values are obtained otherwise, but the present invention is not limited to such case. A certain image from the camera 501 may only provide values of low reliability, and in a certain position and attitude sensor, the reliability may be high only in a limited range and gradually decreases outside such range. Consequently the system has to be so designed as to provide most suitable values based on the reliability of the output values of the position and attitude sensor information processing unit 403 and the board image recognition unit 502.
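
The selection logic of FIG. 7, generalized as suggested above to graded reliabilities, might be sketched as follows; the threshold value is a hypothetical design parameter, not stated in this description.

```python
def most_likely_pose(sensor_pose, recog_pose, recog_reliability, threshold=0.5):
    """Return the board-image-recognition pose when it is available and its
    reliability exceeds the (hypothetical) threshold; otherwise fall back to
    the position-and-attitude-sensor pose, so an output is always produced."""
    if recog_pose is not None and recog_reliability >= threshold:
        return recog_pose
    return sensor_pose
```

A more elaborate system could instead blend the two poses with weights derived from the two reliabilities, as the paragraph above suggests.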

[0107] The present embodiment allows reliable position and attitude information to be obtained even in case any of the position and attitude information determined by the plural methods is unsuitable.

[0108] (Fourth Embodiment)

[0109] The position and attitude sensor is available in various types, but many types of such sensor are associated with a drawback of fluctuation in the obtained values. FIG. 8 is a chart showing the difference dV between the measured value and the true value as a function of the elapsing time in the abscissa. There are experienced a small fluctuation as indicated by a broken line, and a large shift as indicated by a solid line, and the present embodiment deals with the case of a large shift as indicated by the solid line.

[0110] In case of a large shift, the difference of two consecutive samples of dV is small if the sampling is executed with a sufficiently high frequency. Therefore, in a state where the output value of the board image recognition unit 502 is suitable, dV is calculated assuming such value as the true value and −dV is used as the correction value for the position and attitude sensor information.

[0111] In this manner, a large shift in the value starts from 0 when the output value of the board image recognition unit 502 becomes unsuitable, and the position and attitude information released by the player position and attitude determining means 402 in such state remains continuous, so that the player is relieved from an unpleasant feeling caused by a sudden shift in the position of the CG image.

[0112] Also in case the unsuitable period of the value from the board image recognition unit 502 is sufficiently short, the change in dV resulting from a large shift is small, so that the CG drawing position does not become discontinuous when the output value of the board image recognition unit 502 is adopted as the value of the player position and attitude determining means 402.

[0113]FIG. 9 shows an example of the image experiencing system of the fourth embodiment.

[0114] The basic configuration is same as in FIG. 6, except that the output of the board image recognition unit 502 is entered into the position and attitude sensor information processing unit 403 in addition to the most likely position and attitude determining unit 601, and the output of the position and attitude sensor information processing unit 403 is entered into the board image recognition unit 502 in addition to the most likely position and attitude determining unit 601. In the following description, however, the input of the output of the position and attitude sensor information processing unit 403 into the board image recognition unit 502 is considered negligible.

[0115]FIG. 10 is a UML activity chart showing the process of the position and attitude sensor information processing unit 403.

[0116] The information from the position and attitude sensor 104 is processed in the normal manner to calculate the position and attitude information, and its value is retained as a variable LastSensorPro. The variable LastSensorPro is an object variable which is referred to also from another thread to be explained in the following.

[0117] Subsequently, a correction value is added to the calculated position and attitude information to obtain a return value, which is temporarily retained. The correction value is also an object variable, of which the value is set by another thread to be explained in the following. The return value is a local variable, which is only used temporarily for an exclusive execution. Finally, the return value is returned as the output of the position and attitude sensor information processing unit 403.

[0118] The aforementioned correction value, which is from time to time renewed in response to an output from the board image recognition unit 502, is calculated in the following manner.

[0119] At first there is discriminated whether the image recognition information is suitable. If unsuitable, the renewing process is not executed. If suitable, the correction value is set by subtracting the variable LastSensorPro from the position and attitude information obtained by the image recognition.

[0120] In the foregoing, the input from the position and attitude sensor information processing unit 403 to the board image recognition unit 502 is considered negligible, but the present invention is not limited to such case. In case highly precise information can be obtained from the position and attitude sensor 104, the correction value can be renewed also in the board image recognition unit 502 in a similar manner as in the position and attitude sensor information processing unit. Also the extent of correction on the respective values may be varied depending on the confidences of such values. For example, in case the difference of the confidences is very large, the correction value for the position and attitude determining means at the lower side is so renewed as to substantially directly release the output at the higher side, but, in case the difference is not so large, the correction value is so renewed as to execute corrections in small amounts on all the values.
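
The two cooperating threads of FIGS. 9-10 can be sketched with scalar stand-ins for the position and attitude values; LastSensorPro and the correction value are the object variables named above, while the class and method names are illustrative.

```python
class SensorProcessor:
    """Sketch of the sensor processing of the fourth embodiment: the raw
    sensor value is retained as LastSensorPro, and a correction value -dV,
    renewed whenever the image recognition output is suitable, is added so
    that the released output stays continuous across a large shift."""

    def __init__(self):
        self.last_sensor = None   # LastSensorPro
        self.correction = 0.0

    def process(self, raw_sensor_value):
        # Normal processing of the sensor information, plus the correction.
        self.last_sensor = raw_sensor_value
        return raw_sensor_value + self.correction

    def renew_correction(self, recognition_value, suitable):
        # correction = (image-recognition value) - LastSensorPro;
        # no renewal while the recognition output is unsuitable.
        if suitable and self.last_sensor is not None:
            self.correction = recognition_value - self.last_sensor
```

While the board is recognized, the correction continually re-anchors the sensor output to the image-recognition value; when recognition drops out, the last correction keeps the output continuous.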

[0121] According to the present embodiment, the renewal of the correction value allows to constantly obtain the position and attitude information on the view point of the player, with a high reliability, in a continuous manner, thereby avoiding the unpleasant feeling resulting from a sudden shift in the CG drawing position.

[0122] (Fifth Embodiment)

[0123] It is explained in the foregoing that the position and attitude sensor can be provided by the combination of a gyro sensor and an acceleration sensor, in which the gyro sensor detects the attitude information only. Such attitude sensor can be utilized as the position and attitude sensor if the position is calibrated in advance. Such calibration can be dispensed with if there is simultaneously provided position and attitude determining means consisting of the camera 501 and the board image recognition unit 502.

[0124] The position and attitude information is basically calculated by image processing, and, if such information obtained by the image processing is unsuitable, the attitude data alone can be compensated by the value supplied from the attitude sensor. In case of a board game or a card game, the change in the viewing field resulting from a change in the attitude of the HMD 103 is considered much larger than that resulting from a change in the position of the HMD 103, so that the compensation of the attitude information alone can be considered significantly useful. For this reason, an attitude sensor is fixed, in addition to the camera 501, to the HMD 103.

[0125] This constitutes the image experiencing system of a fifth embodiment, of which configuration is shown in FIG. 11.

[0126] In comparison with the embodiment shown in FIG. 6, the present embodiment is different only in the configuration of the player position and attitude determining means 402. More specifically, the position and attitude sensor 104 is replaced by an attitude sensor 1101, the position and attitude sensor information processing unit 403 is replaced by an attitude sensor information processing unit 1102, and the most likely position and attitude determining unit 601 is replaced by a most likely attitude determining unit 1103.

[0127] The basic process flow is same as in the third embodiment. The output data of the attitude sensor 1101 are processed by the attitude sensor information processing unit 1102 to provide attitude information. It is to be noted that the position and attitude sensor information processing unit 403 outputs the position information, in addition to the attitude information.

[0128]FIG. 12 is a UML activity chart showing the process of the most likely attitude determining unit 1103.

[0129] At first, the unit awaits the attitude information from the attitude sensor information processing unit 1102 and the image recognition information from the board image recognition unit 502. When both data become available, there is discriminated whether the image recognition information is suitable.

[0130] If suitable, the position information alone therein is set as an object variable LastIPPos. Then the image recognition information is returned and the process is terminated.

[0131] If the information is identified as not suitable, the attitude sensor information is used as the attitude information, and the variable LastIPPos set in the foregoing is used as the deficient position information. Then the position and attitude information, obtained by combining both data, is returned and the process is terminated.
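
The activity of FIG. 12 might be sketched as follows, with opaque placeholders for the position and attitude values; LastIPPos is the object variable named above, and the class name is illustrative.

```python
class MostLikelyAttitude:
    """Sketch of the most likely attitude determination of the fifth
    embodiment: when the image recognition is suitable, its position is
    retained as LastIPPos and its full result is returned; otherwise the
    attitude comes from the attitude sensor and the retained LastIPPos
    supplies the missing position."""

    def __init__(self):
        self.last_ip_pos = None   # LastIPPos

    def determine(self, sensor_attitude, recog_info, suitable):
        if suitable:
            position, attitude = recog_info
            self.last_ip_pos = position           # record position alone
            return (position, attitude)
        # Deficient position filled in from the last suitable recognition.
        return (self.last_ip_pos, sensor_attitude)
```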

[0132] In the foregoing it is assumed that the values of the board image recognition unit are more reliable than those of the attitude information processing unit with respect to the attitude information, but the most likely attitude determining unit 1103 may calculate and determine the attitude information from both values, depending on the reliabilities thereof.

[0133] In the present embodiment, as the most likely attitude determining unit determines the most reliable attitude of the view point of the player based on the respective output values, utilizing the reliabilities thereof, there can be obtained highly reliable attitude information on the view point of the player, even in case any of the output values is unsuitable.

[0134] The present embodiment allows to dispense with the calibration of the position and attitude of the view point of the user, which is indispensable in case of employing the attitude sensor only, and to provide an inexpensive system because an attitude sensor can be employed instead of the position and attitude sensor.

[0135] (Sixth Embodiment)

[0136] As explained in the foregoing, also the attitude information obtained from the attitude sensor information processing unit 1102 shows fluctuation. The present embodiment resolves such drawback by a method similar to the fourth embodiment.

[0137]FIG. 13 shows an example of the configuration of the image experiencing system of a sixth embodiment, which is same in the configuration as in FIG. 11 except that the output of the board image recognition unit 502 is supplied not only to the most likely attitude determining unit 1103 but also to the attitude sensor information processing unit 1102. However, the attitude sensor information processing unit 1102 receives only the attitude information within the position and attitude information.

[0138]FIG. 14 is a UML activity chart showing the process of the attitude sensor information processing unit 1102.

[0139] The information from the attitude sensor 1101 is processed in the normal manner to obtain the attitude information, of which the value is retained as a variable LastSensorDir. The variable LastSensorDir is an object variable which is referred to also from another thread to be explained in the following.

[0140] Subsequently, a correction value is added to the calculated attitude information to obtain a return value, which is temporarily retained. The correction value is also an object variable, of which the value is set by another thread to be explained in the following. The return value is a local variable, which is only used temporarily for an exclusive execution. Finally, the return value is returned as the output of the attitude sensor information processing unit 1102.

[0141] The aforementioned correction value, which is from time to time renewed in response to an output from the board image recognition unit 502, is calculated in the following manner.

[0142] At first there is discriminated whether the image recognition information is suitable. If unsuitable, the renewing process is not executed. If suitable, the correction value is set by subtracting the variable LastSensorDir from the attitude information obtained by the image recognition.

[0143] In the foregoing, the input from the attitude sensor information processing unit 1102 to the board image recognition unit 502 is considered negligible, but the present invention is not limited to such case. In case highly precise information can be obtained from the attitude sensor 1101, the correction value can be renewed also in the board image recognition unit 502 in a similar manner as in the attitude sensor information processing unit. Also the extent of correction on the respective values may be varied depending on the confidences of such values. For example, in case the difference of the confidences is very large, the correction value for the attitude determining means at the lower side, or a portion relating to the attitude determination in the position and attitude determining means, is so renewed as to substantially directly release the output at the higher side, but, in case the difference is not so large, the correction value is so renewed as to execute corrections in small amounts on all the values.

[0144] The present embodiment allows to obtain the position and attitude information on the view point of the player, including the attitude information of a high reliability, in a continuous manner.

[0145] (Seventh Embodiment)

[0146] It is already explained that the board 101, constituting a field of the board game, includes certain areas and the players execute the game by placing, removing or moving pieces in, from or between these areas. The game management unit 401 grasps the situation of the scene or proceeding of the game, to enable the CG generation unit 405 to generate CG matching such scene or proceeding of the game, whereby the game is felt more realistic to the players.

[0147] For this purpose, there is provided, for the piece to be operated by the player, piece operation recognition means for recognizing “which piece” is “placed/removed” in or from “which area”.

[0148] It is naturally possible also to employ another item in place of the piece and to recognize the operation on such item.

[0149]FIG. 15 shows an example of the configuration of the image experiencing system of a seventh embodiment.

[0150] A piece operation recognition unit A 1501 is composed of a special mark such as a bar code attached to a piece, and a special mark recognition unit A 1502 such as a bar code reader for recognizing the special mark. The special mark is to be attached on the piece and is therefore omitted from FIG. 15.

[0151] The special mark is used only for identifying the piece, and can not only be an ordinary printed mark but can also be based on so-called RFID system utilizing an IC chip or the like.

[0152] The special mark recognition unit A 1502 may be provided in each area on the board 101, or may be provided collectively for plural or all the areas on the board.

[0153] The data from the special mark recognition unit A 1502 are transferred to a special mark recognition unit B 1503, and then to a piece operation recognition unit B 1504.

[0154] The special mark recognition unit B 1503 analyzes the information from the special mark recognition unit A 1502 and converts it into a data format required by the piece operation recognition unit B 1504. In case a special mark recognition unit B 1503 is provided in each area, the information of “which area” can be identified from the special mark recognition unit B 1503 releasing the output and need not be released. However, in case a single special mark recognition unit B 1503 covers plural areas, the information indicating “which area” is outputted for thus covered areas. Also, in case the input from the special mark recognition unit A 1502 is for example a number of 10 digits, such input is converted, for example by a conversion table, into information indicating “which area”.

[0155] The piece operation recognition unit B 1504 recognizes "which piece" is "placed/removed" in or from "which area", and transfers the result of such recognition, as the result obtained by the piece operation recognition unit A 1501, to the game management unit 401.

[0156] The game management unit 401 causes the game to proceed, based on the result of recognition from the piece operation recognition unit A 1501. In the actual proceeding of the game, there may be required information that “which piece” is moved from “which area” to “which area”. Such information is judged by the game management unit 401, by combining information that a piece is “removed from an area j” and information that “a piece i is placed in an area k”. In this case, if the piece placed in the area j is a piece i, there is judged that “a piece i is moved from an area j to an area k”. The piece in the area j can be identified as the piece i by managing and referring to the history by the game management unit.

[0157]FIG. 16 is a UML activity chart showing the process of the piece operation recognition unit.

[0158] The special mark recognition unit A 1502 is provided in each area on the board 101, whereby a special mark recognition unit i corresponds to the area i. The special mark recognition unit B 1503 returns a special mark identifier j when the piece j is placed, and a particular special mark identifier Nothing when the piece is removed.

[0159] The piece operation recognition unit A 1501 awaits the input from the special mark recognition unit, and outputs a result “a piece is removed from the area i” or “a piece j is placed in the area i” respectively if the special mark identifier j is Nothing or otherwise.

[0160] The present embodiment allows the game to proceed, based on the actual operations of the players. Since the CG can be generated matching the scene or proceeding situation of the game, it is felt as a more realistic game to the players.

[0161] (Eighth Embodiment)

[0162] A bar code can be used as the special mark identifier, corresponding to claim 10.

[0163] The bar code is widely utilized for example in the field of distribution of commodities, and has various features such as easy availability, high accuracy in recognition, stability in recognition, inexpensiveness etc. Particularly in case of a card game, the bar code can be printed simultaneously with the printing of the cards. Also an invisible bar code can be used for attaching the special mark without affecting the design of the cards.

[0164] (Ninth Embodiment)

[0165] An RFID system, or radio frequency identification technology which is a non-contact automatic identification technology utilizing radio frequency, can be used as the special mark recognition means.

[0166] A device called tag or transponder is attached to an article, and an ID specific to the tag is read by a reader. In general, the tag is composed of a semiconductor circuitry including a control circuit, a memory etc. constructed as a single chip, and an antenna. The reader emits an inquiring electric wave, which is also used as electric energy, so that the tag does not require a battery. In response to the inquiring wave, the tag emits the ID stored in advance in the memory. The reader reads such ID, thereby identifying the article.

[0167] The RFID system is widely employed for example in the ID card or the like, and has features of easy availability, high accuracy in recognition, stability in recognition, inexpensiveness etc. If the tag is incorporated inside the piece, it can be recognized without affecting at all the external appearance of the piece. Also the piece and the board have a larger freedom in designing, since the surface of the piece need not be flat and a non-metallic obstacle may be present between the tag and the reader.

[0168] (Tenth Embodiment)

The "piece" can be recognized, even without the special mark recognition unit, by an image recognition process on the image obtained with a camera. The piece recognition can be achieved by the pattern on the card surface in case of a card game, by the shape of the piece in case of chess or the like, or by the shape of the piece and the pattern drawn thereon in other games.

[0169] In the following there will be explained an example of recognizing a pattern drawn on the surface of a rectangular card, but the present invention is also applicable to a case of recognizing the shape of the piece, or a case of recognizing the shape of the piece and the pattern thereon at the same time.

[0170]FIG. 17 shows an example of the configuration of the image experiencing system of a tenth embodiment. In comparison with the embodiment shown in FIG. 15, the present embodiment is different only in the configuration of the piece operation recognition means 1501, wherein the special mark recognition unit 1502 corresponds to a piece recognition camera 1701 and the special mark recognition unit 1503 corresponds to a piece image recognition unit 1702. The piece operation recognition unit 1504 remains same.

[0171] In the following there will be explained an example of recognizing two patterns shown in FIG. 18, but it is also possible to recognize various complex patterns such as a cartoon or a photograph, by employing more complex processing.

[0172]FIG. 20 is a UML activity chart showing the process of the piece image recognition unit 1702.

[0173] The recognition is executed in two stages, namely the detection of a frame, and then the detection of a pattern. In case the frame cannot be detected, it is judged that the card is not present, and a piece identifier Nothing is returned. The method of frame detection is not illustrated, but can be achieved, for example, by detecting straight lines by the Hough transformation or the like and judging a frame from the positional relationship of such lines.

[0174] In case the frame is detected, there is then executed the detection of the pattern. As shown in FIG. 19, the interior of the frame is divided into four areas, and the color is detected in each area. Various methods are available also for the color detection. For example, in case of detecting white and black only, there is utilized only the luminosity information: the average luminosity is determined in the object area, and the area is judged as black or white respectively if such average luminosity is lower or higher than a predetermined value TB.

[0175] The areas are numbered from 1 to 4 as shown in FIG. 19, and, if the combination of colors in the areas 1 to 4 are black-white-white-black or white-black-black-white, a piece identifier 1 is returned. If the colors are black-black-white-white, white-white-black-black, black-white-black-white or white-black-white-black, a piece identifier 2 is returned. Any other combination indicates an unexpected card or an erroneous recognition of the frame, and an identifier Nothing is returned, indicating the absence of the card. In case the image recognition is repeated, a same result is outputted in succession.
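
The color judgment and pattern matching of paragraphs [0174]-[0175] might be sketched as follows; the threshold TB = 128 is a hypothetical value, and the function names are illustrative.

```python
def judge_color(avg_luminosity, tb=128):
    """Judge an area black ('B') or white ('W') by comparing its average
    luminosity with the predetermined value TB (128 is hypothetical)."""
    return "B" if avg_luminosity < tb else "W"

def recognize_pattern(colors):
    """Map the colors of areas 1 to 4 (FIG. 19) to a piece identifier:
    the diagonal combinations give identifier 1, the striped ones give
    identifier 2, and any other combination returns Nothing (None)."""
    combo = "".join(colors)
    if combo in ("BWWB", "WBBW"):
        return 1
    if combo in ("BBWW", "WWBB", "BWBW", "WBWB"):
        return 2
    return None  # unexpected card or erroneous frame recognition
```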

[0176] In the course of a card placing or removing operation, the output may alternate at random between a state where the card is judged placed and a state where it is judged removed. In a game which may be hindered by such a situation, there is required a measure for suppressing successive identical outputs, or for providing an output only when a same state continues for a predetermined time, but such measure will not be explained here.

[0177] (Eleventh Embodiment)

[0178] The camera 501 fixed to the HMD 103 can be used for the piece recognition. In such case the system can be simplified as the camera fixed to the HMD of the player is used for the piece operation recognition means.

[0179] If the board 101 can be recognized by the board image recognition unit 502, the areas provided on the board 101 can be identified. The piece operation recognition means can be constituted by recognizing the pieces in such areas.

[0180]FIG. 21 shows an example of the configuration of the image experiencing system of an eleventh embodiment. Piece operation recognition means 1501 is composed of an on-board piece image recognition unit 2101 and a piece operation recognition unit 1504, and the image data to be recognized are entered from the camera 501 while the recognition information of the board 101 is entered from the board image recognition unit 502. The output information of the board image recognition unit 502 indicates the position and attitude of the view point of the player, from which the position of the board on the image can be easily calculated.

[0181]FIG. 22 is a UML activity chart showing the process of the on-board piece image recognition unit. The unit receives an image input from the camera 501, and then the position and attitude of the view point of the HMD from the board image recognition unit 502.

[0182] At first the position and attitude of the board 101 on the input image is calculated from the position and attitude of the view point of the HMD. Then the positions and attitudes of the areas on the input image are calculated from the position and attitude information of the predetermined areas on the board.

[0183] Once the position and attitude of each area are known, the image in such position is cut out and subjected to image recognition to recognize the piece in each area. Since the information of each area includes the attitude information, such information may also be utilized in the image recognition to improve the accuracy.

[0184] (Twelfth Embodiment)

[0185] In case of recognizing a piece on the board by the camera 501, there may arise a situation where the number of pixels occupied by the piece on the image becomes smaller, for example depending on the distance from the camera 501 to the piece or on the attitude of the piece relative to the camera 501, thereby rendering the recognition difficult or requiring a complex configuration of the image recognition unit for correcting the deformation of the image.

[0186] Therefore, in recognizing “which piece”, the piece is brought to a predetermined specified position relative to the camera 501.

[0187] For example, the piece is brought to a position at a distance of 30 cm in front of the camera. For example, in case of a card as shown in FIG. 18, the card is judged exposed to the camera when the frame arrives at a specified position on the image, and the recognition of the card is executed in such position.

[0188] Once the piece is recognized, "which area" can be identified by tracing the piece until it is placed on the board, or by recognizing the "placing" of any piece with a simplified version of the image recognition unit of the tenth embodiment. The "removing" can also be recognized in a similar manner.

[0189] In recognizing “which piece”, the recognition rate can be improved by positioning the piece at a specified position with respect to a specified camera. Also it is possible to simplify the configuration of the recognition unit.

[0190]FIG. 23 shows an example of the configuration of the image experiencing system of a twelfth embodiment. In comparison with the embodiment shown in FIG. 21, the present embodiment is different only in the configuration of the piece operation recognition means 1501. The on-board piece image recognition unit 2101 may be the same as that in the eleventh embodiment, or may be further simplified, since it is only required to judge that a piece is “placed” or “removed”. The piece image recognition unit can be the same as that shown in the tenth embodiment. A piece operation recognition unit 2301, different from the piece operation recognition unit 1504, receives the inputs from both the on-board piece image recognition unit 2101 and the piece image recognition unit 1702.

[0191]FIGS. 24 and 25 are UML activity charts showing the process of the piece operation recognition unit 2301. FIG. 24 shows a state of receiving information “a piece j is recognized” from the piece image recognition unit 1702, and FIG. 25 shows a state of receiving information “a piece is placed/removed in an area i” from the on-board piece image recognition unit 2101.

[0192] In case information “a piece j is recognized” is received from the piece image recognition unit 1702, the information “piece j” is recorded in an object variable, and is utilized as the information “which piece” when information “piece is placed in which area” is received later.

[0193] Also in case information “a piece is placed in an area i” is received from the on-board piece image recognition unit 2101, the information “piece j” recorded in the object variable is read out, and a result “a piece j is placed in an area i” is returned.

[0194] Also in case information “a piece is removed from an area i” is received from the on-board piece image recognition unit 2101, a result “a piece is removed from an area i” is directly returned.
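
The behavior described in paragraphs [0192] to [0194] can be sketched as a small state holder, shown here as an illustrative reconstruction only; the class and method names are hypothetical, not the disclosed implementation.

```python
class PieceOperationRecognizer:
    """Sketch of the piece operation recognition unit 2301
    (paragraphs [0192]-[0194]); names are illustrative only."""

    def __init__(self):
        self.last_piece = None  # the object variable holding "piece j"

    def on_piece_recognized(self, piece):
        # From the piece image recognition unit 1702: remember "which piece".
        self.last_piece = piece

    def on_board_event(self, event, area):
        # From the on-board piece image recognition unit 2101.
        if event == "placed":
            # Combine the remembered "piece j" with "area i".
            return ("placed", self.last_piece, area)
        if event == "removed":
            # "Removed" is returned directly, without the piece identity.
            return ("removed", None, area)
        raise ValueError(event)
```

A "recognized" notification thus only updates the object variable; the result is emitted when a "placed" event arrives later.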

[0195] The input to the piece image recognition unit 1702 may be executed, instead of the camera 501 fixed to the HMD 103, by an exclusive camera such as a separately prepared document camera. Also a similar effect can be attained by replacing the combination of the exclusive camera and the piece image recognition unit 1702 by the special mark recognition unit 1502 and the special mark recognition unit 1503, prepared separately.

[0196] (Thirteenth Embodiment)

[0197] The piece can be recognized by the exposure of such piece in front of the camera 501, but such exposure position is not easily understandable to the player. Also, if the image experiencing system is so designed as to improve the ease of use by the players, the spatial range for recognition inevitably becomes wider, resulting in complication of the recognition unit or in a loss in the recognition rate.

[0198] It is therefore desired that the player can expose the piece, without doubt or hesitation, to the spatially limited recognition area. This can be achieved by displaying a guide on the display unit 303 of the HMD 103, and exposing the piece by the player so as to match the displayed guide.

[0199]FIG. 26 shows an example of the configuration of the image experiencing system of a thirteenth embodiment. In comparison with the embodiment shown in FIG. 23, the configuration remains the same except that the piece image recognition unit 1702 is replaced by a piece image recognition/guide display instruction unit 2601, from which information is outputted to the CG generation unit 405.

[0200] The piece image recognition/guide display instruction unit 2601 is the same in configuration as the piece image recognition unit 1702, except that it outputs a guide display instruction in case the confidence in the result of recognition is less than a certain level. FIG. 27 shows the difference between the outputs of the piece image recognition unit 1702 and the piece image recognition/guide display instruction unit 2601. While the recognition engine is similar to that of the piece image recognition unit 1702, a recognized state is judged and the result of recognition is outputted only if the confidence in the recognition is at least equal to a certain value Th, which is higher than the threshold value of the piece image recognition unit 1702. The higher the threshold value Th, the more easily a high recognition rate can be realized.

[0201] On the other hand, a non-recognized state, or a non-exposed state of the piece, is judged in case the confidence is lower than a certain value Tl. This value Tl is the same as the recognition threshold Th in the case of the piece image recognition unit 1702, but is selected lower in the case of the piece image recognition/guide display instruction unit 2601. A guide display instruction is given in case the result is neither “recognized” nor “not recognized”.

[0202]FIG. 28 is a UML chart showing the process of the piece image recognition/guide display instruction unit 2601. If the confidence is normalized to the range 0 to 1, the relationship 0&lt;Tl&lt;Th&lt;1 holds. After the piece image recognition process, if the confidence in the result of recognition is lower than Tl, the situation is judged non-recognized and no action is executed. If the confidence is at least Th, the situation is judged recognized and the result of recognition is transferred to the piece operation recognition unit 2301. If the situation is neither of the foregoing, a guide display instruction is issued. The guide display can be, for example, as shown in FIG. 29.
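
The two-threshold decision of FIG. 28 can be sketched as a single function, given purely for illustration; the labels returned are hypothetical names for the three outcomes.

```python
def classify_recognition(confidence, tl, th):
    """Decision of the piece image recognition/guide display instruction
    unit 2601 (FIG. 28): confidence in [0, 1], with 0 < tl < th < 1."""
    if confidence < tl:
        return "not_recognized"   # below Tl: no action
    if confidence >= th:
        return "recognized"       # at least Th: forward result to unit 2301
    return "show_guide"           # in between: issue a guide display instruction
```

Raising Th makes a "recognized" verdict more reliable; lowering Tl widens the band in which the guide is shown.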

[0203] In the present embodiment, in case of exposing a piece at a specified position relative to a specified camera, a guide for assisting such exposure is prepared by CG and is displayed in superposition in the HMD of the player, so that the player can easily place and expose the piece in a spatially appropriate position.

[0204] (Fourteenth Embodiment)

[0205] In a game played by plural players, the event on the board 101, including the display, has to be shared by all the players. This can be achieved logically by sharing a game management unit 401 among all the players. Physically, such unit may consist of a single exclusive PC or a specified exclusive game console including other constituent components, or may be provided in plural game consoles or PCs as in the case of a distributed database.

[0206] Stated differently, for each player, the game console or PC 102 assumes a configuration as shown in FIG. 4, and the game management unit 401 also reflects the result of operations executed by other players.

[0207]FIGS. 30A and 30B show examples of the configuration of the image experiencing system of a fourteenth embodiment.

[0208] A game console or PC 102 is assigned to each player, and such consoles or PCs are mutually connected by a network. The information flowing in the network is utilized for synchronizing the contents of the game management units 401 in the game consoles or PCs.

[0209] The piece operation recognition means is provided in each game console or PC 102, with each piece being recognized from plural view points. In such case, it is also possible to exchange the information of recognition through the network, and to utilize the result of recognition of a higher reliability.
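
The selection of the more reliable recognition result among the networked consoles can be sketched as follows; the tuple layout (piece, area, confidence) is an assumption made for illustration only.

```python
def most_reliable(results):
    """Among recognition results exchanged over the network, one per
    game console (paragraph [0209]), keep the one with the highest
    confidence; each result is assumed to be (piece, area, confidence)."""
    return max(results, key=lambda r: r[2])
```

Each console would broadcast its local result and all consoles would then adopt the common winner, keeping the game management units synchronized.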

[0210] Also FIGS. 31A to 31C show other examples of the configuration of the image experiencing system of the fourteenth embodiment. Game contents are contained in a game server on the internet, and the game consoles or PCs 102 of the players are connected through the internet.

[0211] The game management unit 401 is composed of local game management units 3101 and a game server 3102, the latter being provided in an independent equipment. The local game management unit 3101 deals with matters relating only to each player and those requiring feedback to each local player without delay in time. Also data and programs relating to the individual game contents are downloaded from the game server 3102 through the internet, either at the start of the game or in the course of execution thereof.

[0212] The present embodiment allows plural players to play on a single board, and to display the result of complex operations by the plural players in each HMD based on the view point of each player, thereby enabling experience of a game played by plural players.

[0213] (Fifteenth Embodiment)

[0214] The present embodiment provides a system of executing a game by adapting MR (mixed reality) technology to a card game, thereby combining a real field and a virtual field by CG (computer graphics).

[0215] Card games are already known in various forms, such as poker or blackjack utilizing playing cards. Recently, card games utilizing cards to which cartoon characters are assigned have come into fashion.

[0216] Such a card game is played on a play sheet or a game board, by players each holding cards. Each card records a cartoon character and its attributes or its specialty skill. The players execute the game by using these cards, and the game is won or lost according to the offensive method and power and the defensive method and power, which are determined by the combination of the cards.

[0217] In the following there will be explained the system of the present embodiment, applied to a battle-type card game played on a board, by two players holding the cards.

[0218]FIG. 32 is a conceptual view of the game of the present embodiment. The two players respectively wear the see-through HMDs, and are positioned across a board, which constitutes the battle space of the game. The two players play the game by placing the respective cards on the board or moving the cards placed thereon. In the see-through HMD of each player, CG matching the characteristics of each card are displayed on each card.

[0219]FIG. 33 schematically shows the configuration of the system of the present embodiment.

[0220] The player wears an HMD 3321, which is provided with a camera 3320 and a three-dimensional position and attitude measuring means 3322. The HMD 3321 is connected to a game management unit 3325 while the camera 3320 and the three-dimensional position and attitude measuring means 3322 are connected to a position and attitude grasp unit 3329, both through signal cables.

[0221] The camera 3320 is matched with the view of the player and photographs the objects observed by the player. The obtained image data are transferred to the position and attitude grasp unit.

[0222] While the player observes a play board 3326, the image taken by the camera 3320 contains markers 3331 shown in FIG. 34. The markers 3331 are provided in predetermined positions on the play board, and the positional information of such markers is inputted in advance in the position and attitude grasp unit 3329. Therefore the area observed by the player can be estimated from the markers 3331 appearing on the image of the camera 3320.

[0223] The three-dimensional position and attitude measuring means 3322 measures the position and attitude of the HMD worn by the player, and the measured position and attitude data are supplied to the position and attitude grasp unit 3329.

[0224] Based on the positions of the markers 3331 taken by the camera 3320 and the position and attitude data from the three-dimensional position and attitude measuring means, the position and attitude grasp unit 3329 calculates the range observed by the player.

[0225] Above the play board 3326, there is provided a roof 3327 on which installed is a card recognition camera 3328. The card recognition camera 3328 may cover the entire area of the play board 3326, but it is also possible to divide the play board into four areas and to place four card recognition cameras 3328 respectively corresponding to these divided areas, or to place card recognition cameras 3328 by a number corresponding to that of the areas in which the cards are to be placed. The card recognition camera 3328 constantly watches the play board during the game, and the obtained image is transferred to a card reading unit 3324, which identifies the card on the play board 3326, based on the obtained image.

[0226] In the following there will be given an explanation on the markers provided on the play board. FIG. 34 is a plan view of the play board 3326, on which the markers 3331 are provided. Each marker is formed with a specified color and shape, and the information of such color, shape and position is registered in advance in the position and attitude grasp unit 3329. By detecting the markers in the image taken by the camera 3320 and identifying the color and shape, a position in the play board corresponding to the taken image can be identified. Then, based on the result of such identification, it is possible to estimate the area of the board observed by the player.

[0227] In the following there will be given an explanation on guides. As shown in a plan view of the play board 3326 shown in FIG. 35, the play board 3326 is provided with guides 3341 in a 5×2 arrangement for the player at the front side and also in a 5×2 arrangement for the player at the rear side. The guide 3341 defines an area in which the card is to be placed, and a card placed in any other area will be irrelevant to the proceeding of the game. The precision of card recognition can be improved since the card position is clearly determined by the guide 3341, which can be composed, for example, of a recess or a ridge corresponding to the card size.
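
The 5×2 arrangement of guides 3341 can be modeled as a regular grid, with a detected card center snapped to the nearest guide. The following sketch is illustrative only; the cell dimensions and function names are assumptions, not values from the disclosure.

```python
def guide_centers(cols=5, rows=2, cell_w=60.0, cell_h=90.0, origin=(0.0, 0.0)):
    """Centers of a cols x rows block of card guides 3341, assuming a
    regular grid of card-sized cells (dimensions are illustrative)."""
    ox, oy = origin
    return [(ox + (c + 0.5) * cell_w, oy + (r + 0.5) * cell_h)
            for r in range(rows) for c in range(cols)]

def snap_to_guide(point, centers):
    """Snap a detected card center to the index of the nearest guide
    center; the guide fixes the card position, which is what improves
    the precision of card recognition."""
    return min(range(len(centers)),
               key=lambda i: (centers[i][0] - point[0]) ** 2 +
                             (centers[i][1] - point[1]) ** 2)
```

With the card position quantized to a guide, the card reading unit only needs to classify the card kind within a known region.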

[0228] Now the flow of the game will be explained with reference to flow charts.

[0229] At first reference is made to FIG. 38 for explaining the card reading unit 3324.

[0230] After the process is started in a step S701, the sequence enters an image capturing phase in a step S702, in which the image of the play board from the camera 3328 is captured. Then the captured image is identified in a card identification phase of a step S703.

[0231] The card identification will be explained with reference to FIG. 36, which is a plan view of the play board 3326 in a state where three cards are placed on the guides shown in FIG. 35. With a coordinate description in which the upper left corner is defined by (0, 0) and the lower right corner by (5, 4), FIG. 36 shows a state where a card ‘G’ is placed at (5, 1), a card ‘2’ at (4, 3) and a card ‘1’ at (3, 4).

[0232] A step S703 analyzes the camera image, detects the cards placed on the board by image recognition technology, and identifies the coordinate value and the kind of each card. The coordinate values and kinds of the cards thus identified are retained as card arrangement data.

[0233] A step S704 compares the present card arrangement data with the prior data. If the comparison shows no change in the arrangement data, the sequence returns to the image capturing step S702. If the arrangement has changed, a step S705 updates the card arrangement data and the sequence returns to the image capturing step S702.
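
The comparison of steps S704 and S705 can be sketched as below, representing the card arrangement data as a mapping from guide coordinates to card kinds in the style of the FIG. 36 example; the diff helper is an illustrative addition, not part of the flow chart.

```python
def arrangement_changed(previous, current):
    """Step S704: compare the present card arrangement data with the
    prior data; each arrangement maps (column, row) -> card kind."""
    return previous != current

def arrangement_diff(previous, current):
    """Cards placed and removed between two captures (illustrative
    helper, not in the flow chart itself)."""
    placed = {k: v for k, v in current.items() if previous.get(k) != v}
    removed = {k: v for k, v in previous.items() if k not in current}
    return placed, removed
```

For the FIG. 36 state, the arrangement would be {(5, 1): 'G', (4, 3): '2', (3, 4): '1'}.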

[0234] In the following reference is made to FIG. 39 for explaining the position and attitude grasp unit 3329.

[0235] After the process is started in a step S801, the sequence enters an image capturing phase in a step S802, in which the image from the camera 3320 attached to the HMD of the player is captured. Then a step S803 fetches the attitude data from the three-dimensional position and attitude measuring means 3322. A step S804 identifies the markers 3331 from the image fetched in the step S802, thereby estimating the view point of the player. Then a step S805 determines the more exact position and attitude of the view point of the player, based on the attitude information obtained in S803 and the estimated information in S804. Thereafter the sequence returns to S802.
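
Step S805 combines the sensor attitude data (S803) with the marker-based estimate (S804) into a more exact view point. The embodiment does not specify the fusion method; the plain weighted blend below is shown purely as an assumption, with hypothetical names.

```python
def fuse_pose(sensor_pose, marker_pose, marker_weight=0.5):
    """Sketch of step S805: refine the sensor pose (x, y, z) with the
    marker-based estimate by a simple weighted blend. The blending rule
    is an assumption; the embodiment only states that both sources are
    combined to determine the view point more exactly."""
    if marker_pose is None:   # no markers 3331 visible: keep sensor pose
        return sensor_pose
    w = marker_weight
    return tuple((1 - w) * s + w * m
                 for s, m in zip(sensor_pose, marker_pose))
```

A real system would fuse full six-degree-of-freedom poses, typically with a filter rather than a fixed blend.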

[0236] In the following, reference is made to FIG. 40 for explaining the game management unit 3325.

[0237] After the process is started in a step S901, a step S902 waits for any instruction from the player on the proceeding of the game. If any event arrives, the sequence proceeds to a step S903 for identifying the kind of the event. If the event is a signal for advancing to a next phase, the sequence proceeds to a step S904, but, if otherwise, the sequence returns to S902 to wait for a next event.

[0238] The signal for advancing to a next phase is generated by identifying an operation inducing a phase advancement. Such operation inducing a phase advancement may be recognized by various methods.

[0239] The game advancement can be judged, for example, by recognition of a voice of the player by a voice recognition unit as shown in FIG. 37, or by image recognition of a card exposed by the player in a large image size to the camera 3320 attached to the HMD worn by the player, or by image recognition of a card placement in a specified position on the board in the image obtained from the card recognition camera 3328.

[0240] A step S904 reads the card arrangement data determined by the card reading unit 3324. Then a step S905 fetches, from the card arrangement data, the data of the cards relating to the current phase, and calculates the offensive character and the offensive characteristics (offensive power and method) of the offensive side, based on the arrangement and combination of the cards. Then a step S906 fetches, from the card arrangement data as in the step S905, the data of the cards relating to the current phase, and calculates the defensive character and the defensive characteristics (defensive power and method) of the defensive side, based on the arrangement and combination of the cards. The calculation of the offensive and defensive characteristics in the steps S905 and S906 is executed according to the rules of the game.
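
Steps S905 to S907 can be sketched as follows. The scoring by summed power is an illustrative assumption standing in for the unspecified rules of the game, which would also weigh the arrangement and combination of the cards.

```python
def battle_result(offense_cards, defense_cards):
    """Sketch of steps S905-S907: each card is (character, power); the
    total power per side decides the battle. This scoring rule is an
    assumption, not the rules of any actual game."""
    offense = sum(power for _, power in offense_cards)
    defense = sum(power for _, power in defense_cards)
    if offense > defense:
        return "offense_wins"
    if offense < defense:
        return "defense_wins"
    return "draw"
```

The returned outcome would then drive the generation of a matching battle scene in step S907.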

[0241] Then a step S907 calculates the result of battle according to the combination of characters of the offensive and defensive sides, and generates a battle scene matching such result. Then a step S908 acquires the view point of the player derived by the position and attitude grasp unit 3329, then a step S909 generates CG of the battle scene seen from the view point of the player, and a step S910 synthesizes the image obtained from the camera 3320 with the CG generated in S909. Then the real field and the virtual field of the CG are superimposed by the MR technology in the HMD worn by the player, thereby displaying a virtual CG character on the card.

[0242] However, such image synthesis is only required in case of a video see-through HMD, and not required in case of an optical see-through HMD.

[0243] As explained in the foregoing, the present invention allows the real world to be combined with the virtual CG world by displaying virtual CG corresponding to the view point, thereby realizing much higher excitement in a game utilizing a play board.

[0244] In case the card recognition camera 3328 cannot be installed in a satisfactory manner, it is also possible, in order to improve the reading accuracy, to adopt a method in which the player exposes a card in front of the HMD 3321 worn by the player, so that the kind of the card is identified by the camera 3320 before such card is placed. Also in order to improve the reading accuracy of the card recognition camera 3328, it is possible to position such camera 3328 in a specified position and to identify the kind of a card by such camera 3328 before such card is placed.

[0245] In the foregoing embodiment, a card is employed as the item of the game, but there may also be employed other items such as a piece.

[0246] Also the board may have a three-dimensionally stepped structure and the character may be displayed in a position corresponding to the height of such stepped structure. In such case, a model of the three-dimensionally stepped structure of the play board is registered in advance, and the position of the synthesized CG is controlled based on the three-dimensional structural information corresponding to the position of the view point.
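
The height control of paragraph [0246] can be sketched as a lookup into a pre-registered model of the stepped play board; the mapping representation and names below are illustrative assumptions.

```python
def cg_anchor(position, height_map):
    """Sketch of paragraph [0246]: place the CG character at a height
    taken from the pre-registered model of the stepped play board.
    'height_map' maps a board coordinate to its step height; cells not
    in the model default to the flat board (height 0)."""
    x, y = position
    return (x, y, height_map.get(position, 0.0))
```

The resulting anchor point would be transformed by the player's view point before the CG is synthesized.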

[0247] Also for identifying the card on the play board, the foregoing embodiment analyzes the image of the camera 3328 and recognizes the pattern on the card by the image recognition technology, but it is also possible to attach a bar code to the card and to identify the card with a bar code reader.

[0248] (Other Embodiments)

[0249] The present invention also includes a case of supplying a computer of an apparatus or a system, connected with various devices so as to operate such devices for realizing the functions of the aforementioned embodiments, with program codes of software realizing the functions of the aforementioned embodiments, and causing the computer (or a CPU or MPU) of such apparatus or system to execute the program codes, thereby operating such devices and realizing the functions of the aforementioned embodiments.

[0250] In such case, the program codes themselves of such software realize the functions of the aforementioned embodiments, and the program codes themselves, and means for supplying the computer with such program codes, for example a memory medium storing such program codes, constitute the present invention.

[0251] The memory medium for supplying such program codes can be, for example, a floppy disk, a hard disk, an optical disk, a magnetooptical disk, a CD-ROM, a magnetic tape, a non-volatile memory card or a ROM.

[0252] The present invention naturally includes not only a case where the functions of the aforementioned embodiments are realized by the execution of the supplied program codes by the computer but also a case where the functions of the aforementioned embodiments are realized by the cooperation of such program codes with an OS (operating system) or another application software or the like functioning on the computer.

[0253] The present invention further includes a case where the supplied program codes are once stored in a memory provided in a function expansion board of the computer or a function expansion unit connected to the computer and a CPU or the like provided on such function expansion board or function expansion unit executes all the processes or a part thereof under the instructions of such program codes.

Classifications
U.S. Classification: 273/237
International Classification: G06F3/01, G06F3/00, A63F3/00
Cooperative Classification: A63F2009/2433, A63F2300/6676, A63F2300/1087, A63F3/00643, A63F2300/6661, A63F2300/8082, A63F2300/1012, A63F2300/105, A63F3/00895, G06F3/011
European Classification: A63F3/00E, G06F3/01B, A63F3/00Q
Legal Events
Date: Sep 26, 2002 — Code: AS — Event: Assignment
Owner name: CANON KABUSHIKI KAISHA, JAPAN
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:NORO, HIDEO;SATO, HIROAKI;MATSUI, TAICHI;REEL/FRAME:013344/0317;SIGNING DATES FROM 20020919 TO 20020920