WO2007005900A9 - Mobile motion capture cameras - Google Patents

Mobile motion capture cameras

Info

Publication number
WO2007005900A9
WO2007005900A9 (PCT/US2006/026088)
Authority
WO
WIPO (PCT)
Prior art keywords
motion capture
motion
mobile
camera
cameras
Prior art date
Application number
PCT/US2006/026088
Other languages
French (fr)
Other versions
WO2007005900A2 (en)
WO2007005900A3 (en)
Inventor
Demian Gordon
Original Assignee
Sony Pictures Entertainment
Sony Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Pictures Entertainment and Sony Corporation
Priority to CA2614058A (CA2614058C)
Priority to AU2006265040A (AU2006265040B2)
Priority to KR1020087002642A (KR101299840B1)
Priority to EP06786294A (EP1908019A4)
Priority to CN2006800321792A (CN101253538B)
Priority to NZ564834A (NZ564834A)
Priority to JP2008519711A (JP2008545206A)
Publication of WO2007005900A2
Publication of WO2007005900A3
Publication of WO2007005900A9

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N5/2224: Studio circuitry; Studio devices; Studio equipment related to virtual studio applications
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S3/00: Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
    • G01S3/78: Direction-finders as above, using electromagnetic waves other than radio waves
    • G01S3/782: Systems for determining direction or deviation from predetermined direction
    • G01S3/785: Systems for determining direction or deviation from predetermined direction using adjustment of orientation of directivity characteristics of a detector or detector system to give a desired condition of signal derived from that detector or detector system
    • G01S3/786: Systems as above, the desired condition being maintained automatically
    • G01S3/7864: T.V. type tracking systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; Studio devices; Studio equipment

Definitions

  • the present invention relates to three-dimensional graphics and animation, and more particularly, to a motion capture system that enables both facial and body motion to be captured simultaneously within a volume that can accommodate plural actors.
  • Motion capture systems are used to capture the movement of a real object and map it onto a computer generated object. Such systems are often used in the production of motion pictures and video games for creating a digital representation of a person that is used as source data to create a computer graphics (CG) animation.
  • In a typical system, an actor wears a suit having markers attached at various locations (e.g., having small reflective markers attached to the body and limbs) and digital cameras record the movement of the actor from different angles while illuminating the markers. The system then analyzes the images to determine the locations (e.g., as spatial coordinates) and orientation of the markers on the actor's suit in each frame. By tracking the locations of the markers, the system creates a spatial representation of the markers over time and builds a digital representation of the actor in motion. The motion is then applied to a digital model, which may then be textured and rendered to produce a complete CG representation of the actor and/or performance. This technique has been used by special effects companies to produce incredibly realistic animations in many popular movies.
  • Motion capture systems are also used to track the motion of facial features of an actor to create a representation of the actor's facial motion and expression (e.g., laughing, crying, smiling, etc.) .
  • As with body motion capture, markers are attached to the actor's face and cameras record the actor's expressions. Since facial movement involves relatively small muscles in comparison to the larger muscles involved in body movement, the facial markers are typically much smaller than the corresponding body markers, and the cameras typically have higher resolution than cameras used for body motion capture.
  • The cameras are typically aligned in a common plane, with physical movement of the actor restricted to keep the cameras focused on the actor's face. The facial motion capture system may be incorporated into a helmet or other implement that is physically attached to the actor so as to uniformly illuminate the facial markers and minimize the degree of relative movement between the camera and face. For this reason, facial motion and body motion are usually captured in separate steps. The captured facial motion data is then combined with captured body motion data later, as part of the subsequent animation process.
  • An advantage of motion capture systems over traditional animation techniques, such as keyframing, is the capability of real-time visualization.
  • the production team can review the spatial representation of the actor's motion in real-time or near real-time, enabling the actor to alter the physical performance in order to capture optimal data.
  • Moreover, motion capture systems detect subtle nuances of physical movement that cannot be easily reproduced using other animation techniques, thereby yielding data that more accurately reflects natural movement.
  • Facial motion and body motion are inextricably linked, such that a facial expression is often enhanced by corresponding body motion. For example, an actor may utilize certain body motion (i.e., body language) to communicate emotions and emphasize corresponding facial expressions, such as using arm flapping when talking excitedly or shoulder shrugging when frowning. This linkage between facial motion and body motion is lost when the motions are captured separately, and it is difficult to synchronize these separately captured motions together.
  • The present invention provides systems and methods for capturing motion using mobile motion capture cameras.
  • A system for capturing motion comprises: a motion capture volume configured to include at least one moving object having markers defining a plurality of points on the at least one moving object; at least one mobile motion capture camera, the at least one mobile motion capture camera configured to be moveable within the motion capture volume; and a motion capture processor coupled to the at least one mobile motion capture camera to produce a digital representation of movement of the at least one moving object.
  • another system for capturing motion comprises: at least one mobile motion capture camera configured to be moveable, the at least one mobile motion capture camera operating to capture motion within a motion capture volume; and at least one mobile motion capture rig configured to enable the at least one mobile motion capture camera to be disposed on the at least one mobile motion capture rig such that cameras of the at least one mobile motion capture camera can be moved.
  • A method for capturing motion comprises: defining a motion capture volume configured to include at least one moving object having markers defining a plurality of points on the at least one moving object; moving at least one mobile motion capture camera within the motion capture volume; and processing data from the at least one mobile motion capture camera to produce a digital representation of movement of the at least one moving object.
  • A system for capturing motion comprises: means for defining a motion capture volume configured to include at least one moving object having markers defining a plurality of points on the at least one moving object; means for moving at least one mobile motion capture camera within the motion capture volume; and means for processing data from the at least one mobile motion capture camera to produce a digital representation of movement of the at least one moving object.
  • Fig. 1 is a block diagram illustrating a motion capture system in accordance with an embodiment of the present invention;
  • Fig. 2 is a top view of a motion capture volume with a plurality of motion capture cameras arranged around the periphery of the motion capture volume;
  • Fig. 3 is a side view of the motion capture volume with a plurality of motion capture cameras arranged around the periphery of the motion capture volume;
  • Fig. 4 is a top view of the motion capture volume illustrating an arrangement of facial motion cameras with respect to a quadrant of the motion capture volume;
  • Fig. 5 is a top view of the motion capture volume illustrating another arrangement of facial motion cameras with respect to corners of the motion capture volume;
  • Fig. 6 is a perspective view of the motion capture volume illustrating motion capture data reflecting two actors in the motion capture volume;
  • Fig. 7 illustrates motion capture data reflecting two actors in the motion capture volume and showing occlusion regions of the data;
  • Fig. 8 illustrates motion capture data as in Fig. 7, in which one of the two actors has been obscured by an occlusion region;
  • Fig. 9 is a block diagram illustrating an alternative embodiment of the motion capture cameras utilized in the motion capture system.
  • Fig. 10 is a block diagram illustrating a motion capture system in accordance with another embodiment of the present invention.
  • Fig. 11 is a top view of an enlarged motion capture volume defining a plurality of performance regions.
  • Figs. 12A-12C are top views of the enlarged motion capture volume of Fig. 11 illustrating another arrangement of motion capture cameras.
  • Fig. 13 shows a frontal view of one implementation of cameras positioned on a mobile motion capture rig.
  • Fig. 14 illustrates a frontal view of a particular implementation of the mobile motion capture rig shown in Fig. 13.
  • Fig. 15 illustrates a top view of a particular implementation of the mobile motion capture rig shown in Fig. 13.
  • Fig. 16 illustrates a side view of a particular implementation of the mobile motion capture rig shown in Fig. 13.
  • Fig. 17 shows a frontal view of another implementation of cameras positioned on a mobile motion capture rig.
  • Fig. 18 shows a front perspective view of yet another implementation of cameras positioned on a mobile motion capture rig.
  • Fig. 19 illustrates one implementation of a method for capturing motion.
  • the present invention also satisfies the need for a motion capture system that enables audio recording simultaneously with body and facial motion capture.
  • like element numerals are used to describe like elements illustrated in one or more of the drawings .
  • Referring to Fig. 1, a block diagram illustrates a motion capture system 10 in accordance with an embodiment of the present invention.
  • The motion capture system 10 includes a motion capture processor 12 coupled to a plurality of facial motion cameras 14₁-14N and a plurality of body motion cameras 16₁-16N. The motion capture processor 12 may further comprise a programmable computer having a data storage device 20 adapted to enable the storage of associated data files.
  • One or more computer workstations 18₁-18N may be coupled to the motion capture processor 12 using a network to enable multiple graphic artists to work with the stored data files in the process of creating a computer graphics animation.
  • The facial motion cameras 14₁-14N and body motion cameras 16₁-16N are arranged with respect to a motion capture volume (described below) to capture the combined motion of one or more actors performing within the motion capture volume.
  • Each actor's face and body is marked with markers that are detected by the facial motion cameras 14₁-14N and body motion cameras 16₁-16N during the actor's performance within the motion capture volume.
  • the markers may be reflective or illuminated elements.
  • each actor's body may be marked with a plurality of reflective markers disposed at various body locations including head, legs, arms, and torso.
  • The actor may be wearing a body suit formed of non-reflective material to which the markers are attached.
  • the actor's face will also be marked with a plurality of markers.
  • The facial markers are generally smaller than the body markers, and a larger number of facial markers are used than body markers. To capture facial motion with sufficient resolution, it is anticipated that a high number of facial markers be utilized (e.g., more than 100).
  • 152 small facial markers and 64 larger body markers are affixed to the actor.
  • the body markers may have a width or diameter in the range of 5 to 9 millimeters, while the face markers may have a width or diameter in the range of 2 to 4 millimeters.
  • a mask may be formed of each actor's face with holes drilled at appropriate locations corresponding to the desired marker locations.
  • the mask may be placed over the actor's face, and the hole locations marked directly on the face using a suitable pen.
  • the facial markers can then be applied to the actor's face at the marked locations.
  • the facial markers may be affixed to the actor's face using suitable materials known in the theatrical field, such as make-up glue. This way, a motion capture production that extends over a lengthy period of time (e.g., months) can obtain reasonably consistent motion data for an actor even though the markers are applied and removed each day.
  • The motion capture processor 12 processes two-dimensional images received from the facial motion cameras 14₁-14N and body motion cameras 16₁-16N to produce a three-dimensional digital representation of the captured motion. Particularly, the motion capture processor 12 receives the two-dimensional data from each camera and saves the data in the form of multiple data files into data storage device 20 as part of an image capture process. The two-dimensional data files are then resolved into a single set of three-dimensional coordinates that are linked together in the form of trajectory files representing movement of individual markers as part of an image processing process.
  • The image processing process uses images from one or more cameras to determine the location of each marker. For example, a marker may only be visible to a subset of the cameras due to occlusion of the views of the remaining cameras; in that case, the image processing uses the images from other cameras that have an unobstructed view of that marker to determine the marker's location in space. The image processing process evaluates the image information from multiple angles and uses a triangulation technique to determine each marker's spatial location.
  • Kinetic calculations are then performed on the trajectory files to generate the digital representation reflecting body and facial motion corresponding to the actors' performance. Using the spatial information over time, the calculations determine the progress of each marker as it moves through space.
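The patent does not name a specific reconstruction algorithm for resolving the 2D data files into 3D trajectories. As a non-authoritative sketch, a marker seen by several calibrated cameras can be triangulated with the direct linear transform; the 3x4 projection matrices and the numpy dependency are illustrative assumptions, not details from the patent.

```python
import numpy as np

def triangulate_marker(projections, observations):
    """Estimate one marker's 3D position from its 2D image locations
    in several cameras (direct linear transform).

    projections  -- list of 3x4 camera projection matrices (assumed known)
    observations -- list of (u, v) pixel coordinates, one per camera
    """
    rows = []
    for P, (u, v) in zip(projections, observations):
        # Each view contributes two linear constraints on X = (x, y, z, 1).
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.asarray(rows)
    # The homogeneous solution is the right singular vector associated
    # with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```

Repeating this for every marker in every frame, and linking the results over time, yields trajectory files of the kind described above.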
  • A suitable data management process may be used to control the storage and retrieval of the large number of files associated with the entire process to/from the data storage device 20.
  • The motion capture processor 12 and workstations 18₁-18N may utilize commercial software packages to perform these and other data processing functions, such as those available from Vicon Motion Systems or Motion Analysis Corp.
  • the motion capture system 10 further includes the capability to record audio in addition to motion.
  • The motion capture processor 12 may be coupled to the microphones 24₁-24N, either directly or through an audio interface 22.
  • The microphones 24₁-24N may be fixed in place, may be moveable on booms to follow the motion, or may be carried by the actors and communicate wirelessly with the motion capture processor 12 or audio interface 22.
  • the motion capture processor 12 would receive and store the recorded audio in the form of digital files on the data storage device 20 with a time track or other data that enables synchronization with the motion data.
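The text asks only for "a time track or other data that enables synchronization" and specifies no format. A minimal sketch of such an alignment, assuming (hypothetically) 48 kHz audio and 60 fps capture sharing one time origin:

```python
AUDIO_RATE = 48000   # audio samples per second (assumed)
MOCAP_RATE = 60      # motion capture frames per second (assumed)

def audio_span_for_frame(frame_index):
    """Return (start, end) audio sample indices covering one motion
    capture frame, given a shared time origin for both recordings."""
    start = frame_index * AUDIO_RATE // MOCAP_RATE
    end = (frame_index + 1) * AUDIO_RATE // MOCAP_RATE
    return start, end

print(audio_span_for_frame(0))    # (0, 800)
print(audio_span_for_frame(59))   # (47200, 48000): one second per 60 frames
```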
  • Figs. 2 and 3 illustrate a motion capture volume 30 surrounded by a plurality of motion capture cameras.
  • the motion capture volume 30 includes a peripheral edge 32.
  • The motion capture volume 30 is illustrated as a rectangular-shaped region subdivided by grid lines. It should be appreciated that the motion capture volume 30 actually comprises a three-dimensional space, with the grid defining a floor for the motion capture volume. Motion would be captured within the three-dimensional space above the floor.
  • the motion capture volume 30 comprises a floor area of approximately 10 feet by 10 feet, with a height of approximately 6 feet above the floor.
  • Other size and shape motion capture volumes can also be advantageously utilized to suit the particular needs of a production, such as oval, round, rectangular, polygonal, etc.
  • Fig. 2 illustrates a top view of the motion capture volume 30 with the plurality of motion capture cameras arranged around the peripheral edge 32 in a generally circular pattern.
  • Individual cameras are represented graphically as triangles with the acute angle representing the direction of the lens of the camera, so it should be appreciated that the plurality of cameras are directed toward the motion capture volume 30 from a plurality of distinct directions.
  • The plurality of motion capture cameras further include a plurality of body motion cameras 16₁-16₈ and a plurality of facial motion cameras 14₁-14N.
  • Due to the high number of facial motion cameras in Fig. 2, it should be appreciated that many are not labeled.
  • The body motion cameras 16₁-16₈ are arranged roughly two per side of the motion capture volume 30, and the facial motion cameras 14₁-14N are arranged roughly twelve per side of the motion capture volume 30.
  • The facial motion cameras 14₁-14N and the body motion cameras 16₁-16N are substantially the same, except that the focusing lenses of the facial motion cameras are selected to provide a narrower field of view than that of the body motion cameras.
  • Fig. 3 illustrates a side view of the motion capture volume 30 with the plurality of motion capture cameras arranged into roughly three tiers above the floor of the motion capture volume.
  • A lower tier includes a plurality of facial motion cameras 14₁-14₃₂, arranged roughly eight per side of the motion capture volume 30.
  • Each of the lower tier facial motion cameras 14₁-14₃₂ is aimed slightly upward so that a camera roughly opposite across the motion capture volume 30 is not included within its field of view.
  • the motion capture cameras generally include a light source (e.g., an array of light emitting diodes) used to illuminate the motion capture volume 30.
  • A middle tier includes a plurality of body motion cameras 16₁-16₈ arranged roughly two per side of the motion capture volume 30. As discussed above, the body motion cameras have a wider field of view than the facial motion cameras, enabling each camera to include a greater amount of the motion capture volume 30 within its respective field of view.
  • The upper tier includes a plurality of facial motion cameras (e.g., 14₃₃-14₅₂), arranged roughly five per side of the motion capture volume 30.
  • Each of the upper tier facial motion cameras 14₃₃-14₅₂ is aimed slightly downward so that a camera roughly opposite across the motion capture volume 30 is not included within its field of view.
  • A number of facial motion cameras (e.g., 14₅₃-14₆₀) are included in the middle tier, focused on the front edge of the motion capture volume 30. Since the actors' performance will generally face the front edge of the motion capture volume 30, the number of cameras in that region is increased to reduce the amount of data lost to occlusion.
  • Likewise, a number of facial motion cameras (e.g., 14₆₁-14₆₄) are disposed at the corners of the motion capture volume 30. These cameras also serve to reduce the amount of data lost to occlusion.
  • The body and facial motion cameras record images of the marked actors from many different angles, so that substantially all of the lateral surfaces of the actors are exposed to at least one camera at all times.
  • The arrangement of cameras provides that substantially all of the lateral surfaces of the actors are exposed to at least three cameras at all times. By placing the cameras at multiple heights, irregular surfaces of the actors can be captured from a diversity of angles.
  • the present motion capture system 10 thereby records the actors' body movement simultaneously with facial movement (i.e., expressions). As discussed above, audio recording can also be conducted simultaneously with motion capture.
  • Fig. 4 is a top view of the motion capture volume 30 illustrating an arrangement of facial motion cameras.
  • The motion capture volume 30 is graphically divided into quadrants a, b, c and d.
  • Facial motion cameras are grouped into clusters 36, 38, with each camera cluster representing a plurality of cameras.
  • each camera cluster may include two facial motion cameras located in the lower tier and one facial motion camera located in the upper tier.
  • The two camera clusters 36, 38 are physically disposed adjacent to each other, yet offset horizontally from each other by a discernable distance.
  • the two camera clusters 36, 38 are each focused on the front edge of quadrant d from an angle of approximately 45°.
  • the first camera cluster 36 has a field of view that extends from partially into the front edge of quadrant c to the right end of the front edge of quadrant d.
  • the second camera cluster 38 has a field of view that extends from the left end of the front edge of quadrant d to partially into the right edge of quadrant d.
  • the respective fields of view of the first and second camera clusters 36, 38 overlap over the substantial length of the front edge of quadrant d.
  • a similar arrangement of camera clusters is included for each of the other outer edges (coincident with peripheral edge 32) of quadrants a, b, c and d.
  • Fig. 5 is a top view of the motion capture volume 30 illustrating another arrangement of facial motion cameras.
  • the motion capture volume 30 is graphically divided into quadrants a, b, c and d.
  • Facial motion cameras are grouped into clusters 42, 44, with each camera cluster representing a plurality of cameras.
  • the clusters may comprise one or more cameras located at various heights.
  • the camera clusters 42, 44 are located at corners of the motion capture volume 30 facing into the motion capture volume. These corner camera clusters 42, 44 would record images of the actors that are not picked up by the other cameras, such as due to occlusion. Other like camera clusters would also be located at the other corners of the motion capture volume 30.
  • Having a diversity of camera heights and angles with respect to the motion capture volume 30 serves to increase the available data captured from the actors in the motion capture volume and reduces the likelihood of data occlusion. It also permits a plurality of actors to be motion captured simultaneously within the motion capture volume 30.
  • the high number and diversity of the cameras enables the motion capture volume 30 to be substantially larger than that of the prior art, thereby enabling a greater range of motion within the motion capture volume and hence more complex performances. It should be appreciated that numerous alternative arrangements of the body and facial motion cameras can also be advantageously utilized. For example, a greater or lesser number of separate tiers can be utilized, and the actual height of each camera within an individual tier can be varied.
  • the motion capture processor 12 has a fixed reference point against which movement of the body and facial markers can be measured.
  • A drawback of this arrangement is that it limits the size of the motion capture volume 30. If it were desired to capture the motion of a performance that requires a greater volume of space (e.g., a scene in which characters are running over a larger distance), the performance would have to be divided up into a plurality of segments that are motion captured separately.
  • a number of the motion capture cameras remain fixed while others are moveable.
  • The moveable motion capture cameras are moved to new position(s) and are fixed at the new position(s).
  • Alternatively, the moveable motion capture cameras are moved to follow the action.
  • the motion capture cameras perform motion capture while moving.
  • The moveable motion capture cameras can be moved using computer-controlled servomotors or can be moved manually by human camera operators. If the cameras are moved to follow the action (i.e., the cameras perform motion capture while moving), the motion capture processor 12 would track the movement of the cameras and remove this movement in the subsequent processing of the captured data to generate the three-dimensional digital representation reflecting body and facial motion corresponding to the performances of the actors.
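The patent states that tracked camera movement is removed in processing but gives no math. One minimal way to cancel a mobile camera's ego-motion is to re-express measurements made in the camera's frame in the fixed world frame using the tracked pose; the pose representation below is an assumption for illustration.

```python
import numpy as np

def to_world(marker_cam, cam_rotation, cam_position):
    """Map a marker position measured relative to a moving camera into
    the fixed world frame, cancelling the camera's own motion.

    marker_cam   -- (3,) marker position in the camera frame
    cam_rotation -- (3, 3) tracked camera orientation (camera -> world)
    cam_position -- (3,) tracked camera position in the world frame
    """
    return cam_rotation @ np.asarray(marker_cam) + np.asarray(cam_position)
```

With every measurement expressed in the world frame, the remaining motion is the actor's alone, and reconstruction proceeds as for fixed cameras.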
  • the moveable cameras can be moved individually or moved together by placing the cameras on a mobile motion capture rig.
  • a mobile motion capture rig 1300 includes six cameras 1310, 1312, 1314, 1316, 1320, 1322.
  • FIG. 13 shows a frontal view of the cameras positioned on the mobile motion capture rig 1300.
  • four cameras 1310, 1312, 1314, 1316 are motion capture cameras.
  • Two cameras 1320, 1322 are reference cameras.
  • One reference camera 1320 shows the view of the motion capture cameras 1310, 1312, 1314, 1316.
  • The second reference camera 1322 is for video reference.
  • Although Fig. 13 shows the mobile motion capture rig 1300 having four motion capture cameras and two reference cameras, the rig 1300 can include one or more motion capture cameras in other configurations.
  • the mobile motion capture rig 1300 includes two motion capture cameras.
  • the mobile motion capture rig 1300 includes one motion capture camera with a field splitter or a mirror to provide a stereo view.
  • Fig. 14, Fig. 15, and Fig. 16 illustrate front, top, and side views, respectively, of a particular implementation of the mobile motion capture rig shown in Fig. 13.
  • The dimensions of the mobile motion capture rig are approximately 40" x 40" in width and length, and approximately 14" in depth.
  • Fig. 14 shows a frontal view of the particular implementation. The mobile motion capture rig 1400 includes four mobile motion capture cameras 1410, 1412, 1414, 1416 disposed on the rig, positioned approximately 40 to 48 inches apart width- and length-wise. Each mobile motion capture camera 1410, 1412, 1414, or 1416 is placed on a rotatable cylindrical base having an approximately 2" outer diameter.
  • The mobile motion capture rig 1400 also includes reference cameras 1420, a computer and display 1430, and a view finder 1440 for framing and focus.
  • Fig. 15 shows a top view of the particular implementation of the mobile motion capture rig 1400, illustrating the offset layout of the four mobile motion capture cameras 1410, 1412, 1414, 1416.
  • the top cameras 1410, 1412 are positioned at approximately 2 inches and 6 inches in depth, respectively, while the bottom cameras 1414, 1416 are positioned at approximately 14 inches and 1 inch in depth, respectively. Further, the top cameras 1410, 1412 are approximately 42 inches apart in width while the bottom cameras 1414, 1416 are approximately 46 inches apart in width.
  • Fig. 16 shows a side view of the particular implementation of the mobile motion capture rig 1400. This view highlights the different heights at which the four mobile motion capture cameras 1410, 1412, 1414, 1416 are positioned.
  • The top camera 1410 is positioned approximately 2 inches above the mobile motion capture camera 1412, while the bottom camera 1414 is positioned approximately 2 inches below the mobile motion capture camera 1416.
  • some of the motion capture cameras should be positioned low enough (e.g., approximately 2 feet off the ground) so that the cameras can capture performances at very low heights, such as kneeling down and/or looking down on the ground.
  • a mobile motion capture rig includes a plurality of mobile motion capture cameras but no reference cameras.
  • the feedback from the mobile motion capture cameras is used as reference information.
  • various total numbers of cameras can be used in a motion capture setup, such as 200 or more cameras distributed among multiple rigs or divided among one or more movable rigs and fixed positions.
  • The setup may include 208 fixed motion capture cameras (32 performing real-time reconstruction of bodies) and 24 mobile motion capture cameras.
  • the 24 mobile motion capture cameras are distributed into six motion capture rigs, each rig including four motion capture cameras.
  • The motion capture cameras can be distributed into any number of motion capture rigs, including no rigs, such that the motion capture cameras are moved individually.
  • a mobile motion capture rig 1700 includes six motion capture cameras 1710, 1712, 1714, 1716, 1718, 1720 and two reference cameras 1730, 1732.
  • Fig. 17 shows a frontal view of the cameras positioned on the mobile motion capture rig 1700.
  • the motion capture rig 1700 can also include one or more displays to show the images captured by the reference cameras .
  • Fig. 18 illustrates a front perspective view of a mobile motion capture rig 1800 including cameras 1810, 1812, 1814, 1816, 1820.
  • the mobile motion capture rig 1800 includes servomotors that provide at least 6 degrees of freedom (6-DOF) movements to the motion capture cameras 1810, 1812, 1814, 1816, 1820.
  • the 6-DOF movements include three translation movements along the three axes X, Y, and Z, and three rotational movements about the three axes X, Y, and Z, namely tilt, pan, and rotate, respectively.
  • the motion capture rig 1800 provides the 6-DOF movements to all five cameras 1810, 1812, 1814, 1816, 1820.
  • Each of the cameras 1810, 1812, 1814, 1816, 1820 on the motion capture rig 1800 is restricted to some or all of the 6-DOF movements.
  • The upper cameras 1810, 1812 may be restricted to X and Z translation movements and pan and tilt-down rotational movements; the lower cameras 1814, 1816 may be restricted to X and Z translation movements and pan and tilt-up rotational movements; and the center camera 1820 may be left unrestricted so that it can move in all six directions.
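A 6-DOF pose of this kind is conventionally packed into a single 4x4 homogeneous transform. The sketch below follows the axis-to-rotation naming given in the text (tilt about X, pan about Y, rotate about Z); the composition order is one common choice, not something the patent specifies.

```python
import numpy as np

def rig_pose(x, y, z, tilt, pan, rotate):
    """4x4 homogeneous transform for the six degrees of freedom:
    translation along X, Y, Z plus tilt (about X), pan (about Y),
    and rotate (about Z). Angles are in radians."""
    ct, st = np.cos(tilt), np.sin(tilt)
    cp, sp = np.cos(pan), np.sin(pan)
    cr, sr = np.cos(rotate), np.sin(rotate)
    rx = np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = rz @ ry @ rx      # one common composition order
    T[:3, 3] = (x, y, z)
    return T
```

Restricting a camera to a subset of the six movements, as described above, amounts to holding the corresponding arguments fixed.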
  • the motion capture rig 1800 moves, pans, tilts, and rotates during and/or between shots so that the cameras can be moved and positioned into a fixed position or moved to follow the action.
  • the motion of the motion capture rig 1800 is controlled by one or more people.
  • the motion control can be manual, mechanical, or automatic.
  • the motion capture rig moves according to a pre-programmed set of motions.
  • the motion capture rig moves automatically based on received input, such as to track a moving actor based on RF, IR, sonic, or visual signals received by a rig motion control system.
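The patent leaves the tracking control law unspecified. As a purely hypothetical sketch, a rig motion control system could drive the pan axis with a rate-limited proportional controller toward a bearing derived from the received RF, IR, sonic, or visual signal; all names and constants here are illustrative.

```python
def pan_rate_command(target_bearing, current_pan, gain=0.5, max_rate=0.2):
    """Rate-limited proportional pan command (radians/s) that steers
    the rig toward a tracked actor. target_bearing would come from
    the tracking signal; gain and max_rate are assumed values."""
    error = target_bearing - current_pan
    rate = gain * error
    return max(-max_rate, min(max_rate, rate))
```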
  • The lighting for one or more fixed or mobile motion capture cameras is enhanced in brightness.
  • The increased brightness allows a reduced f-stop setting to be used, and so can increase the depth of the volume for which the camera is capturing video for motion capture.
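The aperture/depth trade-off the text relies on can be made concrete with the standard thin-lens depth-of-field approximation; the lens and circle-of-confusion values below are illustrative, not from the patent.

```python
def depth_of_field(f_mm, f_number, subject_m, coc_mm=0.03):
    """Approximate depth of field (metres) around a subject distance."""
    f = f_mm / 1000.0
    c = coc_mm / 1000.0
    H = f * f / (f_number * c) + f            # hyperfocal distance (m)
    if subject_m >= H:
        return float("inf")                   # in focus out to infinity
    near = subject_m * (H - f) / (H + subject_m - 2 * f)
    far = subject_m * (H - f) / (H - subject_m)
    return far - near

# A hypothetical 35 mm lens focused at 3 m: a higher f-number deepens
# the in-focus zone, and brighter lighting makes that setting usable.
print(round(depth_of_field(35, 2.8, 3.0), 2))   # ~1.27 m
print(round(depth_of_field(35, 8.0, 3.0), 2))   # ~5.26 m
```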
  • The mobile motion capture rig includes machine vision cameras using 24P video (i.e., 24 frames per second with progressive image storage) and 60 frames per second motion capture cameras.
  • Fig. 19 illustrates one implementation of a method 1900 for capturing motion using mobile cameras.
  • a motion capture volume configured to include at least one moving object is defined, at box 1902.
  • the moving object has markers defining a plurality of points on the moving object.
  • the volume can be an open space defined by use guidelines (e.g., actors and cameras are to stay within 10 meters of a given location) or a restricted space defined by barriers (e.g., walls) or markers (e.g., tape on a floor).
  • the volume is defined by the area that can be captured by the motion capture cameras (e.g., the volume moves with the mobile motion capture cameras) .
  • At box 1904, at least one mobile motion capture camera is moved around a periphery of the motion capture volume such that substantially all laterally exposed surfaces of the moving object, while in motion within the motion capture volume, are within a field of view of the mobile motion capture cameras at substantially all times.
  • Alternatively, one or more mobile motion capture cameras move within the volume itself, instead of, or in addition to, one or more cameras moving around the periphery.
  • data from the motion capture cameras is processed, at box 1906, to produce a digital representation of movement of the moving object.
  • Fig. 6 is a perspective view of the motion capture volume 30 illustrating motion capture data reflecting two actors 52, 54 within the motion capture volume.
  • The view of Fig. 6 reflects how the motion capture data would be viewed by an operator of a workstation 18, as described above with respect to Fig. 1. Similar to Figs. 2 and 3 (above), Fig. 6 further illustrates a plurality of facial motion cameras, including cameras 14₁-14₁₂ located in a lower tier, cameras 14₃₃-14₄₀ located in an upper tier, and cameras 14₆₀, 14₆₂ located in the corners of motion capture volume 30.
  • the two actors 52, 54 appear as a cloud of dots corresponding to the reflective markers on their body and face. As shown and discussed above, there are a much higher number of markers located on the actors' faces than on their bodies. The movement of the actors' bodies and faces is tracked by the motion capture system 10, as substantially described above.
  • motion capture data is shown as it would be viewed by an operator of a workstation 18.
  • the motion capture data reflects two actors 52, 54 in which the high concentration of dots reflects the actors' faces and the other dots reflect body points.
  • the motion capture data further includes three occlusion regions 62, 64, 66 illustrated as oval shapes.
  • The occlusion regions 62, 64, 66 represent places in which reliable motion data was not captured due to light from one of the cameras falling within the fields of view of other cameras. This light overwhelms the illumination from the reflective markers and is interpreted by the motion capture processor 12 as a body or facial marker.
  • An image processing process executed by the motion capture processor 12 generates a virtual mask that filters out the camera illumination by defining the occlusion regions 62, 64, 66 illustrated in Figs. 7 and 8.
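The patent does not describe the mask's geometry beyond the oval shapes drawn in Figs. 7 and 8. A minimal sketch that discards 2D detections falling inside elliptical occlusion regions (the ellipse model and numpy usage are assumptions):

```python
import numpy as np

def apply_virtual_mask(points_2d, occlusion_ovals):
    """Drop 2D marker detections inside known occlusion regions.

    points_2d       -- (N, 2) array of detected marker centroids
    occlusion_ovals -- list of (cx, cy, rx, ry) ellipse parameters
    """
    pts = np.asarray(points_2d, dtype=float)
    keep = np.ones(len(pts), dtype=bool)
    for cx, cy, rx, ry in occlusion_ovals:
        # Normalized squared distance: <= 1 means inside the ellipse.
        d = ((pts[:, 0] - cx) / rx) ** 2 + ((pts[:, 1] - cy) / ry) ** 2
        keep &= d > 1.0
    return pts[keep]
```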
  • The production company can attempt to control the performance of the actors to minimize the amount of data lost to these occlusion regions.
  • Fig. 9 illustrates an embodiment of the motion capture system that reduces the occlusion problem.
  • Fig. 9 illustrates cameras 84 and 74 that are physically disposed opposite one another across the motion capture volume (not shown).
  • the cameras 84, 74 include respective light sources 88, 78 adapted to illuminate the fields of view of the cameras.
  • the cameras 84, 74 are further provided with polarized filters 86, 76 disposed in front of the camera lenses.
  • the polarized filters 86, 76 are arranged (i.e., rotated) out of phase with respect to each other.
  • Light source 88 emits light that is polarized by polarized filter 86.
  • polarized light reaches polarized filter 76 of camera 74, but, rather than passing through to camera 74, the polarized light is reflected off of or absorbed by polarized filter 76.
  • the camera 84 will not "see” the illumination from camera 74, thereby avoiding formation of an occlusion region and obviating the need for virtual masking.
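The cross-polarization effect follows Malus's law: the fraction of polarized light transmitted through a second polarizer rotated by θ is cos²θ. A one-line check (the ideal-filter behavior is an assumption):

```python
import math

def transmitted_fraction(theta_degrees):
    """Malus's law for an ideal pair of polarizers offset by theta."""
    return math.cos(math.radians(theta_degrees)) ** 2

print(transmitted_fraction(0))    # 1.0  -- aligned filters pass the light
print(transmitted_fraction(90))   # ~0.0 -- crossed filters, as in Fig. 9
```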
  • Alternatively, natural features of the actors can be used as markers to track motion.
  • the markers can comprise ultrasonic or electromagnetic emitters that are detected by corresponding receivers arranged around the motion capture volume.
  • It should be appreciated that the cameras described above are merely optical sensors, and that other types of sensors can also be advantageously utilized.
  • the motion capture system 100 has substantially increased data capacity over the preceding embodiment described above, and is suitable to capture a substantially larger amount of data associated with an enlarged motion capture volume.
  • the motion capture system 100 includes three separate networks tied together by a master server 110 that acts as a repository for collected data.
  • the networks include a data network 120, an artists network 130, and a reconstruction render network 140.
  • the master server 110 provides central control and data storage for the motion capture system 100.
  • the data network 120 communicates the two-dimensional (2D) data captured during a performance to the master server 110.
  • the artists network 130 and reconstruction render network 140 may subsequently access these same 2D data files from the master server 110.
  • The master server 110 may further include a memory system 112 suitable for storing large volumes of data.
  • The data network 120 provides an interface with the motion capture cameras and provides initial data processing of the captured motion data, which is then provided to the master server 110 for storage in memory 112. More particularly, the data network 120 is coupled to a plurality of motion capture cameras 122₁-122N that are arranged with respect to a motion capture volume (described below) to capture the combined motion of one or more actors performing within the motion capture volume.
  • The data network 120 may also be coupled to a plurality of microphones 126₁-126N, either directly or through a suitable audio interface 124, to capture audio associated with the performance (e.g., dialog).
  • One or more user workstations 128 may be coupled to the data network 120 to provide operation, control, and monitoring of the function of the data network.
  • The data network 120 may be provided by a suitable high-speed computer network.
  • The artists network 130 provides a high-speed network connection for a plurality of data checkers. The data checkers access the 2D data files from the master server 110 to verify the acceptability of the data. For example, the data checkers may review the data to verify that critical aspects of the performance were captured. If important aspects of the performance were not captured, such as if a portion of the data was occluded, the performance can be repeated as necessary until the captured data is deemed acceptable.
  • The data checkers and associated workstations 132₁-132N may be located in close physical proximity to the motion capture volume in order to facilitate communication with the actors and/or scene director.
  • the reconstruction render network 140 provides high speed data processing computers suitable for performing automated reconstruction of the 2D data files and rendering the 2D data files into three-dimensional (3D) animation files that are stored by the master server 110.
  • One or more user workstations 142₁-142N may be coupled to the reconstruction render network 140 to provide operation, control, and monitoring of the function of the reconstruction render network.
  • animators accessing the artists network 130 will also access the 3D animation files in the course of producing the final computer graphics animation.
  • The mobile motion capture cameras capture motion (e.g., video) and provide it to a motion capture processing system, such as the data network 120 (see Fig. 10).
  • the motion capture processing system uses the captured motion to determine the location and movement of markers on a target (or targets) in front of the motion capture cameras.
  • The processing system uses the location information to build and update a three-dimensional model (a point cloud) representing the target(s).
  • The processing system combines the motion capture information from the various sources to produce the model.
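The patent calls the model a point cloud without defining a data structure. A minimal running sketch, with all class and method names invented for illustration:

```python
import numpy as np

class PointCloudModel:
    """Per-frame 3D marker positions for the captured target(s)."""

    def __init__(self):
        self.frames = []                    # one (N, 3) array per frame

    def update(self, marker_positions):
        """Append the reconstructed marker positions for a new frame."""
        self.frames.append(np.asarray(marker_positions, dtype=float))

    def trajectory(self, marker_index):
        """Path of one marker through space over all captured frames."""
        return np.stack([f[marker_index] for f in self.frames])
```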
  • The processing system determines the location of the motion capture rig and the location of the cameras in the rig by correlating the motion capture information for those cameras with information captured by other motion capture cameras (e.g., reference cameras as part of calibration).
  • the processing system can automatically and dynamically calibrate the motion capture cameras as the motion capture rig moves.
  • the calibration may be based on other motion capture information, such as from other rigs or from fixed cameras, determining how the motion capture rig information correlates with the rest of the motion capture model .
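One concrete way to realize this dynamic calibration is rigid registration: markers reconstructed in the moving rig's local frame are aligned against the same markers as seen by the fixed cameras, yielding the rig's pose. The Kabsch algorithm below is one possible choice; the patent names no specific method.

```python
import numpy as np

def register_rig(rig_points, world_points):
    """Least-squares rigid transform (R, t) with world ~= R @ rig + t,
    given matched (N, 3) point sets from the rig and fixed cameras."""
    pr = rig_points.mean(axis=0)
    pw = world_points.mean(axis=0)
    H = (rig_points - pr).T @ (world_points - pw)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])              # guard against reflections
    R = Vt.T @ D @ U.T
    t = pw - R @ pr
    return R, t
```

Re-running the registration as the rig moves keeps its cameras correlated with the rest of the motion capture model.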
  • Fig. 11 illustrates a top view of another motion capture volume 150.
  • the motion capture volume 150 is a generally rectangular shaped region subdivided by gridlines.
  • The motion capture volume 150 is intended to represent a significantly larger space, and can be further subdivided into four sections or quadrants (A, B, C, D).
  • Each section has a size roughly equal to that of the motion capture volume 30 described above, so this motion capture volume 150 has four times the surface area of the preceding embodiment.
  • An additional section E is centered within the space and overlaps partially with each of the other sections.
  • the gridlines further include numerical coordinates (1-5) along the vertical axes and alphabetic coordinates (A-E) along the horizontal axes.
  • a particular location on the motion capture volume can be defined by its alphanumeric coordinates, such as region 4A.
  • Such designation permits management of the motion capture volume 150 in terms of providing direction to the actors as to where to conduct their performance and/or where to place props.
  • the gridlines and alphanumeric coordinates may be physically marked onto the floor of the motion capture volume 150 for the convenience of the actors and/or scene director. It should be appreciated that these gridlines and alphanumeric coordinates would not be included in the 2D data files.
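If the 20 ft x 20 ft floor of volume 150 is split evenly into the 5 x 5 alphanumeric grid, each cell is 4 ft square; that cell size is an inference, not a figure from the patent. A sketch mapping a designation such as "4A" to floor coordinates:

```python
CELL_FT = 4.0   # inferred: 20 ft floor / 5 grid divisions

def region_bounds(coordinate):
    """Floor bounds (x0, y0, x1, y1) in feet for a grid region like
    '4A': digit 1-5 along one axis, letter A-E along the other."""
    row = int(coordinate[0]) - 1
    col = ord(coordinate[1].upper()) - ord("A")
    x0, y0 = col * CELL_FT, row * CELL_FT
    return (x0, y0, x0 + CELL_FT, y0 + CELL_FT)

print(region_bounds("4A"))   # (0.0, 12.0, 4.0, 16.0)
```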
  • Each of the sections A-E has a square shape having dimensions of 10 ft by 10 ft, for a total area of 400 sq ft, i.e., roughly four times larger than the motion capture volume of the preceding embodiment. It should be appreciated that other shapes and sizes for the motion capture volume 150 can also be advantageously utilized.
  • Referring to Figs. 12A-12C, an arrangement of motion capture cameras 122₁-122N is illustrated with respect to a peripheral region around the motion capture volume 150.
  • The peripheral region provides for the placement of scaffolding to support cameras, lighting, and other equipment, and is illustrated as regions 152₁-152₄.
  • The motion capture cameras 122₁-122N are located generally evenly in each of the regions 152₁-152₄ surrounding the motion capture volume 150, with a diversity of camera heights and angles.
  • The motion capture cameras 122₁-122N are each oriented to focus on individual ones of the sections of the motion capture volume 150, rather than on the entire motion capture volume.
  • There are two hundred total motion capture cameras, with groups of forty individual cameras devoted to each one of the five sections A-E of the motion capture volume 150.
  • The arrangement of motion capture cameras 122₁-122N may be defined by distance from the motion capture volume and height off the floor of the motion capture volume 150.
  • Fig. 12A illustrates an arrangement of a first group of motion capture cameras 122₁-122₃₀ that are oriented the greatest distance from the motion capture volume 150 and at the generally lowest height.
  • In region 152₁ (of which the other regions are substantially identical), there are three rows of cameras: a first row 172 disposed radially outward with respect to the motion capture volume 150 at the highest height from the floor, a second row 174 at a slightly lower height (e.g., 4 ft), and a third row 176 disposed radially inward with respect to the first and second rows at the lowest height (e.g., 1 ft).
  • Fig. 12B illustrates an arrangement of a second group of motion capture cameras 122₃₁-122₁₆₀ that are oriented closer to the motion capture volume 150 than the first group and at a height greater than that of the first group. The arrangement within region 152₁ is substantially identical in the other regions.
  • Fig. 12C illustrates an arrangement of a third group of motion capture cameras 122₁₆₁-122₂₀₀ that are oriented closer to the motion capture volume 150 than the second group and at a height greater than that of the second group. The arrangement within region 152₁ is likewise substantially identical in the other regions.
  • the motion capture cameras are focused onto respective sections of the motion capture volume 150 in a similar manner as described above with respect to Fig. 4. For each of the sections A-E of the motion capture volume 150, motion capture cameras from each of the four sides will be focused onto the section.
  • the cameras from the first group most distant from the motion capture volume may focus on the sections of the motion capture volume closest thereto.
  • Section A of the motion capture volume 150 may be covered by a combination of certain low-height cameras from the first row 182 and third row 186 of peripheral region 152₁, low-height cameras from the first row 182 and third row 186 of peripheral region 152₄, medium-height cameras from the second row 184 and third row 186 of peripheral region 152₃, and medium-height cameras from the second row 184 and third row 186 of peripheral region 152₂.
  • Figs. 12A and 12B further reveal a greater concentration of motion capture cameras in the center of the peripheral regions for capture of motion within the center section E.
  • One implementation includes one or more programmable processors and corresponding computer system components to store and execute computer instructions, such as to provide the motion capture processing of the video captured by the mobile motion capture cameras. Such a computer includes one or more processors, one or more data storage components (e.g., volatile or non-volatile memory modules and persistent optical and magnetic storage devices, such as hard and floppy disk drives, CD-ROM drives, and magnetic tape drives), one or more input devices (e.g., mice and keyboards), and one or more output devices (e.g., display consoles and printers).
  • the computer programs include executable code that is usually stored in a persistent storage medium and then copied into memory at run-time.
  • the processor executes the code by retrieving program instructions from memory in a prescribed order.
  • a combination of motion capture rigs with different numbers of cameras can be used to capture motion of targets before the cameras.
  • Different numbers of fixed and mobile cameras can achieve desired results and accuracy, for example, 50% fixed cameras and 50% mobile cameras; 90% fixed cameras and 10% mobile cameras; or 100% mobile cameras. Therefore, the configuration of the cameras (e.g., number, position, fixed vs. mobile, etc.) can be selected to match the desired result. Accordingly, the present invention is not limited to only those implementations described above .

Abstract

A system for capturing motion (10) comprises: a motion capture volume (30) configured to include at least one moving object having markers defining a plurality of points on the at least one moving object; at least one mobile motion capture camera (16), the at least one mobile motion capture camera (16) configured to be moveable within the motion capture volume (30); and a motion capture processor (12) coupled to the at least one mobile motion capture camera (16) to produce a digital representation of movement of the at least one moving object.

Description

MOBILE MOTION CAPTURE CAMERAS
CROSS -REFERENCE TO RELATED APPLICATIONS
This application claims priority pursuant to 35 U.S.C. §120 as a continuation-in-part of U.S. Patent Application Serial No. 11/004,320, filed December 3, 2004, entitled
"System and Method for Capturing Facial and Body Motion", which is a continuation-in-part of U.S. Patent Application Serial No. 10/427,114, filed May 1, 2003, entitled "System and Method for Capturing Facial and Body Motion." This application also claims the benefit of priority of co-pending U.S. Provisional Patent Application Serial No. 60/696,193, filed July 1, 2005, entitled "Mobile Motion Capture Cameras."
Benefits of priority of these applications, including the filing dates of May 1, 2003, December 3, 2004, and July 1, 2005, are hereby claimed, and the disclosures of the above-referenced patent applications are hereby incorporated by reference.
BACKGROUND
The present invention relates to three-dimensional graphics and animation, and more particularly, to a motion capture system that enables both facial and body motion to be captured simultaneously within a volume that can accommodate plural actors.
Motion capture systems are used to capture the movement of a real object and map it onto a computer generated object. Such systems are often used in the production of motion pictures and video games for creating a digital
representation of a person that is used as source data to create a computer graphics (CG) animation. In a typical system, an actor wears a suit having markers attached at various locations (e.g., having small reflective markers attached to the body and limbs) and digital cameras record the movement of the actor from different angles while illuminating the markers. The system then analyzes the images to determine the locations (e.g., as spatial
coordinates) and orientation of the markers on the actor's suit in each frame. By tracking the locations of the markers, the system creates a spatial representation of the markers over time and builds a digital representation of the actor in motion. The motion is then applied to a digital model, which may then be textured and rendered to produce a complete CG representation of the actor and/or performance. This technique has been used by special effects companies to produce incredibly realistic animations in many popular movies. Motion capture systems are also used to track the motion of facial features of an actor to create a representation of the actor's facial motion and expression (e.g., laughing, crying, smiling, etc.). As with body motion capture, markers are attached to the actor's face and cameras record the actor's expressions. Since facial movement involves
relatively small muscles in comparison to the larger muscles involved in body movement, the facial markers are typically much smaller than the corresponding body markers, and the cameras typically have higher resolution than cameras usually used for body motion capture. The cameras are typically aligned in a common plane with physical movement of the actor restricted to keep the cameras focused on the actor's face. The facial motion capture system may be incorporated into a helmet or other implement that is physically attached to the actor so as to uniformly illuminate the facial markers and minimize the degree of relative movement between the camera and face. For this reason, facial motion and body motion are usually captured in separate steps. The captured facial motion data is then combined with captured body motion data later as part of the subsequent animation process.
An advantage of motion capture systems over traditional animation techniques, such as keyframing, is the capability of real-time visualization. The production team can review the spatial representation of the actor's motion in real-time or near real-time, enabling the actor to alter the physical performance in order to capture optimal data. Moreover, motion capture systems detect subtle nuances of physical movement that cannot be easily reproduced using other
animation techniques, thereby yielding data that more
accurately reflects natural movement. As a result, animation created using source material that was collected using a motion capture system will exhibit a more lifelike
appearance.
Notwithstanding these advantages of motion capture systems, the separate capture of facial and body motion often results in animation data that is not truly lifelike. Facial motion and body motion are inextricably linked, such that a facial expression is often enhanced by corresponding body motion. For example, an actor may utilize certain body motion (i.e., body language) to communicate emotions and emphasize corresponding facial expressions, such as using arm flapping when talking excitedly or shoulder shrugging when frowning. This linkage between facial motion and body motion is lost when the motions are captured separately, and it is difficult to synchronize these separately captured motions together. When the facial motion and body motion are combined, the resulting animation will often appear
noticeably abnormal. Since it is an objective of motion capture to enable the creation of increasingly realistic animation, the decoupling of facial and body motion
represents a significant deficiency of conventional motion capture systems.
Another drawback of conventional motion capture systems is that motion data of an actor may be occluded by
interference with other objects, such as props or other actors. Specifically, if a portion of the body or facial markers is blocked from the field of view of the digital cameras, then data concerning that body or facial portion is not collected. This results in an occlusion or hole in the motion data. While the occlusion can be filled in later during post-production using conventional computer graphics techniques, the fill data lacks the quality of the actual motion data, resulting in a defect of the animation that may be discernable to the viewing audience. To avoid this problem, conventional motion capture systems limit the number of objects that can be captured at one time, e.g., to a single actor. This also tends to make the motion data appear less realistic, since the quality of an actor's performance often depends upon interaction with other actors and objects. Moreover, it is difficult to combine these separate
performances together in a manner that appears natural.
Yet another drawback of conventional motion capture systems is that audio is not recorded simultaneously with the motion capture. In animation, it is common to record the audio track first, and then animate the character to match the audio track. During facial motion capture, the actor will lip synch to the recorded audio track. This inevitably results in a further reduction of the visual quality of the motion data, since it is difficult for an actor to perfectly synchronize facial motion to the audio track. Also, body motion often affects the way in which speech is delivered, and the separate capture of body and facial motion increases the difficulty of synchronizing the audio track to produce a cohesive end product.
Accordingly, it would be desirable to provide a motion capture system that overcomes these and other drawbacks of the prior art. More specifically, it would be desirable to provide a motion capture system that enables both body and facial motion to be captured simultaneously within a volume that can accommodate plural actors. It would also be
desirable to provide a motion capture system that enables audio recording simultaneously with body and facial motion capture.
SUMMARY
The present invention provides systems and methods for capturing motion using mobile motion capture cameras.
In one implementation, a system for capturing motion comprises: a motion capture volume configured to include at least one moving object having markers defining a plurality of points on the at least one moving object; at least one mobile motion capture camera, the at least one mobile motion capture camera configured to be moveable within the motion capture volume; and a motion capture processor coupled to the at least one mobile motion capture camera to produce a digital representation of movement of the at least one moving object.
In another implementation, another system for capturing motion comprises: at least one mobile motion capture camera configured to be moveable, the at least one mobile motion capture camera operating to capture motion within a motion capture volume; and at least one mobile motion capture rig configured to enable the at least one mobile motion capture camera to be disposed on the at least one mobile motion capture rig such that cameras of the at least one mobile motion capture camera can be moved.
In another implementation, a method for capturing motion comprises: defining a motion capture volume configured to include at least one moving object having markers defining a plurality of points on the at least one moving object; moving at least one mobile motion capture camera within the motion capture volume; and processing data from the at least one mobile motion capture camera to produce a digital
representation of movement of the at least one moving object.
In yet another implementation, a system for capturing motion comprises: means for defining a motion capture volume configured to include at least one moving object having markers defining a plurality of points on the at least one moving object; means for moving at least one mobile motion capture camera within the motion capture volume; and means for processing data from the at least one mobile motion capture camera to produce a digital representation of
movement of the at least one moving object.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a block diagram illustrating a motion capture system in accordance with an embodiment of the present invention;
Fig. 2 is a top view of a motion capture volume with a plurality of motion capture cameras arranged around the periphery of the motion capture volume;
Fig. 3 is a side view of the motion capture volume with a plurality of motion capture cameras arranged around the periphery of the motion capture volume;
Fig. 4 is a top view of the motion capture volume illustrating an arrangement of facial motion cameras with respect to a quadrant of the motion capture volume;
Fig. 5 is a top view of the motion capture volume illustrating another arrangement of facial motion cameras with respect to corners of the motion capture volume;
Fig. 6 is a perspective view of the motion capture volume illustrating motion capture data reflecting two actors in the motion capture volume;
Fig. 7 illustrates motion capture data reflecting two actors in the motion capture volume and showing occlusion regions of the data;
Fig. 8 illustrates motion capture data as in Fig. 7, in which one of the two actors has been obscured by an occlusion region;
Fig. 9 is a block diagram illustrating an alternative embodiment of the motion capture cameras utilized in the motion capture system;
Fig. 10 is a block diagram illustrating a motion capture system in accordance with another embodiment of the present invention;
Fig. 11 is a top view of an enlarged motion capture volume defining a plurality of performance regions; and
Figs. 12A-12C are top views of the enlarged motion capture volume of Fig. 11 illustrating another arrangement of motion capture cameras.
Fig. 13 shows a frontal view of one implementation of cameras positioned on a mobile motion capture rig.
Fig. 14 illustrates a frontal view of a particular implementation of the mobile motion capture rig shown in Fig. 13.
Fig. 15 illustrates a top view of a particular implementation of the mobile motion capture rig shown in Fig. 13.
Fig. 16 illustrates a side view of a particular implementation of the mobile motion capture rig shown in Fig. 13.
Fig. 17 shows a frontal view of another implementation of cameras positioned on a mobile motion capture rig.
Fig. 18 shows a front perspective view of yet another implementation of cameras positioned on a mobile motion capture rig.
Fig. 19 illustrates one implementation of a method for capturing motion.
DETAILED DESCRIPTION
As will be further described below, the present
invention satisfies the need for a motion capture system that enables both body and facial motion to be captured
simultaneously within a volume that can accommodate plural actors. Further, the present invention also satisfies the need for a motion capture system that enables audio recording simultaneously with body and facial motion capture. In the detailed description that follows, like element numerals are used to describe like elements illustrated in one or more of the drawings.
Referring first to Fig. 1, a block diagram illustrates a motion capture system 10 in accordance with an embodiment of the present invention. The motion capture system
10 includes a motion capture processor 12 adapted to
communicate with a plurality of facial motion cameras 14(1)-14(N) and a plurality of body motion cameras 16(1)-16(N). The motion capture processor 12 may further comprise a programmable computer having a data storage device 20 adapted to enable the storage of associated data files. One or more computer workstations 18(1)-18(N) may be coupled to the motion capture processor 12 using a network to enable multiple graphic artists to work with the stored data files in the process of creating a computer graphics animation. The facial motion cameras 14(1)-14(N) and body motion cameras 16(1)-16(N) are arranged with respect to a motion capture volume (described below) to capture the combined motion of one or more actors performing within the motion capture volume.
Each actor's face and body is marked with markers that are detected by the facial motion cameras 14(1)-14(N) and body motion cameras 16(1)-16(N) during the actor's performance within the motion capture volume. The markers may be reflective or illuminated elements. Specifically, each actor's body may be marked with a plurality of reflective markers disposed at various body locations including head, legs, arms, and torso. The actor may be wearing a body suit formed of non-reflective material to which the markers are attached. The actor's face will also be marked with a plurality of markers. The facial markers are generally smaller than the body markers, and a larger number of facial markers are used than body markers. To capture facial motion with sufficient resolution, it is anticipated that a high number of facial markers be utilized (e.g., more than 100). In one implementation, 152 small facial markers and 64 larger body markers are affixed to the actor. The body markers may have a width or diameter in the range of 5 to 9 millimeters, while the face markers may have a width or diameter in the range of 2 to 4 millimeters.
To ensure consistency of the placement of the face markers, a mask may be formed of each actor's face with holes drilled at appropriate locations corresponding to the desired marker locations. The mask may be placed over the actor's face, and the hole locations marked directly on the face using a suitable pen. The facial markers can then be applied to the actor's face at the marked locations. The facial markers may be affixed to the actor's face using suitable materials known in the theatrical field, such as make-up glue. This way, a motion capture production that extends over a lengthy period of time (e.g., months) can obtain reasonably consistent motion data for an actor even though the markers are applied and removed each day.
The motion capture processor 12 processes two-dimensional images received from the facial motion cameras 14(1)-14(N) and body motion cameras 16(1)-16(N) to produce a three-dimensional digital representation of the captured motion. Particularly, the motion capture processor 12 receives the two-dimensional data from each camera and saves the data in the form of multiple data files into data storage device 20 as part of an image capture process. The two-dimensional data files are then resolved into a single set of three-dimensional coordinates that are linked together in the form of trajectory files representing movement of individual markers as part of an image processing process. The image processing process uses images from one or more cameras to determine the location of each marker. For example, a marker may only be visible to a subset of the cameras due to
occlusion by facial features or body parts of actors within the motion capture volume. In that case, the image
processing uses the images from other cameras that have an unobstructed view of that marker to determine the marker's location in space.
By using images from multiple cameras to determine the location of a marker, the image processing process evaluates the image information from multiple angles and uses a
triangulation process to determine the spatial location.
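By way of illustration only, the triangulation step can be sketched in a few lines of code. This minimal example is not part of the original disclosure; the two-ray midpoint method and all names are illustrative assumptions. It estimates a marker's spatial location from two camera rays, each formed from a camera's origin and the viewing direction through the marker's 2D image position:

    import numpy as np

    def triangulate_midpoint(origin_a, dir_a, origin_b, dir_b):
        """Estimate a marker's 3D position as the midpoint of the
        shortest segment between two camera rays (illustrative sketch)."""
        oa, ob = np.asarray(origin_a, float), np.asarray(origin_b, float)
        da = np.asarray(dir_a, float); da /= np.linalg.norm(da)
        db = np.asarray(dir_b, float); db /= np.linalg.norm(db)
        w0 = oa - ob
        a, b, c = da @ da, da @ db, db @ db
        d, e = da @ w0, db @ w0
        denom = a * c - b * b
        if abs(denom) < 1e-9:          # rays nearly parallel: no stable fix
            return None
        s = (b * e - c * d) / denom    # parameter along ray A
        t = (a * e - b * d) / denom    # parameter along ray B
        return (oa + s * da + ob + t * db) / 2.0

With more than two cameras seeing the marker, the same idea extends to a least-squares intersection over all available rays.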
Kinetic calculations are then performed on the trajectory files to generate the digital representation reflecting body and facial motion corresponding to the actors' performance. Using the spatial information over time, the calculations determine the progress of each marker as it moves through space. A suitable data management process may be used to control the storage and retrieval of the large number of files associated with the entire process to/from the data storage device 20. The motion capture processor 12 and workstations 18(1)-18(N) may utilize commercial software packages to perform these and other data processing functions, such as those available from Vicon Motion Systems or Motion Analysis Corp.
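The kinetic calculations on trajectory files can be pictured with a short sketch; the array layout and frame-rate parameter are assumptions, not details from the patent:

    import numpy as np

    def marker_kinematics(positions, fps):
        """Given an (N, 3) array of a marker's triangulated positions,
        one row per frame, return per-frame velocity vectors and speeds
        (a hypothetical trajectory-file layout)."""
        pos = np.asarray(positions, float)
        velocity = np.gradient(pos, axis=0) * fps   # units per second
        speed = np.linalg.norm(velocity, axis=1)
        return velocity, speed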
The motion capture system 10 further includes the capability to record audio in addition to motion. A
plurality of microphones 24(1)-24(N) may be arranged around the motion capture volume to pick up audio (e.g., spoken dialog) during the actors' performance. The motion capture processor 12 may be coupled to the microphones 24(1)-24(N), either directly or through an audio interface 22. The microphones 24(1)-24(N) may be fixed in place, or may be moveable on booms to follow the motion, or may be carried by the actors and communicate wirelessly with the motion capture processor 12 or audio interface 22. The motion capture processor 12 would receive and store the recorded audio in the form of digital files on the data storage device 20 with a time track or other data that enables synchronization with the motion data.
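Synchronization via a shared time track can be made concrete with a small sketch; the parameter names and the assumption of a common start timecode are illustrative, not from the patent:

    def audio_sample_for_frame(frame_index, mocap_fps, audio_rate,
                               audio_offset_s=0.0):
        """Map a motion capture frame to the corresponding audio sample,
        assuming both recordings share a start timecode."""
        t = frame_index / mocap_fps + audio_offset_s
        return round(t * audio_rate)

    # e.g., frame 120 at 60 fps against 48 kHz audio -> sample 96000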
Figs. 2 and 3 illustrate a motion capture volume 30 surrounded by a plurality of motion capture cameras. The motion capture volume 30 includes a peripheral edge 32. The motion capture volume 30 is illustrated as a rectangular-shaped region subdivided by grid lines. It should be appreciated that the motion capture volume 30 actually comprises a three-dimensional space with the grid defining a floor for the motion capture volume. Motion would be captured within the three-dimensional space above the floor. In one implementation of the invention, the motion capture volume 30 comprises a floor area of approximately 10 feet by 10 feet, with a height of approximately 6 feet above the floor. Other size and shape motion capture volumes can also be advantageously utilized to suit the particular needs of a production, such as oval, round, rectangular, polygonal, etc.
Fig. 2 illustrates a top view of the motion capture volume 30 with the plurality of motion capture cameras arranged around the peripheral edge 32 in a generally circular pattern. Individual cameras are represented graphically as triangles with the acute angle representing the direction of the lens of the camera, so it should be appreciated that the plurality of cameras are directed toward the motion capture volume 30 from a plurality of distinct directions. More particularly, the plurality of motion capture cameras further include a plurality of body motion cameras 16(1)-16(8) and a plurality of facial motion cameras 14(1)-14(N). In view of the high number of facial motion cameras in Fig. 2, it should be appreciated that many are not labeled. In the present embodiment of the invention, there are many more facial motion cameras than body motion cameras. The body motion cameras 16(1)-16(8) are arranged roughly two per side of the motion capture volume 30, and the facial motion cameras 14(1)-14(N) are arranged roughly twelve per side of the motion capture volume 30. The facial motion cameras 14(1)-14(N) and the body motion cameras 16(1)-16(N) are substantially the same except that the focusing lenses of the facial motion cameras are selected to provide a narrower field of view than that of the body motion cameras.
Fig. 3 illustrates a side view of the motion capture volume 30 with the plurality of motion capture cameras arranged into roughly three tiers above the floor of the motion capture volume. A lower tier includes a plurality of facial motion cameras 14(1)-14(32), arranged roughly eight per side of the motion capture volume 30. In an embodiment of the invention, each of the lower tier facial motion cameras 14(1)-14(32) is aimed slightly upward so that a camera roughly opposite the motion capture volume 30 is not included within the field of view. The motion capture cameras generally include a light source (e.g., an array of light emitting diodes) used to illuminate the motion capture volume 30. It is desirable to not have a motion capture camera "see" the light source of another motion capture camera, since the light source will appear to the motion capture camera as a bright reflectance that will overwhelm data from the reflective markers. A middle tier includes a plurality of body motion cameras 16(1)-16(8) arranged roughly two per side of the motion capture volume 30. As discussed above, the body motion cameras have a wider field of view than the facial motion cameras, enabling each camera to include a greater amount of the motion capture volume 30 within its respective field of view.
The upper tier includes a plurality of facial motion cameras (e.g., 14(33)-14(52)), arranged roughly five per side of the motion capture volume 30. In an embodiment of the invention, each of the upper tier facial motion cameras 14(33)-14(52) is aimed slightly downward so that a camera roughly opposite the motion capture volume 30 is not included within the field of view. Shown on the left-hand side of Fig. 2, a number of facial motion cameras (e.g., 14(53)-14(60)) are also included in the middle tier focused on the front edge of the motion capture volume 30. Since the actors' performance will be generally facing the front edge of the motion capture volume 30, the number of cameras in that region is increased to reduce the amount of data lost to occlusion. In addition, a number of facial motion cameras (e.g., 14(61)-14(84)) are included in the middle tier focused on the corners of the motion capture volume 30. These cameras also serve to reduce the amount of data lost to occlusion.
The body and facial motion cameras record images of the marked actors from many different angles so that
substantially all of the lateral surfaces of the actors are exposed to at least one camera at all times. More
specifically, it is preferred that the arrangement of cameras provide that substantially all of the lateral surfaces of the actors are exposed to at least three cameras at all times. By placing the cameras at multiple heights, irregular
surfaces can be modeled as the actor moves within the motion capture volume 30. The present motion capture system 10 thereby records the actors' body movement simultaneously with facial movement (i.e., expressions). As discussed above, audio recording can also be conducted simultaneously with motion capture.
Fig. 4 is a top view of the motion capture volume 30 illustrating an arrangement of facial motion cameras. The motion capture volume 30 is graphically divided into
quadrants, labeled a, b, c and d. Facial motion cameras are grouped into clusters 36, 38, with each camera cluster representing a plurality of cameras. For example, one such camera cluster may include two facial motion cameras located in the lower tier and one facial motion camera located in the upper tier. Other arrangements of cameras within a cluster can also be advantageously utilized. The two camera clusters 36, 38 are physically disposed adjacent to each other, yet offset horizontally from each other by a discernable
distance. The two camera clusters 36, 38 are each focused on the front edge of quadrant d from an angle of approximately 45°. The first camera cluster 36 has a field of view that extends from partially into the front edge of quadrant c to the right end of the front edge of quadrant d. The second camera cluster 38 has a field of view that extends from the left end of the front edge of quadrant d to partially into the right edge of quadrant d. Thus, the respective fields of view of the first and second camera clusters 36, 38 overlap over the substantial length of the front edge of quadrant d. A similar arrangement of camera clusters is included for each of the other outer edges (coincident with peripheral edge 32) of quadrants a, b, c and d.
Fig. 5 is a top view of the motion capture volume 30 illustrating another arrangement of facial motion cameras.
As in Fig. 4, the motion capture volume 30 is graphically divided into quadrants a, b, c and d. Facial motion cameras are grouped into clusters 42, 44, with each camera cluster representing a plurality of cameras. As in the embodiment of Fig. 4, the clusters may comprise one or more cameras located at various heights. In this arrangement, the camera clusters 42, 44 are located at corners of the motion capture volume 30 facing into the motion capture volume. These corner camera clusters 42, 44 would record images of the actors that are not picked up by the other cameras, such as due to occlusion. Other like camera clusters would also be located at the other corners of the motion capture volume 30.
Having a diversity of camera heights and angles with respect to the motion capture volume 30 serves to increase the available data captured from the actors in the motion capture volume and reduces the likelihood of data occlusion. It also permits a plurality of actors to be motion captured simultaneously within the motion capture volume 30.
Moreover, the high number and diversity of the cameras enables the motion capture volume 30 to be substantially larger than that of the prior art, thereby enabling a greater range of motion within the motion capture volume and hence more complex performances. It should be appreciated that numerous alternative arrangements of the body and facial motion cameras can also be advantageously utilized. For example, a greater or lesser number of separate tiers can be utilized, and the actual height of each camera within an individual tier can be varied.
In the foregoing description of the motion capture cameras, the body and facial motion cameras remain fixed in place. This way, the motion capture processor 12 has a fixed reference point against which movement of the body and facial markers can be measured. A drawback of this arrangement is that it limits the size of the motion capture volume 30. If it were desired to capture the motion of a performance that requires a greater volume of space (e.g., a scene in which characters are running over a larger distance), the performance would have to be divided up into a plurality of segments that are motion captured separately. In an alternative implementation, a number of the motion capture cameras remain fixed while others are moveable. In one configuration, the moveable motion capture cameras are moved to new position(s) and are fixed at the new position(s). In another configuration, the moveable motion capture cameras are moved to follow the action. Thus, in this configuration, the motion capture cameras perform motion capture while moving.
The moveable motion capture cameras can be moved using computer-controlled servomotors or can be moved manually by human camera operators. If the cameras are moved to follow the action (i.e., the cameras perform motion capture while moving), the motion capture processor 12 would track the movement of the cameras, and remove this movement in the subsequent processing of the captured data to generate the three-dimensional digital representation reflecting body and facial motion corresponding to the performances of actors. The moveable cameras can be moved individually or moved together by placing the cameras on a mobile motion capture rig. Thus, using mobile or movable cameras for motion capture provides improved flexibility in motion capture production. In one implementation, illustrated in Fig. 13, a mobile motion capture rig 1300 includes six cameras 1310, 1312, 1314, 1316, 1320, 1322. Fig. 13 shows a frontal view of the cameras positioned on the mobile motion capture rig 1300. In the illustrated example of Fig. 13, four cameras 1310, 1312, 1314, 1316 are motion capture cameras. Two cameras 1320, 1322 are reference cameras. One reference camera 1320 is used to show the view of the motion capture cameras 1310, 1312, 1314, 1316. The second reference camera 1322 is for video reference and adjustment. However, different camera configurations are also possible, with different numbers of motion capture cameras and reference cameras.
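The step of removing the tracked camera movement from the captured data can be sketched as a change of coordinate frames. In this illustrative example (not from the patent; the pose inputs would come from whatever tracks the rig, such as fixed cameras or servo encoders), marker positions expressed in a moving camera's frame are re-expressed in the fixed world frame:

    import numpy as np

    def to_world_frame(points_cam, rig_rotation, rig_translation):
        """Re-express marker positions captured by a moving camera in
        the fixed world frame, given the camera's tracked pose for the
        same frame: p_world = R @ p_cam + t."""
        R = np.asarray(rig_rotation, float)     # 3x3 world-from-camera rotation
        t = np.asarray(rig_translation, float)  # camera origin in world coords
        return np.asarray(points_cam, float) @ R.T + t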
Although Fig. 13 shows the mobile motion capture rig 1300 having four motion capture cameras and two reference cameras, the rig 1300 can include as few as one motion capture camera. For example, in one implementation, the mobile motion capture rig 1300 includes two motion capture cameras. In another implementation, the mobile motion capture rig 1300 includes one motion capture camera with a field splitter or a mirror to provide a stereo view.
Fig. 14, Fig. 15, and Fig. 16 illustrate front, top, and side views, respectively, of a particular implementation of the mobile motion capture rig shown in Fig. 13. The dimensions of the mobile motion capture rig are approximately 40" x 40" in width and length, and approximately 14" in depth.
Fig. 14 shows a frontal view of the particular implementation of the mobile motion capture rig 1400. Four mobile motion capture cameras 1410, 1412, 1414, 1416 are disposed on the mobile motion capture rig 1400, and are positioned approximately 40 to 48 inches apart width- and length-wise. Each mobile motion capture camera 1410, 1412, 1414, or 1416 is placed on a rotatable cylindrical base having an approximately 2" outer diameter. The mobile motion capture rig 1400 also includes reference cameras 1420, a computer and display 1430, and a view finder 1440 for framing and focus.
Fig. 15 shows a top view of the particular
implementation of the mobile motion capture rig 1400. This view illustrates the offset layout of the four mobile motion capture cameras 1410, 1412, 1414, 1416. The top cameras 1410, 1412 are positioned at approximately 2 inches and 6 inches in depth, respectively, while the bottom cameras 1414, 1416 are positioned at approximately 14 inches and 1 inch in depth, respectively. Further, the top cameras 1410, 1412 are approximately 42 inches apart in width while the bottom cameras 1414, 1416 are approximately 46 inches apart in width.
Fig. 16 shows a side view of the particular
implementation of the mobile motion capture rig 1400. This view highlights the different heights at which the four mobile motion capture cameras 1410, 1412, 1414, 1416 are positioned. For example, the top camera 1410 is positioned approximately 2 inches above the mobile motion capture camera 1412, while the bottom camera 1414 is positioned approximately 2 inches below the mobile motion capture camera 1416. In general, some of the motion capture cameras should be positioned low enough (e.g., approximately 2 feet off the ground) so that the cameras can capture performances at very low heights, such as kneeling down and/or looking down on the ground.
In another implementation, for example, a mobile motion capture rig includes a plurality of mobile motion capture cameras but no reference cameras. Thus, in this
implementation, the feedback from the mobile motion capture cameras is used as reference information.
Further, various total numbers of cameras can be used in a motion capture setup, such as 200 or more cameras distributed among multiple rigs or divided among one or more movable rigs and fixed positions. For example, the setup may include 208 fixed motion capture cameras (32 performing real-time reconstruction of bodies) and 24 mobile motion capture cameras. In one example, the 24 mobile motion capture cameras are distributed into six motion capture rigs, each rig including four motion capture cameras. In other
examples, the motion capture cameras are distributed into any number of motion capture rigs, including no rigs, in which case the motion capture cameras are moved individually.
In yet another implementation, illustrated in Fig. 17, a mobile motion capture rig 1700 includes six motion capture cameras 1710, 1712, 1714, 1716, 1718, 1720 and two reference cameras 1730, 1732. Fig. 17 shows a frontal view of the cameras positioned on the mobile motion capture rig 1700.
Further, the motion capture rig 1700 can also include one or more displays to show the images captured by the reference cameras.
Fig. 18 illustrates a front perspective view of a mobile motion capture rig 1800 including cameras 1810, 1812, 1814, 1816, 1820. In the illustrated implementation of Fig. 18, the mobile motion capture rig 1800 includes servomotors that provide at least 6 degrees of freedom (6-DOF) movements to the motion capture cameras 1810, 1812, 1814, 1816, 1820.
Thus, the 6-DOF movements include three translation movements along the three axes X, Y, and Z, and three rotational movements about the three axes X, Y, and Z, namely tilt, pan, and rotate, respectively.
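The 6-DOF pose of a camera mount can be written as a single homogeneous transform. The following sketch follows the axis naming above (tilt about X, pan about Y, rotate about Z); the composition order is an assumption, not a detail from the patent:

    import numpy as np

    def pose_6dof(tx, ty, tz, tilt, pan, rotate):
        """Build a 4x4 transform for a 6-DOF mount from three
        translations and three rotations (angles in radians)."""
        cx, sx = np.cos(tilt), np.sin(tilt)
        cy, sy = np.cos(pan), np.sin(pan)
        cz, sz = np.cos(rotate), np.sin(rotate)
        Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        T = np.eye(4)
        T[:3, :3] = Rz @ Ry @ Rx       # tilt, then pan, then rotate
        T[:3, 3] = (tx, ty, tz)
        return T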
In one implementation, the motion capture rig 1800 provides the 6-DOF movements to all five cameras 1810, 1812, 1814, 1816, 1820. In another implementation, each of the cameras 1810, 1812, 1814, 1816, 1820 on the motion capture rig 1800 is restricted to some or all of the 6-DOF movements. For example, the upper cameras 1810, 1812 may be restricted to X and Z translation movements and pan and tilt-down rotational movements; the lower cameras 1814, 1816 may be restricted to X and Z translation movements and pan and tilt-up rotational movements; and the center camera 1820 may not be restricted, so that it can move in all six directions (i.e., X, Y, Z translation movements and tilt, pan, and rotate rotational movements). In a further implementation, the motion capture rig 1800 moves, pans, tilts, and rotates during and/or between shots so that the cameras can be moved and positioned into a fixed position or moved to follow the action.
In one implementation, the motion of the motion capture rig 1800 is controlled by one or more people. The motion control can be manual, mechanical, or automatic. In another implementation, the motion capture rig moves according to a pre-programmed set of motions. In another implementation, the motion capture rig moves automatically based on received input, such as to track a moving actor based on RF, IR, sonic, or visual signals received by a rig motion control system.
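Automatic rig motion of this kind is commonly implemented as a feedback loop on the bearing to the tracked signal source. A minimal proportional-control sketch follows; the gain, rate limit, and all names are illustrative assumptions, not details of the patent:

    import math

    def pan_rate_command(target_bearing, current_pan, gain=2.0,
                         max_rate=1.0):
        """Return a pan rate (rad/s) that turns the rig toward a
        tracked RF/IR/sonic/visual source."""
        error = target_bearing - current_pan
        # wrap into (-pi, pi] so the rig turns the short way around
        error = (error + math.pi) % (2 * math.pi) - math.pi
        return max(-max_rate, min(max_rate, gain * error))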
In another implementation, the lighting for one or more fixed or mobile motion capture cameras is enhanced in
brightness. For example, additional lights are placed with each camera. The increased brightness allows a reduced f-stop setting to be used and so can increase the depth of the volume for which the camera is capturing video for motion capture.
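The relationship between lighting, aperture, and capture depth follows from the standard thin-lens depth-of-field approximation; the sketch below is illustrative (the 0.03 mm circle of confusion is a common 35 mm default, not a value from the patent):

    def depth_of_field(focal_mm, f_number, subject_dist_mm, coc_mm=0.03):
        """Approximate near/far limits of acceptable focus."""
        hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
        near = hyperfocal * subject_dist_mm / (hyperfocal + subject_dist_mm - focal_mm)
        if subject_dist_mm >= hyperfocal:
            return near, float("inf")
        far = hyperfocal * subject_dist_mm / (hyperfocal - subject_dist_mm + focal_mm)
        return near, far

    # Stopping down (a larger f-number), which brighter lighting permits,
    # widens the near-to-far span and so deepens the usable capture volume.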
In another implementation, the mobile motion capture rig includes machine vision cameras using 24P video (i.e., 24 frames per second with progressive image storage) and 60 frames per second motion capture cameras.
Fig. 19 illustrates one implementation of a method 1900 for capturing motion using mobile cameras. Initially, a motion capture volume configured to include at least one moving object is defined, at box 1902. The moving object has markers defining a plurality of points on the moving object. The volume can be an open space defined by use guidelines (e.g., actors and cameras are to stay within 10 meters of a given location) or a restricted space defined by barriers (e.g., walls) or markers (e.g., tape on a floor). In another implementation, the volume is defined by the area that can be captured by the motion capture cameras (e.g., the volume moves with the mobile motion capture cameras). Then, at box 1904, at least one mobile motion capture camera is moved around a periphery of the motion capture volume such that substantially all laterally exposed surfaces of the moving object while in motion within the motion capture volume are within a field of view of the mobile motion capture cameras at substantially all times. In another implementation, one or more mobile motion capture cameras move within the volume, rather than only around the perimeter (instead of, or in addition to, one or more cameras moving around the periphery).
Finally, data from the motion capture cameras is processed, at box 1906, to produce a digital representation of movement of the moving object.
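The three boxes of Fig. 19 map naturally onto a small processing loop. This skeleton is illustrative only; the callables stand in for the triangulation and tracking machinery, and none of the names come from the patent:

    def capture_motion(frames_2d, camera_poses, triangulate):
        """Sketch of boxes 1902-1906: the volume is taken as given,
        per-frame 2D marker observations arrive from the moving
        cameras (box 1904), and each frame is reconstructed into 3D
        points in the fixed world frame (box 1906)."""
        model = []                                # growing point-cloud history
        for frame, poses in zip(frames_2d, camera_poses):
            model.append(triangulate(frame, poses))
        return model                              # digital representation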
Fig. 6 is a perspective view of the motion capture volume 30 illustrating motion capture data reflecting two actors 52, 54 within the motion capture volume. The view of Fig. 6 reflects how the motion capture data would be viewed by an operator of a workstation 18 as described above with respect to Fig. 1. Similar to Figs. 2 and 3 (above), Fig. 6 further illustrates a plurality of facial motion cameras, including cameras 14(1)-14(12) located in a lower tier, cameras 14(33)-14(40) located in an upper tier, and cameras 14(60), 14(62) located in the corners of motion capture volume 30. The two actors 52, 54 appear as a cloud of dots corresponding to the reflective markers on their body and face. As shown and discussed above, there are a much higher number of markers located on the actors' faces than on their bodies. The movement of the actors' bodies and faces is tracked by the motion capture system 10, as substantially described above.
Referring now to Figs. 7 and 8, motion capture data is shown as it would be viewed by an operator of a workstation 18. As in Fig. 6, the motion capture data reflects two actors 52, 54, in which the high concentration of dots reflects the actors' faces and the other dots reflect body points. The motion capture data further includes three occlusion regions 62, 64, 66 illustrated as oval shapes. The occlusion regions 62, 64, 66 represent places in which reliable motion data was not captured due to light from one of the cameras falling within the fields of view of other cameras. This light overwhelms the illumination from the reflective markers, and is interpreted by the motion capture processor 12 as a body or facial marker. The image
processing process executed by the motion capture processor 12 generates a virtual mask that filters out the camera illumination by defining the occlusion regions 62, 64, 66 illustrated in Figs. 7 and 8. The production company can attempt to control the performance of the actors to
physically avoid movement that is obscured by the occlusion regions. Nevertheless, some loss of data capture inevitably occurs, as shown in Fig. 8 in which the face of actor 54 has been almost completely obscured by physical movement into the occlusion region 64.
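The virtual mask can be pictured as a filter applied to the 2D detections before reconstruction. In this illustrative sketch, the elliptical-region model echoes the oval shapes of Figs. 7 and 8 but is otherwise an assumption:

    def apply_virtual_mask(detections, occlusion_ellipses):
        """Drop 2D marker detections inside occlusion regions, each
        modeled as an ellipse (cx, cy, rx, ry) in image coordinates."""
        kept = []
        for x, y in detections:
            inside = any(((x - cx) / rx) ** 2 + ((y - cy) / ry) ** 2 <= 1.0
                         for cx, cy, rx, ry in occlusion_ellipses)
            if not inside:
                kept.append((x, y))
        return kept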
Fig. 9 illustrates an embodiment of the motion capture system that reduces the occlusion problem. Particularly, Fig. 9 illustrates cameras 84 and 74 that are physically disposed opposite one another across the motion capture volume (not shown) . The cameras 84, 74 include respective light sources 88, 78 adapted to illuminate the fields of view of the cameras. The cameras 84, 74 are further provided with polarized filters 86, 76 disposed in front of the camera lenses. As will be clear from the following description, the polarized filters 86, 76 are arranged (i.e., rotated) out of phase with respect to each other. Light source 88 emits light that is polarized by polarized filter 86. The
polarized light reaches polarized filter 76 of camera 74, but, rather than passing through to camera 74, the polarized light is reflected off of or absorbed by polarized filter 76. As a result, the camera 74 will not "see" the illumination from light source 88 of camera 84, thereby avoiding formation of an occlusion region and obviating the need for virtual masking.
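The blocking effect of the out-of-phase filters follows Malus's law: the intensity passed by the second polarizer falls off as the squared cosine of the angle between the filter axes. A one-line illustration (not part of the original disclosure):

    import math

    def transmitted_intensity(i0, relative_angle_deg):
        """Malus's law: intensity passed by the second polarizer."""
        return i0 * math.cos(math.radians(relative_angle_deg)) ** 2

    # Filters rotated 90 degrees apart: cos(90)^2 = 0, so essentially
    # none of the opposing camera's lamp light reaches the sensor.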
While the preceding description referred to the use of optical sensing of physical markers affixed to the body and face to track motion, it should be appreciated by those skilled in the art that alternative ways to track motion can also be advantageously utilized. For example, instead of affixing markers, physical features of the actors (e.g., shapes of nose or eyes) can be used as natural markers to track motion. Such a feature-based motion capture system would eliminate the task of affixing markers to the actors prior to each performance. In addition, alternative media other than optical can be used to detect corresponding markers. For example, the markers can comprise ultrasonic or electromagnetic emitters that are detected by corresponding receivers arranged around the motion capture volume. In this regard, it should be appreciated that the cameras described above are merely optical sensors and that other types of sensors can also be advantageously utilized.
Referring now to Fig. 10, a block diagram illustrates a motion capture system 100 in accordance with an alternative embodiment of the present invention. The motion capture system 100 has substantially increased data capacity over the preceding embodiment described above, and is suitable to capture a substantially larger amount of data associated with an enlarged motion capture volume. The motion capture system 100 includes three separate networks tied together by a master server 110 that acts as a repository for collected data. The networks include a data network 120, an artists network 130, and a reconstruction render network 140. The master server 110 provides central control and data storage for the motion capture system 100. The data network 120 communicates the two-dimensional (2D) data captured during a performance to the master server 110. The artists network 130 and reconstruction render network 140 may subsequently access these same 2D data files from the master server 110. The master server 110 may further include a memory system 112 suitable for storing large volumes of data.
The data network 120 provides an interface with the motion capture cameras and provides initial data processing of the captured motion data, which is then provided to the master server 110 for storage in memory 112. More
particularly, the data network 120 is coupled to a plurality of motion capture cameras 122(1)-122(N) that are arranged with respect to a motion capture volume (described below) to capture the combined motion of one or more actors performing within the motion capture volume. The data network 120 may also be coupled to a plurality of microphones 126(1)-126(N), either directly or through a suitable audio interface 124, to capture audio associated with the performance (e.g., dialog). One or more user workstations 128 may be coupled to the data network 120 to provide operation, control and monitoring of the function of the data network. In an embodiment of the invention, the data network 120 may be provided by a
plurality of motion capture data processing stations, such as those available from Vicon Motion Systems or Motion Analysis Corp., along with a plurality of slave processing stations for collating captured data into 2D files.
The artists network 130 provides a high speed
infrastructure for a plurality of data checkers and animators using suitable workstations 132(1)-132(N). The data checkers access the 2D data files from the master server 110 to verify the acceptability of the data. For example, the data checkers may review the data to verify that critical aspects of the performance were captured. If important aspects of the performance were not captured, such as if a portion of the data was occluded, the performance can be repeated as necessary until the captured data is deemed acceptable. The data checkers and associated workstations 132(1)-132(N) may be located in close physical proximity to the motion capture volume in order to facilitate communication with the actors and/or scene director.
The reconstruction render network 140 provides high speed data processing computers suitable for performing automated reconstruction of the 2D data files and rendering the 2D data files into three-dimensional (3D) animation files that are stored by the master server 110. One or more user workstations 142(1)-142(N) may be coupled to the reconstruction render network 140 to provide operation, control and
monitoring of the function of the reconstruction render network. The
animators accessing the artists network 130 will also access the 3D animation files in the course of producing the final computer graphics animation.
Similar to the description above for fixed motion capture cameras, motion (e.g., video) captured by the mobile cameras of the motion capture rig is provided to a motion capture processing system, such as the data network 120 (see Fig. 10). Moreover, the motion capture processing system uses the captured motion to determine the location and movement of markers on a target (or targets) in front of the motion capture cameras. The processing system uses the location information to build and update a three-dimensional model (a point cloud) representing the target(s). In a system using multiple motion capture rigs or a combination of one or more motion capture rigs and one or more fixed cameras, the processing system combines the motion capture information from the various sources to produce the model.
In one implementation, the processing system determines the location of the motion capture rig and the location of the cameras in the rig by correlating the motion capture information for those cameras with information captured by other motion capture cameras (e.g., reference cameras as part of calibration). The processing system can automatically and dynamically calibrate the motion capture cameras as the motion capture rig moves. The calibration may be based on other motion capture information, such as from other rigs or from fixed cameras, determining how the motion capture rig information correlates with the rest of the motion capture model.
In another implementation, the processing system
calibrates the cameras using motion capture information representing the location of fixed tracking markers or dots attached to known fixed locations in the background. Thus, the processing system ignores markers or dots on moving targets for the purpose of calibration.
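Such marker-based calibration amounts to recurring pose estimation from known 3D-to-2D correspondences. The sketch below uses OpenCV's stock PnP solver; treating the problem this way, and all parameter names, are assumptions rather than details of the patent:

    import numpy as np
    import cv2

    def calibrate_mobile_camera(fixed_points_3d, observed_points_2d,
                                camera_matrix, dist_coeffs=None):
        """Re-estimate a mobile camera's pose from fixed tracking
        markers at known world locations (needs at least four
        non-collinear correspondences)."""
        ok, rvec, tvec = cv2.solvePnP(
            np.asarray(fixed_points_3d, dtype=np.float64),
            np.asarray(observed_points_2d, dtype=np.float64),
            camera_matrix,
            dist_coeffs if dist_coeffs is not None else np.zeros((4, 1)))
        if not ok:
            raise RuntimeError("pose not recovered for this frame")
        R, _ = cv2.Rodrigues(rvec)     # rotation vector -> 3x3 matrix
        return R, tvec                 # world-to-camera pose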
Fig. 11 illustrates a top view of another motion capture volume 150. As in the foregoing embodiment, the motion capture volume 150 is a generally rectangular shaped region subdivided by gridlines. In this embodiment, the motion capture volume 150 is intended to represent a significantly larger space, and can be further subdivided into four sections or quadrants (A, B, C, D) . Each section has a size roughly equal to that of the motion capture volume 30 described above, so this motion capture volume 150 has four times the surface area of the preceding embodiment. An additional section E is centered within the space and overlaps partially with each of the other sections. The gridlines further include numerical coordinates (1-5) along the vertical axes and alphabetic coordinates (A-E) along the horizontal axes. This way, a particular location on the motion capture volume can be defined by its alphanumeric coordinates, such as region 4A. Such designation permits management of the motion capture volume 150 in terms of providing direction to the actors as to where to conduct their performance and/or where to place props. The gridlines and alphanumeric coordinates may be physically marked onto the floor of the motion capture volume 150 for the convenience of the actors and/or scene director. It should be appreciated that these gridlines and alphanumeric coordinates would not be included in the 2D data files.
In a preferred embodiment of the invention, each of the sections A-E has a square shape having dimensions of 10 ft by 10 ft, for a total area of 400 sq ft, i.e., roughly four times larger than the motion capture volume of the preceding embodiment. It should be appreciated that other shapes and sizes for the motion capture volume 150 can also be
advantageously utilized.
Referring now to Figs. 12A-12C, an arrangement of motion capture cameras 122(1)-122(N) is illustrated with respect to a peripheral region around the motion capture volume 150. The peripheral region provides for the placement of scaffolding to support cameras, lighting, and other equipment, and is illustrated as regions 152(1)-152(4). The motion capture cameras 122(1)-122(N) are located generally evenly in each of the regions 152(1)-152(4) surrounding the motion capture volume 150 with a diversity of camera heights and angles. Moreover, the motion capture cameras 122(1)-122(N) are each oriented to focus on individual ones of the sections of the motion capture volume 150, rather than on the entire motion capture volume. In an embodiment of the invention, there are two hundred total motion capture cameras, with groups of forty individual cameras devoted to each one of the five sections A-E of the motion capture volume 150.
More specifically, the arrangement of motion capture cameras 122(1)-122(N) may be defined by distance from the motion capture volume and height off the floor of the motion capture volume 150. Fig. 12A illustrates an arrangement of a first group of motion capture cameras 122(1)-122(80) that are oriented the greatest distance from the motion capture volume 150 and at the generally lowest height. Referring to region 152(1) (of which the other regions are substantially identical), there are three rows of cameras, with a first row 172 disposed radially outward with respect to the motion capture volume 150 at the highest height from the floor (e.g., 6 ft), a second row 174 at a slightly lower height (e.g., 4 ft), and a third row 176 disposed radially inward with respect to the first and second rows and at a lowest height (e.g., 1 ft). In the embodiment, there are eighty total motion capture cameras in this first group.
Fig. 12B illustrates an arrangement of a second group of motion capture cameras 122(81)-122(160) that are oriented closer to the motion capture volume 150 than the first group and at a height greater than that of the first group. Referring to region 152(1) (of which the other regions are substantially identical), there are three rows of cameras, with a first row 182 disposed radially outward with respect to the motion capture volume at the highest height from the floor (e.g., 14 ft), a second row 184 at a slightly lower height (e.g., 11 ft), and a third row 186 disposed radially inward with respect to the first and second rows and at a lowest height (e.g., 9 ft). In the embodiment, there are eighty total motion capture cameras in this second group.
Fig. 12C illustrates an arrangement of a third group of motion capture cameras 122(161)-122(200) that are oriented closer to the motion capture volume 150 than the second group and at a height greater than that of the second group. Referring to region 152(1) (of which the other regions are substantially identical), there are three rows of cameras, with a first row 192 disposed radially outward with respect to the motion capture volume at the highest height from the floor (e.g., 21 ft), a second row 194 at a slightly lower height (e.g., 18 ft), and a third row 196 disposed radially inward with respect to the first and second rows at a lower height (e.g., 17 ft). In the embodiment, there are forty total motion capture cameras in this third group. It should be
appreciated that other arrangements of motion capture cameras and different numbers of motion capture cameras can also be advantageously utilized.
The motion capture cameras are focused onto respective sections of the motion capture volume 150 in a similar manner as described above with respect to Fig. 4. For each of the sections A-E of the motion capture volume 150, motion capture cameras from each of the four sides will be focused onto the section. By way of example, the cameras from the first group most distant from the motion capture volume may focus on the sections of the motion capture volume closest thereto.
Conversely, the cameras from the third group closest to the motion capture volume may focus on the sections of the motion capture volume farthest therefrom. Cameras from one end of one of the sides may focus on sections at the other end. In a more specific example, section A of the motion capture volume 150 may be covered by a combination of certain low height cameras from the first row 182 and third row 186 of peripheral region 152(1), low height cameras from the first row 182 and third row 186 of peripheral region 152(4), medium height cameras from the second row 184 and third row 186 of peripheral region 152(3), and medium height cameras from the second row 184 and third row 186 of peripheral region 152(2). Figs. 12A and 12B further reveal a greater concentration of motion capture cameras in the center of the peripheral regions for capture of motion within the center section E.
By providing a diversity of angles and heights, with many cameras focusing on the sections of the motion capture volume 150, there is far greater likelihood of capturing the entire performance while minimizing incidents of undesirable occlusions. In view of the large number of cameras used in this arrangement, it may be advantageous to place light shields around each of the cameras to cut down on detection of extraneous light from another camera located opposite the motion capture volume. In this embodiment of the invention, the same cameras are used to capture both facial and body motion at the same time, so there is no need for separate body and facial motion cameras. Different sized markers may be utilized on the actors in order to distinguish between facial and body motion, with generally larger markers used overall in order to ensure data capture given the larger motion capture volume. For example, 9 millimeter markers may be used for the body and 6 millimeter markers used for the face.
Various implementations of the invention are realized in electronic hardware, computer software, or combinations of these technologies. One implementation includes one or more programmable processors and corresponding computer system components to store and execute computer instructions, such as to provide the motion capture processing of the video captured by the mobile motion capture cameras and to
calibrate those cameras during motion. Other implementations include one or more computer programs executed by a
programmable processor or computer. In general, each
computer includes one or more processors, one or more data storage components (e.g., volatile or non-volatile memory modules and persistent optical and magnetic storage devices, such as hard and floppy disk drives, CD-ROM drives, and magnetic tape drives), one or more input devices (e.g., mice and keyboards), and one or more output devices (e.g., display consoles and printers).
The computer programs include executable code that is usually stored in a persistent storage medium and then copied into memory at run-time. The processor executes the code by retrieving program instructions from memory in a prescribed order. When executing the program code, the computer
receives data from the input and/or storage devices, performs operations on the data, and then delivers the resulting data to the output and/or storage devices.
Various illustrative implementations of the present invention have been described. However, one of ordinary skill in the art will see that additional implementations are also possible and within the scope of the present invention. For example, in one variation, a combination of motion capture rigs with different numbers of cameras can be used to capture motion of targets in front of the cameras. Different numbers of fixed and mobile cameras can achieve desired results and accuracy, for example, 50% fixed cameras and 50% mobile cameras; 90% fixed cameras and 10% mobile cameras; or 100% mobile cameras. Therefore, the configuration of the cameras (e.g., number, position, fixed vs. mobile, etc.) can be selected to match the desired result. Accordingly, the present invention is not limited to only those implementations described above.

CLAIMS

What is Claimed is:
1. A system for capturing motion, comprising:
a motion capture volume configured to include at least one moving object having markers defining a plurality of points on said at least one moving object;
at least one mobile motion capture camera, said at least one mobile motion capture camera configured to be moveable within said motion capture volume; and
a motion capture processor coupled to said at least one mobile motion capture camera to produce a digital
representation of movement of said at least one moving object.
2. The system for capturing motion of claim 1, further comprising at least one fixed motion capture camera.
3. The system for capturing motion of claim 1, wherein said at least one mobile motion capture camera moves around the periphery of the volume.
4. The system for capturing motion of claim 1, wherein said at least one mobile motion capture camera is moved such that substantially all laterally exposed surfaces of said at least one moving object while in motion within said motion capture volume are within a field of view of said plurality of motion capture cameras at substantially all times.
5. The system for capturing motion of claim 1, wherein said plurality of motion capture cameras is arranged to provide a larger effective volume of said motion capture volume than possible with all fixed motion capture cameras.
6. The system for capturing motion of claim 1, wherein said at least one mobile motion capture camera is moved from a first position to a second position and is fixed at said second position.
7. The system for capturing motion of claim 1, wherein said at least one mobile motion capture camera is moved to follow the action of said at least one moving object and to perform motion capture while moving.
8. The system for capturing motion of claim 1, further comprising
a camera motion processor coupled to said at least one mobile motion capture camera to track the movement of said at least one mobile motion capture camera and to remove the movement in the subsequent processing of captured data to generate said digital representation of movement of said at least one moving object.
9. The system for capturing motion of claim 1, further comprising
at least one servomotor used to move said at least one mobile motion capture camera.
10. The system for capturing motion of claim 1, wherein said at least one mobile motion capture camera is moved individually.
11. The system for capturing motion of claim 1, wherein said at least one mobile motion capture camera includes
a first reference camera configured to show views of said at least one mobile motion capture camera.
12. The system for capturing motion of claim 1, wherein said at least one mobile motion capture camera includes
a second reference camera configured to generate
reference and adjustment of motion captured by said at least one mobile motion capture camera.
13. The system for capturing motion of claim 1, wherein said at least one mobile motion capture camera includes
a feedback loop configured to generate reference and
adjustment of motion captured by said at least one mobile motion capture camera without a reference camera.
14. The system for capturing motion of claim 1, further comprising
at least one mobile motion capture rig configured to enable at least one of said at least one mobile motion capture camera to be disposed on said at least one mobile motion capture rig such that cameras of said at least one mobile motion capture camera are moved.
15. The system for capturing motion of claim 14, wherein at least one of said at least one mobile motion capture rig includes
at least one servomotor configured to provide translation and rotational movements to said at least one mobile motion capture camera disposed on that mobile motion capture rig.
16. The system for capturing motion of claim 15, wherein said at least one servomotor is coupled to said at least one mobile motion capture rig to provide said translation and rotational movements together to said at least one mobile motion capture camera.
17. The system for capturing motion of claim 15, wherein said at least one servomotor is coupled to each of said at least one mobile motion capture camera to provide said
translation and rotational movements individually to each of said at least one mobile motion capture camera.
18. The system for capturing motion of claim 17, wherein at least one of said at least one mobile motion capture camera is restricted in at least one movement of said translation and rotational movements.
19. The system for capturing motion of claim 14, wherein said at least one mobile motion capture rig is configured such that movements of said at least one mobile motion capture rig are pre-programmed.
20. The system for capturing motion of claim 14, wherein said at least one mobile motion capture rig is configured to move automatically based on received input for said at least one moving object.
21. The system for capturing motion of claim 20, wherein said received input includes a visual signal received from said at least one moving object.
22. The system for capturing motion of claim 21, wherein said at least one mobile motion capture rig moves with said at least one moving object based on captured movement of that object.
23. The system for capturing motion of claim 1, further comprising
fixed tracking markers disposed at fixed locations within said motion capture volume.
24. The system for capturing motion of claim 23, further comprising
a camera calibration system to calibrate said at least one mobile motion capture camera using motion capture
information representing the location of said fixed tracking markers.
25. A system for capturing motion, comprising:
at least one mobile motion capture camera configured to be moveable, said at least one mobile motion capture camera operating to capture motion within a motion capture volume; and
at least one mobile motion capture rig configured to enable said at least one mobile motion capture camera to be disposed on said at least one mobile motion capture rig such that cameras of said at least one mobile motion capture camera can be moved.
26. A method for capturing motion, comprising:
defining a motion capture volume configured to include at least one moving object having markers defining a plurality of points on said at least one moving object;
moving at least one mobile motion capture camera within said motion capture volume; and
processing data from said at least one mobile motion capture camera to produce a digital representation of movement of said at least one moving object.
27. The method of claim 26, wherein said moving said at least one mobile motion capture camera includes
moving said at least one mobile motion capture camera from a first position to a second position and fixing said at least one mobile motion capture camera at said second position.
28. The method of claim 26, wherein said moving said at least one mobile motion capture camera includes
moving said at least one mobile motion capture camera to follow the action of said at least one moving object and to perform motion capture while moving.
29. The method of claim 26, further comprising
tracking the movement of said at least one mobile motion capture camera; and
removing the movement in the subsequent processing of captured data to generate said digital representation of movement of said at least one moving object.
30. The method of claim 26, wherein said at least one mobile motion capture camera includes
generating reference and adjustment of motion captured by said at least one mobile motion capture camera.
31. The method of claim 26, further comprising
placing fixed tracking markers at fixed locations within said motion capture volume.
32. The method of claim 31, further comprising
calibrating said at least one mobile motion capture camera using motion capture information representing the location of said fixed tracking markers.
33. A system for capturing motion, comprising:
means for defining a motion capture volume configured to include at least one moving object having markers defining a plurality of points on said at least one moving object;
means for moving at least one mobile motion capture camera within said motion capture volume; and
means for processing data from said at least one mobile motion capture camera to produce a digital representation of movement of said at least one moving object.
PCT/US2006/026088 2005-07-01 2006-07-03 Mobile motion capture cameras WO2007005900A2 (en)

Priority Applications (7)

Application Number Priority Date Filing Date Title
CA2614058A CA2614058C (en) 2005-07-01 2006-07-03 Mobile motion capture cameras
AU2006265040A AU2006265040B2 (en) 2005-07-01 2006-07-03 Mobile motion capture cameras
KR1020087002642A KR101299840B1 (en) 2005-07-01 2006-07-03 Mobile motion capture cameras
EP06786294A EP1908019A4 (en) 2005-07-01 2006-07-03 Mobile motion capture cameras
CN2006800321792A CN101253538B (en) 2005-07-01 2006-07-03 Mobile motion capture cameras
NZ564834A NZ564834A (en) 2005-07-01 2006-07-03 Representing movement of object using moving motion capture cameras within a volume
JP2008519711A JP2008545206A (en) 2005-07-01 2006-07-03 Mobile motion capture camera

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US69619305P 2005-07-01 2005-07-01
US60/696,193 2005-07-01
US11/372,330 US7333113B2 (en) 2003-03-13 2006-03-08 Mobile motion capture cameras
US11/372,330 2006-03-08

Publications (3)

Publication Number Publication Date
WO2007005900A2 WO2007005900A2 (en) 2007-01-11
WO2007005900A3 WO2007005900A3 (en) 2007-10-25
WO2007005900A9 true WO2007005900A9 (en) 2014-08-07

Family

ID=37605162

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2006/026088 WO2007005900A2 (en) 2005-07-01 2006-07-03 Mobile motion capture cameras

Country Status (6)

Country Link
US (2) US7333113B2 (en)
EP (1) EP1908019A4 (en)
JP (1) JP5710652B2 (en)
AU (1) AU2006265040B2 (en)
CA (1) CA2614058C (en)
WO (1) WO2007005900A2 (en)

Families Citing this family (123)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4849761B2 (en) * 2002-09-02 2012-01-11 株式会社 資生堂 Makeup method based on texture and texture image map
JP4559092B2 (en) * 2004-01-30 2010-10-06 株式会社エヌ・ティ・ティ・ドコモ Mobile communication terminal and program
US9826537B2 (en) 2004-04-02 2017-11-21 Rearden, Llc System and method for managing inter-cluster handoff of clients which traverse multiple DIDO clusters
US10425134B2 (en) 2004-04-02 2019-09-24 Rearden, Llc System and methods for planned evolution and obsolescence of multiuser spectrum
US9819403B2 (en) 2004-04-02 2017-11-14 Rearden, Llc System and method for managing handoff of a client between different distributed-input-distributed-output (DIDO) networks based on detected velocity of the client
US10277290B2 (en) 2004-04-02 2019-04-30 Rearden, Llc Systems and methods to exploit areas of coherence in wireless systems
US8654815B1 (en) 2004-04-02 2014-02-18 Rearden, Llc System and method for distributed antenna wireless communications
US7520091B2 (en) * 2004-07-09 2009-04-21 Friedman Daniel B Adaptable roof system
US20060055706A1 (en) * 2004-09-15 2006-03-16 Perlman Stephen G Apparatus and method for capturing the motion of a performer
US8194093B2 (en) * 2004-09-15 2012-06-05 Onlive, Inc. Apparatus and method for capturing the expression of a performer
US7633521B2 (en) * 2005-02-25 2009-12-15 Onlive, Inc. Apparatus and method improving marker identification within a motion capture system
US7605861B2 (en) * 2005-03-10 2009-10-20 Onlive, Inc. Apparatus and method for performing motion capture using shutter synchronization
CA2556533A1 (en) * 2005-08-24 2007-02-24 Degudent Gmbh Method of determining the shape of a dental technology object and apparatus performing the method
US8659668B2 (en) 2005-10-07 2014-02-25 Rearden, Llc Apparatus and method for performing motion capture using a random pattern on capture surfaces
US7667767B2 (en) * 2006-06-07 2010-02-23 Onlive, Inc. System and method for three dimensional capture of stop-motion animated characters
US7548272B2 (en) * 2006-06-07 2009-06-16 Onlive, Inc. System and method for performing motion capture using phosphor application techniques
US7567293B2 (en) * 2006-06-07 2009-07-28 Onlive, Inc. System and method for performing motion capture by strobing a fluorescent lamp
CN101513065B (en) * 2006-07-11 2012-05-09 索尼株式会社 Using quantum nanodots in motion pictures or video games
US20100231692A1 (en) * 2006-07-31 2010-09-16 Onlive, Inc. System and method for performing motion capture and image reconstruction with transparent makeup
GB2452041B (en) * 2007-08-20 2012-09-26 Snell Ltd Video framing control
JP2010541035A (en) * 2007-09-04 2010-12-24 ソニー株式会社 Integrated motion capture
KR100940860B1 (en) * 2007-12-18 2010-02-09 한국전자통신연구원 Method and apparatus for generating locomotion of digital creature
US8166421B2 (en) * 2008-01-14 2012-04-24 Primesense Ltd. Three-dimensional user interface
US8933876B2 (en) 2010-12-13 2015-01-13 Apple Inc. Three dimensional user interface session control
US9035876B2 (en) 2008-01-14 2015-05-19 Apple Inc. Three-dimensional user interface session control
TW200937348A (en) * 2008-02-19 2009-09-01 Univ Nat Chiao Tung Calibration method for image capturing device
US9390516B2 (en) * 2008-09-29 2016-07-12 Two Pic Mc Llc Asynchronous streaming of data for validation
US9325972B2 (en) * 2008-09-29 2016-04-26 Two Pic Mc Llc Actor-mounted motion capture camera
US8289443B2 (en) * 2008-09-29 2012-10-16 Two Pic Mc Llc Mounting and bracket for an actor-mounted motion capture camera system
TW201021546A (en) * 2008-11-19 2010-06-01 Wistron Corp Interactive 3D image display method and related 3D display apparatus
US9298007B2 (en) 2014-01-21 2016-03-29 Osterhout Group, Inc. Eye imaging in head worn computing
US9400390B2 (en) 2014-01-24 2016-07-26 Osterhout Group, Inc. Peripheral lighting for head worn computing
US9965681B2 (en) 2008-12-16 2018-05-08 Osterhout Group, Inc. Eye imaging in head worn computing
US20150205111A1 (en) 2014-01-21 2015-07-23 Osterhout Group, Inc. Optical configurations for head worn computing
US9229233B2 (en) 2014-02-11 2016-01-05 Osterhout Group, Inc. Micro Doppler presentations in head worn computing
US9952664B2 (en) 2014-01-21 2018-04-24 Osterhout Group, Inc. Eye imaging in head worn computing
US9715112B2 (en) 2014-01-21 2017-07-25 Osterhout Group, Inc. Suppression of stray light in head worn computing
US8103088B2 (en) * 2009-03-20 2012-01-24 Cranial Technologies, Inc. Three-dimensional image capture system
US8217993B2 (en) * 2009-03-20 2012-07-10 Cranial Technologies, Inc. Three-dimensional image capture system for subjects
US11699247B2 (en) * 2009-12-24 2023-07-11 Cognex Corporation System and method for runtime determination of camera miscalibration
US8659658B2 (en) * 2010-02-09 2014-02-25 Microsoft Corporation Physical interaction zone for gesture-based user interfaces
US8787663B2 (en) * 2010-03-01 2014-07-22 Primesense Ltd. Tracking body parts by combined color image and depth processing
KR101079925B1 (en) 2010-04-19 2011-11-04 서경대학교 산학협력단 An augmented reality situational training system by recognition of markers and hands of trainee
US11117033B2 (en) 2010-04-26 2021-09-14 Wilbert Quinc Murdock Smart system for display of dynamic movement parameters in sports and training
CN102959616B (en) 2010-07-20 2015-06-10 苹果公司 Interactive reality augmentation for natural interaction
US9201501B2 (en) 2010-07-20 2015-12-01 Apple Inc. Adaptive projector
US8959013B2 (en) 2010-09-27 2015-02-17 Apple Inc. Virtual keyboard for a non-tactile three dimensional user interface
US8872762B2 (en) 2010-12-08 2014-10-28 Primesense Ltd. Three dimensional user interface cursor control
CN106125921B (en) 2011-02-09 2019-01-15 苹果公司 Gaze detection in 3D map environment
US9377865B2 (en) 2011-07-05 2016-06-28 Apple Inc. Zoom-based gesture user interface
US8881051B2 (en) 2011-07-05 2014-11-04 Primesense Ltd Zoom-based gesture user interface
US9459758B2 (en) 2011-07-05 2016-10-04 Apple Inc. Gesture-based interface with enhanced features
US9030498B2 (en) 2011-08-15 2015-05-12 Apple Inc. Combining explicit select gestures and timeclick in a non-tactile three dimensional user interface
US9218063B2 (en) 2011-08-24 2015-12-22 Apple Inc. Sessionless pointing user interface
US9122311B2 (en) 2011-08-24 2015-09-01 Apple Inc. Visual feedback for tactile and non-tactile user interfaces
US9389677B2 (en) 2011-10-24 2016-07-12 Kenleigh C. Hobby Smart helmet
WO2013086246A1 (en) 2011-12-06 2013-06-13 Equisight Inc. Virtual presence model
US9229534B2 (en) 2012-02-28 2016-01-05 Apple Inc. Asymmetric mapping for tactile and non-tactile user interfaces
CN103324905A (en) * 2012-03-21 2013-09-25 天津生态城动漫园投资开发有限公司 Next-generation virtual photostudio facial capture system
AU2013239179B2 (en) 2012-03-26 2015-08-20 Apple Inc. Enhanced virtual touchpad and touchscreen
US20140046923A1 (en) * 2012-08-10 2014-02-13 Microsoft Corporation Generating queries based upon data points in a spreadsheet application
US11189917B2 (en) 2014-04-16 2021-11-30 Rearden, Llc Systems and methods for distributing radioheads
US10488535B2 (en) 2013-03-12 2019-11-26 Rearden, Llc Apparatus and method for capturing still images and video using diffraction coded imaging techniques
US9923657B2 (en) 2013-03-12 2018-03-20 Rearden, Llc Systems and methods for exploiting inter-cell multiplexing gain in wireless cellular systems via distributed input distributed output technology
US9973246B2 (en) 2013-03-12 2018-05-15 Rearden, Llc Systems and methods for exploiting inter-cell multiplexing gain in wireless cellular systems via distributed input distributed output technology
US10547358B2 (en) 2013-03-15 2020-01-28 Rearden, Llc Systems and methods for radio frequency calibration exploiting channel reciprocity in distributed input distributed output wireless communications
US9829707B2 (en) 2014-08-12 2017-11-28 Osterhout Group, Inc. Measuring content brightness in head worn computing
US20150277118A1 (en) 2014-03-28 2015-10-01 Osterhout Group, Inc. Sensor dependent content position in head worn computing
US10191279B2 (en) 2014-03-17 2019-01-29 Osterhout Group, Inc. Eye imaging in head worn computing
US9810906B2 (en) 2014-06-17 2017-11-07 Osterhout Group, Inc. External user interface for head worn computing
US9746686B2 (en) 2014-05-19 2017-08-29 Osterhout Group, Inc. Content position calibration in head worn computing
US9448409B2 (en) 2014-11-26 2016-09-20 Osterhout Group, Inc. See-through computer display systems
US9939934B2 (en) 2014-01-17 2018-04-10 Osterhout Group, Inc. External user interface for head worn computing
US9671613B2 (en) 2014-09-26 2017-06-06 Osterhout Group, Inc. See-through computer display systems
US20160019715A1 (en) 2014-07-15 2016-01-21 Osterhout Group, Inc. Content presentation in head worn computing
US11103122B2 (en) 2014-07-15 2021-08-31 Mentor Acquisition One, Llc Content presentation in head worn computing
US10649220B2 (en) 2014-06-09 2020-05-12 Mentor Acquisition One, Llc Content presentation in head worn computing
US9575321B2 (en) 2014-06-09 2017-02-21 Osterhout Group, Inc. Content presentation in head worn computing
US20150228119A1 (en) 2014-02-11 2015-08-13 Osterhout Group, Inc. Spatial location presentation in head worn computing
US9841599B2 (en) 2014-06-05 2017-12-12 Osterhout Group, Inc. Optical configurations for head-worn see-through displays
US10684687B2 (en) 2014-12-03 2020-06-16 Mentor Acquisition One, Llc See-through computer display systems
US10254856B2 (en) 2014-01-17 2019-04-09 Osterhout Group, Inc. External user interface for head worn computing
US9529195B2 (en) 2014-01-21 2016-12-27 Osterhout Group, Inc. See-through computer display systems
US11227294B2 (en) 2014-04-03 2022-01-18 Mentor Acquisition One, Llc Sight information collection in head worn computing
US9594246B2 (en) 2014-01-21 2017-03-14 Osterhout Group, Inc. See-through computer display systems
US9299194B2 (en) 2014-02-14 2016-03-29 Osterhout Group, Inc. Secure sharing in head worn computing
US11737666B2 (en) 2014-01-21 2023-08-29 Mentor Acquisition One, Llc Eye imaging in head worn computing
US9529199B2 (en) 2014-01-21 2016-12-27 Osterhout Group, Inc. See-through computer display systems
US9753288B2 (en) 2014-01-21 2017-09-05 Osterhout Group, Inc. See-through computer display systems
US11669163B2 (en) 2014-01-21 2023-06-06 Mentor Acquisition One, Llc Eye glint imaging in see-through computer display systems
US9651784B2 (en) 2014-01-21 2017-05-16 Osterhout Group, Inc. See-through computer display systems
US9615742B2 (en) 2014-01-21 2017-04-11 Osterhout Group, Inc. Eye imaging in head worn computing
US11487110B2 (en) 2014-01-21 2022-11-01 Mentor Acquisition One, Llc Eye imaging in head worn computing
US9766463B2 (en) 2014-01-21 2017-09-19 Osterhout Group, Inc. See-through computer display systems
US9836122B2 (en) 2014-01-21 2017-12-05 Osterhout Group, Inc. Eye glint imaging in see-through computer display systems
US20150205135A1 (en) 2014-01-21 2015-07-23 Osterhout Group, Inc. See-through computer display systems
US9740280B2 (en) 2014-01-21 2017-08-22 Osterhout Group, Inc. Eye imaging in head worn computing
US9494800B2 (en) 2014-01-21 2016-11-15 Osterhout Group, Inc. See-through computer display systems
US11892644B2 (en) 2014-01-21 2024-02-06 Mentor Acquisition One, Llc See-through computer display systems
US9846308B2 (en) 2014-01-24 2017-12-19 Osterhout Group, Inc. Haptic systems for head-worn computers
US9852545B2 (en) 2014-02-11 2017-12-26 Osterhout Group, Inc. Spatial location presentation in head worn computing
US20150241963A1 (en) 2014-02-11 2015-08-27 Osterhout Group, Inc. Eye imaging in head worn computing
US9401540B2 (en) 2014-02-11 2016-07-26 Osterhout Group, Inc. Spatial location presentation in head worn computing
US20160187651A1 (en) 2014-03-28 2016-06-30 Osterhout Group, Inc. Safety for a vehicle operator with an hmd
US10853589B2 (en) 2014-04-25 2020-12-01 Mentor Acquisition One, Llc Language translation with head-worn computing
US9672210B2 (en) 2014-04-25 2017-06-06 Osterhout Group, Inc. Language translation with head-worn computing
US9651787B2 (en) 2014-04-25 2017-05-16 Osterhout Group, Inc. Speaker assembly for headworn computer
US10663740B2 (en) 2014-06-09 2020-05-26 Mentor Acquisition One, Llc Content presentation in head worn computing
US9519289B2 (en) 2014-11-26 2016-12-13 Irobot Corporation Systems and methods for performing simultaneous localization and mapping using machine vision systems
JP6732746B2 (en) * 2014-11-26 2020-07-29 アイロボット・コーポレーション System for performing simultaneous localization mapping using a machine vision system
US9744670B2 (en) 2014-11-26 2017-08-29 Irobot Corporation Systems and methods for use of optical odometry sensors in a mobile robot
US9751210B2 (en) 2014-11-26 2017-09-05 Irobot Corporation Systems and methods for performing occlusion detection
US9684172B2 (en) 2014-12-03 2017-06-20 Osterhout Group, Inc. Head worn computer display systems
USD751552S1 (en) 2014-12-31 2016-03-15 Osterhout Group, Inc. Computer glasses
CN104606882B (en) * 2014-12-31 2018-01-16 南宁九金娃娃动漫有限公司 A kind of somatic sensation television game interactive approach and system
USD753114S1 (en) 2015-01-05 2016-04-05 Osterhout Group, Inc. Air mouse
US20160239985A1 (en) 2015-02-17 2016-08-18 Osterhout Group, Inc. See-through computer display systems
US10878775B2 (en) 2015-02-17 2020-12-29 Mentor Acquisition One, Llc See-through computer display systems
US10591728B2 (en) 2016-03-02 2020-03-17 Mentor Acquisition One, Llc Optical systems for head-worn computers
US10667981B2 (en) 2016-02-29 2020-06-02 Mentor Acquisition One, Llc Reading assistance system for visually impaired
US11178355B2 (en) * 2017-06-06 2021-11-16 Swaybox Studios, Inc. System and method for generating visual animation
US10812693B2 (en) * 2017-10-20 2020-10-20 Lucasfilm Entertainment Company Ltd. Systems and methods for motion capture
EP3863516A4 (en) 2018-10-12 2022-06-15 Magic Leap, Inc. Staging system to verify accuracy of a motion tracking system

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6020892A (en) 1995-04-17 2000-02-01 Dillon; Kelly Process for producing and controlling animated facial representations
US5802220A (en) 1995-12-15 1998-09-01 Xerox Corporation Apparatus and method for tracking facial motion through a sequence of images
US5774591A (en) 1995-12-15 1998-06-30 Xerox Corporation Apparatus and method for recognizing facial expressions and facial gestures in a sequence of images
US6324296B1 (en) 1997-12-04 2001-11-27 Phasespace, Inc. Distributed-processing motion tracking system for tracking individually modulated light points
US6272231B1 (en) 1998-11-06 2001-08-07 Eyematic Interfaces, Inc. Wavelet-based facial motion capture for avatar animation
ATE420528T1 (en) * 1998-09-17 2009-01-15 Yissum Res Dev Co SYSTEM AND METHOD FOR GENERATING AND DISPLAYING PANORAMIC IMAGES AND FILMS
JP3941262B2 (en) * 1998-10-06 2007-07-04 株式会社日立製作所 Thermosetting resin material and manufacturing method thereof
US7483049B2 (en) * 1998-11-20 2009-01-27 Aman James A Optimizations for live event, real-time, 3D object tracking
US6788333B1 (en) * 2000-07-07 2004-09-07 Microsoft Corporation Panoramic video
US6707444B1 (en) 2000-08-18 2004-03-16 International Business Machines Corporation Projector and camera arrangement with shared optics and optical marker for use with whiteboard systems
US6950104B1 (en) 2000-08-30 2005-09-27 Microsoft Corporation Methods and systems for animating facial features, and methods and systems for expression transformation
US6774869B2 (en) 2000-12-22 2004-08-10 Board Of Trustees Operating Michigan State University Teleportal face-to-face system
US7012637B1 (en) * 2001-07-27 2006-03-14 Be Here Corporation Capture structure for alignment of multi-camera capture systems
US7106358B2 (en) * 2002-12-30 2006-09-12 Motorola, Inc. Method, system and apparatus for telepresence communications
US7573480B2 (en) * 2003-05-01 2009-08-11 Sony Corporation System and method for capturing facial and body motion
US7218320B2 (en) * 2003-03-13 2007-05-15 Sony Corporation System and method for capturing facial and body motion

Also Published As

Publication number Publication date
AU2006265040A1 (en) 2007-01-11
US20060152512A1 (en) 2006-07-13
JP2013061987A (en) 2013-04-04
EP1908019A4 (en) 2009-10-21
CA2614058C (en) 2015-04-28
JP5710652B2 (en) 2015-04-30
US7812842B2 (en) 2010-10-12
CA2614058A1 (en) 2007-01-11
WO2007005900A2 (en) 2007-01-11
WO2007005900A3 (en) 2007-10-25
US7333113B2 (en) 2008-02-19
US20080211815A1 (en) 2008-09-04
AU2006265040B2 (en) 2011-12-15
EP1908019A2 (en) 2008-04-09

Similar Documents

Publication Publication Date Title
US7333113B2 (en) Mobile motion capture cameras
US8106911B2 (en) Mobile motion capture cameras
AU2005311889B2 (en) System and method for capturing facial and body motion
EP1602074B1 (en) System and method for capturing facial and body motion
US7358972B2 (en) System and method for capturing facial and body motion
KR101299840B1 (en) Mobile motion capture cameras
US20220067968A1 (en) Motion capture calibration using drones with multiple cameras
US11282233B1 (en) Motion capture calibration
US20220076452A1 (en) Motion capture calibration using a wand
US20220067969A1 (en) Motion capture calibration using drones
WO2022055371A1 (en) Motion capture calibration using a wand
CN116368350A (en) Motion capture calibration using targets

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase
  Ref document number: 200680032179.2
  Country of ref document: CN
121 Ep: the epo has been informed by wipo that ep was designated in this application
ENP Entry into the national phase
  Ref document number: 2614058
  Country of ref document: CA
  Ref document number: 2008519711
  Country of ref document: JP
  Kind code of ref document: A
WWE Wipo information: entry into national phase
  Ref document number: 2006786294
  Country of ref document: EP
NENP Non-entry into the national phase
  Ref country code: DE
WWE Wipo information: entry into national phase
  Ref document number: 2006265040
  Country of ref document: AU
  Ref document number: 564834
  Country of ref document: NZ
ENP Entry into the national phase
  Ref document number: 2006265040
  Country of ref document: AU
  Date of ref document: 20060703
  Kind code of ref document: A
WWE Wipo information: entry into national phase
  Ref document number: 1020087002642
  Country of ref document: KR