|Publication number||US7526790 B1|
|Application number||US 10/107,454|
|Publication date||Apr 28, 2009|
|Filing date||Mar 28, 2002|
|Priority date||Mar 28, 2002|
|Original Assignee||Nokia Corporation|
1. Field of Invention
This invention relates to audio systems for TV presentations. More particularly, the invention relates to audio systems providing a virtual audio arena effect for live TV presentations.
2. Description of Prior Art
Today, attendance at sporting, social and cultural events held in arenas, auditoriums and concert halls can be expensive and can present travel difficulties, assuming tickets are even available for the event. Many events are covered by live TV broadcasts, which fail to give the viewer the impression of being virtually present at the event. Enhancing the audio accompanying the TV presentation could help give a viewer the impression of being virtually present at an event. The impression could be further enhanced if the viewer could remotely control the origination of the audio in the arena, providing the sensation of sitting in his or her favorite seat and, if desired, repositioning to a better location in the arena, auditorium or concert hall.
Prior art related to virtual sound systems accompanying TV broadcasts includes:
(A) International Publication No. WO 01/52526 A2, entitled “System and Method for Real Time Video Production and Multi-Casting”, published Jul. 19, 2001, discloses a method for broadcasting a show in a video production environment having a processing server in communication with one or more clients. The processing server receives requests from clients for one or more show segments. The server assembles the show segments to produce a single video clip and sends the video clip as a whole unit to the requesting client. The video clip is buffered at the requesting client, and the buffering permits the requesting client to continue to display the video clip.
(B) International Publication No. WO 99/21164, published Apr. 29, 1999, entitled “A Method and a System for Processing a Virtual Acoustic Environment”, discloses a system intended to transfer a virtual environment as a datastream to a receiver and/or reproducing device. A memory stores one or more filter types and a transfer function used by the system to create the virtual environment. The receiver receives, in the datastream, parameters that are used for modeling the surfaces within the virtual environment. With the aid of these data and the stored filter types and transfer functions, the receiver creates a filter bank that corresponds to the acoustic characteristics of the environment to be created. During operation, the receiver supplies the incoming datastream to the filter bank it has created, and the resulting processed sound gives the listening user an impression of the desired virtual environment.
(C) International Publication No. WO 99/57900, published Nov. 11, 1999, entitled “Video Phone with Enhanced User-Defined Imaging System”, discloses a video phone that allows a presentation of a scene (composed of the user plus the environment) to be perceived by a viewer. An imaging system perceiving the user's scene extracts essential information describing the user's sensory appearance along with that of the environment. A distribution system transmits this information from the user's locale to the viewer's locale. A presentation system uses the essential information and formatting information to construct a presentation of the scene's appearance for the viewer to perceive. A library of presentation/construction formatting may be employed to contribute information that is used along with the abstracted essential information to create the presentation for the viewer.
(D) U.S. Pat. No. 5,495,576, issued Feb. 27, 1996, entitled “Panoramic Image-Based Virtual Reality/Telepresence Audio-Visual System and Method”, discloses a display system for virtual interaction with recorded images. A plurality of positionable sensor means in mutually angular relation enables substantially continuous coverage of a three-dimensional subject. A sensor recorder communicates with the sensors and is operative to store and generate sensor signals representing the subject. A signal processing means communicates with the sensors and the recorder. The processor receives the sensor signals from the recorder and is operable to texture-map virtual images represented by the sensor signals onto a three-dimensional form. A panoramic audio-visual display assembly communicates with the signal processor and enables display to the viewer of the texture-mapped virtual images. The viewer has control means communicating with the signal processor, enabling interactive manipulation of the texture-mapped virtual images by operating an interactive input device. A host computer manipulates a computer-generated world model by assigning actions to subjects in the model based upon actions of another subject in the model.
None of the prior art discloses a viewer-controlled audio system for enhancing a “live” TV broadcast of an event at an arena, auditorium or concert hall, the system providing the viewer with an audio effect of being virtually present at the event in a seat of his or her choice, which may be changed to another location according to the desires of the viewer.
A TV viewer has enhanced listening of a sporting event, concert, or the like through the availability of audio streams from different positions in the arena. Audio sensors are located in different parts of the arena and are connected to a server by wireless or wired connection(s). The sensors are equipped for reception and transmission of sounds from the different positions. The server provides a frequency-divided carrier to the respective sensors, and each sensor modulates its divided carrier frequency with the audio sounds in its area as a stereophonic signal. The server receives, digitizes and packetizes the stereophonic sensor signals into a plurality of digital streams, each representative of a different sensor location in the arena. The audio streams are broadcast to the viewer using Digital Video Broadcasting or via a cable system. The viewer is equipped with a control device airlinked to the TV for selection of an audio stream representative of a position in the arena. The selected digital stream is converted into audio sound, which provides the viewer with a virtual presence at the selected position in the arena. The viewer can change audio streams to select other positions in the arena from which to watch and listen to the audio sound being generated at that position.
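The server-side flow described above — digitize each sensor's stereophonic signal, then packetize it into one digital stream per arena location — can be sketched as follows. This is an illustrative assumption, not a detail taken from the patent: the `packetize` function, the header layout, and the sample data are all hypothetical.

```python
# Hypothetical sketch of the server packetizing per-location sensor audio.
import struct

def packetize(location_id, pcm_samples, packet_size=256):
    """Split 16-bit PCM samples for one sensor location into packets,
    each tagged with its arena location id and sample count."""
    packets = []
    for i in range(0, len(pcm_samples), packet_size):
        chunk = pcm_samples[i:i + packet_size]
        header = struct.pack(">HI", location_id, len(chunk))   # id, count
        payload = struct.pack(f">{len(chunk)}h", *chunk)       # 16-bit PCM
        packets.append(header + payload)
    return packets

# One digital stream per sensor location (here, locations 1 and 2)
streams = {loc: packetize(loc, samples)
           for loc, samples in {1: [0, 100, -100], 2: [50, -50]}.items()}
```

Tagging each packet with its location id lets the receiver demultiplex the broadcast back into the per-seat streams the viewer chooses among.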
A medium comprising program instructions executable in a computer system provides the virtual arena effect for live TV presentations of an event.
The invention will be further understood from the following detailed description of a preferred embodiment, taken in conjunction with an appended drawing, in which:
A broadcasting system and network 130 receives the streams 119, 121, 123, 125, 127 and combines them with a video stream 132 of the “live” event and a general audio stream 134 for broadcast 136 by air, cable, wire, satellite or the like to the TV set 102. The audio streams are represented on the TV as icons 138, 140, 142, 144, 146, each representative of the locations L1, L2, . . . L5, respectively. Using a remote controller 148, the viewer 104 can switch among the audio streams visualized by the icons.
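A minimal sketch of the receiver-side behavior, assuming a simple mapping from on-screen icons to buffered audio streams; the class and method names (`VirtualArenaTV`, `select_icon`) are hypothetical and not from the patent:

```python
# Hypothetical receiver model: one buffered audio stream per arena
# location, switched by the icon the viewer selects with the remote.
class VirtualArenaTV:
    def __init__(self, streams):
        # streams: mapping of icon/location label -> buffered audio stream
        self.streams = streams
        self.current = next(iter(streams))   # default to the first location

    def select_icon(self, label):
        """Switch playback to the stream behind the selected icon."""
        if label not in self.streams:
            raise KeyError(f"no audio stream for location {label}")
        self.current = label

    def audio_out(self):
        """Return the stream currently routed to the speakers."""
        return self.streams[self.current]

tv = VirtualArenaTV({"L1": "stream-1", "L2": "stream-2", "L5": "stream-5"})
tv.select_icon("L2")   # viewer presses the icon for location L2
```

Because every stream is already buffered in the receiver (step 423 below), switching seats is a local operation and takes effect immediately.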
Step 401 selects locations in the arena among installed audio sensors for generating virtual sounds, which would be experienced by the spectator at the selected locations.
Step 403 collects stereophonic sounds of an arena event in the audio sensors disposed about the arena.
Step 405 transmits the collected stereophonic sounds to a digital signal processor in a server.
Step 407 digitizes each sensor signal into a pulse code modulation (PCM) value for each stereophonic sound using a processor and standard MPEG programming.
Step 409 separates the digital signals in the server by arena location and compensates each digital signal for signal loss due to the Rayleigh effect between adjacent sensors and the selected locations in the arena.
Step 411 stores the distances R between viewer selected locations and adjacent sensor(s).
Step 413 calculates the Rayleigh effect for each selected location based on 1/R², where R is the distance between the selected location and the adjacent sensor(s).
Step 415 translates the Rayleigh effect value for each location into a PCM value representative of the sound loss between each selected location and adjacent sensors.
Step 417 adds the Rayleigh effect value to the PCM value in each packet and generates a packetized, digitized stream for each selected location in the arena.
Step 419 packetizes and adds the event narrator's voice signal to the digitized stream for each location.
Step 421 transmits all audio streams and the event video to TV receivers.
Step 423 stores the digitized streams in the receiver and generates an icon for each stream, the icon indicating the origin of the stream in the arena.
In step 425, the viewer operates a remote TV controller to select an icon of a location in the arena and receive the sound as if the viewer were present in the arena at the selected location.
In step 427, the viewer operates the remote controller to select other icons and receive the sound from other locations in the arena, providing the viewer with the effect of moving about the arena.
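Steps 409 through 419 can be sketched as follows, under the assumption that the “Rayleigh effect” compensation named in step 413 is the stated inverse-square (1/R²) loss applied directly to the PCM sample amplitudes; the function names and sample values are illustrative, not from the patent:

```python
# Sketch of steps 409-417: scale a sensor's PCM samples by the 1/R^2
# loss between the sensor and the viewer-selected seat. (The patent's
# "Rayleigh effect" is taken here as the 1/R^2 term of step 413; whether
# it applies to amplitude or intensity is an assumption.)
def attenuate(pcm_samples, distance_r):
    """Scale PCM samples by the 1/R^2 loss for a seat at distance R."""
    if distance_r <= 0:
        raise ValueError("distance must be positive")
    gain = 1.0 / (distance_r ** 2)
    return [int(s * gain) for s in pcm_samples]

def stream_for_location(sensor_pcm, distance_r, narrator_pcm):
    """Steps 409-419: compensate sensor audio for distance, then mix in
    the event narrator's voice for this location's stream."""
    seat_audio = attenuate(sensor_pcm, distance_r)
    return [a + n for a, n in zip(seat_audio, narrator_pcm)]

# A seat 2 units from its nearest sensor hears the sound at 1/4 amplitude.
stream = stream_for_location([400, -800, 1200], 2.0, [10, 10, 10])
```

The resulting per-seat stream is what steps 421-427 then broadcast, buffer in the receiver, and expose to the viewer as a selectable icon.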
While the invention has been shown and described in conjunction with a preferred embodiment, various changes can be made without departing from the spirit and scope of the invention as defined in the appended claims.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US4384362 *||Jul 18, 1980||May 17, 1983||Bell Telephone Laboratories, Incorporated||Radio communication system using information derivation algorithm coloring for suppressing cochannel interference|
|US5495576||Jan 11, 1993||Feb 27, 1996||Ritchey; Kurtis J.||Panoramic image based virtual reality/telepresence audio-visual system and method|
|US5555306 *||Jun 27, 1995||Sep 10, 1996||Trifield Productions Limited||Audio signal processor providing simulated source distance control|
|US5677728 *||Feb 28, 1994||Oct 14, 1997||Schoolman Scientific Corporation||Stereoscopic video telecommunication system|
|US5894320 *||May 29, 1996||Apr 13, 1999||General Instrument Corporation||Multi-channel television system with viewer-selectable video and audio|
|US5912700||Jan 10, 1996||Jun 15, 1999||Fox Sports Productions, Inc.||System for enhancing the television presentation of an object at a sporting event|
|US6154250||Dec 10, 1998||Nov 28, 2000||Fox Sports Productions, Inc.||System for enhancing the television presentation of an object at a sporting event|
|US6195680||Jul 23, 1998||Feb 27, 2001||International Business Machines Corporation||Client-based dynamic switching of streaming servers for fault-tolerance and load balancing|
|US6229899||Sep 24, 1998||May 8, 2001||American Technology Corporation||Method and device for developing a virtual speaker distant from the sound source|
|US6301339||Dec 22, 1997||Oct 9, 2001||Data Race, Inc.||System and method for providing a remote user with a virtual presence to an office|
|WO1999021164A1||Oct 19, 1998||Apr 29, 1999||Nokia Oyj||A method and a system for processing a virtual acoustic environment|
|WO1999057900A1||May 1, 1999||Nov 11, 1999||John Karl Myers||Videophone with enhanced user defined imaging system|
|WO2001052526A2||Jan 9, 2001||Jul 19, 2001||Parkervision, Inc.||System and method for real time video production|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US8694553||Jun 7, 2011||Apr 8, 2014||Gary Stephen Shuster||Creation and use of virtual places|
|US20070201492 *||Apr 26, 2007||Aug 30, 2007||Genesis Microchip Inc.||Compact packet based multimedia interface|
|US20090094375 *||Oct 5, 2007||Apr 9, 2009||Lection David B||Method And System For Presenting An Event Using An Electronic Device|
|U.S. Classification||725/135, 725/59|
|Cooperative Classification||H04S7/302, H04S2400/11|
|Dec 10, 2012||REMI||Maintenance fee reminder mailed|
|Apr 28, 2013||LAPS||Lapse for failure to pay maintenance fees|
|Jun 18, 2013||FP||Expired due to failure to pay maintenance fee|
Effective date: 20130428