Publication number: US 20060183542 A1
Publication type: Application
Application number: US 11/198,546
Publication date: Aug 17, 2006
Filing date: Aug 4, 2005
Priority date: Feb 16, 2005
Also published as: EP1694096A2, EP1694096A3, US7672742, US20060184261
Inventors: Buay Ng, Naser Mgariaf, Samuel Chih, Ann Ong, Marshall Mohr
Original Assignee: Adaptec, Inc.
Method and system for displaying video from a video game console
US 20060183542 A1
Abstract
A system for displaying video from a video game console is provided. The system includes an audio and video data conversion device in communication with the video game console. Additionally, a computer in communication with the audio and video data conversion device is included. The system also includes a display panel in communication with the computer, whereby the display panel is capable of displaying the video from the video game console. A method for displaying the video from the video game console also is described.
Images(11)
Claims(20)
1. A system for displaying video from a video game console, comprising:
an audio and video data conversion device in communication with the video game console;
a computer in communication with the audio and video data conversion device; and
a display panel in communication with the computer, the display panel being capable of displaying the video.
2. The system of claim 1, wherein the audio and video data conversion device includes,
an audio and video processing circuitry.
3. The system of claim 2, wherein the audio and video data conversion device further includes,
a device controller in communication with the audio and video processing circuitry, the device controller enabling communication with the computer.
4. The system of claim 2, wherein the audio and video processing circuitry includes,
a video decoder configured to receive the video from the video game console; and
an audio decoder in communication with the video decoder, the audio decoder being configured to receive audio from the video game console.
5. The system of claim 2, wherein the audio and video data conversion device further includes,
a television tuner in communication with the audio and video processing circuitry.
6. The system of claim 1, wherein the computer includes a memory, the memory being configured to store the video from the video game console.
7. The system of claim 6, wherein the memory is further configured to store audio received from the video game console.
8. The system of claim 1, further comprising:
a speaker in communication with the computer, the speaker being capable of outputting audio outputted from the video game console.
9. The system of claim 1, wherein a graphical user interface (GUI) for a video game console display application is rendered on the display panel, the GUI including,
a first GUI component providing access to the video from the video game console; and
a first region displaying the video, the first region being generated in response to a selection of the first GUI component.
10. The system of claim 9, wherein the GUI further includes,
a second GUI component providing access to video received from a television tuner.
11. The system of claim 10, wherein the GUI further includes,
a third region displaying the video received from the television tuner, the third region being generated in response to a selection of the second GUI component.
12. The system of claim 9, wherein the GUI further includes,
a third GUI component providing access to video received from a video player.
13. A method for displaying video from a video game console, comprising:
receiving video at a computer from an audio and video data conversion device, the video being fed to the audio and video conversion device by the video game console; and
outputting the video to a display panel.
14. The method of claim 13, further comprising:
receiving audio at the computer from the audio and video data conversion device, the audio being fed to the audio and video conversion device by the video game console; and
outputting the audio to a speaker.
15. The method of claim 13, further comprising:
adjusting a property of the video.
16. The method of claim 15, wherein the property is defined by one or more of brightness, sharpness, contrast, shape, color, resolution, refresh rate, and size.
17. The method of claim 13, further comprising:
recording the video.
18. The method of claim 13, wherein the audio and video data conversion device is external to the computer.
19. A method for displaying video from a video game console, comprising method operations of:
converting analog video from the video game console to digitized video; and
outputting the digitized video to a computer.
20. The method of claim 19, further comprising:
converting analog audio from the video game console to digitized audio; and
outputting the digitized audio to the computer.
Description
CROSS-REFERENCE TO RELATED APPLICATION

This application is a continuation-in-part of application Ser. No. 11/059,972, filed on Feb. 16, 2005, the disclosure of which is incorporated herein by reference.

BACKGROUND OF THE INVENTION

Video game consoles are usually connected to regular television sets for visual and sound effects. When compared to a computer monitor, a television set typically has lower image quality since the television set displays video at a lower resolution. However, video game consoles do not interface with computer monitors through computers because a user will typically experience audio latency, whereby the audio and video are not synchronized. In other words, the audio may lag behind or lead the video by a few seconds.

Audio latency may be caused by the use of different clock frequencies by an audio and video data conversion device and by the computer. In particular, a first frequency used within the audio and video data conversion device is typically different from a second frequency of an audio capture clock at which an audio renderer within the computer sends audio data to an audio encoder. The audio latency can cause a noticeable delay between a user's input actions through the video game console and resultant audio and video outputted from a computer.

As a result, there is a need to provide methods and systems for reducing audio latency and for displaying video from a video game console.

SUMMARY OF THE INVENTION

Broadly speaking, the present invention fills these needs by providing methods and systems for displaying video from a video game console. It should be appreciated that the present invention can be implemented in numerous ways, including as a method, a system, or a device. Several inventive embodiments of the present invention are described below.

In accordance with a first aspect of the present invention, a system for displaying video from a video game console is provided. The system includes an audio and video data conversion device in communication with the video game console. Additionally, a computer in communication with the audio and video data conversion device is included. The system also includes a display panel in communication with the computer, whereby the display panel is capable of displaying the video.

In accordance with a second aspect of the present invention, a method for displaying video from a video game console is provided. In this method, the video from an audio and video data conversion device is received at a computer. The video is fed to the audio and video conversion device by the video game console. Thereafter, the video is outputted to a display panel.

In accordance with a third aspect of the present invention, a method for displaying video from a video game console is provided. In this method, analog video from the video game console is converted to digitized video. After the conversion, the digitized video is outputted to a computer.

Other aspects and advantages of the invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, illustrating by way of example the principles of the invention.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention will be readily understood by the following detailed description in conjunction with the accompanying drawings, and like reference numerals designate like structural elements.

FIG. 1 is a simplified block diagram of a system for displaying video from a video game console, in accordance with one embodiment of the present invention.

FIG. 2 is a more detailed schematic diagram of the audio and video data conversion device shown in FIG. 1, in accordance with one embodiment of the present invention.

FIG. 3 is an alternative embodiment to the audio and video data conversion device of FIG. 2.

FIG. 4 is a flowchart diagram of a high level overview for displaying video from a video game console, in accordance with one embodiment of the present invention.

FIG. 5 is a simplified block diagram of a system for rendering analog audio signals, in accordance with one embodiment of the present invention.

FIG. 6 is a more detailed block diagram of the operating system shown in FIG. 5, in accordance with one embodiment of the present invention.

FIG. 7 is a flowchart diagram of a high level overview of a method for reducing audio latency when executing program instructions for processing audio data, in accordance with one embodiment of the present invention.

FIG. 8 is a flowchart diagram of a more detailed method for reducing audio latency when executing program instructions for processing audio data, in accordance with one embodiment of the present invention.

FIG. 9 is a flowchart diagram of a detailed method for implementing an embodiment of the present invention in Microsoft Windows, in accordance with one embodiment of the present invention.

FIG. 10 is a schematic diagram of a main graphical user interface (GUI) associated with a video game console display application, in accordance with one embodiment of the present invention.

DETAILED DESCRIPTION

An invention is described for hardware implemented methods and systems for displaying video from a video game console and reducing audio latency. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be obvious, however, to one skilled in the art, that the present invention may be practiced without some or all of these specific details. In other instances, well known process operations have not been described in detail in order not to unnecessarily obscure the present invention.

A. Display of Graphics from Video Game Console

FIG. 1 is a simplified block diagram of a system for displaying video from a video game console, in accordance with one embodiment of the present invention. As shown in FIG. 1, the system includes video game console 102, audio and video data conversion device 150, and computer 104. Video game console 102 is a computing device primarily designed to play video games. Computer 104 can include any suitable computing device, and display panel 101 and speaker 172 can be attached to the computer to output video and audio, respectively. Display panel 101 is a peripheral device capable of displaying still or moving images. Examples of display panel 101 include cathode ray tube (CRT) displays, liquid crystal displays (LCDs), plasma displays, video projectors, etc. Speaker 172 can include any suitable device that converts electrical signals into sound.

As shown in FIG. 1, audio and video data conversion device 150 is in communication with video game console 102 and computer 104. Video game console 102 feeds analog audio and/or video to audio and video data conversion device 150. Video game console 102 can output analog audio and video through any suitable ports. Exemplary ports include Radio Corporation of America (RCA) composite jacks, S-Video ports, television radio frequency (RF) ports, etc. As will be explained in more detail below, audio and video data conversion device 150 receives the analog audio and/or video from video game console 102 and digitizes the audio and/or video. In one embodiment, the digitized audio and/or video is then outputted to computer 104 in raw form. In other words, the digitized audio and/or video is directly outputted to computer 104. In another embodiment, the digitized audio and/or video is further compressed and then outputted to computer 104.

Computer 104 receives the audio and/or video from audio and video data conversion device 150 through a computer interface and outputs the video and audio to display panel 101 for display and to speaker 172, respectively. In one embodiment, before the audio and/or video is outputted, computer 104, as will be explained in more detail below, can process the audio to reduce audio latency and adjust properties of the video. Accordingly, audio and video data conversion device 150 allows display panel 101 and speaker 172 attached to computer 104 to be used to display video and to render audio, respectively, outputted from video game console 102.

FIG. 2 is a more detailed schematic diagram of the audio and video data conversion device shown in FIG. 1, in accordance with one embodiment of the present invention. As shown in FIG. 2, audio and video data conversion device 150 includes audio and video processing circuitry 106 and device controller 108. Audio and video processing circuitry 106 digitizes analog audio 154 and/or analog video 110 received from a video game console. Audio and video processing circuitry 106 can include video decoder 107 that converts analog video 110 to a digital format, such as YCbCr signals or YUV signals. An exemplary video decoder 107 is the Conexant CX25840 chip. It should be appreciated that video game consoles are used worldwide and therefore are designed to cater to different video standards, such as phase alternation line (PAL), National Television System Committee (NTSC), and séquentiel couleur avec mémoire (SÉCAM). In one embodiment, video decoder 107 can decode these video standards automatically.

Audio and video processing circuitry 106 can additionally include audio decoder 162 (e.g., Asahi Kasei Microsystem's AKM5357 chip, which is a 24-bit 2-channel A/D converter) that converts analog audio 154 to a digital format. For example, audio decoder 162 can digitize analog audio 154 received from a video game console into a sixteen bit or higher stereo serial data bit stream. In the embodiment of FIG. 2, audio decoder 162 outputs the digitized audio to video decoder 107 for packing with the digitized video by time multiplexing the digitized audio into the digitized video stream. Audio and video processing circuitry 106 then outputs the combined audio and video data to computer 104 through device controller 108. It should be appreciated that audio and video data conversion device 150 can adapt to a variety of interfaces, such as Universal Serial Bus (USB), FireWire, Peripheral Component Interconnect (PCI), PCI-X, PCI Express, Cardbus, etc. Depending on the type of interface used, device controller 108 can include any suitable controller for communicating with computer 104. Exemplary device controller 108 includes a USB controller (e.g., Cypress's CY7C68013A chip), a FireWire controller, a PCI controller, a PCI-X controller, a PCI Express controller, etc. As shown in FIG. 2, audio and video data conversion device 150 is separate from or external to the video game console and computer 104. However, depending on the type of interface used, audio and video data conversion device 150 can also be integrated within computer 104. Furthermore, it should be appreciated that video decoder 107, audio decoder 162, and device controller 108 can be integrated into a single chip.
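The time multiplexing described above can be illustrated with a short sketch. This is a simplified model, assuming one audio word is inserted after every fixed number of video words; the actual packing format used by video decoder 107 is hardware-specific, and the function name here is hypothetical:

```python
def multiplex(video_words, audio_words, interval):
    """Illustrative time multiplexing: after every `interval` video
    words, insert the next pending audio word into the combined stream.
    Each output entry is tagged "V" (video) or "A" (audio)."""
    out = []
    audio = iter(audio_words)
    for i, v in enumerate(video_words, 1):
        out.append(("V", v))
        if i % interval == 0:
            a = next(audio, None)
            if a is not None:
                out.append(("A", a))
    return out

combined = multiplex([1, 2, 3, 4], [9, 8], 2)
```

On the computer side, the tagged stream would be demultiplexed back into separate audio and video paths before rendering.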

FIG. 3 is an alternative embodiment to the audio and video data conversion device of FIG. 2. In addition to audio and video processing circuitry 106 and device controller 108, audio and video data conversion device 150 of FIG. 3 additionally includes television (TV) tuner 111, light-emitting diode (LED) 109, and non-volatile storage chip 112. TV tuner 111 allows computer 104 to additionally receive television signals by converting the television signals into a video stream and outputting the video stream to video decoder 107.

It should be appreciated that computer 104 can use device controller 108 to control operations of the various sub-modules (e.g., video decoder 107, audio decoder 162, LED 109, etc.) of audio and video data conversion device 150. Computer 104 can send commands to device controller 108 and, in turn, the device controller extracts the commands for processing, execution, and control of the sub-modules. In one embodiment, device controller 108 may store the commands from computer 104 to non-volatile storage chip 112 (e.g., Electrically-Erasable Programmable Read-Only Memory (EEPROM)) for subsequent retrieval and execution.

FIG. 4 is a flowchart diagram of a high level overview for displaying video from a video game console, in accordance with one embodiment of the present invention. Starting in operation 113, a computer receives video from an audio and video data conversion device. The video is fed to the audio and video conversion device by a video game console. It should be appreciated that in another embodiment, the computer can additionally receive audio from the audio and video data conversion device, where the audio is fed to the audio and video conversion device by the video game console. Accordingly, in this embodiment, the audio and video data conversion device can simultaneously feed both the video and audio from the video game console to the computer. As will be explained in more detail below, embodiments of the present invention can keep the latency between audio and video effects to a minimum, thereby preserving a real-time gaming effect.

After the computer receives the video, the computer outputs the video in operation 114 to a display panel that is in communication with the computer for display. As will be explained in more detail below, before the computer outputs the video, the computer can adjust properties of the video and store the video onto a memory of the computer.

B. Audio Latency Reduction

The embodiments described herein provide methods and systems for reducing audio latency when executing program instructions for processing audio data. In one embodiment, an amount of audio data stored in an audio buffer is determined, and the amount is compared with a top threshold value and a bottom threshold value. It should be noted that the terms “audio data,” “audio,” and “audio signal” may be used interchangeably. As will be explained in more detail below, an audio data feed to an audio renderer is adjusted incrementally such that the amount is between the top threshold value and the bottom threshold value. By keeping the amount of audio data stored in the audio buffer between the top threshold value and the bottom threshold value, audio latency is reduced or altogether eliminated.

FIG. 5 is a simplified block diagram of a system for rendering analog audio signals, in accordance with one embodiment of the present invention. As shown in FIG. 5, system 140 includes operating system 152, audio and video data conversion device 150, audio encoder 170, and speaker 172. Operating system 152 is in communication with audio and video data conversion device 150 and audio encoder 170. Operating system 152 is the system software responsible for the control and management of hardware and basic system operations, as well as running application software. Exemplary operating system 152 includes Microsoft Windows, MS-DOS, UNIX, Linux, the Macintosh Operating System, etc. Included within operating system 152 are streaming driver 166 and audio renderer 168. It should be appreciated that streaming driver 166 may include any suitable driver that supports the processing of streamed data for multimedia devices such as sound cards, TV tuner cards, video graphics cards, etc. Audio renderer 168 includes any suitable renderer that renders audio data. An exemplary audio renderer 168 is the Microsoft DirectSound Audio Renderer that filters and renders audio data.

System 140 may additionally include audio and video data conversion device 150. As discussed above, audio and video data conversion device 150 may be used to connect the video game console to the computer through Universal Serial Bus (USB) such that video and audio from the video game console may be viewed, listened to, and captured on the computer. It should be appreciated that audio and video data conversion device 150 is merely an example of one type of connection between a video game console and computer. Other types of connections may include audio cables that connect the video game console to a sound card of the computer, video cables that connect the video game console to a video card of the computer, etc. System 140 additionally includes audio encoder 170, which converts digital audio data to analog audio data and renders the analog audio data to speaker 172.

As shown in FIG. 5, audio decoder 162 receives analog audio signal 154 and converts the analog audio signal to digitized audio data 156 at an audio capture clock frequency (e.g., 48 KHz, 48.05 KHz, etc.) for output to USB controller 164. USB controller 164 receives digitized audio data 156 and transfers the streaming, digitized audio data to streaming driver 166 of operating system 152 via USB bus 158. Subsequently, streaming driver 166 processes the audio data and outputs digitized audio data 160 to audio renderer 168 for rendering. As a result, audio renderer 168 sends a buffered, digital audio data to audio encoder 170 at an audio rendering clock frequency (e.g., 48.00 KHz). Audio encoder 170 then converts the digital audio data to analog audio data for rendering on speaker 172.

FIG. 6 is a more detailed block diagram of the operating system shown in FIG. 5, in accordance with one embodiment of the present invention. As shown in FIG. 6, operating system 152 includes streaming driver 166 and audio renderer 168. After streaming driver 166 passes digitized audio data 160 to audio renderer 168, the audio renderer temporarily stores the digitized audio data in audio buffer 204 for rendering, which allows the audio renderer and an audio encoder that process the audio data at different speeds to operate without being delayed by one another.

With reference to audio buffer 204, FIG. 6 additionally shows fullness 208 that indicates the amount of audio data stored in the audio buffer. For example, a large fullness 208 value indicates a large amount of audio data stored in audio buffer 204. It should be noted that the amount of audio data stored in audio buffer 204 has a direct correlation with the delay of audio signal processing. For instance, the larger the fullness 208, the longer the delay of processing the audio signals. Furthermore, as will be explained in more detail below, embodiments of the invention use top threshold value 206 and bottom threshold value 210 for reducing audio latency by adjusting incrementally the audio data feed to audio renderer 168 such that the amount of audio data (i.e., fullness 208) stored in audio buffer 204 is between the top threshold value and the bottom threshold value.

Further, since an embodiment of the invention adjusts the audio data feed to audio renderer 168, embodiments of the invention may be included in streaming driver 166. For example, in one embodiment, streaming driver 166 may additionally include program instructions 212 for adjusting incrementally the audio data feed from the streaming driver to audio renderer 168. Specifically, as will be explained in more detail below, program instructions 212 included in streaming driver 166 may make the adjustments by decreasing or increasing the audio data feed to audio renderer 168 such that fullness 208 is between top threshold value 206 and bottom threshold value 210.

FIG. 7 is a flowchart diagram of a high level overview of a method for reducing audio latency when executing program instructions for processing audio data, in accordance with one embodiment of the present invention. Starting in operation 302, an amount of audio data stored in an audio buffer is first determined. Thereafter, in operation 304, the amount of audio data stored in the audio buffer is compared with a top threshold value and a bottom threshold value. The audio data feed is then adjusted incrementally in operation 306 such that the amount of stored audio data is between the top threshold value and the bottom threshold value.
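The three operations of FIG. 7 can be sketched as a single control step. This is a minimal illustration; the function and parameter names are hypothetical, not taken from the patent:

```python
def adjust_feed(fullness, top, bottom, feed_samples, increment):
    """One control step: compare the audio buffer fullness against the
    top and bottom thresholds (operation 304) and nudge the number of
    samples fed to the audio renderer (operation 306)."""
    if fullness > top:
        # Buffer filling too fast: feed fewer samples per block.
        return feed_samples - increment
    if fullness < bottom:
        # Buffer draining too fast: feed more samples per block.
        return feed_samples + increment
    # Fullness is within the band: leave the feed unchanged.
    return feed_samples
```

Repeating this step for each block of audio data keeps the buffer fullness between the two thresholds, which is what bounds the latency.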

FIG. 8 is a flowchart diagram of a more detailed method for reducing audio latency when executing program instructions for processing audio data, in accordance with one embodiment of the present invention. Starting with operations 402 and 404, a top threshold value and a bottom threshold value are provided. Top threshold value may include any suitable value, and bottom threshold value may include any suitable value that is less than the top threshold value. In one exemplary embodiment, top threshold value may be calculated by:

Top_threshold = Freq_audio × (Max_delay_time / 1000)    (1.0)
With regard to Equation (1.0), maximum delay time (i.e., Max_delay_time) is a time limit used by embodiments of the invention to assure that audio latency will be within the maximum delay time during rendering by the audio renderer. An exemplary maximum delay time is 50 ms. Freq_audio is the standard audio frequency, which may be different from the audio capture clock frequency and the audio rendering clock frequency discussed above. Exemplary standard audio frequencies include 48 KHz, 44.1 KHz, 32 KHz, etc. Accordingly, in one embodiment, a top threshold value may be calculated if the maximum delay time and the standard audio frequency are provided. For example, assuming maximum delay time=50 ms and standard audio frequency=48 KHz, then the top threshold is (50 ms/1000)*48000, which equals 2400 samples.

Bottom threshold value may include any suitable value that is less than top threshold value. In one embodiment, bottom threshold value may be derived from top threshold value. For instance, bottom threshold value may be calculated by:

Bottom_threshold = Top_threshold / 3    (1.1)
Accordingly, with reference to the top threshold value of 2400 samples discussed above, the bottom threshold value is simply 2400 samples/3, which equals 800 samples.
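Equations (1.0) and (1.1) and the worked examples above can be reproduced directly. A minimal sketch, assuming sample counts are rounded down to whole samples:

```python
def top_threshold(freq_audio_hz, max_delay_ms):
    # Equation (1.0): the maximum number of samples the audio buffer
    # may hold without exceeding the allowed audio latency.
    return int(freq_audio_hz * (max_delay_ms / 1000))

def bottom_threshold(top):
    # Equation (1.1): bottom threshold is one third of the top threshold.
    return top // 3

top = top_threshold(48000, 50)   # 2400 samples for 50 ms at 48 KHz
bottom = bottom_threshold(top)   # 800 samples
```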

Still referring to FIG. 8, the amount of audio data stored in the audio buffer is then determined in operation 406. The amount of audio data may be determined by receiving the fullness value directly from audio renderer. Thereafter, the amount of audio data is compared with the top threshold value and the bottom threshold value in operation 408. As shown in operation 410, the top threshold value is compared with the amount of audio data stored in the audio buffer to determine whether the stored amount is greater than the top threshold value. If the amount of data stored in audio buffer is greater than top threshold value, then the audio data feed from a streaming driver is decreased by an incremental amount in operation 412. Specifically, the streaming driver first receives a sample of the audio data and reduces the sample of the received audio data by the incremental amount through interpolation. It should be appreciated that any suitable interpolation techniques may be applied. Exemplary interpolation techniques include linear interpolation and non-linear interpolation. Additionally, interpolation may simply include taking the floor or ceiling of the audio data.

Subsequently, the streaming driver outputs the reduced sample of audio data to the audio renderer. For example, if the incremental amount is specified as two audio data samples in each 1000 samples, then for every 1000 samples received by streaming driver, streaming driver reduces two samples from the 1000 received samples, and interpolates the 1000 received samples to generate 998 samples. Streaming driver then outputs the 998 samples of audio data to the audio renderer. If the amount of audio data stored in the audio buffer is still greater than the top threshold value after the interpolation, then the audio data is still fed to the audio renderer faster than the speed at which the audio renderer can render the audio data. Accordingly, the streaming driver will further reduce the number of audio data samples to 996. The reduction will repeat for subsequent samples of the audio data until the amount of audio data buffered in audio renderer is less than or equal to the top threshold value.
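The reduction (or increase) by interpolation can be sketched with a simple linear resampler. This is only one possible technique; the patent permits any suitable interpolation, and the helper name here is hypothetical:

```python
def resample_linear(samples, out_len):
    """Stretch or shrink a block of samples to `out_len` samples by
    mapping each output index onto the input timeline and linearly
    interpolating between the two neighbouring input samples."""
    n = len(samples)
    if out_len == n:
        return list(samples)
    out = []
    for i in range(out_len):
        pos = i * (n - 1) / (out_len - 1)
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

block = [float(s) for s in range(1000)]
reduced = resample_linear(block, 998)   # 1000 samples in, 998 samples out
```

The same helper also covers the increase path described below (e.g., 1000 samples in, 1002 samples out) by passing a larger `out_len`.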

On the other hand, as shown in FIG. 8, if the amount of audio data stored in audio buffer is not greater than top threshold value, then another comparison may be conducted in operation 411 to determine whether the amount of audio data is less than the bottom threshold value. If the amount of audio data is not less than the bottom threshold value, then the amount of audio data is between the top threshold value and the bottom threshold value, and no further adjustments are necessary. However, if the amount of audio data is less than the bottom threshold value, then the audio data feed from the streaming driver is increased by an incremental amount in operation 414. Specifically, the streaming driver first receives a sample of the audio data and increases the sample of the audio data by the incremental amount through interpolation. Subsequently, the streaming driver outputs the increased sample of the audio data to the audio renderer. For example, if the incremental amount is again specified as two audio data samples in each 1000 samples, then for every 1000 samples received by streaming driver, the streaming driver adds two samples to the 1000 received samples through interpolation to generate 1002 samples. Streaming driver then outputs the 1002 samples of audio data to the audio renderer. If the amount of audio data stored in the audio buffer is still less than the bottom threshold value after the interpolation, then the audio data is still fed to audio renderer slower than the speed at which the audio renderer can render the audio data. Accordingly, the streaming driver will further increase the number of audio data samples to 1004. The increase will repeat for subsequent samples of the audio data until the amount of audio data stored in the audio renderer is greater than or equal to the bottom threshold value.

As discussed above, the incremental amount is the number of samples that is adjusted each time the amount of audio data stored in the audio buffer is either greater than the top threshold value or less than the bottom threshold value. The incremental amount may include any suitable value. For example, in one embodiment, the incremental amount can be calculated by:

Incremental_amount = Original_sample_size × (Freq_2 / Freq_1 − 1)    (1.2)
Referring to Equation (1.2), original sample size is the number of audio samples received by streaming driver. As discussed above, audio rendering clock frequency (i.e., Freq2) is the frequency at which an audio renderer sends audio data to an audio encoder, and audio capture clock frequency (i.e., Freq1) is the frequency at which an audio decoder converts the analog audio signal to digitized audio data. For example, assuming that the standard audio frequency (i.e., freq_audio) is 48 KHz and the amount of audio data stored in the audio buffer increases from 0 to 2400 samples (50 ms delay) during 24 seconds, which is about a one second delay in eight minutes ((8*60/24)*50 ms=1000 ms=1 s), then for each second the amount of audio data increases by 100 samples, and Freq2/Freq1=(48000+100)/48000=481/480. If original sample size is 481, then incremental amount=481*(481/480−1)=1. Thus, one sample is reduced for every 481 samples and, as a result, the reduced audio data is fed to the audio renderer at the same speed as the audio renderer renders the audio data.
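Equation (1.2) and the worked example above can be checked numerically. A sketch, assuming the result is rounded to the nearest whole sample:

```python
def incremental_amount(original_sample_size, freq_render, freq_capture):
    # Equation (1.2): samples to add or drop per block so that the
    # adjusted feed matches the renderer's consumption rate.
    # freq_render is Freq_2 (audio rendering clock frequency) and
    # freq_capture is Freq_1 (audio capture clock frequency).
    return round(original_sample_size * (freq_render / freq_capture - 1))

# Worked example from the text: a drift of 100 samples per second at
# 48 KHz gives Freq_2/Freq_1 = 48100/48000 = 481/480, so one sample is
# adjusted for every 481 received samples.
amount = incremental_amount(481, 48100, 48000)
```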

FIG. 9 is a flowchart diagram of a detailed method for implementing an embodiment of the present invention in Microsoft Windows. Starting in operation 502, the streaming driver creates an application program interface (API) to receive audio buffer fullness information from an application that reads the buffer fullness information from the DirectSound Audio Renderer. Specifically, to create the API, the Microsoft Stream Class minidriver defines the following property set Globally Unique Identifier (GUID) and implements the property set on an IKSPropertySet interface. The property set allows applications to send commands to the driver through the IKSPropertySet interface with GUID PROPSETID_ADAPTEC_PROPERTIES, which is defined as {0x2d2c4cd1, 0xd8c7, 0x45ae, {0xb5, 0x4a, 0xee, 0x7c, 0x66, 0xa, 0x82, 0x54} }. Here, the property set GUID PROPSETID_ADAPTEC_PROPERTIES implements settings related to specific properties. For example, the following Table A summarizes an exemplary Property ID that may be implemented in a property set and also shows the parameters to be passed to the Set and Get calls.

TABLE A
Property ID                     Command Code    Set/Get    pPropData
ADPT_AUDIO_RENDERER_FULLNESS    0x01            set        byte

As shown in FIG. 9, after the API is created, the API gets the audio buffer fullness information from the DirectSound Audio Renderer in operation 504 through an IAMAudioRendererStats::GetStatParam method. Subsequently, operation 506 shows that a comparison is made to determine whether the fullness is greater than a top threshold value or less than a bottom threshold value. If the fullness is greater than the top threshold value or less than the bottom threshold value, then the application sends the fullness information to the streaming driver in operation 508 through the IKSPropertySet interface with property set GUID (PROPSETID_ADAPTEC_PROPERTIES) and Property ID (ADPT_AUDIO_RENDERER_FULLNESS). Else, if the fullness is between the top threshold value and the bottom threshold value, then the fullness information is not sent to the streaming driver.

FIG. 9 further shows that after operation 508, the streaming driver incrementally adjusts the audio data feed in operation 510. As discussed above, the audio data feed to the audio renderer is increased or decreased incrementally through interpolation. In other words, the streaming driver shrinks or expands the audio data feed through interpolation, according to whether the fullness is greater than the top threshold value or less than the bottom threshold value, before passing the audio data to the audio renderer. The following Table B is an exemplary embodiment of program instructions for incrementally adjusting the audio data feed to reduce audio latency.

TABLE B
#include <math.h>  /* floor(), ceil() */

#define ORIGINAL_SAMPLE_SIZE 1000
#define ADJUSTED_SIZE 2
#define BOTTOM_THRESHOLD 200
#define TOP_THRESHOLD 400

WORD wOriginalAudioSamples[ORIGINAL_SAMPLE_SIZE];
WORD wAdjustedAudioSamples[ORIGINAL_SAMPLE_SIZE + ADJUSTED_SIZE];
WORD wFullness;
WORD wAdjustedSampleSize = ORIGINAL_SAMPLE_SIZE;

WORD AdjustAudioSamples(void)
{
    WORD I, wFloor, wCeiling;
    DOUBLE dDistance;

    /* Shrink or grow the output size according to the buffer fullness. */
    if (wFullness > TOP_THRESHOLD)
    {
        wAdjustedSampleSize -= ADJUSTED_SIZE;
    }
    else if (wFullness < BOTTOM_THRESHOLD)
    {
        wAdjustedSampleSize += ADJUSTED_SIZE;
    }

    /* Clamp the adjustment to one step around the original size. */
    if (wAdjustedSampleSize > ORIGINAL_SAMPLE_SIZE + ADJUSTED_SIZE)
    {
        wAdjustedSampleSize = ORIGINAL_SAMPLE_SIZE + ADJUSTED_SIZE;
    }
    else if (wAdjustedSampleSize < ORIGINAL_SAMPLE_SIZE - ADJUSTED_SIZE)
    {
        wAdjustedSampleSize = ORIGINAL_SAMPLE_SIZE - ADJUSTED_SIZE;
    }

    if (wAdjustedSampleSize == ORIGINAL_SAMPLE_SIZE)
    {
        /* No adjustment needed: copy the samples through unchanged. */
        for (I = 0; I < wAdjustedSampleSize; I++)
        {
            wAdjustedAudioSamples[I] = wOriginalAudioSamples[I];
        }
    }
    else
    {
        /* Map each output index onto the original sample range and
         * linearly interpolate between the two nearest input samples. */
        for (I = 0; I < wAdjustedSampleSize; I++)
        {
            dDistance = 1.0 * I * (ORIGINAL_SAMPLE_SIZE - 1) /
                        (wAdjustedSampleSize - 1);
            wFloor = (WORD)floor(dDistance);
            wCeiling = (WORD)ceil(dDistance);
            wAdjustedAudioSamples[I] = (WORD)(wOriginalAudioSamples[wFloor] *
                (1 - (dDistance - wFloor)) + wOriginalAudioSamples[wCeiling] *
                (dDistance - wFloor));
        }
    }
    return wAdjustedSampleSize;
}

The algorithms included in Table B are merely exemplary, and many different algorithms may be used to reduce or increase samples of audio data.

It should be appreciated that the above-described functionality for reducing audio latency may be incorporated in a program application stored in memory (e.g., random access memory (RAM), hard disk drives, floppy disks, magnetic tapes, optical discs, etc.) and executed by a processor. For example, the functionality may be provided through the streaming driver, or the like, having program instructions to perform the above-described functionality. In one embodiment, the streaming driver includes program instructions for determining an amount of the audio data stored in an audio buffer and program instructions for comparing the amount with a top threshold value and a bottom threshold value. Further, program instructions are included for incrementally adjusting an audio data feed to an audio renderer such that the amount is between the top threshold value and the bottom threshold value.

In sum, the above described invention provides a method and system for reducing audio latency when executing program instructions for processing audio data. Essentially, to reduce latency, the streaming driver feeds audio data to an audio renderer slower than the speed at which the audio renderer can render the audio data if the amount of audio data stored in the audio buffer is greater than a top threshold value. On the other hand, the streaming driver may be additionally configured to feed audio data to the audio renderer faster than the speed at which the audio renderer can render the audio data if the amount of audio data is less than a bottom threshold value. Thus, by keeping the amount of audio data stored in the audio buffer between the top threshold value and the bottom threshold value, audio latency is reduced or altogether eliminated such that audio is synchronized with video.

C. Graphical User Interface

A video game console display application can be executed on a computer to process, control, and display video and/or audio from the video game console, and thereby facilitate real-time video game playing. It should be appreciated that the video game console display application can be integrated or combined with other software modules, such as the audio renderer discussed above. The video game console display application can optimize video display for video gaming to provide rich audio and visual effects. For example, in one embodiment, the video game console display application can adjust properties of the video before output to a display panel. Exemplary video properties include brightness, sharpness, contrast, shape, color, resolution, refresh rate (or vertical scan rate), size, etc. For example, the size of the display area for video can be enlarged or reduced. Further, display panels used with computers typically have higher refresh rates than conventional televisions. Accordingly, in another example, the video game console display application can increase the refresh rate of the video received from the video game console, thereby reducing eye strain. In another embodiment, the video game console display application can process the audio data to reduce the audio latency as discussed above to provide a real-time gaming experience. In still another embodiment, the video game console display application can include controls to allow capture and storage of video and/or audio from the video game console. For example, video displays of game configuration settings, best score results, locations of hidden treasures in a video game, etc. can be stored. The recorded video and/or audio may be stored in a memory (e.g., RAM, hard disk drives, floppy disks, magnetic tapes, optical disks, etc.) of the computer.

FIG. 10 is a schematic diagram of a main graphical user interface (GUI) associated with the video game console display application, in accordance with one embodiment of the present invention. After the video game console display application is launched on a computer, main window region 902 is displayed on a display panel coupled to the computer. Main window region 902 includes a number of GUI components 904-909 for providing access to various audio and video functionalities. As shown in FIG. 10, main window region 902 includes video game GUI component 904, TV GUI component 905, digital versatile disc (DVD)/video CD (VCD) GUI component 906, videos GUI component 907, photos GUI component 908, and music GUI component 909.

Video game GUI component 904 provides access to the video and/or audio received from the video game console. In response to video game GUI component 904 being selected (e.g., by clicking with a mouse, input through a keyboard, or input through any suitable input devices), another region is generated that displays the video received from the video game console. TV GUI component 905 provides access to video received from a TV tuner, and a selection of the TV GUI component generates another region that displays the video from the TV tuner. In addition to video and/or audio received from the audio and video data conversion device, main window region 902 allows additional access to video and/or audio from additional sources, such as video players (e.g., DVD/VCD players), memory of the computer, external hard drives, etc. For example, DVD/VCD GUI component 906 allows access to video and/or audio received from DVD/VCD players, videos GUI component 907 allows access to video stored in a memory of the computer, photos GUI component 908 allows access to photos stored in the memory of the computer, and music GUI component 909 allows access to audio files stored in the memory of the computer.

Any number of suitable layouts can be designed for the region and GUI component layouts illustrated above, as FIG. 10 does not represent all possible layout options available. The displayable appearance of the regions and GUI components can be defined by any suitable geometric shape (e.g., rectangle, square, circle, triangle, etc.), alphanumeric character (e.g., A, v, t, Q, 1, 9, 10, etc.), symbol (e.g., $, *, @, α, ¤, ♥, etc.), shading, pattern (e.g., solid, hatch, stripes, dots, etc.), and color. Furthermore, for example, TV GUI component 905 of FIG. 10, or any other region or GUI component, may be omitted or dynamically assigned. It should also be appreciated that the regions and GUI components can be fixed or customizable. In addition, the computer may have a fixed set of layouts, utilize a defined protocol or language to define a layout, or an external structure can be reported to the computer that defines a layout. Finally, selecting a region or GUI component of the GUI triggers code to cause the functionality described herein.

In summary, the above described embodiments provide methods, systems, and GUIs for reducing audio latency and for displaying video from a video game console. Essentially, the audio and video data conversion device connects a video game console to a computer and enables users to play video games using the resources (e.g., display panel, speakers, etc.) available to the computer with real-time gaming effect, similar to when the video game console is connected to a regular television set. The embodiments described above can keep the latency between the user's input actions on the video game console and the resulting audio/video effects to less than 100 milliseconds, which is imperceptible to humans.

With the above embodiments in mind, it should be understood that the invention may employ various computer-implemented operations involving data stored in computer systems. These operations are those requiring physical manipulation of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. Further, the manipulations performed are often referred to in terms, such as producing, identifying, determining, or comparing.

The invention can also be embodied as computer readable code on a computer readable medium. The computer readable medium is any data storage device that can store data which can thereafter be read by a computer system. The computer readable medium also includes an electromagnetic carrier wave in which the computer code is embodied. Examples of the computer readable medium include hard drives, network attached storage (NAS), read-only memory, random-access memory, CD-ROMs, CD-Rs, CD-RWs, magnetic tapes, and other optical and non-optical data storage devices. The computer readable medium can also be distributed over network-coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.

Any of the operations described herein that form part of the invention are useful machine operations. The invention also relates to a device or an apparatus for performing these operations. The apparatus may be specially constructed for the required purposes, or it may be a general purpose computer selectively activated or configured by a computer program stored in the computer. In particular, various general purpose machines may be used with computer programs written in accordance with the teachings herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations.

The above described invention may be practiced with other computer system configurations including hand-held devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers and the like. Although the foregoing invention has been described in some detail for purposes of clarity of understanding, it will be apparent that certain changes and modifications may be practiced within the scope of the appended claims. Accordingly, the present embodiments are to be considered as illustrative and not restrictive, and the invention is not to be limited to the details given herein, but may be modified within the scope and equivalents of the appended claims. In the claims, elements and/or steps do not imply any particular order of operation, unless explicitly stated in the claims.

Referenced by
Citing Patent    Filing date     Publication date    Applicant                    Title
US8142282        Nov 15, 2006    Mar 27, 2012        Microsoft Corporation        Console integrated downloadable game service
US8376844 *      Jun 14, 2007    Feb 19, 2013        Ambx Uk Limited              Game enhancer
US20090137319 *  Nov 12, 2008    May 28, 2009        Mstar Semiconductor, Inc.    Command Distribution Method, and Multimedia Apparatus and System Using the Same for Playing Games
US20090280896 *  Jun 14, 2007    Nov 12, 2009        Ambx Uk Limited              Game enhancer
Classifications
U.S. Classification: 463/31
International Classification: G10L19/00, H04N5/93, A63F13/00
Cooperative Classification: H04N21/4341, H04N21/23406, H04N21/44004, G06F3/16, H04J3/0632, H04N21/4307, H04N21/2368, H04N21/8106, G11B2020/1074
European Classification: H04N21/43S2, H04N21/44B, H04N21/81A, H04N21/2368, H04N21/234B, H04N21/434A, H04J3/06B6, G06F3/16
Legal Events
Date         Code  Event       Description
Aug 4, 2005  AS    Assignment  Owner name: ADAPTEC, INC., CALIFORNIA
                               Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:NG, BUAY HOCK;MGARIAF, NASER;ONG, ANN TIONG;AND OTHERS;REEL/FRAME:016869/0654;SIGNING DATES FROM 20050727 TO 20050804
Oct 3, 2005  AS    Assignment  Owner name: ADAPTEC, INC., CALIFORNIA
                               Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CHIH, SAMUEL C.M.;REEL/FRAME:017042/0614
                               Effective date: 20050808