WO2004023841A1 - Smart speakers

Smart speakers

Info

Publication number
WO2004023841A1
Authority
WO
WIPO (PCT)
Prior art keywords
audio
presenting
location
presenting device
content
Prior art date
2002-09-09
Application number
PCT/IB2003/003369
Other languages
French (fr)
Inventor
Paulus C. Neervoort
Robert Kortenoeven
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2002-09-09
Filing date
2003-08-05
Publication date
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Priority to CN03821324.9A priority Critical patent/CN1682567B/en
Priority to AT03793931T priority patent/ATE554606T1/en
Priority to US10/527,117 priority patent/US7379552B2/en
Priority to EP03793931A priority patent/EP1540988B1/en
Priority to JP2004533698A priority patent/JP4643987B2/en
Priority to AU2003250404A priority patent/AU2003250404A1/en
Publication of WO2004023841A1 publication Critical patent/WO2004023841A1/en

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 5/00 - Stereophonic arrangements
    • H04R 5/02 - Spatial or constructional arrangements of loudspeakers
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04S - STEREOPHONIC SYSTEMS
    • H04S 7/00 - Indicating arrangements; Control arrangements, e.g. balance control
    • H04S 7/30 - Control circuits for electronic adaptation of the sound field
    • H04S 7/302 - Electronic adaptation of stereophonic sound system to listener position or orientation
    • H04S 7/303 - Tracking of listener position or orientation
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 2205/00 - Details of stereophonic arrangements covered by H04R 5/00 but not provided for in any of its subgroups
    • H04R 2205/024 - Positioning of loudspeaker enclosures for spatial sound reproduction


Abstract

This invention relates to a method (and corresponding system) for providing location-aware media information and more specifically a method for providing location-aware audio content by an audio-presenting device capable of presenting audio content. On the audio source (401), comprising processing means (301), one or more sensors (404) may be positioned in order to locate the position(s) of one or more audio-presenting devices attached, close, or distant to said audio source. The sensors are used, by receiving signal(s), to determine the location of the available audio-presenting devices. The audio-presenting devices' (402, 403) location relative to the user's working position - in front of the display device (406) - may be estimated by the audio source (401), which thereby provides information items to the audio-presenting devices (402, 403) by the method according to the present invention so as to provide the desired sound signals.

Description

Smart speakers
Field of the invention
This invention relates to a method for providing location-aware media information and more specifically a method for providing location-aware audio content by an audio-presenting device capable of presenting audio content. The present invention also relates to a system for performing the method and a computer program for performing the method.
Background of the invention
DE 196 46 055 discloses an audio playback system comprising a reproducing device, a speaker system, and a signal-processing unit for improving the spatial experience for a listener by applying psycho-acoustic signal processing. Physical placement of a speaker system is assisted by processing the presented audio to e.g. compensate for the speed of sound in air. The audio output from a source is processed with effects to trick the listener's ears into believing that the presented audio is coming from a direction where no speaker is actually placed.
This type of audio processing to e.g. virtually expand the size of the room and/or virtually displace sounds is commonly used in conjunction with consumer-related media productions where the size of the room and/or the number of surrounding speakers are limited. The processed and imaged/mirrored audio does not necessarily reflect the actual placement of musical instruments as they were recorded, but mostly introduces a feel of another location, e.g. a concert hall, a church, an outdoor scene, etc. To obtain information on the actual placement of the physically available speakers in a system it may, however, be necessary to provide a calibration procedure prior to processing the sound source to compensate for room characteristics, etc. This calibration may comprise an impulse response for each of the available speakers, where the impulse response may comprise speaker-independent characteristics such as group delay and frequency response, etc.
In a special audio-optimized environment, e.g. a soundproofed chamber, such a method may be sufficient to obtain an acceptable impulse response for rendering an audio signal as desired. However, in a real environment, such as a living room or a kitchen, etc., it is very difficult to obtain authentic impulse responses that sound trustworthy to the listener, due to room reverberation, background noise, placement of probe microphones, etc. during the calibration procedure. To process the audio optimally with respect to audio placement, it may not be necessary to acquire impulse responses for the speaker system. It may, however, be necessary for the processing unit to know the exact placement of speakers and the listener for estimation of acceptable processing schemes.
The human ear tolerates a slight deviation in speaker placement, but it is not possible to convince a listener that a sound is coming from the left speaker, when it is actually being played from e.g. the right speaker. Therefore, to satisfy and convince a listener of a speaker placement, the speaker actually has to be placed relatively near the intended location of the sound.
For this sake, it may be convenient to physically place a speaker on a chosen spot and let this speaker play material that may be appropriate for this location.
For example, if a speaker playing music is placed close to a listener, the listener will perceive a given sound level. If the speaker is placed at a longer distance from the listener, the speaker must deliver more power for the listener to perceive the same sound level as when the speaker is placed closer to him.
An example of the use of a speaker system according to the present invention could be watching a concert on television where an organ is playing on the left and a guitar is playing on the right. Positioning an audio-presenting device on the left would present the sound of the organ, whereas positioning the audio-presenting device on the right would present the sound of the guitar. In a stereo system where a left and a right audio signal are present but only one loudspeaker, placed to the left of a listener, is available, it may be desirable to reproduce only the left signal to avoid spatial confusion of the listener. Likewise, if the loudspeaker is placed in front of the listener, the reproduced audio may comprise an appropriate mix of the left and the right audio channel. This may also be the situation in a surround sound environment, where a number of loudspeakers (typically 4 to 6) are placed around the listener to generate a 3D-like sound image.
The speaker location is essential to e.g. instrument placement and accurate mirroring of acoustic spaces for high-precision sound positioning. Unless e.g. the rear speakers (the speakers positioned behind the listener) in a surround sound setup are placed exactly symmetrically relative to the listener, undesirable effects may become apparent, such as non-uniform sound delay, sound coloration, wave interference, etc. In addition, if the front speakers in a surround sound environment are placed further away from the user than the rear speakers, a front/rear balance control of e.g. an amplifier has to be adjusted to prevent the rear speakers from dominating the sound image. Even so, the sounds coming from the rear speakers still arrive first at the listener by way of the physically shorter distance. This disadvantage is typically disregarded in home theatre arrangements.
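By way of illustration only (this sketch is not part of the original disclosure), the single-loudspeaker stereo example above can be expressed as a location-dependent downmix. The Python sketch below assumes a constant-power pan law and an angle convention of -90 degrees for far left and +90 degrees for far right; the function name and the pan law are choices made for this example, not something the patent prescribes.

```python
import math

def mix_for_speaker(left, right, speaker_angle_deg):
    """Mix the stereo channels for a single speaker placed at speaker_angle_deg
    relative to the listener (0 = straight ahead, -90 = far left, +90 = far right).
    A constant-power pan law is assumed purely for illustration."""
    # Map the angle to a pan position in [0, 1]: 0 = fully left, 1 = fully right.
    pan = (max(-90.0, min(90.0, speaker_angle_deg)) + 90.0) / 180.0
    gain_left = math.cos(pan * math.pi / 2.0)   # only the left channel at -90 degrees
    gain_right = math.sin(pan * math.pi / 2.0)  # only the right channel at +90 degrees
    return [gain_left * l + gain_right * r for l, r in zip(left, right)]

# A speaker far to the left reproduces only the left channel, while a speaker
# straight ahead of the listener reproduces an equal-power mix of both channels.
left_channel = [1.0, 0.5, -0.25]
right_channel = [0.0, -0.5, 0.75]
print(mix_for_speaker(left_channel, right_channel, -90.0))
print(mix_for_speaker(left_channel, right_channel, 0.0))
```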
A speaker system according to the present invention provides users with a system that enables them to position speakers in a space relative to the current auditory content without troublesome speaker/amplifier adjustments.
For processing the audio according to the speaker placement, it is necessary for the sound system to identify the loudspeaker location. It may be difficult and sometimes even impossible for a user to enter the exact location of a loudspeaker. Therefore it may be advantageous if the sound system is able to automatically determine the speaker placement prior to signal processing.
With that, the user can add audio-presenting devices without having to enter any software-based set-up program or adjust any system setting. All the user has to do is position the speaker somewhere within the useful area, and the processing unit will determine which auditory signals will be presented through the audio-presenting device.
Object and summary of the invention
It is an object of the invention to solve the above-mentioned problem of speaker placement without user intervention.
This is achieved by a method (and corresponding system) of providing location-aware audio content by an audio-presenting device capable of presenting audio content, the method comprising the steps of obtaining, in a processing unit, at least one location parameter representing the location of the audio-presenting device; processing, in said processing unit, current audio content on the basis of the obtained at least one location parameter in order to obtain a location-aware audio content being relative to the current audio content dependent on the at least one location parameter; and presenting the obtained location-aware audio content by the audio-presenting device.
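Purely as an illustrative reading of these steps (not part of the original disclosure), the claimed sequence can be sketched as a small pipeline in Python. All names, the toy location value, and the gain rule used in the example wiring below are assumptions of this sketch, not the patent's own implementation.

```python
from typing import Callable, Sequence

AudioBlock = Sequence[float]

def provide_location_aware_audio(
    obtain_location: Callable[[], tuple],
    process: Callable[[AudioBlock, tuple], AudioBlock],
    present: Callable[[AudioBlock], None],
    current_audio: AudioBlock,
) -> None:
    """Run the three claimed steps once for one audio-presenting device."""
    location = obtain_location()                              # obtain at least one location parameter
    location_aware_audio = process(current_audio, location)   # process the current content based on it
    present(location_aware_audio)                             # present the obtained content

# Toy wiring: the location is a fixed (x, y, z) coordinate, processing scales the
# block by a distance-dependent gain, and presentation just prints the result.
provide_location_aware_audio(
    obtain_location=lambda: (1.0, 2.0, 0.0),
    process=lambda audio, loc: [s / (1.0 + abs(loc[1])) for s in audio],
    present=print,
    current_audio=[0.5, -0.5, 0.25],
)
```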
It is a further object of the invention to provide a method and system wherein the processing of audio content comprises processing steps considering audio capabilities of the audio-presenting device. This invention provides a user with a system that enables him to position a speaker relative to a current auditory content without having to consider any programming of speaker placement. The system will determine which auditory signals will be presented through the speaker. An audio-presenting device may be a speaker capable of reproducing audible signals, as well as signals inaudible to the human ear. In general, the idea of the present invention covers the automatic transfer of location-aware content from a source, i.e. the content of an audio source, to an audio-presenting device relative to its location.
Said audio source may be a personal computer, a television, a video camera, a game unit, a mobile phone, etc. capable of detecting said location(s) of an audio-presenting device, and capable of subsequently transferring a corresponding content to said audio-presenting device.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 shows an audio-presenting device connected to an audio source in a basic setup,
Fig. 2 shows a method of presenting content with an audio-presenting device,
Fig. 3 illustrates a schematic block diagram of a processing unit in an audio source,
Fig. 4 shows a setup with two audio-presenting devices with location reference to a display device,
Fig. 5 shows another embodiment of the present invention,
Fig. 6 illustrates a schematic block diagram of musical instruments placed in a stereophonic reproduction setup,
Fig. 7 illustrates another schematic block diagram of musical instruments placed in a quadraphonic reproduction setup.
Throughout the drawings, the same reference numerals indicate similar or corresponding features, functions, etc.
DETAILED DESCRIPTION OF EMBODIMENTS
Fig. 1 shows an audio-presenting device, here a speaker unit, denoted by reference numeral (101) with one or more transmitters (102), placed in front of a listener denoted by reference numeral (105). On the audio source (103), one or more sensors, indicated by reference numeral (104), may be positioned in order to locate the position(s) of one or more audio-presenting devices attached, close, or distant to said audio source. The sensors are used, by receiving signal(s) sent from one or more transmitters positioned on or integrated in the audio-presenting device, to determine the location of the audio-presenting device(s). In other words, by means of said sensor(s), the audio source may locate said audio-presenting device(s). Subsequently, the audio source may determine information (dependent on said location) representing audio content (106) which has to be transferred to and presented on said audio-presenting devices.
Fig. 2 shows a method of presenting content with an audio-presenting device.
In step 201, the method in accordance with a preferred embodiment of the invention is started. Variables, flags, buffers, etc., keeping track of locations, content, information item(s), identifying signal(s), etc. corresponding to the status of audio-presenting devices located relative to an audio source and corresponding to the status of said audio source are set to default values.
In step 202, the audio-presenting device may be connected or attached to an audio source. This will typically be a user action, in that the user may desire the audio-presenting device to be in operation.
It may be the case that this step is repeated for more audio-presenting devices. The steps to be followed may then correspondingly apply.
In step 203, at least one transmitter - located on the audio-presenting device - preferably transmits a corresponding signal identifying the device. As discussed in Fig. 1, one or more transmitters may be positioned on or integrated in the audio-presenting device. This or these transmitter(s) may then be used to inform the audio source that said audio-presenting device is connected to it. Said signal may be used to identify the audio-presenting device, its type and characteristics, etc. In step 204, at least one sensor may receive at least one identifying signal.
Said sensor(s) is/are preferably located on the audio source. As discussed in the foregoing step and in Figure 1, the identifying signal(s) is/are transmitted from one or more transmitters located on the audio-presenting device.
In step 205, the audio source may obtain a first location of the audio-presenting device.
In step 206, the audio source may determine, on the basis of the obtained location information, what content part or parts from the audio content have to be processed and subsequently played back on the audio-presenting device. It may be the case that this step is repeated for more audio-presenting devices. Based on one or more identifying signals, the audio source may determine specific X, Y, Z coordinates of the audio-presenting device. Said coordinates may be defined relative to a fixed point on the audio source or e.g. a location in the room, etc. and measured by it by means of the received identifying signal(s).
Said audio content may be electric or acoustic signals, analog, digital, compressed or non-compressed audio, etc. or any combination thereof.
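The patent does not specify how the X, Y, Z coordinates of step 206 are derived from the identifying signal(s). As one hedged illustration (not from the original disclosure), if the signals allow the distances from the device to several sensors to be estimated (for example from signal travel time), a standard linear least-squares multilateration such as the Python sketch below could be used; the sensor layout, the 2D reduction, and the use of NumPy are all assumptions of this example.

```python
import numpy as np

def locate_device(sensor_positions, distances):
    """Estimate the coordinates of an audio-presenting device from its distances
    to several sensors on the audio source (for instance derived from the travel
    time of the identifying signal). Linear least-squares multilateration; works
    in 2D with three sensors as shown, or in 3D with one more non-coplanar sensor."""
    p = np.asarray(sensor_positions, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Subtract the first sphere equation from the others to obtain a linear system.
    A = 2.0 * (p[1:] - p[0])
    b = (d[0] ** 2 - d[1:] ** 2) + np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
    estimate, *_ = np.linalg.lstsq(A, b, rcond=None)
    return estimate

sensors = [(0.0, 0.0), (0.6, 0.0), (0.0, 0.4)]          # sensor positions on the source (metres)
true_position = np.array([2.0, 1.5])                    # where the device really is
measured = [float(np.linalg.norm(true_position - np.array(s))) for s in sensors]
print(locate_device(sensors, measured))                 # approximately [2.0, 1.5]
```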
In step 207, the audio parts from step 206 are processed in order to obtain a location-aware audio content relative to the current audio content dependent on the at least one location parameter.
In step 208, the audio source may transfer context-aware audio content to the audio-presenting device. Said first information item may be transferred and then received by means of a network - as a general solution known from the prior art - or it may be received by means of an optimized communication dedicated to the audio-presenting device.
In step 209, the audio-presenting device may receive and present/reproduce said context-aware audio content. The context-aware audio content (presented on said audio-presenting devices) may further be dependent on what is currently presented on the audio source, as it may be convenient to present a part of what is currently presented on the audio source with e.g. different processing attributes, if any.
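Steps 208 and 209 only require that the processed content reaches the audio-presenting device over a network or a dedicated link. The fragment below is a minimal, purely hypothetical sketch of such a transfer using a UDP datagram of raw 32-bit floats; the address, port, and packet format are inventions of this example, not part of the patent.

```python
import socket
import struct
import threading

ready = threading.Event()

def send_block(samples, device_addr=("127.0.0.1", 5005)):
    """Audio source side (step 208): pack one processed block and send it."""
    payload = struct.pack(f"<{len(samples)}f", *samples)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, device_addr)

def receive_and_present(port=5005, timeout_s=2.0):
    """Audio-presenting device side (step 209): receive a block and 'present' it."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", port))
        sock.settimeout(timeout_s)
        ready.set()                       # tell the source the device is listening
        payload, _ = sock.recvfrom(65536)
        samples = struct.unpack(f"<{len(payload) // 4}f", payload)
        print("presenting", len(samples), "samples")

# Local demo: start the 'device' in a thread, then let the 'source' transmit one block.
receiver = threading.Thread(target=receive_and_present)
receiver.start()
ready.wait()
send_block([0.1, -0.2, 0.3, 0.0])
receiver.join()
```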
Throughout the application - when the wording "presentation", "present" or the like is used - it is understood to mean that content may be reproduced on a corresponding audio-presenting device.
The wording "content", is understood to be audio information typically played back on a personal computer, a television, a video camera, a game unit or a mobile phone, etc. Said information or content may be electric signals, compressed or non-compressed digital signals, etc. or any combination thereof.
Fig. 3 illustrates a schematic block diagram of an embodiment of an audio source (301) comprising one or more microprocessors (302) and/or Digital Signal Processors (306), a storage unit (303), and input/output means (304), all connected via a data bus (305). The processor(s) and/or Digital Signal Processor(s) (306) form the interaction mechanism between the storage unit (303) and the input/output means (304). The input/output means (304) is responsible for communication with the accessible sensor(s), wherein transport of received location parameters, etc. may occur during operation. Location parameters can be uploaded from remote audio-presenting devices via the input/output means (304). This communication between an audio-presenting device and the sensor(s) may take place e.g. by using IrDA, Bluetooth, IEEE 802.11, wireless LAN, etc., but will also be useful in a wired solution. The storage unit (303) stores relevant information like a dedicated computer program or uploaded location parameters for determination of available resources, processing algorithms, etc.
Digital Signal Processors may be programmed for dedicated processing tasks such as decoding, encoding, effect layering, etc. Either a single multi-issue DSP may comprise several processing means, or multiple DSPs can be nested to perform processing tasks, where each DSP is dedicated to fewer processing means than the single multi-issue DSP. The overall processing may also be comprised in a single general-purpose processor comprising software for a multitude of tasks, wherein processes are divided among different processing functions. The use of general-purpose microprocessors, instead of DSPs, is a viable option in some system designs. Although dedicated DSPs are well suited to handle signal-processing tasks in a system, most designs also require a microprocessor for other processing tasks such as memory management, user interaction, relative location estimation, etc. Integrating system functionality into one processor may be the best way to realize several common design objectives such as lowering the system part count, reducing power consumption, minimizing size, and lowering cost. Reducing the processor count to one also means fewer instruction sets and tool suites to be mastered.
Furthermore, the invention relates to a computer-readable medium containing a program for making a processor carry out a method of providing location-aware media content by an audio-presenting device (101) capable of presenting audio content (106), the method comprising the steps of obtaining, in a processing unit (103), at least one location parameter representing the location of the audio-presenting device (101); processing, in said processing unit (103), current audio content on the basis of the obtained at least one location parameter in order to obtain a location-aware audio content being relative to the current audio content dependent on the at least one location parameter; and presenting the obtained location-aware audio content by the audio-presenting device (101).
In this context, a computer-readable medium may be a program storage medium, i.e. both physical computer ROM and RAM, removable and non-removable storage drives, magnetic tape, optical disc, digital versatile disc (DVD), compact disc (CD or CD-ROM), mini-disc, hard disk, floppy disk, smart card, PCMCIA card, or information acquired from data networks, e.g. a local area network (LAN), a wide area network (WAN), or any combination thereof, e.g. the Internet, an intranet, an extranet, etc.
Fig. 4 shows a setup with two audio-presenting devices (402, 403) with location reference to a display device denoted by reference numeral (406), all with one or more transmitters (not shown), placed in front of a listener denoted by reference numeral (405). On the audio source (401), comprising processing means (301), one or more sensors, indicated by reference numeral (404), may be positioned in order to locate the position(s) of one or more audio-presenting devices attached, close, or distant to said audio source. The sensors are used, by receiving signal(s) sent from one or more transmitters positioned on or integrated in the audio-presenting devices, to determine the location of the available audio-presenting devices. The audio-presenting devices' (402, 403) location relative to the user's working position - in front of the display device (406) - may be estimated by the audio source (401), which thereby provides information items to the audio-presenting devices (402, 403) by the method described hereinbefore to provide the desired sound signals accordingly.
The audio source may be supported by surround-sound technologies capable of sending audio information to individual channels, and thereby different audio-presenting devices, to generate a 3D-like sound image. By gathering location placement parameters of the individual audio-presenting devices at different locations, appropriate audio processing may be executed in order to spatially enhance the listening experience.
Correspondingly, the audio-presenting device(s) is/are connectable and/or attachable to the audio source, or may be placed relative to the audio source and connected to it there; furthermore, the audio-presenting device is capable of receiving and presenting content from the audio source.
Another example of an embodiment of the present invention can be seen in Fig. 5, wherein a media content source (501) transmits all available audio content without the above-mentioned processing prior to transmission. In this example, content processing is carried out in the audio-presenting devices (502, 503, 504, 505, 506), a number of devices comprising processing means (not shown), prior to user presentation. Each audio-presenting device comprises means (not shown) for receiving media content transmitted from the content source (501) and means for obtaining location parameters relative to a user (505). The user (505) may wear, or be attached to, location transmitting means (not shown) to inform any audio-presenting devices of the user's position.
Furthermore, each audio-presenting device may comprise processing means as described in the foregoing to process the media content according to the location of the audio-presenting devices relative to the user's position. For example, if the audio-presenting device (503) in front of the user determines that it is located directly in front of the user, the device may determine that it should reproduce the center channel of a 5.1 surround signal. If, for example, the media content is available in stereo only, the front audio-presenting device may determine to reproduce an appropriate mix of the left and the right audio channel, etc.
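Such a per-device decision could look like the following Python sketch (an illustration only, not taken from the patent); the angle thresholds, channel labels, and stereo fallback rule are arbitrary assumptions of this example.

```python
def choose_output(device_azimuth_deg, content_channels):
    """Pick (or mix) the part of the content one device should reproduce, given
    its azimuth relative to the user (0 = directly in front, negative = left)."""
    az = device_azimuth_deg
    if set(content_channels) >= {"L", "R", "C", "Ls", "Rs", "LFE"}:   # 5.1 material
        if abs(az) < 15:
            return {"C": 1.0}                                  # directly in front: centre channel
        if abs(az) > 100:
            return {"Ls": 1.0} if az < 0 else {"Rs": 1.0}      # behind the user: surround channels
        return {"L": 1.0} if az < 0 else {"R": 1.0}            # to the side: front left/right
    # Stereo-only material: a frontal device reproduces a mix of left and right.
    w_right = (max(-90, min(90, az)) + 90) / 180
    return {"L": 1.0 - w_right, "R": w_right}

print(choose_output(0, ["L", "R", "C", "Ls", "Rs", "LFE"]))   # {'C': 1.0}
print(choose_output(5, ["L", "R"]))                            # roughly equal L/R mix
```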
Furthermore, the processing of media content may take the capabilities of the available audio-presenting devices into account. For example, if a loudspeaker is only capable of reproducing signals in the frequency range of 10 - 200 Hz, but the media content comprises signals outside that range which should also be reproduced, this limitation of the audio-presenting device may be considered in the processing steps. The lack of reproduction possibility may be compensated for in the processing steps by e.g. processing the media content for other audio-presenting devices accordingly, if any.
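As a hedged illustration of such capability-aware processing (again not part of the original disclosure), the sketch below splits one block of audio across devices with limited frequency ranges using a crude FFT mask; the device names, their ranges, and the use of NumPy are assumptions of this example.

```python
import numpy as np

def split_by_capability(block, sample_rate, devices):
    """Distribute one block of mono audio over devices with limited frequency
    ranges, e.g. a device that can only reproduce 10-200 Hz. Each device receives
    the part of the spectrum inside its own range."""
    spectrum = np.fft.rfft(block)
    freqs = np.fft.rfftfreq(len(block), d=1.0 / sample_rate)
    out = {}
    for name, (lo, hi) in devices.items():
        mask = (freqs >= lo) & (freqs <= hi)                  # keep only in-band bins
        out[name] = np.fft.irfft(spectrum * mask, n=len(block))
    return out

rate = 8000
t = np.arange(1024) / rate
block = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)
parts = split_by_capability(block, rate, {"woofer": (10, 200), "satellite": (200, 4000)})
print({k: round(float(np.max(np.abs(v))), 2) for k, v in parts.items()})
# The woofer carries the 100 Hz component, the satellite the 1 kHz component.
```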
Fig. 6 illustrates a schematic block diagram of musical instruments placed in a stereophonic reproduction setup. The stereo recording comprises a guitar on the left channel (602) and a drum set on the right channel (603). When an audio-presenting device according to the invention is placed at the far right side (603) of the listener (105), the audio device may be configured to play only the sounds coming from the drum set. Placing the audio-presenting device to the far left of the listener (105) may result in presenting only the guitar. If now, for example, the audio-presenting device placed to the far left is located in the same relative direction in relation to the listener, but this time closer to the listener, the audio-presenting device may need to turn down its output power in order to obtain an identical volume level of the sound received by the listener.
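The output-power adjustment mentioned last can be illustrated with a simple inverse-distance gain rule; the free-field point-source model is an assumption of this sketch, not something the patent commits to.

```python
import math

def distance_compensated_gain(distance_m, reference_distance_m=1.0):
    """Gain needed for the listener to receive the same sound level as from a
    speaker at the reference distance, assuming a simple inverse-distance
    (free-field point source) law."""
    return distance_m / reference_distance_m

# A speaker moved from 3 m to 1.5 m from the listener halves its drive level
# (about -6 dB) to keep the volume received by the listener unchanged.
for d in (3.0, 1.5):
    g = distance_compensated_gain(d, reference_distance_m=3.0)
    print(f"{d:.1f} m -> gain {g:.2f} ({20 * math.log10(g):+.1f} dB)")
```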
Fig. 7 illustrates another schematic block diagram of musical instruments placed in a quadraphonic recording setup. Four separate tracks are recorded, comprising guitar (602), drum set (603), piano (701), and a violin (702). To reproduce the same ambience during reproduction as in the recording stage, four audio-presenting devices placed around a listener (105) may be required. Similarly to the above-mentioned stereo recording, every audio-presenting device reproduces sonic material corresponding to its location. If the devices are placed symmetrically in quadrants like the instruments in the Figure, each audio device plays back approximately only a single instrument. If, for example, the audio-presenting device in the 3rd quadrant is turned off, little or no piano (701) may be found in the acoustic image.
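One simple way to realize this quadrant behaviour (hypothetical, not specified in the patent) is to assign each device the track whose nominal direction is closest to the device's own direction as seen from the listener; the angles below are an invented layout for this example.

```python
# Nominal directions of the four recorded tracks around the listener at the origin
# (angles in degrees, 0 = front, counted clockwise); a hypothetical layout loosely
# matching the quadrant arrangement of Fig. 7.
TRACK_ANGLES = {"guitar": 45.0, "drums": 135.0, "violin": 225.0, "piano": 315.0}

def angular_distance(a, b):
    """Smallest absolute difference between two directions, in degrees."""
    return abs((a - b + 180.0) % 360.0 - 180.0)

def track_for_device(device_angle_deg):
    """Assign a device the quadraphonic track whose nominal direction is closest
    to the device's own direction as seen from the listener."""
    return min(TRACK_ANGLES, key=lambda t: angular_distance(TRACK_ANGLES[t], device_angle_deg))

# Four devices placed roughly in the four quadrants each pick up one instrument;
# switching one of them off removes (most of) that instrument from the image.
for angle in (40.0, 140.0, 230.0, 300.0):
    print(f"device at {angle:5.1f} degrees -> {track_for_device(angle)}")
```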
Placing a speaker in the middle of the quadrants may e.g. reproduce all of the four instruments.
While the description above refers to particular embodiments of the present invention, it will be understood by those skilled in the art that many details provided above have been described by way of example only, and modifications may be made without departing from the scope thereof. The accompanying claims are intended to cover such modifications as would fall within the true scope and spirit of the present invention. The disclosed embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims, rather than the foregoing description, and all changes coming within the meaning and range of equivalency of the following claims are therefore intended to be embraced therein.

Claims

CLAIMS:
1. A method of providing location-aware media content by an audio-presenting device (101) capable of presenting audio content (106), the method comprising the steps of:
- obtaining, in a processing unit (103), at least one location parameter representing the location of the audio-presenting device (101);
- processing, in said processing unit (103), current audio content on the basis of the obtained at least one location parameter in order to obtain a location-aware audio content being relative to the current audio content dependent on the at least one location parameter; and
- presenting the obtained location-aware audio content by the audio-presenting device (101).
2. A method as claimed in claim 1, wherein the processing unit (103) comprises the steps of:
- receiving the at least one location parameter from the audio-presenting device (101); and
- transmitting the obtained location-aware audio content to the audio-presenting device (101) prior to presenting the same.
3. A method according to claim 1, wherein the processing unit is comprised by an audio-presenting device (502, 503, 504, 505, 506), and comprises the steps of:
- receiving the current audio content; and
- presenting the obtained location-aware audio content by the audio-presenting device (502, 503, 504, 505, 506).
4. A method according to claims 1 to 3, wherein said at least one location parameter is determined as a parameter relative to a user's workspace.
5. A method according to claims 1 to 4, wherein the steps of processing audio content comprise processing by using audio reproduction capabilities of the audio-presenting device.
6. A system for providing location-aware media content by an audio-presenting device (101) capable of presenting audio content (106), the system comprising means for:
- obtaining, in a processing unit (103), at least one location parameter representing the location of the audio-presenting device (101);
- processing, in said processing unit (103), current audio content on the basis of the obtained at least one location parameter in order to obtain a location-aware audio content being relative to the current audio content dependent on the at least one location parameter; and
- presenting the obtained location-aware audio content by the audio-presenting device (101).
7. A system according to claim 6, wherein the processing unit (103) comprises means for:
- receiving the at least one location parameter from the audio-presenting device (101); and
- transmitting the obtained location-aware audio content to the audio-presenting device (101) prior to presenting the same.
8. A system according to claim 6, wherein the processing unit is comprised by an audio-presenting device (502, 503, 504, 505, 506), and comprises means for:
- receiving the current audio content; and
- presenting the obtained location-aware audio content by the audio-presenting device (502, 503, 504, 505, 506).
9. A system according to claims 6 to 8, wherein said at least one location parameter is determined as a parameter relative to a user's workspace.
10. A system according to claims 6 to 9, wherein the steps of processing audio content comprise processing by using audio reproduction capabilities of the audio-presenting device.
11. A computer-readable medium containing a program for making a processor carry out the method of any one of claims 1 through 5.
PCT/IB2003/003369 2002-09-09 2003-08-05 Smart speakers WO2004023841A1 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
CN03821324.9A CN1682567B (en) 2002-09-09 2003-08-05 Smart speakers
AT03793931T ATE554606T1 (en) 2002-09-09 2003-08-05 SMART SPEAKERS
US10/527,117 US7379552B2 (en) 2002-09-09 2003-08-05 Smart speakers
EP03793931A EP1540988B1 (en) 2002-09-09 2003-08-05 Smart speakers
JP2004533698A JP4643987B2 (en) 2002-09-09 2003-08-05 Smart speaker
AU2003250404A AU2003250404A1 (en) 2002-09-09 2003-08-05 Smart speakers

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP02078665.3 2002-09-09
EP02078665 2002-09-09

Publications (1)

Publication Number Publication Date
WO2004023841A1 true WO2004023841A1 (en) 2004-03-18

Family

ID=31970399

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2003/003369 WO2004023841A1 (en) 2002-09-09 2003-08-05 Smart speakers

Country Status (8)

Country Link
US (1) US7379552B2 (en)
EP (1) EP1540988B1 (en)
JP (1) JP4643987B2 (en)
KR (1) KR20050057288A (en)
CN (2) CN1682567B (en)
AT (1) ATE554606T1 (en)
AU (1) AU2003250404A1 (en)
WO (1) WO2004023841A1 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1615464A1 (en) * 2004-07-07 2006-01-11 Sony Ericsson Mobile Communications AB Method and device for producing multichannel audio signals
FR2884100A1 (en) * 2005-03-30 2006-10-06 Cedric Fortunier Audio/audiovisual installation`s e.g. home theater installation, component e.g. screen, positioning assisting device, has beam generators to emit light beams in preset directions to simultaneously locate zones to position components
EP1784049A1 (en) * 2005-11-08 2007-05-09 BenQ Corporation A method and system for sound reproduction, and a program product
US7546144B2 (en) 2006-05-16 2009-06-09 Sony Ericsson Mobile Communications Ab Mobile wireless communication terminals, systems, methods, and computer program products for managing playback of song files
US7555291B2 (en) 2005-08-26 2009-06-30 Sony Ericsson Mobile Communications Ab Mobile wireless communication terminals, systems, methods, and computer program products for providing a song play list
US7925244B2 (en) 2006-05-30 2011-04-12 Sony Ericsson Mobile Communications Ab Mobile wireless communication terminals, systems, methods, and computer program products for publishing, sharing and accessing media files
US7991268B2 (en) 2006-08-18 2011-08-02 Sony Ericsson Mobile Communications Ab Wireless communication terminals, systems, methods, and computer program products for media file playback
US8086331B2 (en) 2005-02-01 2011-12-27 Panasonic Corporation Reproduction apparatus, program and reproduction method
RU196533U1 (en) * 2019-11-28 2020-03-03 Общество С Ограниченной Ответственностью "Яндекс" SMART SPEAKER WITH MEDIA FILTRATION OF TOF SENSOR VALUES

Families Citing this family (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7492913B2 (en) * 2003-12-16 2009-02-17 Intel Corporation Location aware directed audio
US11431312B2 (en) 2004-08-10 2022-08-30 Bongiovi Acoustics Llc System and method for digital signal processing
US7653447B2 (en) 2004-12-30 2010-01-26 Mondo Systems, Inc. Integrated audio video signal processing system using centralized processing of signals
US8015590B2 (en) * 2004-12-30 2011-09-06 Mondo Systems, Inc. Integrated multimedia signal processing system using centralized processing of signals
US8880205B2 (en) * 2004-12-30 2014-11-04 Mondo Systems, Inc. Integrated multimedia signal processing system using centralized processing of signals
US20070061830A1 (en) * 2005-09-14 2007-03-15 Sbc Knowledge Ventures L.P. Audio-based tracking system for IPTV viewing and bandwidth management
US8677002B2 (en) * 2006-01-28 2014-03-18 Blackfire Research Corp Streaming media system and method
US10848867B2 (en) 2006-02-07 2020-11-24 Bongiovi Acoustics Llc System and method for digital signal processing
US8239559B2 (en) * 2006-07-15 2012-08-07 Blackfire Research Corp. Provisioning and streaming media to wireless speakers from fixed and mobile media sources and clients
US20080077261A1 (en) * 2006-08-29 2008-03-27 Motorola, Inc. Method and system for sharing an audio experience
US20090304205A1 (en) * 2008-06-10 2009-12-10 Sony Corporation Of Japan Techniques for personalizing audio levels
US8274611B2 (en) * 2008-06-27 2012-09-25 Mitsubishi Electric Visual Solutions America, Inc. System and methods for television with integrated sound projection system
US8793717B2 (en) * 2008-10-31 2014-07-29 The Nielsen Company (Us), Llc Probabilistic methods and apparatus to determine the state of a media device
US8154588B2 (en) * 2009-01-14 2012-04-10 Alan Alexander Burns Participant audio enhancement system
KR101196410B1 (en) * 2009-07-07 2012-11-01 삼성전자주식회사 Method for auto setting configuration of television according to installation type of television and television using the same
US20110123030A1 (en) * 2009-11-24 2011-05-26 Sharp Laboratories Of America, Inc. Dynamic spatial audio zones configuration
CN104822036B (en) * 2010-03-23 2018-03-30 杜比实验室特许公司 The technology of audio is perceived for localization
JP2012104871A (en) * 2010-11-05 2012-05-31 Sony Corp Acoustic control device and acoustic control method
US9075419B2 (en) * 2010-11-19 2015-07-07 Google Inc. Systems and methods for a graphical user interface of a controller for an energy-consuming system having spatially related discrete display elements
US20130051572A1 (en) * 2010-12-08 2013-02-28 Creative Technology Ltd Method for optimizing reproduction of audio signals from an apparatus for audio reproduction
FR2970574B1 (en) * 2011-01-19 2013-10-04 Devialet AUDIO PROCESSING DEVICE
US9408011B2 (en) 2011-12-19 2016-08-02 Qualcomm Incorporated Automated user/sensor location recognition to customize audio performance in a distributed multi-sensor environment
US9692535B2 (en) 2012-02-20 2017-06-27 The Nielsen Company (Us), Llc Methods and apparatus for automatic TV on/off detection
US20130294618A1 (en) * 2012-05-06 2013-11-07 Mikhail LYUBACHEV Sound reproducing intellectual system and method of control thereof
US9996628B2 (en) * 2012-06-29 2018-06-12 Verisign, Inc. Providing audio-activated resource access for user devices based on speaker voiceprint
US9344828B2 (en) * 2012-12-21 2016-05-17 Bongiovi Acoustics Llc. System and method for digital signal processing
KR20140087104A (en) * 2012-12-27 2014-07-09 Korea Electronics Technology Institute Audio Equipment Installation Information Providing System and Method, Personalized Audio Providing Server
EP3742440A1 (en) 2013-04-05 2020-11-25 Dolby International AB Audio encoder and decoder for interleaved waveform coding
US9883318B2 (en) 2013-06-12 2018-01-30 Bongiovi Acoustics Llc System and method for stereo field enhancement in two-channel audio systems
US9906858B2 (en) 2013-10-22 2018-02-27 Bongiovi Acoustics Llc System and method for digital signal processing
GB2529295B (en) * 2014-06-13 2018-02-28 Harman Int Ind Media system controllers
CN104125522A (en) * 2014-07-18 2014-10-29 Beijing Zhigu Rui Tuo Tech Co., Ltd. Sound track configuration method and device and user device
US9924224B2 (en) 2015-04-03 2018-03-20 The Nielsen Company (Us), Llc Methods and apparatus to determine a state of a media presentation device
US9686625B2 (en) * 2015-07-21 2017-06-20 Disney Enterprises, Inc. Systems and methods for delivery of personalized audio
WO2017110882A1 (en) * 2015-12-21 2017-06-29 Sharp Corporation Speaker placement position presentation device
US10048929B2 (en) * 2016-03-24 2018-08-14 Lenovo (Singapore) Pte. Ltd. Adjusting volume settings based on proximity and activity data
CN112236812A (en) 2018-04-11 2021-01-15 Bongiovi Acoustics LLC Audio-enhanced hearing protection system
WO2020028833A1 (en) 2018-08-02 2020-02-06 Bongiovi Acoustics Llc System, method, and apparatus for generating and digitally processing a head related audio transfer function
RU197268U1 (en) * 2019-12-30 2020-04-16 Limited Liability Company "Yandex" EXCLUSION OF SOUND-LINK OPERATIONS FROM THE LED DRIVER IC OF A SMART SPEAKER
EP4256558A1 (en) 2020-12-02 2023-10-11 Hearunow, Inc. Dynamic voice accentuation and reinforcement
US11521623B2 (en) 2021-01-11 2022-12-06 Bank Of America Corporation System and method for single-speaker identification in a multi-speaker environment on a low-frequency audio recording

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW218062B (en) * 1991-11-12 1993-12-21 Philips Nv
US5386478A (en) * 1993-09-07 1995-01-31 Harman International Industries, Inc. Sound system remote control with acoustic sensor
US6118880A (en) * 1998-05-18 2000-09-12 International Business Machines Corporation Method and system for dynamically maintaining audio balance in a stereo audio system
JP2001352600A (en) * 2000-06-08 2001-12-21 Marantz Japan Inc Remote controller, receiver and audio system
US7095455B2 (en) * 2001-03-21 2006-08-22 Harman International Industries, Inc. Method for automatically adjusting the sound and visual parameters of a home theatre system
US7076204B2 (en) * 2001-10-30 2006-07-11 Unwired Technology Llc Multiple channel wireless communication system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19646055A1 (en) * 1996-11-07 1998-05-14 Thomson Brandt Gmbh Method and device for mapping sound sources onto loudspeakers
JP2002078037A (en) 2000-08-25 2002-03-15 Matsushita Electric Ind Co Ltd Wireless loudspeaker
WO2002056635A2 (en) 2001-01-09 2002-07-18 Roke Manor Research Limited High fidelity audio signal reproduction system and method of operation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PATENT ABSTRACTS OF JAPAN vol. 2002, no. 07 3 July 2002 (2002-07-03) *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1615464A1 (en) * 2004-07-07 2006-01-11 Sony Ericsson Mobile Communications AB Method and device for producing multichannel audio signals
US8086331B2 (en) 2005-02-01 2011-12-27 Panasonic Corporation Reproduction apparatus, program and reproduction method
FR2884100A1 (en) * 2005-03-30 2006-10-06 Cedric Fortunier Audio/audiovisual installation's e.g. home theater installation, component e.g. screen, positioning assisting device, has beam generators to emit light beams in preset directions to simultaneously locate zones to position components
US7555291B2 (en) 2005-08-26 2009-06-30 Sony Ericsson Mobile Communications Ab Mobile wireless communication terminals, systems, methods, and computer program products for providing a song play list
EP1784049A1 (en) * 2005-11-08 2007-05-09 BenQ Corporation A method and system for sound reproduction, and a program product
WO2007054285A1 (en) * 2005-11-08 2007-05-18 Benq Corporation A method and system for sound reproduction, and a program product
US7890088B2 (en) 2006-05-16 2011-02-15 Sony Ericsson Mobile Communications Ab Mobile wireless communication terminals, systems, methods, and computer program products for managing playback of song files
US8000742B2 (en) 2006-05-16 2011-08-16 Sony Ericsson Mobile Communications Ab Mobile wireless communication terminals, systems, methods, and computer program products for managing playback of song files
US7546144B2 (en) 2006-05-16 2009-06-09 Sony Ericsson Mobile Communications Ab Mobile wireless communication terminals, systems, methods, and computer program products for managing playback of song files
US7925244B2 (en) 2006-05-30 2011-04-12 Sony Ericsson Mobile Communications Ab Mobile wireless communication terminals, systems, methods, and computer program products for publishing, sharing and accessing media files
US8090360B2 (en) 2006-05-30 2012-01-03 Sony Ericsson Mobile Communications Ab Mobile wireless communication terminals, systems, methods, and computer program products for publishing, sharing and accessing media files
US8229405B2 (en) 2006-05-30 2012-07-24 Sony Ericsson Mobile Communications Ab Communication terminals, systems, methods, and computer program products for publishing, sharing and accessing media files
USRE46258E1 (en) 2006-05-30 2016-12-27 Sony Mobile Communications Ab Communication terminals, systems, methods, and computer program products for publishing, sharing and accessing media files
US7991268B2 (en) 2006-08-18 2011-08-02 Sony Ericsson Mobile Communications Ab Wireless communication terminals, systems, methods, and computer program products for media file playback
RU196533U1 (en) * 2019-11-28 2020-03-03 Limited Liability Company "Yandex" SMART SPEAKER WITH MEDIAN FILTERING OF TOF SENSOR VALUES

Also Published As

Publication number Publication date
JP4643987B2 (en) 2011-03-02
JP2005538589A (en) 2005-12-15
KR20050057288A (en) 2005-06-16
US7379552B2 (en) 2008-05-27
ATE554606T1 (en) 2012-05-15
CN1682567A (en) 2005-10-12
EP1540988B1 (en) 2012-04-18
AU2003250404A1 (en) 2004-03-29
EP1540988A1 (en) 2005-06-15
US20060062401A1 (en) 2006-03-23
CN1682567B (en) 2014-06-11

Similar Documents

Publication Publication Date Title
US7379552B2 (en) Smart speakers
EP1266541B1 (en) System and method for optimization of three-dimensional audio
JP6486833B2 (en) System and method for providing three-dimensional extended audio
JP5526042B2 (en) Acoustic system and method for providing sound
US7602921B2 (en) Sound image localizer
JP5325988B2 (en) Method for rendering binaural stereo in a hearing aid system and hearing aid system
US20050281421A1 (en) First person acoustic environment system and method
AU2001239516A1 (en) System and method for optimization of three-dimensional audio
CN104956689A (en) Method and apparatus for personalized audio virtualization
WO2006009004A1 (en) Sound reproducing system
KR20050085360A (en) Personalized surround sound headphone system
US20050047619A1 (en) Apparatus, method, and program for creating all-around acoustic field
JPH0415693A (en) Sound source information controller
WO2008015733A1 (en) Sound control device, sound control method, and sound control program
KR200247762Y1 (en) Multiple channel multimedia speaker system
JP2002152897A (en) Sound signal processing method, sound signal processing unit
Sigismondi Personal monitor systems
WO2007096792A1 (en) Device for and a method of processing audio data
JP2019201308A (en) Acoustic control device, method, and program
KR100703923B1 (en) 3d sound optimizing apparatus and method for multimedia devices
JPH11231878A (en) Control method for sound field treatment
KR100655543B1 (en) Mobile device having artificial reverberator
Didden et al. Product Review: Smyth Research Inc. Realiser A8
JPH0962266A (en) Acoustic device

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the EPO has been informed by WIPO that EP was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2003793931

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 3220/CHENP/2004

Country of ref document: IN

ENP Entry into the national phase

Ref document number: 2006062401

Country of ref document: US

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 20038213249

Country of ref document: CN

Ref document number: 10527117

Country of ref document: US

Ref document number: 2004533698

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 1020057004060

Country of ref document: KR

WWP Wipo information: published in national office

Ref document number: 2003793931

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1020057004060

Country of ref document: KR

WWP Wipo information: published in national office

Ref document number: 10527117

Country of ref document: US