WO2004093488A2 - Directional speakers - Google Patents

Directional speakers

Info

Publication number
WO2004093488A2
WO2004093488A2 (PCT/US2004/011972)
Authority
WO
WIPO (PCT)
Prior art keywords
audio
user
signals
speaker
Prior art date
Application number
PCT/US2004/011972
Other languages
French (fr)
Other versions
WO2004093488A3 (en)
Inventor
Kwok Wai Cheung
Peter P. Tong
C. Douglass Thomas
Original Assignee
Ipventure, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ipventure, Inc.
Publication of WO2004093488A2
Publication of WO2004093488A3

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S1/00Two-channel systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04HBROADCAST COMMUNICATION
    • H04H20/00Arrangements for broadcast or for distribution combined with broadcast
    • H04H20/53Arrangements specially adapted for specific applications, e.g. for traffic information or for mobile receivers
    • H04H20/61Arrangements specially adapted for specific applications, e.g. for traffic information or for mobile receivers for local area broadcast, e.g. instore broadcast
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04HBROADCAST COMMUNICATION
    • H04H20/00Arrangements for broadcast or for distribution combined with broadcast
    • H04H20/65Arrangements characterised by transmission systems for broadcast
    • H04H20/71Wireless systems
    • H04H20/72Wireless systems of terrestrial networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/02Constructional features of telephone sets
    • H04M1/03Constructional features of telephone transmitters or receivers, e.g. telephone hand-sets
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/02Constructional features of telephone sets
    • H04M1/19Arrangements of transmitters, receivers, or complete sets to prevent eavesdropping, to attenuate local noise or to prevent undesired transmission; Mouthpieces or receivers specially adapted therefor
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/60Substation equipment, e.g. for use by subscribers including speech amplifiers
    • H04M1/6033Substation equipment, e.g. for use by subscribers including speech amplifiers for providing handsfree use or a loudspeaker mode in telephone sets
    • H04M1/6041Portable telephones adapted for handsfree use
    • H04M1/605Portable telephones adapted for handsfree use involving control of the receiver volume to provide a dual operational mode at close or far distance from the user
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/60Substation equipment, e.g. for use by subscribers including speech amplifiers
    • H04M1/6033Substation equipment, e.g. for use by subscribers including speech amplifiers for providing handsfree use or a loudspeaker mode in telephone sets
    • H04M1/6041Portable telephones adapted for handsfree use
    • H04M1/6075Portable telephones adapted for handsfree use adapted for handsfree use in a vehicle
    • H04M1/6083Portable telephones adapted for handsfree use adapted for handsfree use in a vehicle by interfacing with the vehicle audio system
    • H04M1/6091Portable telephones adapted for handsfree use adapted for handsfree use in a vehicle by interfacing with the vehicle audio system including a wireless interface
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/20Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/32Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
    • H04R1/40Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
    • H04R1/403Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers loud-speakers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/40Arrangements for obtaining a desired directivity characteristic
    • H04R25/405Arrangements for obtaining a desired directivity characteristic by combining a plurality of transducers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R27/00Public address systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/02Constructional features of telephone sets
    • H04M1/0202Portable telephone sets, e.g. cordless phones, mobile phones or bar type handsets
    • H04M1/0206Portable telephones comprising a plurality of mechanically joined movable body parts, e.g. hinged housings
    • H04M1/0208Portable telephones comprising a plurality of mechanically joined movable body parts, e.g. hinged housings characterized by the relative motions of the body parts
    • H04M1/0214Foldable telephones, i.e. with body parts pivoting to an open position around an axis parallel to the plane they define in closed position
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2201/00Details of transducers, loudspeakers or microphones covered by H04R1/00 but not provided for in any of its subgroups
    • H04R2201/02Details casings, cabinets or mounting therein for transducers covered by H04R1/02 but not provided for in any of its subgroups
    • H04R2201/023Transducers incorporated in garment, rucksacks or the like
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2201/00Details of transducers, loudspeakers or microphones covered by H04R1/00 but not provided for in any of its subgroups
    • H04R2201/40Details of arrangements for obtaining desired directional characteristic by combining a number of identical transducers covered by H04R1/40 but not provided for in any of its subgroups
    • H04R2201/4012D or 3D arrays of transducers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2217/00Details of magnetostrictive, piezoelectric, or electrostrictive transducers covered by H04R15/00 or H04R17/00 but not provided for in any of their subgroups
    • H04R2217/03Parametric transducers where sound is generated or captured by the acoustic demodulation of amplitude modulated ultrasonic waves
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2225/00Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/55Communication between hearing aids and external devices via a network for data exchange
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/55Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired
    • H04R25/554Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired using a wireless connection, e.g. between microphone and amplifier or using Tcoils
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S3/00Systems employing more than two channels, e.g. quadraphonic

Definitions

  • the present invention relates generally to electronic devices with audio output, and more particularly, to directional speakers.
  • Audio systems such as stereo systems, DVD players, VCRs, and televisions, typically provide audio sounds to one or more users.
  • improved approaches for audio systems to provide audio sounds to persons desirous of hearing them, while reducing disturbance to other persons in the same environment who are not desirous of hearing the audio sounds.
  • a number of embodiments of the present invention are based on a directional speaker.
  • the audio signals from the speaker can be generated by transforming ultrasonic signals in air.
  • Different embodiments can be applied to a number of different areas, such as a cell phone, a hearing aid, a portable electronic device, and an entertainment system.
  • the embodiments can be personalized to the hearing characteristics of the user, or to the ambient noise level of the environment.
  • One embodiment is applicable to a wireless communication system, such as a cell phone.
  • the system can include an interface unit and a base unit.
  • the audio signals from the speaker can be heard hands-free, while privacy protection is enhanced.
  • the interface unit can be attached or integrated to a piece of clothing at the shoulder of the user, with the audio signals from the speaker directed towards one of the user's ears.
  • the system can include an interface unit that has the directional speaker and a microphone.
  • the microphone captures input audio signals, which are transformed into ultrasonic signals.
  • the speaker transmits the ultrasonic signals, which are transformed into output audio signals by air. At least a portion of the output audio signals has higher power than the input audio signals to enhance the hearing of the user.
  • the user's ear remains free from any inserted objects and thus is free from the annoying occlusion effects.
  • the system is relatively inexpensive. For example, the system does not require an individually-fitted ear mold.
  • Yet another embodiment uses a directional speaker in a portable electronic device, such as a handheld game console, to direct audio output in a directionally constrained manner.
  • the directional speaker can be integral with the portable electronic device.
  • the directional speaker can be attached or coupled to the portable electronic device.
  • One embodiment is on a directional audio apparatus, such as an entertainment system, that provides directional delivery of audio output targeted to those one or more persons desirous of hearing the audio output.
  • the directional audio apparatus includes a directional speaker.
  • a number of the attributes of the audio output can be controlled, either by a user or by monitored measurements. Such attributes include the beam width, the beam direction, the degree of isolation or privacy, and the volume of the audio output.
  • the audio output can also be personalized or modified according to the audio conditions of the surroundings of the apparatus. To control these attributes or characteristics, a number of approaches can be used.
  • the surface of the speaker can be segmented or curved, the ultrasonic frequencies can be changed, the phases to individual speaker elements can be adjusted, or the path lengths of the ultrasonic waves from the emitting surface of the speaker can be elongated before the audio output emits into free space.
  • more than one directional speaker can be used to generate stereo effects.
  • Yet another embodiment of the invention includes techniques to provide wireless delivery of audio sounds from audio systems to personal audio devices. These techniques permit users of the personal audio device to be mobile yet still acquire the audio sounds.
  • a wireless adapter can serve as an aftermarket modification to an audio system.
  • Fig. 1 shows one embodiment of the invention with a base unit coupled to a directional speaker and a microphone.
  • Fig. 2 shows examples of characteristics of the directional speaker of the present invention.
  • Fig. 3 shows examples of mechanisms to set the direction of the audio signals of the present invention.
  • Fig. 4A shows one embodiment of a blazed grating for the present invention.
  • Fig. 4B shows an example of a wedge to direct the propagation angle of the audio signals in the present invention.
  • Fig. 5 shows an example of a steerable phase array of devices to generate the directional audio signals in the present invention.
  • Fig. 6 shows one example of an interface unit attached to a piece of clothing of a user in the present invention.
  • Fig. 7 shows examples of mechanisms to couple the interface unit to a piece of clothing in the present invention.
  • Fig. 8 shows examples of different coupling techniques between the interface unit and the base unit in the present invention.
  • Fig. 9 shows examples of additional attributes of the wireless communication system in the present invention.
  • Fig. 10 shows examples of attributes of a power source for use with the present invention.
  • Fig. 11A shows the phone being a hands-free or a normal mode phone according to one embodiment of the present invention.
  • Fig. 11B shows examples of different techniques to automatically select the mode of a dual mode phone in the present invention.
  • Fig. 12 shows examples of different embodiments of the interface unit of the present invention.
  • Fig. 13 shows examples of additional applications for the present invention.
  • FIG. 14 shows another embodiment of the present invention.
  • FIG. 15 shows a person wearing one embodiment of the present invention.
  • FIG. 16 shows different embodiments regarding frequency-dependent amplification of the present invention.
  • FIG. 17 shows a number of embodiments regarding calibration of the present invention.
  • FIG. 18A shows a number of embodiments regarding power management of the present invention.
  • FIG. 18B shows an embodiment of the interface unit with an electrical connection.
  • FIGS. 19A-19C show different embodiments regarding microphones in the present invention.
  • FIG. 20 shows embodiments of the present invention, which can also function as a phone.
  • FIG. 21 is a flow diagram of call processing according to one embodiment of the invention.
  • FIG. 22 shows a number of embodiments regarding improving privacy of the present invention.
  • FIG. 23 shows a number of embodiments of the present invention accessing audio signals from other instruments wirelessly or through wired connection.
  • FIG. 24A is a view of a mobile telephone with an integrated directional speaker according to one embodiment of the invention.
  • FIG. 24B is a perspective view of a flip-type mobile telephone with an integrated directional speaker according to another embodiment of the invention.
  • FIG. 25 is a perspective view of a personal digital assistant with an integrated directional speaker according to one embodiment of the invention.
  • FIG. 26 is a block diagram of an electronic device with wireless communication capability according to one embodiment of the invention.
  • FIG. 27A is a block diagram of a directional audio conversion apparatus according to one embodiment of the invention.
  • FIG. 27B is a block diagram of a pre-processor according to one embodiment of the invention.
  • FIG. 27C is a block diagram of an estimation circuit for a pre-processor according to one embodiment of the invention.
  • FIG. 28 illustrates different embodiments of directional speaker characteristics according to the invention.
  • FIG. 29 is a flow diagram of audio signal processing according to one embodiment of the invention.
  • FIG. 30 is a flow diagram of speaker selection processing according to one embodiment of the invention.
  • FIG. 31 is a diagram indicating exemplary conditions that can be utilized to select the appropriate speaker.
  • FIG. 32A is a perspective view of a personal digital assistant with an attachable directional speaker according to another embodiment of the invention.
  • FIG. 32B is a perspective view of a personal digital assistant with an attachable directional speaker according to another embodiment of the invention.
  • FIG. 33 is a perspective view of a mobile telephone with yet another attachable directional speaker according to one embodiment of the invention.
  • FIG. 34 is a diagram depicting examples of additional applications associated with the invention.
  • FIG. 35 is a block diagram of a directional audio delivery device coupled to an audio system according to one embodiment of the invention.
  • FIG. 36A is a block diagram of a directional audio delivery device according to one embodiment of the invention.
  • FIG. 36B is a block diagram of a directional audio delivery device according to another embodiment of the invention.
  • FIG. 37A is a diagram illustrating a representative arrangement suitable for use by different embodiments of the invention.
  • FIG. 37B is a diagram of a representative building layout illustrating one application of the present invention.
  • FIG. 38 is a flow diagram of directional audio delivery processing according to an embodiment of the invention.
  • FIG. 39 shows examples of attributes of the constrained audio output according to the invention.
  • FIG. 40 is another representative building layout illustrating one application of the present invention.
  • FIG. 41 is a flow diagram of directional audio delivery processing according to another embodiment of the invention.
  • FIG. 42A is a flow diagram of directional audio delivery processing according to yet another embodiment of the invention.
  • FIG. 42B is a flow diagram of an environmental accommodation process according to one embodiment of the invention.
  • FIG. 42C is a flow diagram of an audio personalization process according to one embodiment of the invention.
  • FIG. 43A is a perspective diagram of an ultrasonic transducer according to one embodiment of the invention.
  • FIG. 43B is a diagram that illustrates the ultrasonic transducer with its beam being produced for audio output according to an embodiment of the invention.
  • FIGs. 43C-43D illustrate two embodiments of the invention where the directional speakers are segmented.
  • FIGs. 43E-43G show changes in beam width based on different carrier frequencies according to an embodiment of the present invention.
  • FIGs. 44A-44B are diagrams of two embodiments of the invention where the directional speakers have curved surfaces to expand the beam.
  • FIG. 44C shows beam expansion based on a convex reflector according to an embodiment of the invention.
  • FIGs. 45A-45B show two embodiments of the invention whose directional speakers have curved surfaces that are segmented.
  • FIGs. 46A and 46B are perspective diagrams of audio systems with directional audio delivery devices in a set-top-box environment according to different embodiments of the present invention.
  • FIG. 47 is a perspective diagram of a remote control device according to one embodiment of the invention.
  • FIGs. 48A-48B show two embodiments of the invention with directional audio delivery devices that allow ultrasonic signals to bounce back and forth before emitting into free space.
  • FIG. 49 shows two directional audio delivery devices spaced apart to generate stereo effects according to one embodiment of the present invention.
  • FIG. 50 is a block diagram of a remote audio delivery system according to one embodiment of the invention.
  • FIG. 51 is a block diagram of a remote audio delivery system according to another embodiment of the invention.
  • FIG. 52 is a block diagram of a remote audio delivery system according to yet another embodiment of the invention.
  • FIG. 53 is a diagram of a building layout illustrating use of different embodiments of the present invention.
  • FIG. 54 is a flow diagram of a remote audio delivery process according to one embodiment of the invention.
  • FIG. 55A is a flow diagram of an environmental accommodation process according to one embodiment of the invention.
  • FIG. 55B is a flow diagram of an audio personalization process according to one embodiment of the invention.
  • FIGs. 56A-B illustrate ultrasonic transducers according to one embodiment of the invention.
  • FIG. 57 is a perspective diagram of audio systems that provide directional audio delivery to interested users.

DETAILED DESCRIPTION OF THE INVENTION
  • the wireless communication system can, for example, be a mobile phone.
  • Fig. 1 shows a block diagram of wireless communication system 10 according to one embodiment of the invention.
  • the wireless communication system 10 has a base unit 12 that is coupled to an interface unit 14.
  • the interface unit 14 includes a directional speaker 16 and a microphone 18.
  • the directional speaker 16 generates directional audio signals.
  • the angular beam width θ of a source is roughly λ/D, where θ is the angular full width at half-maximum (FWHM), λ is the wavelength, and D is the diameter of the aperture.
  • the frequency is from a few hundred hertz, such as 500 Hz, to a few thousand hertz, such as 5000 Hz.
  • λ of ordinary audible signals is roughly between 70 cm and 7 cm.
  • the dimension of a speaker can be on the order of a few cm. Given that the acoustic wavelength is much larger than a few cm, such a speaker is almost omni-directional. That is, the sound source emits energy almost uniformly in all directions. This can be undesirable if one needs privacy, because an omni-directional sound source means that anyone in any direction can pick up the audio signals. A rough comparison is sketched below.
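
A rough numerical comparison of the θ ≈ λ/D relation above, for a few-centimetre aperture driven at audible versus ultrasonic frequencies. This is only an illustrative sketch; the 2.5 cm aperture is one of the sizes mentioned later in the text, and the speed of sound c = 343 m/s is an assumed room-temperature value.

```python
# Compare approximate beam widths (theta ~ lambda / D) for a small aperture driven
# at audible frequencies versus a 40 kHz ultrasonic carrier.
import math

C = 343.0          # speed of sound in air, m/s (assumed)
APERTURE = 0.025   # 2.5 cm aperture, i.e. "a few cm"

for f_hz in (500.0, 5_000.0, 40_000.0):
    wavelength = C / f_hz
    theta = wavelength / APERTURE          # approximate FWHM in radians
    print(f"{f_hz/1000:5.1f} kHz: lambda = {wavelength*100:5.1f} cm, "
          f"theta ~ {theta:5.2f} rad ({math.degrees(theta):6.1f} deg)")

# Values far above pi radians simply mean the source is effectively omni-directional,
# which is the case at audible frequencies; only the ultrasonic carrier is directional.
```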
  • One approach is to decrease the wavelength of sound, but this can put the sound frequency out of the audible range.
  • Another technique is known as parametric acoustics.
  • the audible acoustic signal is f(t), where f(t) is a band-limited signal, such as from 500 to 5,000 Hz.
  • a modulated signal f(t) sin(ωc t) is created to drive an acoustic transducer.
  • the carrier frequency ωc/2π should be much larger than the highest frequency component of f(t).
  • the carrier wave is an ultrasonic wave.
  • the acoustic transducer should have a sufficiently wide bandwidth at ωc to cover the frequency band of the incoming signal f(t). After this signal f(t) sin(ωc t) is emitted from the transducer, non-linear demodulation occurs in air, creating an audible signal E(t), where E(t) ∝ d²/dt² [f²(t)].
  • that is, the demodulated audio signal is proportional to the second time derivative of the square of the modulating envelope f(t).
  • with DSB modulation the envelope is (1 + m·f(t)), and its square expands to 1 + 2m·f(t) + m²·f²(t). The first (linear) term provides the original audio signal, but the second, f²(t) term can produce undesirable distortions as a result of the DSB modulation.
  • One way to reduce the distortions is by lowering the modulation index m. However, lowering m may also reduce the overall power efficiency of the system. A sketch of this modulation step follows.
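
The following is a minimal sketch of the DSB modulation step described above: the band-limited audio signal f(t) sets the envelope of an ultrasonic carrier, and the modulation index m is the knob that trades distortion against power efficiency. The sample rate, the 1 kHz test tone, and the chosen value of m are illustrative assumptions; only the 40 kHz carrier value comes from the text.

```python
# DSB-modulate a band-limited audio signal onto a 40 kHz ultrasonic carrier:
# s(t) = (1 + m * f(t)) * sin(wc * t), with f(t) normalised to [-1, 1].
import numpy as np

FS = 192_000          # sample rate, Hz (assumed; must exceed twice the carrier)
F_CARRIER = 40_000.0  # ultrasonic carrier from the text, Hz
M = 0.5               # modulation index (illustrative)

def dsb_modulate(audio: np.ndarray, m: float = M) -> np.ndarray:
    peak = np.max(np.abs(audio))
    f = audio / peak if peak > 0 else audio      # normalised envelope f(t)
    t = np.arange(len(f)) / FS
    return (1.0 + m * f) * np.sin(2 * np.pi * F_CARRIER * t)

# Example: a 1 kHz tone, within the 500 Hz - 5 kHz band mentioned above.
t = np.arange(0, 0.01, 1 / FS)
ultrasound = dsb_modulate(np.sin(2 * np.pi * 1000.0 * t))
print(ultrasound.shape)
```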
  • the modulated signals, S(t) sin(ωc t) or f(t) sin(ωc t), have a better directivity than the original acoustic signal f(t), because ωc is higher than the audible frequencies.
  • ωc can be 2π*40 kHz, though experiment has shown that ωc can range from 2π*20 kHz to well over 2π*1 MHz.
  • ωc is chosen not to be too high because of the higher acoustic absorption at higher carrier frequencies.
  • the modulated signals have frequencies that are approximately ten times higher than the audible frequencies. This makes an emitting source with a small aperture, such as 2.5 cm in diameter, a directional device for a wide range of audio signals.
  • choosing a proper working carrier frequency ωc takes into consideration a number of factors, such as:
  • the carrier frequency ωc should not be too high.
  • the FWHM of the ultrasonic beam should be large enough, such as 25 degrees, to accommodate head motions of the person wearing the portable device and to reduce the ultrasonic intensity through beam expansion.
  • the distance r between the emitting device and the receiving ear should be greater than 0.3*R0, where R0 is the Rayleigh distance, defined as (the area of the emitting aperture) / λ.
  • ωc becomes 2π*40 kHz. From this relation, it can be seen that the directivity of the ultrasonic beam can be adjusted by changing the carrier frequency ωc. If a smaller-aperture acoustic transducer is preferred, the directivity may decrease.
  • the power generated by the acoustic transducer is typically proportional to the aperture area. In the above example, the Rayleigh distance R0 is about 57 mm; a quick check of that figure is sketched below.
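
A quick numerical check of the Rayleigh distance figure above, assuming the 2.5 cm diameter aperture mentioned earlier, a 40 kHz carrier, and an assumed speed of sound of 343 m/s.

```python
# Rayleigh distance R0 = (emitting aperture area) / wavelength, and the 0.3 * R0
# minimum ear distance mentioned above.
import math

C = 343.0                          # speed of sound, m/s (assumed)
wavelength = C / 40_000.0          # ~8.6 mm at the 40 kHz carrier
diameter = 0.025                   # 2.5 cm aperture
area = math.pi * (diameter / 2) ** 2

R0 = area / wavelength
print(f"R0 ~ {R0 * 1000:.0f} mm")                          # on the order of the 57 mm quoted
print(f"0.3 * R0 ~ {0.3 * R0 * 1000:.0f} mm minimum distance to the ear")
```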
  • directional audio signals can be generated by the speaker 16 even with a relatively small aperture through modulated ultrasonic signals.
  • the modulated signals can be demodulated in air to regenerate the audio signals.
  • the speaker can then generate directional audio signals even when emitted from an aperture that is on the order of a few centimeters. This allows the directional audio signals to be pointed in desired directions.
  • the audio signals can also be generated through mixing two ultrasonic signals whose difference frequencies are the audio signals.
  • Fig. 2 shows examples of characteristics of a directional speaker.
  • the directional speaker can, for example, be the directional speaker 16 illustrated in Fig. 1.
  • the directional speaker can use a piezoelectric thin film.
  • the piezoelectric thin film can be deposited on a plate with many cylindrical tubes. An example of such a device is described in US Patent No. 6,011,855, which is hereby incorporated by reference.
  • the film can be a polyvinylidene difluoride (PVDF) film, and can be biased by metal electrodes.
  • the film can be attached or glued to the perimeter of the plate of tubes.
  • the total emitting surfaces of all of the tubes can have a dimension on the order of a few wavelengths of the carrier or ultrasonic signals.
  • the piezoelectric film can be about 28 microns in thickness; and the tubes can be 9/64" in diameter and spaced apart by 0.16", from center to center of the tubes, to create a resonating frequency of around 40 kHz.
  • the emitting surface of the directional speaker can be around 2 cm by 2 cm. A significant percentage of the ultrasonic power generated by the directional speaker can, in effect, be confined in a cone.
  • to estimate the amount of ultrasonic power within the cone, for example, as a rough estimation, assume that (a) the emitting surface is a uniform circular aperture with a diameter of 2.8 cm, (b) the wavelength of the ultrasonic signals is 8.7 mm, and (c) all power goes to the forward hemisphere; then the ultrasonic power contained within the FWHM of the main lobe is about 97%, and the power contained from null to null of the main lobe is about 97.36%. Similarly, again as a rough estimation, if the diameter of the aperture drops to 1 cm, the power contained within the FWHM of the main lobe is about 97.2%, and the power contained from null to null of the main lobe is about 99%.
  • the FWHM of the signal beam is about 24 degrees.
  • a directional speaker 16 is placed on the shoulder of a user.
  • the output from the speaker can be directed in the direction of one of the ears of the user, with the distance between the shoulder and the ear being, for example, 8 inches.
  • More than 75% of the power of the audio signals generated by the emitting surface of the directional speaker can, in effect, be confined in a cone.
  • the tip of the cone is at the speaker, and the mouth of the cone is at the location of the user's ear.
  • the diameter of the mouth of the cone, or the diameter of the cone in the vicinity of the ear, is less than about 4 inches; a rough estimate of this spot size is sketched below.
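
A rough estimate tying the ~24 degree FWHM quoted above to the "less than about 4 inch" cone mouth at the ear, using the 2 cm emitting surface and 40 kHz carrier from the text. The speed of sound and the simple spot-size formula (diameter ≈ 2·r·tan(FWHM/2)) are assumptions used only for this back-of-the-envelope check.

```python
# Estimate the FWHM (theta ~ lambda / D) and the beam spot diameter at the ear,
# 8 inches from a 2 cm emitting surface driven at 40 kHz.
import math

C = 343.0                          # speed of sound, m/s (assumed)
wavelength = C / 40_000.0          # ~8.6 mm
D = 0.02                           # 2 cm emitting surface

fwhm = wavelength / D              # radians
r = 8 * 0.0254                     # shoulder-to-ear distance, 8 inches in metres
spot = 2 * r * math.tan(fwhm / 2)  # cone mouth diameter at the ear

print(f"FWHM ~ {math.degrees(fwhm):.0f} degrees")
print(f"spot diameter ~ {spot / 0.0254:.1f} inches at the ear")   # under 4 inches
```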
  • the directional speaker can be made of a bimorph piezoelectric transducer.
  • the transducer can have a cone of about 1 cm in diameter.
  • the directional speaker can be a magnetic transducer.
  • the directional speaker does not generate ultrasonic signals, but generates audio signals directly; and the speaker includes, for example, a physical horn or cone to direct the audio signals.
  • the power output from the directional speaker is increased by increasing the transformation efficiency (e.g., demodulation or mixing efficiency) of the ultrasonic signals.
  • output audio power is proportional to the coefficient of non-linearity ofthe mixing or demodulation medium.
  • directional audio signals can be generated.
  • Fig. 3 shows examples of mechanisms to direct the ultrasonic signals. They represent different approaches, which can utilize, for example, a grating, a malleable wire, or a wedge.
  • Fig. 4A shows one embodiment of a directional speaker 50 having a blazed grating.
  • the speaker 50 is, for example, suitable for use as the directional speaker 16.
  • Each emitting device, such as 52 and 54, of the speaker 50 can be a piezoelectric device or another type of speaker device located on a step of the grating.
  • the sum of all of the emitting surfaces of the emitting devices can have a dimension on the order of a few wavelengths of the ultrasonic signals.
  • each of the emitting devices can be driven by a replica of the ultrasonic signals with an appropriate delay to cause constructive interference of the emitted waves at the blazing normal 56, which is the direction orthogonal to the grating.
  • This is similar to the beam steering operation of a phase array, and can be implemented by a delay matrix.
  • the delay between adjacent emitting surfaces can be approximately h/c, where h is the height of each step and c is the speed of sound.
  • One approach to simplify signal processing is to arrange the height of each grating step to be an integral multiple of the ultrasonic or carrier wavelength, so that all the emitting devices can be driven by the same ultrasonic signals, as sketched below.
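
A minimal sketch of the step-delay relation just described: with step height h, the delay between adjacent devices is roughly h/c, and if h is a whole number of carrier wavelengths that delay is a whole number of carrier periods, so the same drive signal can feed every step. The 40 kHz carrier comes from the text; the speed of sound is an assumed value.

```python
# Per-step delay h / c for a blazed grating, expressed in carrier periods.
C = 343.0             # speed of sound, m/s (assumed)
F_CARRIER = 40_000.0  # carrier frequency from the text, Hz
wavelength = C / F_CARRIER

for n in (1, 2, 3):                  # step height = n carrier wavelengths
    h = n * wavelength
    delay_s = h / C
    periods = delay_s * F_CARRIER    # integral when h is an integral multiple of lambda
    print(f"h = {h*1000:4.1f} mm -> delay = {delay_s*1e6:4.1f} us = {periods:.0f} carrier period(s)")
```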
  • the array direction of the virtual audio sources can be the blazing normal 56.
  • the structure of the steps can set the propagation direction of the audio signals.
  • the total emitting surfaces are the sum of the emitting surfaces of the three devices.
  • the propagation direction is approximately 45 degrees from the horizontal plane.
  • the thickness of each speaker device can be less than half the wavelength of the ultrasonic waves. If the frequency of the ultrasonic waves is 40 kHz, the thickness can be about 4 mm.
  • Another approach to direct the audio signals to specific directions is to position a directional speaker of the present invention at the end of a malleable wire.
  • the user can bend the wire to adjust the direction of propagation of the audio signals. For example, if the speaker is placed on the shoulder of a user, the user can bend the wire such that the ultrasonic signals produced by the speaker are directed towards the ear adjacent to the shoulder of the user.
  • FIG. 4B shows an example of a wedge 75 with a speaker device 77.
  • the angle of the wedge from the horizontal can be about 40 degrees. This sets the propagation direction 79 of the audio signals to be about 50 degrees from the horizon.
  • the ultrasonic signals are generated by a steerable phase array of individual devices, as illustrated, for example, in Fig. 5. They generate the directional signals by constructive interference of the devices.
  • the signal beam is steerable by changing the relative phases among the array of devices.
  • One way to change the phases in one direction is to use a one-dimensional array of shift registers. Each register shifts or delays the ultrasonic signals by the same amount. This array can steer the beam by changing the clock frequency of the shift registers. These can be known as "x" shift registers. To steer the beam independently also in an orthogonal direction, one approach is to have a second set of shift registers controlled by a second variable rate clock.
  • This second set of registers is separated into a number of subsets of registers.
  • Each subset can be an array of shift registers and each array is connected to one "x" shift register.
  • the beam can be steered in the orthogonal direction by changing the frequency of the second variable rate clock.
  • the acoustic phase array is a 4 by 4 array of speaker devices.
  • the devices in the acoustic phase array are the same.
  • each can be a bimorph device or transmitter of 7 mm in diameter.
  • the overall size of the array can be around 2.8 cm by 2.8 cm.
  • the carrier frequency can be set to 100 kHz.
  • Each bimorph is driven at less than 0.1 W.
  • the array is planar but each bimorph is pointed at the ear, such as at about 45 degrees to the array normal.
  • the FWHM main lobe of each individual bimorph is about 0.5 radian.
  • Each “x” shift register can be connected to an array of 4 "y” shift registers to create a 4 by 4 array of shift registers.
  • the clocks can be running at approximately 10 MHz (100 ns per shift).
  • the ultrasonic signals can be transmitted in digital format and delayed by the shift registers at the specified amount.
  • the main lobe of each array device covers an area of roughly 10 cm x 10 cm around the ear.
  • the beam can be steerable roughly by a phase of 0.5 radian over each direction. This is equivalent to a maximum relative time delay of 40 µs across one direction of the phase array, or 5 µs of delay per device; a sketch of the per-element delay calculation follows.
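
A sketch of how per-element delays for such a shift-register-based phase array might be computed and quantised to the ~100 ns clock ticks mentioned above. The element pitch and steering angle used here are illustrative assumptions, so the printed values are indicative rather than a reproduction of the 40 µs / 5 µs figures quoted in the text.

```python
# Per-element steering delays tau_i = i * d * sin(angle) / c, rounded to whole
# shift-register clock ticks (~10 MHz clock, 100 ns per shift).
import math

C = 343.0          # speed of sound, m/s (assumed)
PITCH = 0.007      # element spacing, m (assumed from the 7 mm bimorph diameter)
TICK = 100e-9      # one shift-register clock period, s

def steering_delays_us(angle_rad: float, n_elements: int = 4):
    """Quantised per-element delays, in microseconds, along one array direction."""
    delays = []
    for i in range(n_elements):
        tau = i * PITCH * math.sin(angle_rad) / C
        delays.append(round(round(tau / TICK) * TICK * 1e6, 2))
    return delays

print(steering_delays_us(0.5))   # steer roughly 0.5 radian in one direction
```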
  • the ultrasonic beams from the array elements interfere with each other to produce a final beam that is 1/n as wide.
  • n is equal to 4, and the beam shape of the phase array is narrowed by a factor of 4 in each direction. That is, the FWHM is less than 8 degrees, covering an area of roughly 2.8 cm x 2.8 cm around the ear.
  • the above array can give the acoustic power of over 90 dB SPL.
  • the above example can use an array of piezoelectric thin film devices.
  • the interface unit can also include a pattern recognition device that identifies and locates the ear, or the ear canal. Then, if the ear or the canal can be identified, the beam is steered more accurately to the opening of the ear canal. Based on closed loop control, the propagation direction of the ultrasonic signals can be steered by the results of the pattern recognition approach.
  • One pattern recognition approach is based on thermal mapping to identify the entrance to the ear canal.
  • Thermal mapping can be through infrared sensors.
  • Another pattern recognition approach is based on a pulsed-infrared LED, and a reticon or CCD array for detection.
  • the reticon or CCD array can have a broadband interference filter on top to filter light, which can be a piece of glass with coating.
  • the system can expand the cone, or decrease its directivity.
  • all array elements can emit the same ultrasonic signals, without delay, but with the frequency decreased.
  • Fig. 6 shows one example of the interface unit 100 attached to a jacket 102 of the user.
  • the interface unit 100 includes a directional speaker 104 and a microphone 106.
  • the directional speaker 104 emits ultrasonic signals in the general direction towards an ear of the user.
  • the ultrasonic signals are transformed by mixing or demodulating in the air between the speaker and the ear.
  • the directional ultrasonic signals confine most of the audio energy within a cone 108 that is pointed towards the ear of the user.
  • the surface area of the cone 108 when it reaches the head of the user can be tailored to be smaller than the head of the user.
  • the directional ultrasonic signals are able to provide a certain degree of privacy protection.
  • the user's head can scatter a portion of the received audio signals. Others in the vicinity of the user may be able to pick up these scattered signals.
  • the additional speaker devices, which can be piezoelectric devices, transmit random signals to interfere with or corrupt the scattered signals, or other signals that may be emitted outside the cone 108 of the directional signals, to reduce the chance of others comprehending the scattered signals.
  • Fig. 7 shows examples of mechanisms to couple an interface unit to a piece of clothing.
  • the interface unit can be integrated into a user's clothing, such as located between the outer surface of the clothing and its inner lining.
  • the interface unit can have an electrical protrusion from the inside of the clothing.
  • the interface unit can be attachable to the user's clothing.
  • a user can attach the interface unit to his clothing, and then turn it on. Once attached, the unit can be operated hands-free.
  • the interface unit can be attached to a strap on the clothing, such as the shoulder strap of a jacket. The attachment can be through a clip, a pin or a hook.
  • the interface unit can be located in the pocket.
  • Velcro can be on both the interface unit and the clothing for attachment purposes.
  • the interface unit can also be attached by a band, which can be elastic (e.g., an elastic armband). Or, the interface unit can hang from the neck of the user with a piece of string, like an ornamental design on a necklace.
  • the interface unit can have a magnet, which can be magnetically attached to a magnet on the clothing. Note that one or more of these mechanisms can be combined to further secure the attachment.
  • the interface unit can be disposable. For example, the interface unit could be disposed of once it runs out of power.
  • the interface unit may be coupled wirelessly or tethered to the base unit through a wire.
  • the interface unit may be coupled through Bluetooth, WiFi, Ultrawideband (UWB) or other wireless network/protocol.
  • Fig. 9 shows examples of additional attributes of the wireless communication system of the present invention.
  • the system can include additional signal processing techniques.
  • single-side band (SSB) or lower-side band (LSB) modulation can be used with or without compensation for fidelity reproduction.
  • a processor (e.g., a digital signal processor) can perform such signal processing.
  • Other components/functions can also be integrated with the processor. These can include local oscillators for down- or up-converting and impedance matching circuitry. Echo cancellation techniques may also be included in the circuitry. However, since the speaker is directional, the echo cancellation circuitry may not be necessary.
  • These other functions can also be performed by software (e.g., firmware or microcode) executed by the processor.
  • the base unit can have one or more antennae to communicate with base stations or other wireless devices. Additional antennae can improve antenna efficiency.
  • the antenna on the base unit can also be used to communicate with the interface unit. In this situation, the interface unit may also have more than one antenna.
  • the antenna can be integrated to the clothing.
  • the antenna and the base unit can both be integrated to the clothing.
  • the antenna can be located at the back ofthe clothing.
  • the system can have a maximum power controller that controls the maximum amount of power delivered from the interface unit.
  • average output audio power can be set to be around 60 dB, and the maximum power controller limits the maximum output power to be below 70 dB. In one embodiment, this maximum power limit is in the interface unit and is adjustable.
  • the wireless communication system may be voice activated. For example, a user can enter, for example, phone numbers using voice commands. Information, such as phone numbers, can also be entered into a separate computer and then downloaded to the communication system. The user can then use voice commands to make connections to other phones.
  • the wireless communication system can have an in-use indicator. For example, if the system is in operation as a cell phone, and if the user is talking on the phone, there can be a light-emitting diode blinking at the interface unit.
  • the in-use indicator allows others to be aware that the user is, for example, on the phone.
  • the base unit of the wireless communication system can also be integrated to the piece of clothing.
  • the base unit can have a data port to exchange information and a power plug to receive power. Such port or ports can protrude from the clothing.
  • Fig. 10 shows examples of attributes of the power source.
  • the power source may be a rechargeable battery or a non-rechargeable battery.
  • a bimorph piezoelectric device such as AT/R40-12P from Nicera, Nippon Ceramic Co., Ltd., can be used as a speaker device to form the speaker. It has a resistance of 1,000 ohms. Its power dissipation can be in the milliwatt range.
  • a coin-type battery that can store a few hundred mAh has sufficient energy to run the unit for a limited duration of time, as estimated below. Other types of batteries are also applicable.
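
A back-of-the-envelope runtime estimate for the milliwatt-range speaker device above powered by a coin-type battery. The capacity, cell voltage, and draw used here are assumed round numbers; the point is only that a few hundred mAh supports many hours of operation.

```python
# Rough runtime = stored energy / power draw.
CAPACITY_MAH = 200.0   # "a few hundred mAh" (assumed value)
VOLTAGE_V = 3.0        # typical coin-cell voltage (assumption)
DRAW_MW = 5.0          # milliwatt-range dissipation (assumption)

energy_mwh = CAPACITY_MAH * VOLTAGE_V      # stored energy in mWh
print(f"~{energy_mwh / DRAW_MW:.0f} hours of operation")
```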
  • the power source can be from a DC supply.
  • the power source can be attachable, or integrated or embedded in a piece of clothing worn by the user.
  • the power source can be a rechargeable battery. In one embodiment, a rechargeable battery can be integrated in the piece of clothing, with its charging port exposed.
  • the user can charge the battery on the road. For example, if the user is driving, the user can use a cigarette-lighter type charger to recharge the battery.
  • the power source is a fuel cell.
  • the cell can be a cartridge of fuel, such as methanol.
  • the wireless communication system is a phone, particularly a cell phone that can be operated hands-free. In one embodiment, this can be considered as a hands-free mode phone.
  • Fig. 11 A shows one embodiment where the phone can alternatively be a dual-mode phone.
  • the audio signals are produced directly from a speaker integral with the phone (e.g., within its housing). Such a speaker is normally substantially non-directional, or does not generate audio signals through transforming ultrasonic signals in air.
  • in a dual mode phone, one mode is the hands-free mode phone as described above, and the other mode is the normal-mode phone.
  • the mode selection process can be set by a switch on the phone.
  • mode selection can be automatic.
  • Fig. 11B shows examples of different techniques to automatically select the mode of a dual mode phone. For example, if the phone is attached to the clothing, the directional speaker of the interface unit can be automatically activated, and the phone becomes the hands-free mode phone.
  • automatic activation can be achieved through a switch integrated to the phone.
  • the switch can be a magnetically-activated switch. For example, when the interface unit is attached to clothing (for hands-free usage), a magnet or a piece of magnetizable material in the clothing can cause the phone to operate in the hands-free mode.
  • the magnetically-activated switch can cause the phone to operate as a normal-mode phone.
  • the switch can be mechanical.
  • an on/off button on the unit can be mechanically activated if the unit is attached. This can be done, for example, by a lever such that when the unit is attached, the lever will be automatically pressed.
  • activation can be based on orientation. If the interface unit is substantially in a horizontal orientation (e.g., within 30 degrees from the horizontal), the phone will operate in the hands-free mode. However, if the unit is substantially in a vertical orientation (e.g., within 45 degrees from the vertical), the phone will operate as a normal-mode phone. A gyro in the interface unit can be used to determine the orientation ofthe interface unit.
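
A sketch of the orientation-based mode selection just described: substantially horizontal (within 30 degrees of the horizontal) selects the hands-free mode, and substantially vertical (within 45 degrees of the vertical) selects the normal mode. The tilt value would come from the gyro mentioned above; the handling of the in-between zone is an assumption, since the text does not specify it.

```python
# Orientation-based dual-mode selection.
def select_mode(tilt_from_horizontal_deg: float) -> str:
    """Return the phone mode for a given tilt of the interface unit, in degrees."""
    tilt = abs(tilt_from_horizontal_deg)
    if tilt <= 30:           # substantially horizontal -> hands-free mode
        return "hands-free"
    if tilt >= 45:           # within 45 degrees of vertical -> normal mode
        return "normal"
    return "unchanged"       # 30-45 degrees: unspecified in the text; keep current mode

print(select_mode(10))   # hands-free
print(select_mode(80))   # normal
```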
  • the wireless communication system is a phone with a directional speaker and a microphone.
  • the present invention can be applied to other areas.
  • Fig. 12 shows examples of other embodiments of the interface unit, and
  • Fig. 13 shows examples of additional applications.
  • the interface unit can have two speakers, each propagating its directional audio signals towards one of the ears of the user.
  • one speaker can be on one shoulder of the user, and the other speaker on the other shoulder.
  • the two speakers can provide a stereo effect for the user.
  • the microphone and the speaker are integrated together in a single package.
  • the microphone can be a separate component and can be attached to the clothing as well.
  • the wires from the base unit can connect to the speaker, and at least one wire can split off and connect to the microphone at a location close to the head of the user.
  • the interface unit does not need to include a microphone.
  • a wireless communication system can be used as an audio unit, such as an MP3 player, a CD player or a radio.
  • Such wireless communication systems can be considered one-way communication systems.
  • the interface unit can be used as the audio output, such as for a stereo system, television or a video game player.
  • the user can be playing a video game.
  • the audio signals, or a representation of the audio signals, are transmitted wirelessly to a base unit or an interface unit. Then, the user can hear the audio signals in a directional manner, reducing the chance of annoying or disturbing people in his immediate environment.
  • the base unit and the interface unit are integrated together in a package, which again can be attached to the clothing by techniques previously described for the interface unit.
  • the interface unit can include a monitor or a display.
  • a user can watch television or video signals in public, again with reduced possibility of disturbing people in the immediate surroundings because the audio signals are directional.
  • video signals can be transmitted from the base unit to the interface unit through UWB signals.
  • the base unit can also include the capability to serve as a computation system, such as in a personal digital assistant (PDA) or a notebook computer.
  • the user can simultaneously communicate with another person in a hands-free manner using the interface unit, without the need to take her hands off the computation system.
  • Data generated by a software application the user is working on using the computation system can be transmitted digitally with the voice signals to a remote device (e.g., another base station or unit).
  • the directional speaker does not have to be integrated or attached to the clothing of the user. Instead, the speaker can be integrated or attached to the computation system, and the computation system can function as a cell phone.
  • Directional audio signals from the phone call can be generated for the user while the user is still able to manipulate the computation system with both of his hands.
  • the user can simultaneously make phone calls and use the computation system.
  • the computation system is also enabled to be connected wirelessly to a local area network, such as to a WiFi or WLAN network, which allows high-speed data as well as voice communication with the network.
  • the user can make voice over IP calls.
  • the high-speed data as well as voice communication permits signals to be transmitted wirelessly at frequencies beyond 1 GHz.
  • the wireless communication system can be a personalized wireless communication system.
  • the audio signals can be personalized to the hearing characteristics of the user of the system.
  • the personalization process can be done periodically, such as once every year, similar to periodic re-calibration. Such re-calibration can be done by another device, and the results can be stored in a memory device.
  • the memory device can be a removable media card, which can be inserted into the wireless communication system to personalize the amplification characteristics ofthe directional speaker as a function of frequency.
  • the system can also include an equalizer that allows the user to personalize the amplitude of the speaker audio signals as a function of frequency.
  • the system can also be personalized based on the noise level in the vicinity of the user.
  • the device can sense the noise level in its immediate vicinity and change the amplitude characteristics of the audio signals as a function of noise level.
  • the form factor ofthe interface unit can be quite compact. In one embodiment, it is rectangular in shape. For example, it can have a width ofabout "x", a length ofabout "2x", and a thickness that is less than "x". "X" can be 1.5 inches, or less than 3 inches. In another example, the interface unit has a thickness of less than 1 inch. In yet another example, the interface unit does not have to be flat. It can have a curvature to conform to the physical profile ofthe user.
  • a speaker is considered directional if the FWHM of its ultrasonic signals is less than about 1 radian or around 57 degrees. In another embodiment, a speaker is considered directional if the FWHM of its ultrasonic signals is less than about 30 degrees. In yet another embodiment, a speaker is transmitting from, such as, the shoulder ofthe user, or a speaker is transmitting signals towards a user's ear. The speaker is considered directional if in the vicinity ofthe user's ear or in the vicinity 6-8 inches away from the speaker, 75% ofthe power of its audio signals is within an area of less than 50 square inches.
  • a speaker is considered directional if in the vicinity of the ear or in the vicinity a number of inches, such as 8 inches, away from the speaker, 75% ofthe power of its audio signals is within an area of less than 20 square inches. In yet a further embodiment, a speaker is considered directional if in the vicinity ofthe ear or in the vicinity a number of inches, such as 8 inches, away from the speaker, 75% ofthe power of its audio signals is within an area of less than 13 square inches. Also, referring back to Fig. 6, in one embodiment, a speaker can be considered a directional speaker if most ofthe power of its audio signals is propagating in one general direction, confined within a virtual cone, such as the cone 108 in Fig.
  • the directional speaker generates ultrasonic signals in the range of 40 kHz.
  • the ultrasonic signals are between 200 kHz and 1 MHz. They can be generated by multilayer piezoelectric thin films, or other types of solid state devices.
  • if the carrier frequency is at a higher frequency range than 40 kHz, the absorption/attenuation coefficient of air is considerably higher.
  • the attenuation coefficient α can be about 4.6 Np/m, implying that the ultrasonic wave will be attenuated as exp(-α*z), or about 40 dB/m.
  • the waves are more quickly attenuated, reducing the range of operation of the speaker in the propagation direction of the ultrasonic waves; the conversion behind the 40 dB/m figure is checked in the sketch below.
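
A quick check of the attenuation figure above. The only assumption is that the quoted coefficient of 4.6 is an amplitude coefficient in nepers per metre, which is consistent with the 40 dB/m the text derives from it.

```python
# Convert an amplitude attenuation coefficient in Np/m to dB/m and show the extra
# loss over short distances (the reason the operating range shrinks at high carriers).
import math

alpha = 4.6                                   # Np/m, value from the text
db_per_m = 20 * math.log10(math.e) * alpha    # ~8.686 dB per neper
print(f"{db_per_m:.0f} dB/m")

for z in (0.25, 0.5, 1.0):                    # propagation distance, metres
    print(f"after {z:4.2f} m: {db_per_m * z:5.1f} dB of attenuation")
```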
  • privacy is enhanced and audible interference to others is reduced.
  • the resultant propagation direction of the ultrasonic waves is not orthogonal to the horizontal, but at, for example, 45 degrees.
  • the ultrasonic waves can be at an angle so that the main beam of the waves is approximately pointed at an ear of the user.
  • the propagation direction of the ultrasonic waves is approximately orthogonal to the horizontal.
  • Such a speaker does not have to be on a wedge or a step. It can be on a surface that is substantially parallel to the horizontal.
  • the speaker can be on the shoulder of a user, and the ultrasonic waves propagate upwards, instead of at an angle pointed at an ear of the user. If the ultrasonic power is sufficient, the waves would have sufficient acoustic power even when the speaker is not pointing exactly at the ear.
  • the ultrasonic speaker generates virtual sources in the direction of propagation. These virtual sources generate secondary acoustic signals in numerous directions, not just along the propagation direction. This is similar to the antenna pattern which gives non-zero intensity in numerous directions away from the direction of propagation.
  • the acoustic power is calculated to be from 45 to 50 dB SPL if (a) the ultrasonic carrier frequency is 500 kHz; (b) the audio frequency is 1 kHz; (c) the emitter size of the speaker is 3 cm x 3 cm; (d) the emitter power (peak) is 140 dB SPL; (e) the emitter is positioned 10 to 15 cm away from the ear, such as located on the shoulder of the user; and (f) with the ultrasonic beam pointing upwards, not towards the ear, the center of the ultrasonic beam is about 2-5 cm away from the ear.
  • the ultrasonic beam is considered directed towards the ear as long as any portion of the beam, or the cone of the beam, is immediately proximate to, such as within 7 cm of, the ear.
  • the direction of the beam does not have to be directed at the ear. It can even be orthogonal to the ear, such as propagating up from one's shoulder, substantially parallel to the face of the person.
  • the emitting surface of the ultrasonic speaker does not have to be flat. It can be designed to be concave or convex to eventually create a diverging ultrasonic beam. For example, if the focal length of a convex surface is f, the power of the ultrasonic beam would be 6 dB down at a distance of f from the emitting surface. To illustrate numerically, if f is equal to 5 cm, then after 50 cm, the ultrasonic signal would be attenuated by 20 dB; a small model reproducing these figures is sketched below.
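
The two figures above (6 dB down at a distance f, and about 20 dB down at 50 cm when f = 5 cm) are consistent with simple spherical divergence from a virtual focus a distance f behind the aperture, i.e. loss(z) = 20*log10((f + z) / f). That model is an assumption used here only to reproduce the quoted numbers, not a statement of the patent's design.

```python
# Divergence loss for a convex emitting surface with virtual focal length f.
import math

def divergence_loss_db(z_m: float, focal_length_m: float) -> float:
    return 20 * math.log10((focal_length_m + z_m) / focal_length_m)

f = 0.05                                   # 5 cm focal length, from the text
for z in (f, 0.50):                        # at z = f and at 50 cm
    print(f"z = {z*100:4.0f} cm: {divergence_loss_db(z, f):4.1f} dB down")
```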
  • "attachable to the clothing worn by a user" includes being wearable by the user.
  • the user can wear a speaker on his neck, like a pendant on a necklace. This also would be considered as attachable to the clothing worn by the user.
  • the necklace can be considered as the "clothing" worn by the user, and the device is attachable to the necklace.
  • two directional speakers can be positioned one on each side of a notebook computer. As the user is playing games on the notebook computer, the user can communicate with other players using the microphone on the notebook computer and the directional speakers, again without taking his hands off a keyboard or a game console. Since the speakers are directional, audio signals are more confined to be directed to the user in front ofthe notebook computer.
  • FIG. 14 shows one embodiment of a hearing enhancement system 2010 of the present invention.
  • the hearing enhancement system 2010 includes an interface unit 2014, which includes a directional speaker 2016 and a microphone 2018.
  • the embodiment may also include a base unit 2012, which has, or can couple to, a power source.
  • the interface unit 2014 can electrically couple to the base unit 2012. In one embodiment, the base unit 2012 can be integrated within the interface unit 2014.
  • the coupling can be in a wired (e.g., cable) or a wireless (e.g., Bluetooth technologies) manner.
  • FIG. 15 shows a person wearing an interface unit 2100 of the present invention on his jacket 2102.
  • the interface unit 2100 can, for example, be the interface unit 2014 shown in FIG. 14.
  • the interface unit 2100 includes a directional speaker 2104 and a microphone 2106.
  • the speaker 2104 can be in a line of sight of an ear ofthe user.
  • the microphone 2106 picks up the friend's speech, namely, her audio signals.
  • a hearing enhancement system according to the invention can then use the audio signals to modulate ultrasound signals.
  • the directional speaker 2104 transmits the modulated ultrasonic signals in air towards the ear ofthe user.
  • the transmitted signals are demodulated in air to create the output audio signals.
  • based on ultrasound transmission, the speaker 2104 generates directional audio signals and sends them as a cone (virtual cone) 2108 to the user's ear.
  • the directional speaker 2104 includes a physical cone or a horn that directly transmits directional audio signals.
  • the audio signals from the speaker can be steered to the ear or the ear canal, whose location can be identified through mechanisms, such as pattern recognition.
  • a number of different embodiments ofthe directional speakers have been previously described in this application.
  • hearing of both ears decreases together. In a sense, this is similar to our need to wear glasses: rarely would one eye of a person need glasses while the other eye has 20/20 vision.
  • the left ear unit can be on the left shoulder, and the right ear unit can be on the right shoulder.
  • These two interface units can be electrically coupled, or can be coupled to one base unit. Again, the coupling can be wired or wireless.
  • the interface unit can be worn by the user as a pendant on a necklace in front ofthe user. Output audio signals can then be propagated to both ears.
  • the system is designed to operate in the frequency range between 500Hz to 8kHz.
  • the decrease in hearing is not the same across all audio frequencies.
  • the user might be able to easily pick up the sound of vowels, but not the sound of consonants, such as the "S" and the "P".
  • FIG. 16 shows different embodiments of the invention regarding frequency-dependent amplification of the received audio signals. Note that amplification is not limited to amplifying the received audio signals directly.
  • amplification can mean the power level of the output audio signals being higher than that of the received audio signals. This can be achieved by increasing the power of the ultrasonic signals.
  • the embodiment amplifies the audio signals so that, around the entrance of the ear, the signals can have a sound pressure level ("SPL") of about 80 dB from 2 kHz to 4 kHz.
  • the SPL of the output audio signals can be 70 dB from 1.5 kHz to 4 kHz, with the 3 dB cutoff also at 1.5 kHz. With a roll-off of 12 dB/octave, at 750 Hz the SPL becomes about 58 dB.
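  • One way to read these numbers is as a target output curve: flat at 70 dB SPL in the 1.5 to 4 kHz passband and rolled off at 12 dB/octave below the cutoff. The Python sketch below is illustrative only; it leaves the high side flat for simplicity, and the function name and defaults are assumptions rather than part of the specification.

      import math

      def target_spl_db(freq_hz: float,
                        passband=(1500.0, 4000.0),
                        passband_spl_db=70.0,
                        rolloff_db_per_octave=12.0) -> float:
          """Target output SPL: flat in the passband, rolled off below the low cutoff."""
          low = passband[0]
          if freq_hz >= low:
              return passband_spl_db
          octaves_below = math.log2(low / freq_hz)
          return passband_spl_db - rolloff_db_per_octave * octaves_below

      print(target_spl_db(2000))   # 70.0 dB inside the passband
      print(target_spl_db(750))    # 58.0 dB, one octave below the 1.5 kHz cutoff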
  • Another frequency-dependent amplification approach assumes that most information in the audio signals resides within a certain frequency band. For example, about 70% of the information in the audio signals can be within the frequency range of 1 to 2 kHz. Since the ear canal remains open and the user may only be mildly or moderately hearing impaired, the user can be hearing the audio signals directly from the sender (i.e., without assistance provided by the hearing enhancement system). In this approach, the system filters audio signals in the identified frequency range, such as the 1 to 2 kHz range, and processes them for amplification and transmission to the user. Frequencies outside that band are not processed for amplification; the user can pick them up directly from the sender. Low to mid frequencies, such as those below 2 kHz, are typically louder.
  • because the hearing enhancement system does not require having any hearing aid inserted into the ear, the low to mid frequencies can enter the ear unaltered. Frequencies in the mid to high range, such as from 2000 to 3000 Hz, fall within the natural resonance of the ear canal, which is typically around 2700 Hz. As a result, these frequencies can be increased by about 15 dB. With no hearing aid inserted into one ear, the audio signals do not experience any insertion loss, and there is also no occlusion effect due to the user's own voice.
  • in a third approach, amplification across frequencies is directly tailored to the hearing needs of the user. This can be done through calibration. This third approach can also be used in conjunction with either the first approach or the second approach.
  • FIG. 17 shows a number of embodiments regarding calibration of a user's hearing across various frequencies.
  • Calibration enables the system to determine (e.g., estimate) the hearing sensitivity ofthe user.
  • the user's hearing profile is generated.
  • the user can perform calibration by himself. For example, the audio frequencies are separated into different bands.
  • the system generates different SPLs at each band to test the user's hearing. The specific power level that the user feels most comfortable with becomes the power level for that band for that user.
  • after testing is done for all of the bands, based on the power levels for each band, the system creates the user's personal hearing profile. In this calibration process, the system can prompt the user and lead the user through an interactive calibration process, as sketched below.
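  • A hypothetical sketch of such an interactive, band-by-band calibration follows in Python. The band edges, the candidate levels, and the play_tone and user_prefers callbacks are assumptions standing in for the system's own prompting mechanism; they are not taken from the specification.

      from typing import Callable, Dict, List, Tuple

      Band = Tuple[int, int]   # (low_hz, high_hz)

      def calibrate(bands: List[Band],
                    candidate_levels_db: List[int],
                    play_tone: Callable[[Band, int], None],
                    user_prefers: Callable[[Band, List[int]], int]) -> Dict[Band, int]:
          """Build a per-band hearing profile of the most comfortable SPL."""
          profile: Dict[Band, int] = {}
          for band in bands:
              for level in candidate_levels_db:
                  play_tone(band, level)          # prompt the user with a test signal
              profile[band] = user_prefers(band, candidate_levels_db)
          return profile

      # Example bands and candidate levels (illustrative values only).
      bands = [(500, 1000), (1000, 2000), (2000, 4000), (4000, 8000)]
      levels = [55, 65, 75, 85]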
  • calibration can be done remotely through a web site.
  • the web site can guide the user through the calibration process. This can be done, for example, by the user being positioned proximate to a computer terminal that is connected through the Internet to the web site.
  • the terminal has a speaker or headset that produces audio sounds as part ofthe calibration process.
  • this calibration process can also be done by a third party, such as an audiologist.
  • the user's hearing profile can be stored in the hearing enhancement system. If the calibration is done through a computer terminal, the hearing profile can be downloaded into the hearing enhancement system wirelessly, such as through Bluetooth or infrared technology.
  • the hearing profile can alternatively be stored in a portable media storage device, such as a memory stick.
  • the memory stick could be inserted into the hearing enhancement system, or some other audio generating device, which desires to access the hearing profile and personalize the system's amplification across frequencies for the user.
  • the system can also periodically alert the user for re-calibration.
  • the period can be, for example, once a year.
  • the calibration can also be done in stages so that it is less onerous and less obvious that a hearing evaluation is being performed.
  • Frequency-dependent amplification has the added advantage of power conservation because certain frequency bands may not need or may not have amplification.
  • the user has the option of manually changing the amplification ofthe system.
  • the system can also have a general volume controller that allows the user to adjust the output power ofthe speaker. This adjustment can also be across certain frequency bands.
  • the signal processing speed of the system cannot be too low.
  • the user would not be able to distinguish two identical sets of audio signals if the difference in arrival times of the two signals is below a certain delay time, such as 10 milliseconds.
  • accordingly, the system's signal processing delay should be kept below that delay time.
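  • For example, a rough block-latency check, assuming block-based processing at a 48 kHz sample rate; the block sizes are arbitrary illustrative values and the 10 ms figure is the delay threshold mentioned above.

      SAMPLE_RATE_HZ = 48_000
      MAX_DELAY_MS = 10.0   # threshold below which two arrivals are indistinguishable

      for block_size in (128, 256, 512, 1024):
          block_latency_ms = 1000.0 * block_size / SAMPLE_RATE_HZ
          verdict = "OK" if block_latency_ms < MAX_DELAY_MS else "too slow"
          print(f"{block_size:5d} samples -> {block_latency_ms:5.2f} ms  {verdict}")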
  • FIG. 18A shows a number of embodiments for managing power consumption ofthe system.
  • One embodiment includes a manual on/off switch, which allows the user to manually turn the system off as he desires.
  • the on/off switch can be on a base unit, an interface unit, or a remote device.
  • This on/off switch can also be voice activated.
  • the system is trained to recognize specific recitations, such as specific sentences or phrases, and/or the user's voice. To illustrate, when the user says sentences like any of the following, the system would be automatically turned on: "What did you say?", "What?", "Louder.", "You said what?"
  • the system can be on-demand.
  • the system can identify noise (e.g., background noise), as opposed to audio signals with information.
  • the system could assume that the input audio signals are noise.
  • the system would assume that there are no audio signals worth amplifying.
  • the system can then be deactivated, such as to be placed into a sleep mode, a reduced power mode or a standby mode.
  • the system can be deactivated.
  • This duration of time can be adjustable, and can be, for example, 10 seconds or 10 minutes.
  • when audio signals with information are detected again, the system can be activated, i.e., awakened from the sleep mode, the reduced power mode or the standby mode.
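  • A toy sketch of this on-demand behavior follows, assuming some upstream detector flags whether each audio frame carries information; the timeout value and the class structure are illustrative assumptions only.

      import time

      SILENCE_TIMEOUT_S = 10.0   # adjustable duration, e.g., 10 seconds or 10 minutes

      class PowerManager:
          """Sleeps after sustained noise-only input, wakes when signals return."""

          def __init__(self) -> None:
              self.active = True
              self.last_signal_time = time.monotonic()

          def on_audio_frame(self, has_information: bool) -> None:
              now = time.monotonic()
              if has_information:
                  self.last_signal_time = now
                  self.active = True            # wake from sleep/standby if needed
              elif self.active and now - self.last_signal_time > SILENCE_TIMEOUT_S:
                  self.active = False           # enter sleep / reduced-power / standby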
  • Another approach to manage power consumption can make use of a directional microphone. This approach can improve the signal-to-noise ratio.
  • the gain at specific directions of such a microphone can be 20 dB higher than that of omni-directional microphones.
  • the direction of the directional microphone can vary with application. However, in one embodiment, the direction of the directional microphone can point forward or outward from the front of the user. The assumption is that the user typically faces the person talking to him, and thus it is the audio signals from the person in front of him that are to be enhanced.
  • FIG. 19A shows an interface unit 2202 with four directional microphones pointing in four orthogonal directions. With the microphones in symmetry, the user does not have to think about the orientation ofthe microphones if the user is attaching the interface unit to a specific location on his clothing.
  • FIGS. 19B-19C show interface units 2204 and 2206, each with two directional microphones pointing in two orthogonal directions.
  • one unit can be on the left shoulder and the other unit on the right shoulder ofthe user, with the user's head in between the interface units in FIG. 19B and FIG. 19C.
  • the amplification ofthe system can also depend on the ambient power level, or the noise level ofthe environment ofthe system.
  • One approach to measure the noise level is to measure the average SPL at gaps of the audio signals. For example, a person asks the user the following question, "Did you leave your heart in San Francisco?" Typically, there are gaps between every two words or between sentences or phrases.
  • the system measures, for example, the root mean square ("rms") value of the power in each of the gaps, and can calculate an average among all of the rms values to determine the noise level.
  • the system increases its gain so as to ensure that the average power of the output audio signals is higher than the noise level by a certain degree. For example, the average SPL of the output audio signals can be 10 dB above the noise level.
  • when the average power level of the environment, or the ambient noise level, is higher than a threshold value, signal amplification is reduced.
  • This average power level can include the audio signals ofthe person talking to the user.
  • the rationale is that if the environment is very noisy, it would be difficult for the user to hear the audio signals from the other person anyway. As a result, the system should not keep amplifying the audio signals regardless of the environment. For example, if the average power level of the environment is more than 75 dB, the amplification of the system is reduced, such as to 0 dB. A sketch of this gap-based noise estimate and gain policy follows below.
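  • Below is a minimal sketch of the gap-based noise estimate and gain policy just described. The 10 dB margin and the 75 dB cutoff come from the text; the assumption that all levels are already calibrated in dB SPL, and the speech_db input standing in for the level of the incoming speech, are illustrative.

      import math
      from typing import Sequence

      def rms(frame: Sequence[float]) -> float:
          return math.sqrt(sum(v * v for v in frame) / max(len(frame), 1))

      def noise_level_db(gap_frames: Sequence[Sequence[float]], ref: float = 1.0) -> float:
          """Average the rms values measured in speech gaps and express them in dB."""
          levels = [rms(f) for f in gap_frames if len(f) > 0]
          avg = sum(levels) / max(len(levels), 1)
          return 20 * math.log10(max(avg, 1e-12) / ref)

      def choose_gain_db(noise_db: float,
                         speech_db: float,
                         margin_db: float = 10.0,        # keep output ~10 dB above noise
                         ambient_cutoff_db: float = 75.0) -> float:
          """Pick an output gain; back off to 0 dB when the environment is too loud."""
          if noise_db > ambient_cutoff_db:
              return 0.0
          return max(noise_db + margin_db - speech_db, 0.0)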
  • Another power management approach is to increase the power ofthe audio signals.
  • One embodiment to create more power is to increase the surface area ofthe medium responsible for generating the output audio signals.
  • audio signals are generated by a piezoelectric film
  • a number of embodiments are based on ultrasonic demodulation or mixing.
  • a 1-cm diameter bimorph can give 140 dB ultrasonic SPL.
  • the device may need about 0.1 W of input power.
  • Ten such devices would increase output power by about 20 dB.
  • Another approach to increase power is to increase the demodulation or mixing efficiency ofthe ultrasonic signals by having at least a portion ofthe transformation performed in a medium other than air. Depending on the medium, this may make the directional speaker more power efficient.
  • the system can include one or more rechargeable batteries. These batteries can be recharged by coupling the system to a battery re-charger.
  • Another feature ofthe system that may be provided is one or more electrical connections on the system so as to facilitate electrical connection with a battery charger.
  • the system includes at least one connector or conductive element (e.g., terminal, pin, pad, trace, etc.) so that the electrical coupling between the rechargeable battery and the charger can be achieved.
  • the electrical connector or conductive element is provided on the system and electrically connected to the battery.
  • the placement ofthe electrical connector or conductive element on the system serves to allow the system to be simply placed within a charger. Consequently, the electrical connector or conductive element can be in electrical contact with a counterpart or corresponding electrical connector or conductive element ofthe charger.
  • FIG. 18B shows an embodiment ofthe interface unit 2150 with an electrical connection 2152 and a cover 2154.
  • the interface unit 2150 can be the interface unit 2014 shown in FIG. 14.
  • the electrical connection 2152 can be a USB connector. With the cover 2154 removed, the connection 2152 can be used, for example, to couple to a battery charger to recharge the interface unit 2150.
  • the charger can be considered a docking station, upon which the system is docked so that the battery within the system can be charged.
  • the system can likewise include an electrical connector or conductive element that facilitates electrical connection to the docking station when docked.
  • the system, which can include the base unit, can also have the electronics to serve as a cell phone.
  • FIG. 20 shows such an embodiment.
  • the system can change its mode of operation and function as a cell phone.
  • the system can alert the user of an incoming call. This can be through, for example, ringing, vibration or a blinking light.
  • the user can pick up the call by, for example, pushing a button on the interface unit. Picking up the call can also be through an activation mechanism on the base unit or a remote control device.
  • FIG. 21 is a flow diagram of call processing 2400 according to one embodiment ofthe invention.
  • the call processing 2400 is performed using the system.
  • the system can be the system shown in FIG. 14.
  • the call processing 2400 begins with a decision 2402 that determines whether a call is incoming. When the decision 2402 determines that there is no incoming call, the call processing 2400 waits for such a call. Once the decision 2402 determines that a call is incoming, the system is activated 2408. Here, the wireless communications capability of the system is activated (e.g., powered-up, enabled, or woken-up). The user of the system is then notified 2410 of the incoming call. In one embodiment, the notification to the user of the incoming call can be achieved by an audio sound produced by the system (via a speaker). Alternatively, the user of the system could be notified by a vibration of the system, or a visual (e.g., light) indication provided by the system. Alternatively, the base unit could include a ringer that provides an audio sound and/or a vibration indication to signal an incoming call.
  • a decision 2412 determines whether the incoming call has been answered.
  • the base unit can activate 2414 a voice message informing the caller to leave a message or instructing the caller as to the unavailability ofthe recipient.
  • the call can be answered 2416 at the base unit.
  • a wireless link is established 2418 between the interface unit and the base unit.
  • the wireless link is, for example, a radio communication link such as utilized with Bluetooth or WiFi networks.
  • communication information associated with the call can be exchanged 2420 over the wireless link.
  • the base unit receives the incoming call, and communicates wirelessly with the interface unit such that communication information is provided to the user via the system.
  • the user ofthe system is accordingly able to communicate with the caller by way ofthe system and, thus, in a hands-free manner.
  • a decision 2422 determines whether the call is over (completed). When the decision 2422 determines that the call is not over, the call processing 2400 returns to repeat the operation 2420 and subsequent operations so that the call can continue. On the other hand, when the decision 2422 determines that the call is over, then the system is deactivated 2424, and the wireless link and the call are ended 2426.
  • the deactivation 2424 of the system can place the system in a reduced-power mode. For example, the deactivation 2424 can power-down, disable, or put to sleep the wireless communication capabilities (e.g., circuitry) of the system. Following the operation 2426, as well as following the operations 2406 and 2414, the call processing 2400 for the particular call ends. This flow is sketched below.
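  • The flow of FIG. 21 can be summarized by the schematic Python sketch below, with notification, answer detection, the voice message and the per-round audio exchange represented by caller-supplied placeholder callbacks; it illustrates the described flow rather than the patent's implementation.

      from enum import Enum, auto

      class CallState(Enum):
          IDLE = auto()
          RINGING = auto()
          IN_CALL = auto()

      class CallProcessor:
          """Incoming call -> activate and notify -> answer or voice message ->
          exchange audio over the wireless link -> deactivate when over."""

          def __init__(self, notify, answer_detected, exchange_audio, voice_message):
              self.state = CallState.IDLE
              self.notify = notify                    # ring / vibrate / blink (2410)
              self.answer_detected = answer_detected  # True when the user picks up (2412)
              self.exchange_audio = exchange_audio    # one round over the link (2420)
              self.voice_message = voice_message      # played when unanswered (2414)

          def on_incoming_call(self) -> None:
              self.state = CallState.RINGING          # system activated (2408)
              self.notify()
              if not self.answer_detected():
                  self.voice_message()
                  self.state = CallState.IDLE
                  return
              self.state = CallState.IN_CALL          # wireless link established (2418)

          def on_call_tick(self, call_over: bool) -> None:
              if self.state is not CallState.IN_CALL:
                  return
              if call_over:                           # decision 2422
                  self.state = CallState.IDLE         # deactivate, end link (2424/2426)
              else:
                  self.exchange_audio()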
  • the system can have a directional microphone pointing at the head of the user.
  • a directional microphone pointing at the head of the user is shown in FIG. 19A.
  • FIG. 22 shows a number of embodiments regarding improving privacy of the present invention.
  • the audio signal propagation angle can inherently improve privacy.
  • the cone ofthe audio signals typically propagates from low to high in order to get to an ear ofthe user.
  • the elevation angle can be 45 degrees.
  • One advantage of such a propagation direction is that most of the audio signals reflected from the head radiate towards the sky above the head. This reduces the chance of the audio signals being eavesdropped, particularly since the signal power falls off with the square of the propagation distance.
  • Privacy can be enhanced based on frequency-dependent amplification. Since certain audio frequencies may not be amplified, and may be relatively low in SPL, their reflected signals can be very low. This reduces the probability of the entire audio signal being heard by others.
  • Another approach to improve privacy is to reduce the highest power level of the output audio signals to below a certain threshold, such as 70 dB. This level may be sufficient to improve the hearing of those who have mild hearing loss.
  • narrowing the cone can be done, for example, by increasing the carrier frequency of the audio signals.
  • the higher the carrier frequency, the narrower the cone; for example, a cone created by 100 kHz signals is typically narrower than a cone created by 40 kHz signals.
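  • To illustrate the trend (not the patent's specific emitter), the far-field -3 dB beamwidth of a uniformly driven circular emitter scales roughly as 1.02 times wavelength over diameter; the sketch below applies that textbook approximation to the ultrasonic carrier for an assumed 3 cm emitter, purely to show that the beam narrows as the carrier frequency rises.

      import math

      C_AIR_M_S = 343.0   # approximate speed of sound in air

      def half_power_beamwidth_deg(freq_hz: float, emitter_diameter_m: float) -> float:
          """Approximate -3 dB beamwidth of a uniformly driven circular emitter."""
          wavelength = C_AIR_M_S / freq_hz
          return math.degrees(1.02 * wavelength / emitter_diameter_m)

      for f in (40_000, 100_000):
          print(f"{f / 1000:.0f} kHz carrier, 3 cm emitter -> "
                f"{half_power_beamwidth_deg(f, 0.03):.1f} degrees")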
  • sidelobes can also be suppressed.
  • Another approach to narrow the cone is to increase the gain of the cone or the horn that generates the audio signals.
  • a focused beam has the added advantage of better power conservation. With the audio signals restricted to a smaller cone, less power is needed to generate the audio signals.
  • the system is further designed to pick up, capture or access audio signals from portable or nonportable instruments, with the interface unit serving as a personalized listening unit.
  • Audio signals from these instruments can be transmitted through wire to the system.
  • the interface unit can provide an electrical input for connecting to the instrument by wires. If transmission is wireless, the system can be designed to include the electronics to capture wireless signals from the instruments through a wireless local area network, such as WiFi or Bluetooth. The audio signals from these instruments can be up-converted and transmitted as a WiFi signal to be picked up by the system. The system then down-converts the WiFi signal to re-generate the audio signals for the user.
  • FIG. 23 shows examples of such other portable or non-portable instruments.
  • the instruments can be used in a private environment, such as at home, or attached to the user. This can include entertainment units, such as televisions, stereo systems, CD players, or radios.
  • Private use can include a phone, which can be a desktop phone with a conference speaker or a cell phone.
  • the system can function as the headset of a phone, and can be coupled to the phone in a wireless manner, such as through Bluetooth.
  • the user can be at a conference or a theater.
  • the system can be coupled to the conference microphone or the theater speaker wirelessly, and thus be capable of capturing and enhancing the audio signals therefrom.
  • the directional speaker generates ultrasonic signals in the range of 40 kHz.
  • the ultrasonic signals are between 200 kHz to 1 MHz. It can be generated by multilayer piezoelectric thin films, or other types of solid state devices. Since the carrier frequency is at a higher frequency range than 40 kHz, the absorption/attenuation coefficient by air is considerably higher. On the other hand, privacy is enhanced and audible interference to others is reduced.
  • the resultant propagation direction ofthe ultrasonic waves is not orthogonal to the horizontal, but at, for example, 45 degrees.
  • the ultrasonic waves can be at an angle so that the main beam ofthe waves is approximately pointed at an ear ofthe user.
  • the propagation direction ofthe ultrasonic waves is approximately orthogonal to the horizontal.
  • Such a speaker does not have to be on a wedge or a step. It can be on a surface that is substantially parallel to the horizontal.
  • the speaker can be on the shoulder of a user, and the ultrasonic waves propagate upwards, instead of at an angle towards an ear ofthe user. If the ultrasonic power is sufficient, the waves would have sufficient acoustic power even when the speaker is not pointing exactly at the ear.
  • the ultrasonic beam is considered directed towards the ear as long as any portion of the beam, or the cone of the beam, is immediately proximate to, such as within 7 cm of, the ear.
  • the direction of the beam does not have to be directed at the ear. It can even be orthogonal to the ear, such as propagating up from one's shoulder, substantially parallel to the face of the person.
  • a number of embodiments ofthe present invention pertain to a directional speaker for a portable electronic device.
  • the directional speaker can be used with the electronic device to direct audio output in a directionally constrained manner. As a result, a certain degree of privacy with respect to the audio output is achieved for the user ofthe electronic device, yet the user need not wear a headset or ear phone, or have to hold a speaker against one's ear.
  • the directional speaker can be integral with the electronic device. Alternatively, the directional speaker can be an attachment (or peripheral) to the electronic device.
  • the electronic device can be a computing device, such as a personal computer, a portable computer, or a personal digital assistant.
  • the device can be a CD player, a portable radio, a communications device, or an electric musical instrument, such as an electric piano.
  • a communications device is a mobile telephone, such as a 2G, 2.5G or 3G phone.
  • FIG. 24A illustrates a mobile telephone 3100 with an integrated directional speaker according to one embodiment ofthe invention.
  • the mobile telephone 3100 is, for example, a cellular phone.
  • the mobile telephone 3100 includes a housing 3102 that provides an overall body for the mobile telephone 3100.
  • the mobile telephone 3100 includes a display 3104.
  • the mobile telephone 3100 also includes a plurality of buttons 3106 that allow user input of alphanumeric characters or functional requests, and a navigational control 3108 that allows directional navigation with respect to the display 3104.
  • the mobile telephone 3100 also includes an antenna 3110.
  • the mobile telephone 3100 includes a microphone 3112 for voice pickup and an ear speaker 3114 for audio output.
  • the ear speaker 3114 can also be referred to as an earpiece.
  • the mobile telephone 3100 also includes a directional speaker 3116.
  • the directional speaker 3116 provides directional audio sound for the user ofthe mobile telephone 3100.
  • the directional audio sound produced by the directional speaker 3116 allows the user of the mobile telephone 3100 to hear the audio sound even though neither of the user's ears is proximate to the mobile telephone 3100.
  • the directional nature ofthe directional sound output is towards the user and thus provides privacy by restricting the audio sound to a confined directional area. In other words, bystanders in the vicinity ofthe user but not within the confined directional area would not be able to directly hear the audio sound produced by the directional speaker 3116.
  • the bystanders might be able to hear a degraded version ofthe audio sound after it reflects from a surface.
  • the reflected audio sound, if any, that reaches the bystander would be at a reduced decibel level (e.g., at least a 20 dB reduction) making it difficult for bystanders to hear and understand the audio sound.
  • FIG. 24B is a perspective view of a flip-type mobile telephone 3150 with an integrated directional speaker according to another embodiment ofthe invention.
  • the mobile telephone 3150 is, for example, a cellular phone.
  • the mobile telephone 3150 shown in FIG. 24B is similar to the mobile telephone 3100 illustrated in FIG. 24A. More particularly, the mobile telephone 3150 includes a housing 3152 that provides a body for the mobile telephone 3150.
  • the mobile telephone 3150 includes a display 3154, a plurality of keys 3156, and a navigation control 3158. To support wireless communications, the mobile telephone 3150 also includes an antenna 3160.
  • the mobile telephone 3150 includes a microphone 3162 for voice pickup and an ear speaker 3164 for audio output.
  • the mobile telephone 3150 includes a directional speaker 3166.
  • the directional speaker 3166 is provided in a lower region of a lid portion 3168 of the housing 3152 of the mobile telephone 3150.
  • the directional speaker 3166 directs audio output to the user ofthe mobile telephone 3150 in a directional manner.
  • the directional nature ofthe directional sound output is towards the user and thus provides privacy by restricting the audio sound to a confined directional area.
  • the direction for the audio output by the directional speaker 3116, 3166 can be estimated and thus fixed in advance.
  • the directional speakers 3116, 3166 shown in FIGs. 24A and 24B can be primarily structurally fixed with respect to their directional audio output.
  • the angle and direction can be set such that the directional speaker 3116, 3166 would output audio in the direction ofthe user's ears assuming that the user holds the mobile telephone 3100, 3150 in front of them so as to view information on the display 3104, 3154.
  • the directional speakers 3116, 3166 can be structurally movable so that a user is able to alter the direction ofthe directional audio output to suit his needs.
  • the directional speakers 3116, 3166 can, for example, be repositionable to allow repositioning ofthe output direction for the directional speakers 3116, 3166.
  • the directional speakers 3116, 3166 can, for example, be repositionable by being mounted on a pivot, flexible wire or other rotatable or flexible member.
  • the mobile telephones 3100, 3150 include a knob or a switch that electronically controls the direction ofthe audio output.
  • the plurality of keys on the phone 3150 shown in FIG. 24B establishes the x-y plane, with x being approximately along the direction ofthe hinge ofthe phone.
  • a user can adjust the output direction ofthe audio signals from the directional speaker 3166 in the y-z plane.
  • the placement of directional speaker 3116, 3166 with respect to its housing 3102, 3152, respectively can vary with implementation. Typically, however, the placement is designed to facilitate directing the output audio in the direction of a person that is to hear the audio sounds.
  • the placement of the directional speaker 3116 with respect to the housing 3102 shown in FIG. 24A and the placement of the directional speaker 3166 with respect to the housing 3152 shown in FIG. 24B are merely representative placements, as various other placements are possible.
  • a directional speaker could be placed near the ear speaker, near the display, on the outer or back surface ofthe housing, etc.
  • FIG. 25 is a perspective view of a personal digital assistant 3200 with an integrated directional speaker according to one embodiment ofthe invention.
  • the personal digital assistant 3200 includes a housing 3202 that provides a body for the personal digital assistant 3200.
  • the personal digital assistant 3200 includes a display 3204, an input pad 3206, navigation buttons 3208, and other buttons 3210.
  • the display 3204 presents information to be viewed by the user of the personal digital assistant 3200.
  • the input pad 3206 for example, allows user to select soft buttons or enter characters through gestures.
  • the navigation buttons 3208 allow a user to interact with information displayed by the display 3204.
  • the buttons 3210 can provide various functions, such as initiating a particular operation, data entry, or item selection.
  • the personal digital assistant 3200 includes a directional speaker 3212.
  • the directional speaker 3212 provides directional audio output for the user ofthe personal digital assistant 3200.
  • the audio output by the directional speaker 3212 is not only directed in a predetermined direction but also substantially confined to that predetermined direction. As a result, the audio output by the directional speaker 3212 is not easily heard by anyone other than the user of the personal digital assistant 3200.
  • the positioning ofthe directional speaker 3212 can be fixed or adjustable, as noted above with respect to FIGs. 24A and 24B. If adjustable, the direction ofthe audio output is able to be altered. Still further, the placement ofthe directional speaker 3212 shown in FIG. 25 is one possible embodiment; therefore, it should be recognized that the directional speaker 3212 can be positioned in any of a wide variety of places on the personal digital assistant 3200. However, in preferred embodiments, the directional speaker 3212 is placed on the front side ofthe housing 3202.
  • the personal digital assistant 3200 may or may not have wireless communication capabilities. However, if the personal digital assistant 3200 does have wireless communication capabilities, the personal digital assistant 3200 may also include one or more of a microphone and a traditional speaker. In yet another embodiment, the personal digital assistant 3200 also includes a camera. If the personal digital assistant 3200 has these components, then the user of the personal digital assistant 3200 can, for example, use the personal digital assistant 3200 as a video phone or participate in video conferences using the personal digital assistant 3200. By using the directional speaker 3212 instead of a traditional speaker, the audio output from the personal digital assistant 3200 can be directed primarily to the user ofthe personal digital assistant 3200.
  • the audio output enjoys a certain level of privacy without requiring the user ofthe personal digital assistant 3200 to hold the personal digital assistant 3200 to her ear or to wear a headset.
  • the user of the personal digital assistant 3200 would be able to view the display 3204 while also listening to audio output in a relatively private manner.
  • FIG. 26 is a block diagram of a wireless communication device 3300 according to one embodiment ofthe invention.
  • the wireless communication device 3300 is, more generally, an electronic device with wireless communication capability.
  • the wireless communication device 3300 can, for example, represent the mobile telephone 3100 shown in FIG. 24A, the mobile telephone 3150 shown in FIG. 24B, or the personal digital assistant 3200 shown in FIG. 25 (with such supporting wireless communication circuitry).
  • the wireless communication device 3300 includes a controller 3302 that controls overall operation for the wireless communication device 3300.
  • a user input device 3304 can represent one or more buttons or a keypad that enables the user to interact with the wireless communication device 3300.
  • a display device 3306 allows the controller 3302 to visually present information to the user ofthe wireless communication device 3300.
  • the controller 3302 also couples to read-only memory (ROM) 3308 and random access memory (RAM) 3310.
  • the wireless communication device 3300 also includes a wireless communication interface 3312 that enables the wireless communication device 3300 to couple to a wireless link 3314 so that information can be transmitted between the wireless communication device 3300 and another communication device.
  • the wireless communication device 3300 also includes a microphone 3316 and a directional speaker 3318.
  • the microphone 3316 may be designed to pick up incoming audio signals with respect to a particular direction.
  • the directional speaker 3318 is specifically designed to output audio sound in a confined direction. In one embodiment, the directional speaker 3318 outputs ultrasonic sound that becomes audio sound so that a user of the wireless communication device 3300 can hear the audio output. However, by using the directional speaker 3318, other persons (besides the user) in the vicinity of the wireless communication device 3300 would have difficulty hearing the audio output produced by the wireless communication device 3300.
  • the wireless communication device 3300 can also include a traditional speaker 3320 and a camera 3322.
  • the traditional speaker 3320 can be used when the user ofthe wireless communication device 3300 is not concerned about privacy, desires others to hear the audio output, or is holding the device right next to one of her ears.
  • the camera 3322 can allow the wireless communication device 3300 to transmit video (or at least still images) to other devices over the wireless link 3314.
  • the microphone 3316, the directional speaker 3318, the traditional speaker 3320 or the camera 3322 are a part of or integral to the wireless communication device 3300.
  • any ofthe microphone 3316, the directional speaker 3318, the traditional speaker 3320 or the camera 3322 could be provided external to the wireless communication device 3300 and coupled thereto in a wired or wireless manner.
  • FIG. 27A is a block diagram of a directional audio conversion apparatus 3400 according to one embodiment ofthe invention.
  • the directional audio conversion apparatus 3400 transforms audio input signals into directional audio output signals.
  • the directional audio conversion apparatus 3400 includes a pre-processor 3402 and an ultrasonic speaker 3406.
  • the pre-processor 3402 can be implemented by hardware or software. In one embodiment, at least a portion of the pre-processor 3402 can be internal to and thus part of the controller 3302 shown in FIG. 26.
  • the pre-processor 3402 can be separate circuitry, either within or external to the wireless communication device 3300. The separate circuitry can be an integrated circuit.
  • the ultrasonic speaker 3406 is one type of directional speaker (e.g., the directional speaker 3318).
  • the pre-processor 3402 receives audio input signals 3408, and converts the audio input signals 3408 into ultrasonic drive signals 3410.
  • the ultrasonic drive signals 3410 are supplied to the ultrasonic speaker 3406 to generate ultrasonic output 3412.
  • the ultrasonic output 3412 is subsequently transformed, for example, by air to audio output 3414. Often it is desirable to make the frequency spectrum ofthe audio output 3414 as similar to the audio input 3408 as possible.
  • the audio input is represented by f(t), the ultrasonic carrier signal by cos(ω_c t), the drive signal by f_1(t), the impulse response of the ultrasonic speaker or transducer by h(t), the ultrasonic output by g(t), and the audio output by y(t).
  • f_1(t) = [1 + ∫∫ f(t) dt²]^(1/2) · cos(ω_c t) represents the pre-processing operations performed by the pre-processor to generate f_1(t). This can be known as the basic pre-processing performed by a basic pre-processing circuit.
  • f_1(t) ⊗ h(t) represents the operation performed by the ultrasonic speaker to generate g(t), with the symbol ⊗ denoting signal convolution operations.
  • y(t) ∝ d²/dt² [ g²(t) ] represents self-demodulation of the ultrasonic output g(t) by air to generate the audio output y(t).
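  • As a rough numerical illustration of this signal chain, the Python sketch below applies square-root amplitude modulation of a double-integrated audio input onto the carrier, a band-pass filter standing in for the transducer response h(t), and d²/dt²[g²(t)] as the air self-demodulation. The carrier frequency, modulation depth, test tone and filter shapes are illustrative assumptions, not values from this specification.

      import numpy as np
      from scipy.signal import butter, sosfiltfilt

      fs = 1_000_000                            # simulation sample rate (Hz)
      fc = 100_000                              # assumed ultrasonic carrier (Hz)
      t = np.arange(0, 0.02, 1 / fs)
      f_audio = np.sin(2 * np.pi * 1_000 * t)   # 1 kHz test tone as the audio input f(t)

      # Basic pre-processing: f1(t) = sqrt(1 + m * iint f dt^2) * cos(w_c t)
      m = 0.5                                   # modulation depth (assumed)
      ii = np.cumsum(np.cumsum(f_audio)) / fs**2
      ii = ii / (np.max(np.abs(ii)) + 1e-12)    # normalize so the radicand stays positive
      f1 = np.sqrt(1 + m * ii) * np.cos(2 * np.pi * fc * t)

      # Transducer response h(t): modeled here as a band-pass around the carrier.
      sos_tx = butter(4, [fc - 20_000, fc + 20_000], btype="band", fs=fs, output="sos")
      g = sosfiltfilt(sos_tx, f1)               # g(t) = f1(t) convolved with h(t)

      # Self-demodulation in air: y(t) proportional to d^2/dt^2 of g^2(t), with a
      # low-pass filter standing in for the removal of the ultrasonic components.
      y = np.gradient(np.gradient(g**2, 1 / fs), 1 / fs)
      sos_lp = butter(4, 20_000, btype="low", fs=fs, output="sos")
      y_audio = sosfiltfilt(sos_lp, y)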
  • the pre-processor can further perform a number of additional operations to modify the drive signals 3410 before feeding them to the speaker.
  • One objective of such additional preprocessing is to make the frequency spectrum ofthe audio output signals 3414 to be as similar to that ofthe audio input 3408 as possible.
  • FIG. 27B is a block diagram ofthe pre-processor 3402 according to one embodiment ofthe invention.
  • the pre-processor 3402 in this embodiment, includes a basic pre-processing circuit 3450 and an estimation circuit 3452.
  • the estimation circuit 3452 is in a feedback loop formed with the basic pre-processing circuit 3450.
  • D(t - T) represents delaying the audio input 3408 by T, which is the total loop delay.
  • FIG. 27C shows one embodiment of an estimation circuit 3452.
  • H(t) represents the estimated impulse response ofthe ultrasonic speaker
  • G(t) represents the estimated ultrasonic output, both subject to finite transmission bandwidth ofthe system.
  • LPFl and LPF2 represent low-pass filter 1 and low-pass filter 2, respectively.
  • the basic pre-processing circuit 3450 can be of different embodiments. Assume F(t) represents the audio input f(t) shifted by 90 degrees. For an amplitude-modulated signal pre-processing scheme, various embodiments of the basic pre-processing circuit 3450 can perform amplitude modulation operations, such as the basic pre-processing described above.
  • various embodiments of the basic pre-processing circuit 3450 can alternatively perform any one of the following operations: cos(ω_c t) + cos(ω_c t + ∫∫ f(t) dt²), for phase modulation with carrier; and cos(ω_c t + ∫∫ f(t) dt²), for phase modulation with suppressed carrier.
  • FIG. 28 illustrates different embodiments of directional speaker characteristics according to the present invention.
  • the directional speaker can, for example, be any ofthe directional speakers 3116, 3166, 3212, 3318 and 3406 illustrated in FIGs. 24A, 24B, 25, 26 and 27A respectively.
  • the directional speaker can be implemented using a piezoelectric thin film.
  • the piezoelectric thin film can be deposited on a plate with many cylindrical tubes, for example, as previously described.
  • a significant percentage ofthe power of the ultrasonic/audio output generated by the emitting surface ofthe directional speaker can, in effect, be confined in a cone (virtual or physical).
  • the FWHM of the signal beam can be about 24 degrees. Assume that such a directional speaker is held by the user, such as in front ofthe user in one ofthe user's hands.
  • the output from the speaker can be directed in the anticipated direction ofthe user's head, with the distance between the hand and the head being, for example, 10-30 inches.
  • More than 75% ofthe power ofthe audio output generated by the emitting surface ofthe directional speaker is, in effect, confined in a virtual cone.
  • the tip ofthe cone is at the speaker, and the mouth ofthe cone is at the location ofthe user's head.
  • the diameter of the mouth of the cone, or the diameter of the cone in the vicinity of the user's head, can be about 4 to 12 inches.
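  • These figures are consistent with simple beam geometry: with a 24-degree full width at half maximum and the cone tip at the speaker, the beam diameter at 10 to 30 inches works out to roughly 4 to 13 inches. A short Python check (illustrative only):

      import math

      def beam_diameter_in(fwhm_deg: float, distance_in: float) -> float:
          """Diameter of the (virtual) cone at a given distance from the speaker."""
          return 2.0 * distance_in * math.tan(math.radians(fwhm_deg / 2.0))

      for d in (10, 20, 30):
          print(f"{d:2d} in from the speaker -> {beam_diameter_in(24.0, d):4.1f} in wide")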
  • the ultrasonic frequency is at 100 kHz, with convex surfaces to expand the beam, for example, as described below.
  • the emitting surface ofthe directional speaker is around 5 cm by 1 cm.
  • the direction ofthe audio output from the directional speaker can be adjusted electronically.
  • One approach is to attach the speaker to a base that can be rotated electronically.
  • the orientation ofthe base can be set by turning a knob on, for example, the phone 3150.
  • the speaker is composed of a number of directional speakers.
  • the phase among the signals from the directional speakers can be modified to adjust the direction of the resultant beam. This is similar to techniques used in a phased-array antenna to adjust the direction of the beam.
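  • As a generic illustration of the phased-array idea (not the patent's particular circuit), the Python sketch below computes the per-element carrier phases of a uniform linear array that steer the ultrasonic beam off the array normal; the element count, spacing and steering angle are arbitrary illustrative values.

      import math
      from typing import List

      C_AIR_M_S = 343.0

      def element_phases_deg(num_elements: int,
                             spacing_m: float,
                             carrier_hz: float,
                             steer_deg: float) -> List[float]:
          """Carrier phase per element so the beam points steer_deg off the normal."""
          wavelength = C_AIR_M_S / carrier_hz
          k = 2 * math.pi / wavelength                       # wavenumber
          d_phi = k * spacing_m * math.sin(math.radians(steer_deg))
          return [math.degrees(n * d_phi) % 360 for n in range(num_elements)]

      # Example: 8 elements at half-wavelength spacing for a 100 kHz carrier, steered 15 degrees.
      print(element_phases_deg(8, 0.5 * C_AIR_M_S / 100_000, 100_000, 15.0))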
  • the directional speaker can make use of a curved emitting surface
  • the curved emitting surface or reflector enables the width of the beam to be increased.
  • FIG. 29 is a flow diagram of audio signal processing 3600 according to one embodiment ofthe invention.
  • the wireless communication device contains not only a directional speaker but also a traditional speaker (e.g., ear speaker).
  • the audio signal processing 3600 is, for example, performed by a wireless communication device.
  • the controller 3302 ofthe wireless communication device 3300 illustrated in FIG. 26 can perform the audio signal processing 3600.
  • the wireless communication device can be a mobile telephone.
  • a mobile telephone can have dual modes of operation, namely, a normal or traditional mode, and a two-way or directional-speaker mode.
  • in the normal mode, the audio sound is produced directly from a traditional (or standard) speaker, e.g., an ear speaker integral with the mobile telephone (e.g., within its housing).
  • such a speaker is substantially non-directional (and further does not generate audio sound through transforming ultrasonic signals in air).
  • in the two-way or directional-speaker mode, the audio sound is produced by a directional speaker.
  • the mobile telephone is, for example, operating as a walkie-talkie, a dispatch type communicator, or a video phone.
  • the mobile telephone may also have a speakerphone mode in which audio output is produced by a speaker that allows those in the vicinity ofthe mobile telephone to hear the audio output.
  • the speaker in this case is more powerful than the ear speaker but also substantially non-directional. Mode selection, whether manual or automatic (to be described below), can also be used to select a speakerphone mode.
  • the audio signal processing 3600 initially receives 3602 incoming audio signals over a wireless communication path.
  • a decision 3604 determines whether a directional speaker is active. When the decision 3604 determines that the directional speaker is not active, then the incoming audio signals are output 3606 to the traditional speaker ofthe wireless communication device.
  • the wireless communication device is a mobile telephone
  • the traditional speaker is, for example, an ear speaker (earpiece).
  • the wireless communication device is a personal digital assistant or portable computer
  • the traditional speaker could simply be a standard audio speaker.
  • when the decision 3604 determines that the directional speaker is active, then the incoming audio signals can be pre-processed 3608.
  • the pre-processing can utilize the techniques described under FIGs. 27A-C. After the incoming audio signals are pre-processed 3608, the pre-processed signals are converted 3610 to ultrasound drive signals. Then, the directional speaker is driven 3612 in accordance with the ultrasound drive signals.
  • a decision 3614 determines whether there are more incoming audio signals to be processed at this time. When the decision 3614 determines that there are more incoming audio signals to be processed, then the audio signal processing 3600 returns to repeat the operation 3602 and subsequent operations so that the additional incoming audio signals can be similarly processed. Alternatively, when the decision 3614 determines that there are no more audio signals to be processed at this time, then the audio signal processing 3600 is complete and ends.
  • the directional audio conversion apparatus 3400 illustrated in FIG. 27A can also perform the audio signal processing 3600.
  • FIG. 30 is a flow diagram of speaker selection processing 3700 according to one embodiment ofthe invention.
  • the speaker selection processing 3700 is, for example, performed by a wireless communication device.
  • the controller 3302 ofthe wireless communication device 3300 illustrated in FIG. 26 can perform the speaker selection processing 3700.
  • the speaker selection processing 3700 begins with a decision 3702 that determines whether a manual speaker selection has been made. When the decision 3702 determines that a manual speaker selection has been made, then the selected speaker is activated 3704 in accordance with the manual request.
  • the manual speaker selection can, for example, be made by a user in a variety of ways, such as by (a) a button on the device, (b) a user selection with respect to a user interface presented on a display, (c) a sensor in accordance with certain sensing conditions, or (d) other means.
  • when the decision 3702 determines that a manual speaker selection has not been made, then device condition information is obtained 3706.
  • the device condition information can result from one or more sensors integral with or coupled to the device.
  • the appropriate speaker to be selected is then determined 3708 based upon the device condition information. For example, if the wireless communication device was placed against the user's ear, then a sensor could detect (e.g., estimate) such placement and, as a result, use an earpiece-type speaker.
  • when the device is determined (e.g., estimated) to be at least a certain distance away from an object (such as the user's head or ear), then the directional speaker can be utilized. In any case, the appropriate speaker is then activated 3710. Following the operation 3704 or 3710, the speaker selection processing 3700 is complete and ends.
  • FIG. 31 is a diagram indicating exemplary conditions that can be utilized to select the appropriate speaker.
  • the speaker selection processing 3700 and the exemplary conditions shown in FIG. 31 assume that the wireless communication device has multiple speakers to select from, at least one of which is a directional speaker and at least another of which is a traditional speaker.
  • mode selection can be achieved through a switch integrated into the mobile telephone.
  • the switch can be electrical, mechanical or electro-mechanical.
  • a mechanical switch can be located right next to the traditional speaker. When the traditional speaker is against the user's ear, the switch will be pressed and the traditional speaker will be activated.
  • mode selection can be determined based on a distance.
  • the mobile telephone can include a sensor to sense the distance the mobile telephone (e.g., its ear speaker region) is from a surface.
  • a sensor can use a light beam (e.g., infrared beam) to sense the distance.
  • when the distance is less than a short distance, the mobile telephone is deemed to be against the user's ear, so the normal mode can be automatically selected; when the distance is greater than the short distance, the mobile telephone is deemed not against the user's ear, so the two-way mode is automatically selected.
  • one way to detect distance based on an infrared beam is to measure the intensity of the reflected beam. If the reflecting surface is very close to the infrared source, the intensity of the reflected beam would be high. However, if the reflecting surface is 12" or more away, the intensity would be relatively much lower. As a result, by measuring the intensity of the reflected beam, distances can be inferred.
  • mode selection can be based on orientation. If the mobile telephone is substantially in a vertical orientation (e.g., within 45 degrees from the vertical), the mobile telephone will operate in the two-way mode. However, if the mobile telephone is substantially in a horizontal orientation (e.g., within 30 degrees from the horizontal), the mobile telephone will operate in the normal mode. A gyro (gyroscope) in the mobile telephone can be used to determine the orientation ofthe mobile telephone.
  • mode selection can be based on usage. For example, if the mobile telephone is receiving user input via its integral keypad, acting as a video phone, or playing a video, then the mobile telephone can be set to operate in the two-way mode. A sketch combining these selection conditions follows below.
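  • A hypothetical sketch combining the manual, distance, orientation and usage conditions above into one selection routine is given below; the 5 cm distance threshold and the handling of orientations between 45 and 60 degrees are assumptions that the text does not specify.

      from enum import Enum, auto
      from typing import Optional

      class SpeakerMode(Enum):
          NORMAL = auto()     # traditional ear speaker
          TWO_WAY = auto()    # directional speaker

      def select_mode(manual_choice: Optional[SpeakerMode],
                      distance_cm: Optional[float],
                      tilt_from_vertical_deg: Optional[float],
                      keypad_in_use: bool,
                      playing_video: bool) -> SpeakerMode:
          """Manual selection wins, then distance, orientation, and usage cues."""
          if manual_choice is not None:
              return manual_choice
          if distance_cm is not None:
              return SpeakerMode.NORMAL if distance_cm < 5.0 else SpeakerMode.TWO_WAY
          if tilt_from_vertical_deg is not None:
              # substantially vertical -> two-way; otherwise treat as near-horizontal
              return SpeakerMode.TWO_WAY if tilt_from_vertical_deg <= 45.0 else SpeakerMode.NORMAL
          if keypad_in_use or playing_video:
              return SpeakerMode.TWO_WAY
          return SpeakerMode.NORMAL

      print(select_mode(None, 30.0, None, False, False))   # SpeakerMode.TWO_WAY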
  • FIG. 32A is a perspective view of a personal digital assistant 3900 according to another embodiment of the invention.
  • the personal digital assistant 3900 is generally similar to the personal digital assistant 3200 shown in FIG. 25.
  • the personal digital assistant 3900 further includes a card 3902 that is inserted into a card slot ofthe personal digital assistant 3900.
  • the card 3902 is an add-on card that provides wireless communication capabilities as well as audio and video capabilities for the personal digital assistant 3900. More particularly, the card 3902 includes a directional speaker 3904, a camera 3906, a microphone 3908 and an antenna 3910.
  • the directional speaker 3904 provides confined audio output in a particular direction as noted above with respect to other embodiments.
  • the camera 3906 provides video input capabilities to the personal digital assistant 3900.
  • the microphone 3908 allows audio input.
  • the antenna 3910 is used for wireless communications.
  • the card 3902 allows the personal digital assistant 3900, which otherwise does not support wireless communication or audio-video features, to operate as a video phone or participate in video conferences.
  • the user's audio output (voice) can be picked up by the microphone 3908, and the user's face or other desired picture or video can be acquired by the camera 3906.
  • the user ofthe personal digital assistant 3900 can then hear incoming audio by way ofthe directional speaker 3904, which through its directional characteristics provides a certain degree of privacy to the user.
  • video input can be displayed on the display 3204 for the benefit ofthe user.
  • the card 3902 can include circuitry within the housing ofthe card 3902 to support the functionality offered by the card 3902.
  • the circuitry can pertain to various discrete electronic devices and/or integrated circuits. The circuitry can thus supplement the circuitry of the personal digital assistant 3900.
  • while the card 3902 includes wireless communication capabilities, a microphone, a directional speaker and a camera, it should be understood that other cards that can be used in a similar manner need not support each of these items.
  • the add-on card could simply pertain to a directional speaker 3904 and its associated circuitry (e.g., audio conversion apparatus).
  • FIG. 32B is a perspective view of a personal digital assistant 3920 according to another embodiment ofthe invention.
  • the personal digital assistant 3920 is also generally similar to the personal digital assistant 3200 shown in FIG. 25.
  • the personal digital assistant 3920 further includes a card 3922 that is inserted into a card slot ofthe personal digital assistant 3920.
  • the card 3922 is an add-on card that provides directional audio capabilities for the personal digital assistant 3920.
  • the card 3922 includes a directional speaker 3904.
  • the directional speaker 3904 provides confined audio output in a particular direction as noted above with respect to other embodiments.
  • the personal digital assistant 3920 may or may not already support various other communications capabilities such as audio or video input, wireless voice communications, and wireless data transfer.
  • the card 3922 can include circuitry within the housing ofthe card 3922 to support the directional speaker 3924.
  • the circuitry can pertain to various discrete electronic devices and/or integrated circuits. The circuitry can thus supplement the circuitry ofthe personal digital assistant 3900. Alternatively, the card 3922 may rely significantly on circuitry within the personal digital assistant 3920.
  • the card 3902, 3922 can also take various forms.
  • the card 3902, 3922 is a rectangular card often known as a PC-CARD or PCMCIA card.
  • the card 3902, 3922 is of a smaller scale than a PC-CARD or PCMCIA card, such as a mini-card.
  • the card 3902, 3922 is a peripheral device that plugs directly into a peripheral port (e.g., USB or FireWire), or is a peripheral device that is tethered to the personal digital assistant through a wire such as shown in FIG. 33.
  • FIG. 33 is a perspective view of a mobile telephone 4000 and a peripheral attachment 4002.
  • the mobile telephone 4000 includes a microphone 4004 and an ear speaker 4006.
  • the peripheral device 4002 is an add-on to the mobile telephone 4000 to provide an external speaker arrangement for use by the user ofthe mobile telephone 4000.
  • the peripheral attachment 4002 includes a base 4008 that supports and positions a directional speaker 4010.
  • the directional speaker 4010 has characteristics as noted above, namely, directionally constrained audio sound output.
  • the base 4008 supports the directional speaker 4010. By repositioning the base 4008, the particular direction in which the constrained audio output is directed can be altered.
  • the direction ofthe audio output can also be adjusted electronically by the techniques as described above.
  • the base 4008 is also connected to a cord 4012 that, in turn, has a connector 4014.
  • the connector 4014 can plug into a receptacle 4016 ofthe mobile phone 4000.
  • the receptacle 4016 pertains to a headset jack or external speaker connector associated with the mobile telephone 4000.
  • the housing 4008 contains electronic circuitry (e.g., the pre-processing circuits in FIG. 27A) to convert the standard audio signals that would be delivered to the housing 4008 via the receptacle 4016 of the mobile telephone 4000.
  • the power necessary for the electronic circuitry within the base 4008 can be supplied by a battery or by a connection to a power source.
  • the connection can be to a separate power source or to the power source associated with the mobile telephone 4000.
  • Such connection can be through the cord 4012 or another cord.
  • the receptacle 4016 can pertain to a peripheral port (e.g., Universal Serial Bus (USB), FireWire, etc.). If the port provides both data and power, the electronics within the base 4008 can be powered via the cable of the peripheral port. Still further, such ports can transmit data signals to the base 4008, which can produce the drive signals for the directional speaker 4010. In other words, at least a portion of the pre-processing operations can be performed by the mobile telephone 4000.
  • the electronics required in the base 4008 can be reduced as compared to other embodiments because electronic capabilities (e.g., circuitry) in the mobile telephone 4000 can be used to perform some ofthe operations needed to operate the directional speaker 4010 of the peripheral attachment 4002.
  • FIG. 34 is a diagram depicting additional applications associated with the present invention.
  • the portable electronic device with a directional speaker is a mobile telephone.
  • the invention can be applied to various other applications, with a number of examples shown in FIG. 34. These various embodiments can be used separately or in combination.
  • the device can be an audio unit, such as an MP3 player, a CD player or a radio. Such systems can be considered one-way communication systems.
  • the device can be an audio output device, such as for a stereo system, television or a video game player.
  • the device may not be portable.
  • the user can be playing a video game and instead of having the audio signals transmitted by a normal speaker, the audio signals, or a representation ofthe audio signals, are directed to a directional speaker. The user can then hear the audio signals in a directional manner, reducing the chance of annoying or disturbing people in his immediate environment.
  • the device can, for example, be used for a hearing aid.
  • Different embodiments on hearing enhancement through personalizing or tailoring to the hearing of the user have been described in this application.
  • the wireless communication device can function both as a hearing aid and a cell phone. When there is no incoming call, the system functions as a hearing aid. On the other hand, when there is an incoming call, instead of capturing audio signals in its vicinity, the system transmits the incoming call through the directional speaker to be received by the user.
  • the device can include a monitor or a display. A user can watch television or video signals in the public, again with reduced possibility of disturbing people in the immediate surroundings because the audio signals are directional.
  • the device can also include the capability to serve as a computation system, such as in a personal digital assistant (PDA) or a notebook computer.
  • the user can simultaneously communicate with another person in a hands-free manner.
  • Data generated by a software application the user is working on using the computation system can be transmitted digitally with the voice signals to a remote device.
  • the device can be a personalized system.
  • the system can selectively amplify different audio frequencies by different amounts based on user preference or user hearing characteristics. In other words, the audio output can be tailored to the hearing ofthe user.
  • the personalization process can be done periodically, such as once every year, similar to periodic re-calibration. Such re-calibration can be done by another device, and the results can be stored in a memory device.
  • the memory device can be a removable media card, which can be inserted into the system to personalize the amplification characteristics ofthe directional speaker as a function of frequency.
  • the system can also include an equalizer that allows the user to personalize the amplitude ofthe speaker audio signals as a function of frequency.
  • the device can also be personalized based on the noise or sound level in the vicinity ofthe user.
  • the device can sense the noise or sound level in its immediate vicinity and change the amplitude characteristics ofthe audio signals as a function ofthe noise or sound level.
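  • The frequency-dependent personalization and ambient-level adaptation described above can be sketched in software as shown below. This is a minimal illustration only; the band edges, profile format, ambient-tracking factor and function names are assumptions, not details taken from this document.

```python
# Illustrative sketch only: applies per-band gains from a stored user profile and
# a master gain that rises with the measured ambient level. Band edges, profile
# format and the 0.5 dB-per-dB ambient tracking factor are assumptions.
import numpy as np

def personalize(audio, fs, band_gains_db, ambient_db=50.0, ref_ambient_db=50.0):
    """audio: 1-D float array; band_gains_db: list of (f_lo, f_hi, gain_db)."""
    spectrum = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / fs)
    for f_lo, f_hi, gain_db in band_gains_db:
        mask = (freqs >= f_lo) & (freqs < f_hi)
        spectrum[mask] *= 10.0 ** (gain_db / 20.0)
    # Raise the overall level as the surroundings get louder (assumed 0.5 dB per dB).
    master_db = 0.5 * (ambient_db - ref_ambient_db)
    return np.fft.irfft(spectrum, n=len(audio)) * 10.0 ** (master_db / 20.0)

# Example profile: mild high-frequency boost, as might come from a hearing test.
profile = [(0, 1000, 0.0), (1000, 4000, 3.0), (4000, 8000, 6.0)]
fs = 16000
tone = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
out = personalize(tone, fs, profile, ambient_db=65.0)
```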
  • a number of embodiments have been described with the speaker being directional.
  • a speaker is considered directional if it is driven by ultrasonic signals.
  • Such a directional speaker is also referred to herein as an ultrasonic speaker.
  • the ultrasonic speaker produces an ultrasonic output that is converted into an audio output by mixing in air.
  • the ultrasonic output results from modulating audio output with an ultrasonic carrier wave, and the ultrasonic output is thereafter self-demodulated through non-linear mixing in air to produce the audio signals.
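  • As an illustration of the modulation step just described, the sketch below amplitude-modulates an audible tone onto an ultrasonic carrier; the self-demodulation by non-linear mixing in air is a physical effect and is not simulated. The 40 kHz carrier, 192 kHz sample rate and modulation depth are assumed values for illustration.

```python
# Sketch of the modulation stage only: the audible signal amplitude-modulates an
# ultrasonic carrier. Non-linear mixing (self-demodulation) in air then recovers
# the audio; that physical step is not modelled here.
import numpy as np

fs = 192_000                                      # sample rate able to represent 40 kHz
t = np.arange(fs) / fs                            # one second of samples
audio = 0.5 * np.sin(2 * np.pi * 1000 * t)        # 1 kHz test tone
carrier = np.sin(2 * np.pi * 40_000 * t)          # 40 kHz ultrasonic carrier
m = 0.8                                           # modulation depth (assumed)
ultrasonic_drive = (1.0 + m * audio) * carrier    # double-sideband AM drive signal
```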
  • the device is also applicable in a moving vehicle, such as a car, a boat or a plane.
  • a directional audio conversion apparatus can be integrated into or attachable to the moving vehicle.
  • the moving vehicle can be a car.
  • the apparatus can be inserted into the port to generate directional audio signals.
  • one or more directional speakers are incorporated into a moving vehicle.
  • the speakers can be used for numerous applications, such as personal entertainment and commumcation applications, in the vehicle.
  • the directional speaker emits ultrasonic beams.
  • the frequency ofthe ultrasonic beams can be, for example, in the 40 kHz range, and the beams can be diverging.
  • a 3-cm (diameter) emitter generates an ultrasonic beam that diverges to a 30-cm (diameter) cone after propagating for a distance of 20 to 40 cm. With the diameter of the beam increased by a factor of 10, the ultrasonic intensity is reduced by around 20 dB.
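  • As a rough consistency check on these numbers, the intensity drop follows from the growth in beam area, assuming the emitted power stays confined within the diverging cone:

$$
\frac{I_{30\,\mathrm{cm}}}{I_{3\,\mathrm{cm}}} \approx \left(\frac{3\,\mathrm{cm}}{30\,\mathrm{cm}}\right)^{2} = 10^{-2}
\quad\Rightarrow\quad 10\log_{10}\!\left(10^{-2}\right) = -20\ \mathrm{dB}.
$$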
  • the frequency ofthe beams is at a higher range, such as in the 200 to 500 kHz range. Such higher frequency ultrasonic beams experience higher attenuation in air, such as in the 8 to 40 dB/m range depending on the frequency.
  • the beams with higher ultrasonic frequencies are diverging beams also.
  • Such embodiments with higher frequencies and diverging beams are suitable to other applications also, such as in areas where the distance of travel is short, for example, 20 cm between the speaker and ear.
  • the speaker can be mounted directly above where a user should be, such as on the rooftop ofthe vehicle above the seat.
  • the speaker can be located closer to the back than the front ofthe seat because when a person sits, the person typically leans on the back ofthe seat.
  • the directional speaker is mounted slightly further away, such as at the dome light of a car, with ultrasonic beams directed approximately at the head rest of a user's seat inside the car.
  • one speaker is located in the vicinity of the corner of the dome-light that is closest to the driver, with the direction of the signals pointing towards the approximate location of the head of the driver.
  • Signals not directly received by the intended recipient, such as the driver, can be scattered by the driver and/or the seat fabrics, thereby reducing the intensity of the reflected signals to be received by other passengers in the car.
  • the speakers can emit audio beams, with any directivity depending on the physical structure ofthe speaker.
  • the speaker is a horn or cone or other similar structure.
  • the directivity of such a speaker depends on the aperture size ofthe structure.
  • a 10-cm horn has a λ/D of about 1 at 3 kHz, and a λ/D of about 0.3 at 10 kHz.
  • the intensity of the beams goes as 1/R², with R being the distance measured from, for example, the apex of the horn. To achieve isolation, proximity becomes more relevant.
  • the speaker is positioned close to the user.
  • the speaker is placed directly behind the passenger's ears, such as around 10 to 15 cm away.
  • the speaker can be in the head rest or head cushion ofthe user's seat. Or, the speaker can be in the user's seat, with the beam directed towards the user. If other passengers in the vehicle are spaced at least 1 meter away from the user, based on propagation attenuation (or attenuation as the signals travel in air), the sound isolation effect is around 16 to 20 dB.
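  • As a rough check on these figures (taking the speed of sound as about 343 m/s), the λ/D values and the 16 to 20 dB isolation estimate follow directly from the quantities quoted above:

$$
\frac{\lambda}{D}\Big|_{3\,\mathrm{kHz}} = \frac{343/3000}{0.10} \approx 1.1,\qquad
\frac{\lambda}{D}\Big|_{10\,\mathrm{kHz}} = \frac{343/10000}{0.10} \approx 0.34,
$$

$$
\Delta L = 20\log_{10}\frac{R_2}{R_1},\qquad
20\log_{10}\frac{100\,\mathrm{cm}}{15\,\mathrm{cm}} \approx 16.5\ \mathrm{dB},\qquad
20\log_{10}\frac{100\,\mathrm{cm}}{10\,\mathrm{cm}} = 20\ \mathrm{dB},
$$

consistent with a listener 10 to 15 cm from the speaker and other passengers at least 1 m away.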
  • the structure ofthe horn or cone can provide additional isolation effect, such as another 6 to 10 dB.
  • the user can control one or more attributes ofthe beams.
  • the user can control the power, direction, distance or coverage ofthe beams.
  • the controls can be on the dash board ofthe vehicle. In another embodiment, the controls are in the armrest ofthe seat the user is sitting on.
  • the controls can be mechanical.
  • the speaker is at the dome light, and there can be a rotational mechanism at the dome light area.
  • the rotational mechanism allows the user to adjust the direction of beam as desired.
  • the rotational mechanism allows two-dimensional rotations.
  • the beams are emitted at a 30-degree angle from the rooftop, and the rotational mechanism allows the beams to be rotated 180 degrees around the front side of the vehicle.
  • the elevation angle can also be adjusted, such as in the range of 20 to 70 degrees from the rooftop.
  • Another mechanical control can be used to turn the speaker off. For example, when the user stands up from the user's seat, after a preset amount of time, such as 3 seconds, the speaker is automatically turned off.
  • the controls can also be in a remote controller.
  • the remote controller can use BlueTooth, WiFi, ultrasonic, or infrared or other wireless technologies.
  • the remote controller can also include a fixed or detachable display.
  • the remote controller can be a portable device.
  • the sound level does not have to be too high.
  • the sound level can be about 60 dB SPL at 5 cm away from the speaker.
  • the content ofthe signals from the speaker can be accessed in a number of ways.
  • the content, which can be from a radio station, is wirelessly received by the speaker.
  • the content can be received through the Internet, a WiFi network, a WiMax network, a cell-phone network or other types of networks.
  • the speaker does not have to receive the content directly from the broadcaster, or the source.
  • the vehicle receives the content wirelessly from the source, and then through a wired or a wireless connection, the vehicle transmits the content to the speaker.
  • the content can be selected from a multimedia player, such as a CD player, from the vehicle.
  • the multimedia player can receive from multiple channels to support multiple users in the vehicle. Again, the contents or channels can be received from a broadcast station and selected locally. Or, the content can be created on-demand and streamed to the user demanding it by a wireless server station.
  • the content can be downloaded to a multimedia player from a high-speed wireless network in its entirety before being played.
  • Another type of control is to select the radio station or a piece of music on a multimedia player. Again, these types of selection control can be from a fixed location in the vehicle, such as via control knobs at the dashboard, console, armrest, door or seat of the vehicle. Or, as another example, the selection controller can be in a portable device.
  • a number of embodiments have been described regarding one speaker. In yet another embodiment, there can be more than one speaker for a user. The multiple speakers allow the creation of stereo or surround sound effects.
  • the player can receive from multiple channels to support multiple users in the vehicle. If there is more than one user in the vehicle, each user can have a directional speaker or a set of directional speakers. Regarding the locations of the speakers for multiple users, in one embodiment, they are centralized. All of the speakers are, for example, at the dome light of a vehicle. Each user has a corresponding set of directional beams, radiating from the dome towards the user. Or, the speakers can be distributed. Each user can have a speaker mounted, for example, on the rooftop above where the user should be seated, or in the user's headrest. Regarding control, each user can independently control the signals to that user.
  • a user's controller can control the user's own set of beams, or to select the content of what the user wants to hear.
  • Each user can have a remote controller.
  • the controller for a user is located at the armrest, seat or door for that user.
  • a number of embodiments ofthe invention pertain to a directional audio delivery device for an audio system.
  • the audio system can be a stereo system, a DVD player, a compact disc player, a music amplifier or a musical instrument, a VCR, a television, a home-entertainment system, or other audio system. It typically delivers audio output based on, or pertaining to, certain audio signals. These audio signals can be generated by the audio system, or they can be transmitted to and received by the audio system. The reception by the audio system can be wireless or wireline, such as through cables. Without the directional audio delivery device, the audio system produces audio sound for the benefit of any persons in its general vicinity.
  • the delivery device converts the audio signals into directional audio output that is substantially confined within a beam, with a beam width.
  • the directional audio output is targeted to one or more persons who would like to hear the audio output. In one embodiment, these one or more persons can also control a number of attributes of the beam. Other persons in the same vicinity who are not desirous of hearing the audio output would only hear a substantially lower level of the audio output. Hence, they are less disturbed by the unwanted audio sounds.
  • the audio system with its corresponding directional audio delivery device can be known as a directional audio apparatus.
  • the directional device can be incorporated into the audio system, or can be confined in a separate housing, such as in a set-top box.
  • the set-top box can be tethered or wirelessly coupled to the audio system.
  • the audio signals can be received either by the set-top box or by the audio system.
  • FIG. 35 is a block diagram of a directional audio apparatus 5100 with an audio system 5102 and a directional audio delivery device 5104, according to one embodiment ofthe invention.
  • FIG. 36A is a block diagram of a directional audio delivery device 5200 according to one embodiment ofthe invention.
  • the directional audio delivery device 5200 is, for example, suitable for use as the directional audio delivery device 5104 illustrated in FIG. 35.
  • the directional audio delivery device 5200 includes audio conversion circuitry 5202 and a directional speaker 5204.
  • the audio conversion circuitry 5202 receives audio signals (Audio- In). The reception can be from the audio system 5102, or can be from another device.
  • the audio signals can be, for example, electrical signals from the audio system 5102, or audio waves wirelessly transmitted to be received by the audio conversion circuitry.
  • the received audio signals can then be pre-processed, and are then converted into ultrasonic signals that are supplied to the directional speaker 5204.
  • the directional speaker 5204 is an ultrasonic speaker that produces ultrasonic output to generate audio output.
  • the ultrasonic output carries the audio output to be delivered in a directionally constrained manner.
  • FIG. 36B is a block diagram of a directional audio delivery device 5220 according to another embodiment ofthe invention.
  • the directional audio delivery device 5220 is, for example, suitable for use as the directional audio delivery device 5104 illustrated in FIG. 35.
  • the directional audio delivery device 5220 includes audio conversion circuitry 5222, a beam-attribute control unit 5224 and a directional speaker 5226.
  • the audio conversion circuitry 5222 converts the received audio signals into ultrasonic signals.
  • the beam-attribute control unit 5224 controls one or more attributes ofthe audio output.
  • the beam-attribute control unit 5224 receives a beam attribute input, which in this example is related to the direction ofthe beam. This can be known as a direction input.
  • the direction input provides information to the beam-attribute control unit 5224 pertaining to a propagation direction ofthe ultrasonic output produced by the directional speaker 5226.
  • the direction input can be a position reference, such as a position for the directional speaker 5226 (relative to its housing), the position of a person desirous of hearing the audio sound, or the position of an external electronic device (e.g., remote controller).
  • the beam-attribute control unit 5224 receives the direction input and determines the direction of the audio output.
  • Another attribute can be the desired distance traveled by the beam. This can be known as a distance input.
  • the ultrasonic frequency ofthe ultrasonic output can be adjusted. By controlling the ultrasonic frequency, the desired distance traveled by the beam can be adjusted. This will be further explained below.
  • the directional speaker 5226 generates the desired audio output accordingly.
  • FIG. 37A is a diagram illustrating a representative arrangement 5300 suitable for use with the invention.
  • the representative arrangement 5300 uses a directional audio apparatus 5302 to deliver audio output by directing an ultrasonic cone 5304 (or beam) of ultrasonic output towards a first user (user-1).
  • the directional audio apparatus 5302 can, for example, be the directional audio apparatus 5100, using any implementation of a directional audio delivery device.
  • a second user (user-2) and a third user (user-3) are also in the vicinity ofthe directional audio apparatus 5302.
  • the directional audio apparatus 5302 produces the ultrasonic output in a directionally constrained manner such that its cone 5304 (or beam) is directed towards the first user (user-1).
  • the resultant audio sound is delivered to the first user (user-1). Only a significantly lower level of the resultant audio sound is received by the second user (user-2) and the third user (user-3). Consequently, they are not disturbed by the audio output that is being heard by the first user (user-1).
  • FIG. 37B is a diagram of a representative building layout 5320 illustrating one application ofthe present invention.
  • the representative building layout 5320 is used to illustrate how a directional audio apparatus 5328 according to the invention can be utilized.
  • the representative building layout 5320 includes a first room 5322, a second room 5324 and a third room 5326.
  • the first room 5322 can, for example, be a family room.
  • the first room 5322 includes a directional audio apparatus 5328.
  • a first user (u-1), a second user (u-2) and a third user (u-3) are in the first room 5322.
  • the directional audio apparatus 5328 can deliver audio
  • the directional audio apparatus 5328 can, for example, be the directional audio apparatus 5100, using any implementation of a directional audio delivery device in the present invention.
  • the directional audio apparatus 5328 delivers a constrained cone 5330 (beam) of audio output or sound towards the first user (u-1). Note that the audio output is substantially constrained within the cone 5330. As a result, the second user (u-2) and the third user (u-3) do not hear the audio output produced by the directional audio apparatus 5328 in any significant way. Some ofthe sound from the cone 5330 might be reflected or dispersed off a rear wall, and received by the second and third users. If so, the sound would have attenuated to a substantially lower level. In one embodiment, the distance covered by the cone 5330 of sound can be adjusted.
  • FIG. 38 is a flow diagram of directional audio delivery processing 5400 according to an embodiment ofthe invention.
  • the directional audio delivery processing 5400 is, for example, performed by a directional audio delivery device, such as the directional audio delivery device 5104 illustrated in FIG. 35. More particularly, the directional audio delivery processing 5400 is particularly suitable for use by the directional audio delivery device 5220 illustrated in FIG. 36B.
  • the directional audio delivery processing 5400 initially receives 5402 audio signals for directional delivery. The audio signals can be supplied by an audio system.
  • a beam attribute input is received 5404.
  • the beam attribute input is a reference or indication of one or more attributes regarding the audio output to be delivered. After the beam attribute input has been received 5404, one or more attributes of the beam are determined 5406 based on the attribute input.
  • the input can set the constrained delivery direction ofthe beam.
  • the constrained delivery direction is the direction that the output is delivered.
  • the audio signals that were received are converted 5408 to ultrasonic signals with appropriate attributes, which may include one or more ofthe determined attributes.
  • the directional speaker is driven 5410 to generate ultrasonic output again with appropriate attributes.
  • the ultrasonic output is directed in the constrained delivery direction.
  • the directional audio delivery processing 5400 is complete and ends. Note that the constrained delivery direction can be altered dynamically or periodically, if so desired.
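  • For illustration, the sequence of operations 5402 through 5410 can be sketched as a simple software pipeline, as shown below. The class and function names are hypothetical; an actual device would implement these steps in circuitry and/or firmware, and the conversion to ultrasonic drive signals is left as a stub.

```python
# Hypothetical sketch of operations 5402-5410: receive audio signals, receive a
# beam attribute input, determine the beam attributes, convert the audio to
# ultrasonic drive signals, and drive the directional speaker. All names and
# default values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class BeamAttributes:
    direction_deg: float = 0.0    # constrained delivery direction
    width_deg: float = 10.0       # beam width
    distance_m: float = 2.0       # desired coverage distance

def determine_attributes(attribute_input: dict) -> BeamAttributes:
    """Step 5406: map a raw attribute input (e.g., from a remote) to beam attributes."""
    return BeamAttributes(
        direction_deg=attribute_input.get("direction_deg", 0.0),
        width_deg=attribute_input.get("width_deg", 10.0),
        distance_m=attribute_input.get("distance_m", 2.0),
    )

def convert_to_ultrasonic(audio_block, attrs: BeamAttributes):
    """Step 5408: placeholder for modulating the audio onto an ultrasonic carrier
    whose frequency/level would be chosen from the determined attributes."""
    return {"audio": audio_block, "attributes": attrs}

def drive_speaker(drive_block):
    """Step 5410: hand the drive signal to the transducer (stubbed here)."""
    pass

def deliver(audio_block, attribute_input):
    attrs = determine_attributes(attribute_input)              # steps 5404 and 5406
    drive_speaker(convert_to_ultrasonic(audio_block, attrs))   # steps 5408 and 5410

deliver(audio_block=[0.0] * 480, attribute_input={"direction_deg": 15.0})
```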
  • FIG. 39 shows examples of attributes 5500 ofthe constrained audio output according to the invention.
  • the attributes can be provided to the beam-attribute control unit 5224.
  • One attribute is the direction 5502 of the beam.
  • Another attribute can be the beam width, 5504.
  • the width ofthe ultrasonic output can be controlled.
  • the beam width is the width ofthe beam at the desired position. For example, if the desired location is 10 feet directly in front of the directional audio apparatus, the beam width can be the width ofthe beam at that location.
  • the width 5504 of the beam is defined as the width of the beam at its full-width-half-maximum (FWHM) position.
  • the desired distance 5506 to be covered by the beam can be set.
  • the rate of attenuation ofthe ultrasonic output/audio output can be controlled to set the desired distance.
  • the volume or amplification of the beam can be changed to control the distance to be covered.
  • One attribute ofthe beam is the number 5512 of beams present. Multiple beams can be utilized, such that multiple persons are able to receive the audio signals via the ultrasonic output by the directional speaker (or a plurality of directional speakers). Each beam can have its own attributes.
  • These attribute inputs can be provided either automatically, such as periodically, or manually, such as at the request of a user.
  • the directional audio apparatus can include a normal speaker.
  • a normal speaker generates its audio output based on audio signals, without the need for generating ultrasonic outputs.
  • a directional speaker requires ultrasonic signals to generate its audio output.
  • the inputs can be the position 5508, and the size 5510 of the beam.
  • the position input can pertain to the position of a person desirous of hearing the audio sound, or the position of an electronic device (e.g., remote controller).
  • the beam-attribute control unit 5504 receives the beam position input and the beam size input, and then determines how to drive the directional speaker 5506 to output the audio sound to a specific position with the appropriate beam width. Then, the beam-attribute control unit 5504 produces drive signals, such as ultrasonic signals and other control signals.
  • the drive signals control the directional speaker 5506 to generate the ultrasonic output towards a certain position with a particular beam size.
  • FIG. 40 is another representative building layout 5600 illustrating an application ofthe present invention.
  • the representative building layout 5600 is generally similar to the representative building layout 5320 illustrated in FIG. 37B.
  • the representative building layout 5600 includes a first room 5602, a second room 5604 and a third room 5606.
  • a first user (u-1), a second user (u-2) and a third user (u-3) are all within the first room 5602, but only the first user (u-1) and the second user (u-2) want to hear the audio sound from an audio system.
  • the first room 5602 includes a directional audio apparatus 5608 to output a cone 5610 (or beam) of ultrasonic output towards the first user (u-1) and the second user (u-2).
  • the cone 5610 can have a greater width or footprint than does the cone 5330 illustrated in FIG. 37B so that it substantially encompasses both the first user (u-1) and the second user (u-2). Nevertheless, the third user (u-3) is not significantly disturbed by the audio sound that the first and second users hear by way ofthe ultrasonic output from the directional audio apparatus 5608.
  • the cone 5610 or the beam does not have to propagate directly to the first user (u-1) and the second user (u-2).
  • the beam can propagate towards the ceiling of the building, which reflects the beam back towards the floor to be received by the users.
  • One advantage of such an embodiment is to lengthen the propagation distance to broaden the width of the beam when it reaches the users.
  • Another feature of this embodiment is that the users do not have to be in the line-of-sight ofthe directional audio apparatus.
  • FIG. 41 is a flow diagram of directional audio delivery processing 5700 according to another embodiment ofthe invention.
  • the directional audio delivery processing 5700 is, for example, performed by the directional audio delivery device 5104 illustrated in FIG. 35. More particularly, the directional audio delivery processing 5700 is particularly suitable for use by the directional audio delivery device 5220 illustrated in FIG. 36B.
  • the directional audio delivery processing 5700 receives 5702 audio signals for directional delivery.
  • the audio signals are provided by an audio system.
  • two beam attribute inputs are received, and they are a position input 5704, and a beam size input 5706.
  • the directional audio delivery processing 5700 determines 5708 a delivery direction and a beam size based on the position input and the beam size input.
  • the desired distance to be covered by the beam can also be determined.
  • the audio signals are then converted 5710 to ultrasonic signals, with the appropriate attributes. For example, the frequency and/or the power level ofthe ultrasonic signals can be generated to set the desired travel distance ofthe beam.
  • the ultrasonic signals are then used to drive a directional speaker (e.g., an ultrasonic speaker), which produces ultrasonic output (that carries the audio sound) towards a certain position, with a certain beam size at that position.
  • the ultrasonic signals are dependent on the audio signals, and the delivery direction and the beam size are used to control the directional speaker.
  • the ultrasonic signals can be dependent on not only the audio signals but also the delivery direction and the beam size.
  • FIG. 42A is a flow diagram of directional audio delivery processing 5800 according to yet another embodiment ofthe invention.
  • the directional audio delivery processing 5800 is, for example, suitable for use by the directional audio delivery device 5104 illustrated in FIG. 35. More particularly, the directional audio delivery processing 5800 is particularly suitable for use by the directional audio delivery device 5220 illustrated in FIG. 36B, with the beam attribute inputs being beam position and beam size received from a remote device.
  • the directional audio delivery processing 5800 initially activates a directional audio apparatus that is capable of constrained directional delivery of audio sound.
  • a decision 5804 determines whether a beam attribute input has been received.
  • the audio apparatus has associated with it a remote control device, and the remote control device can provide the beam attributes.
  • the remote control device enables a user positioned remotely (e.g., but in line-of-sight) to change settings or characteristics ofthe audio apparatus.
  • One beam attribute is the desired location ofthe beam.
  • Another attribute is the beam size.
  • a user of the audio apparatus might hold the remote control device and signal to the directional audio apparatus a position reference. This can be done by the user, for example, through selecting a button on the remote control device. This button can be the same button for setting the beam size because in transmitting beam size information, location signals can be relayed as well.
  • the beam size can be signaled in a variety of ways, such as via a button, dial or key press, using the remote control device.
  • control signals for the directional speaker are determined 5806 based on the attribute received. If the attribute is a reference position, a delivery direction can be determined based on the position reference. If the attribute is for a beam size adjustment, control signals for setting a specific beam size are determined. Then, based on the control signals determined, the desired ultrasonic output that is constrained is produced 5812. Next, a decision 5814 determines whether there are additional attribute inputs. For example, an additional attribute input can be provided to incrementally increase or decrease the beam size. The user can adjust the beam size, hear the effect and further adjust it, in an iterative manner.
  • the audio sound can optionally be additionally altered or modified in view ofthe user's hearing characteristics or preferences, or in view ofthe audio conditions in the vicinity ofthe user.
  • FIG. 42B is a flow diagram of an environmental accommodation process 5840 according to one embodiment ofthe invention.
  • the environmental accommodation process 5840 determines 5842 environmental characteristics.
  • the environmental characteristics can pertain to measured sound (e.g., noise) levels at the vicinity ofthe user.
  • the sound levels can be measured by a pickup device (e.g., microphone) at the vicinity ofthe user.
  • the pickup device can be at the remote device held by the user.
  • the environmental characteristics can pertain to estimated sound (e.g., noise) levels at the vicinity ofthe user.
  • the sound levels at the vicinity ofthe user can be estimated, based on a position of the user/device and the estimated sound level for the particular environment. For example, sound level in a department store is higher than the sound level in the wilderness.
  • the position ofthe user can, for example, be determined by Global Positioning System (GPS) or other triangulation techniques, such as based on infrared, radio-frequency or ultrasound frequencies with at least three non-collinear receiving points.
  • the audio signals are modified based on the environmental characteristics. For example, if the user were in an area with a lot of noise (e.g., ambient noise), such as at a confined space with various persons or where construction noise is present, the audio signals could be processed to attempt to suppress the unwanted noise, and/or the audio signals (e.g., in a desired frequency range) could be amplified.
  • One approach to suppress the unwanted noise is to introduce audio outputs that are opposite in phase to the unwanted noise so as to cancel the noise. In the case of amplification, if noise levels are excessive, the audio output might not be amplified enough to cover the noise, because the user might not be able to safely hear the desired audio output at such levels.
  • Noise suppression and amplification can be achieved through conventional digital signal processing, amplification and/or filtering techniques.
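  • A minimal sketch of this accommodation step is shown below: the ambient level is estimated from a pickup block, the programme gain is raised toward it up to a safety cap, and an inverted-phase copy of an estimated noise waveform illustrates the cancellation idea. The gain law, the cap and the function names are assumptions for illustration.

```python
# Minimal sketch of environmental accommodation: estimate the ambient level from a
# microphone block, raise the programme gain up to a safety cap, and form an
# inverted-phase copy of an estimated noise waveform (idealized cancellation).
# The 1 dB-per-dB tracking above a quiet floor and the 12 dB cap are assumptions.
import numpy as np

def ambient_level_db(mic_block, ref=1.0):
    rms = np.sqrt(np.mean(np.square(mic_block))) + 1e-12
    return 20.0 * np.log10(rms / ref)

def accommodate(audio_block, mic_block, quiet_floor_db=-40.0, max_boost_db=12.0):
    boost_db = np.clip(ambient_level_db(mic_block) - quiet_floor_db, 0.0, max_boost_db)
    return audio_block * 10.0 ** (boost_db / 20.0)

def anti_phase(noise_estimate):
    """Phase-inverted copy of an estimated noise waveform."""
    return -noise_estimate

out = accommodate(np.zeros(480), 0.01 * np.random.randn(480))
```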
  • the environmental accommodation process 5840 can, for example, be performed periodically or if there is a break in audio signals for more than a preset amount of time. The break may signify that there is a new audio stream.
  • a user might have a hearing profile that contains the user's hearing characteristics.
  • the audio sound provided to the user can optionally be customized or personalized to the user by altering or modifying the audio signals in view ofthe user's hearing characteristics.
  • FIG. 42C is a flow diagram of an audio personalization process 5860 according to one embodiment ofthe invention.
  • the audio personalization process 5860 retrieves 5862 an audio profile associated with the user.
  • the hearing profile contains information that specifies the user's hearing characteristics. For example, the hearing characteristics may have been acquired by the user taking a hearing test. Then, the audio signals are modified 5864 or pre-processed based on the audio profile associated with the user.
  • the hearing profile can be supplied to a directional audio delivery device performing the personalization process 5860 in a variety of different ways.
  • the audio profile can be electronically provided to the directional audio delivery device through a network.
  • the audio profile can be provided to the directional audio delivery device by way of a removable data storage device (e.g., memory card). Additional details on audio profiles and personalization to enhance hearing can be found in other sections of this patent application.
  • the environmental accommodation process 5840 and/or the audio personalization process 5860 can optionally be performed together with any ofthe directional audio delivery devices or processes discussed above.
  • the environmental accommodation process 5840 and/or the audio personalization process 5860 can optionally be performed together with any ofthe directional audio delivery processes 5400, 5700 or 5800 embodiments discussed above with respect to FIGs. 38, 41 and 42.
  • the environmental accommodation process 5840 and/or the audio personalization process 5860 typically would precede the operation 5408 in FIG. 38, the operation 5710 in FIG. 41 and/or the operation 5812 in FIG. 42A.
  • FIG. 43 A is a perspective diagram of an ultrasonic transducer 5900 according to one embodiment ofthe invention.
  • the ultrasonic transducer 5900 can implement the directional speakers discussed herein.
  • the ultrasonic transducer 5900 produces the ultrasonic output utilized as noted above.
  • the ultrasonic transducer 5900 includes a plurality of resonating tubes 5902 covered by a piezoelectric thin-film, such as PVDF, that is under tension, as described in other parts of this application.
  • M is the mass of the membrane per unit area.
  • f(0,0) is taken to be the fundamental resonance frequency and is set at 50 kHz. Then f(0,1) is 115 kHz, f(0,2) is 180 kHz, etc.
  • The (0, n) modes are all axisymmetric modes. In one embodiment, by driving the thin-film at the appropriate frequency, such as at any of the axisymmetric mode frequencies, the structure resonates, generating ultrasonic waves at that frequency.
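  • The quoted mode frequencies are consistent with the standard relation for a circular membrane of radius a under tension T (per unit length) with mass M per unit area, whose axisymmetric resonances scale with the zeros j₀,ₙ of the Bessel function J₀. This reconstruction, including the symbols a and T, is an assumption based on the quoted 50/115/180 kHz ratios:

$$
f_{0,n} = \frac{j_{0,\,n+1}}{2\pi a}\sqrt{\frac{T}{M}},\qquad
\frac{f_{0,1}}{f_{0,0}} = \frac{j_{0,2}}{j_{0,1}} = \frac{5.520}{2.405} \approx 2.3,\qquad
\frac{f_{0,2}}{f_{0,0}} = \frac{j_{0,3}}{j_{0,1}} = \frac{8.654}{2.405} \approx 3.6,
$$

so a 50 kHz fundamental places the next two axisymmetric modes near 115 kHz and 180 kHz, as stated.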
  • the ultrasonic transducer is made of a number of speaker elements, such as unimorph, bimorph or other types of multilayer piezoelectric emitting elements.
  • the elements can be mounted on a solid surface to form an array. These emitters can operate at a wide continuous range of frequencies, such as from 40 to 200 kHz.
  • One embodiment to control the distance of propagation of the ultrasonic output is by changing the carrier frequency, such as from 40 to 200 kHz. Frequencies in the range of 200 kHz have much higher acoustic attenuation in air than frequencies around 40 kHz. Thus, the ultrasonic output can be attenuated at a much faster rate at higher frequencies, reducing the potential risk of ultrasonic hazard to health, if any.
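  • The range-control idea can be made concrete with the small sketch below, which compares on-axis levels after a few metres of propagation for different carriers. The absorption values (in dB per metre) are rough illustrative assumptions, not figures taken from this document, and spreading loss is ignored.

```python
# Sketch of how carrier frequency controls reach: higher air absorption at higher
# ultrasonic frequencies shortens the distance over which usable audio is produced.
# The absorption coefficients below are illustrative assumptions only.
absorption_db_per_m = {40_000: 1.3, 100_000: 3.0, 200_000: 8.0}

def level_after(distance_m, carrier_hz, start_db=80.0):
    """On-axis level after propagating distance_m, ignoring spreading loss."""
    return start_db - absorption_db_per_m[carrier_hz] * distance_m

for f in sorted(absorption_db_per_m):
    print(f"{f/1000:.0f} kHz carrier: {level_after(4.0, f):.1f} dB at 4 m")
```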
  • the degree of attenuation can be changed continuously, such as based on multi-layer piezoelectric thin-film devices by continuously changing the carrier frequency.
  • the degree of isolation can be changed more discretely, such as by going from one eigenmode to another eigenmode of the tube resonators with piezoelectric membranes.
  • FIG. 43B is a diagram that illustrates the ultrasonic transducer 5900 generating its beam 5904 of ultrasonic output.
  • the width ofthe beam 5904 can be varied in a variety of different ways. For example, a reduced area or one segment ofthe transducer 5900 can be used to decrease the width ofthe beam 5904.
  • Instead of a single membrane over the resonating tubes, there can be two concentric membranes, an inner one 5910 and an outer one 5912, as shown in FIG. 43C. One can turn on the inner one only, or both at the same time with the same frequency, to control the beam width.
  • FIG. 43D illustrates another embodiment 5914, with the transducer segmented into four quadrants. The membrane for each quadrant can be individually controlled. They can be turned on individually, or in any combination, to control the width of the beam.
  • the width ofthe beam can be broadened by increasing the frequency ofthe ultrasonic output.
  • the dimensions ofthe directional speaker are made to be much larger than the ultrasonic wavelengths.
  • beam divergence based on aperture diffraction is relatively small.
  • One reason for the increase in beam width in this embodiment is the increase in attenuation as a function of the ultrasonic frequency. Examples are shown in FIGs. 43E to 43G.
  • the acoustic attenuations are assumed to be 0.2 per meter for 40 kHz, 0.5 per meter for 100 kHz and 1.0 per meter for 200 kHz.
  • the beam patterns are calculated at a distance of 4 m away from the emitting surface and normal to the axis of propagation.
  • the x-axis of the figures indicates the distance of the test point from the axis (from -2 m to 2 m), while the y-axis of the figures indicates the calculated acoustic pressure in dB SPL of the audio output at the test point.
  • the emitted power for the three examples is normalized so that the received power for the three audio outputs on-axis is roughly the same (e.g., at 56 dB SPL, 4 m away). Comparing the figures, one can see that the lowest carrier frequency (40 kHz in FIG. 43E) gives the narrowest beam and the highest carrier frequency (200 kHz in FIG. 43G) gives the widest beam.
  • a lower carrier frequency provides better beam isolation, with privacy enhanced.
  • the audio output is in a constrained beam for enhanced privacy.
  • the width ofthe beam can be expanded in a controlled manner based on curved structural surfaces or other phase-modifying beam forming techniques.
  • FIG. 44A illustrates one approach to diverge the beam based on an ultrasonic speaker with a convex emitting surface.
  • the surface can be structurally curved in a convex manner to produce a diverging beam.
  • the embodiment shown in FIG. 44A has a spherical-shaped ultrasonic speaker 6000, or an ultrasonic speaker whose emitting surface of ultrasonic output is spherical in shape.
  • a spherical surface 6002 has a plurality of ultrasonic elements 6004 affixed (e.g. bimorphs) or integral thereto.
  • the ultrasonic speaker with a spherical surface 6002 forms a spherical emitter that outputs an ultrasonic output within a cone (or beam) 6006. Although the cone will normally diverge due to the curvature of the spherical surface 6002, the cone 6006 remains directionally constrained.
  • each ultrasonic element 6004 is oriented to point towards the center of a sphere of which the spherical surface 6002 is a part of.
  • the length-wise axis of each resonating cavity 6026 points to the center ofthe sphere of which the spherical surface 6002 is a part of.
  • the resonating tubes 6026 can be formed in a single fabrication step so as to ensure their uniformity. This can be done, for example, by form- pressing all ofthe holes at the same time.
  • the ultrasonic speaker includes resonating tubes, with the membrane assumed to be mounted on the concave side. After the membrane is mounted, a vacuum can be formed to press the membrane onto the tubes. Voltages can be applied to the membrane to generate the ultrasonic output. This creates an emitting surface that is structurally curved in a concave manner. As shown in FIG. 44B, the beam 6040 produced initially converges and then diverges.
  • the degree of divergence is determined, for example, by the curvature ofthe surface 6002 or 6036.
  • the radius ofthe spherical surface is about 40 cm, its height 6006 is about 10 cm and its width 6008 is about 20 cm.
  • Diverging beams can also be generated even if the emitting surface ofthe ultiasonic speaker is a planar surface.
  • a convex reflector 6050 can be used to reflect the beam 5904 into a diverging beam 5918 (and thus with an increased beam width).
  • the ultrasonic speaker can be defined to include the convex reflector 6050.
  • the directional speaker includes a number of speaker elements, such as bimorphs.
  • the phase shifts to individual elements of the speaker can be individually controlled. With the appropriate phase shift, one can generate ultrasonic outputs with a quadratic phase wave-front to produce a converging or diverging beam.
  • the phase of each emitting element is modified by k·r²/(2·F₀), where (a) r is the radial distance of the emitting element from the point where the diverging beam seems to originate, (b) F₀ is the desired focal distance, and (c) k, the propagation constant of the audio frequency f, is equal to 2πf/c₀, where c₀ is the acoustic velocity.
  • beam width can be changed by modifying the focal length or the focus ofthe beam, or by de-focusing the beam. This can be done electronically through adjusting the relative phases ofthe ultrasonic signals exciting different directional speaker elements.
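  • The quadratic phase profile described above can be computed per element as in the sketch below. The 8 × 8 element layout, spacing, audio frequency and focal distance are illustrative assumptions; adjusting F₀ (or perturbing the profile) refocuses or de-focuses the beam.

```python
# Sketch of the per-element quadratic phase k*r^2/(2*F0) described above, which
# shapes a converging (focused) wave-front; de-focusing or diverging beams can be
# obtained by perturbing or inverting this profile. Layout values are assumptions.
import numpy as np

c0 = 343.0                        # acoustic velocity in air (m/s)
f_audio = 3_000.0                 # audio frequency f, per the definition of k above
k = 2 * np.pi * f_audio / c0      # propagation constant k = 2*pi*f/c0

pitch = 0.01                      # assumed element spacing (m)
coords = (np.arange(8) - 3.5) * pitch
xx, yy = np.meshgrid(coords, coords)
r = np.hypot(xx, yy)              # radial distance of each element from the axis

F0 = 0.5                          # desired focal distance (m), assumed
phase = k * r**2 / (2 * F0)       # per-element phase shift in radians
drive = np.exp(1j * phase)        # complex drive coefficients for the elements
```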
  • FIG. 45A illustrates a cylindrical-shaped ultrasonic speaker 6100 according to an embodiment of the invention.
  • the emitting surface ofthe directional speaker is cylindrical in shape and is segmented.
  • a cylindrical surface 6102 has a plurality of ultrasonic elements 6104 affixed (e.g., bimorphs) or integral thereto (e.g., tubes covered by a membrane).
  • Each ultrasonic element 6104 is oriented horizontally on, but pointed towards the center line of, a cylinder of which the cylindrical surface 6102 is a part of.
  • the length-wise axis of each tube is horizontal and points towards the center line ofthe cylinder of which the cylindrical surface is a part of.
  • the cone of ultrasonic output 6106 will normally diverge, the cone remains directionally constrained.
  • the radius 6108 ofthe cylindrical surface is about 40 cm, its height 6110 is about 10 cm and its width 6112 is about 20 cm.
  • the transducer surface 6102 can be segmented, such as into three separate controllable segments 6102, 6104 and 6106. Each of the segments can be selectably activated to control the direction and/or width of the ultrasonic output. For the embodiment where the speaker is made of tubes covered by membranes, each segment can have its own membrane. To generate the widest beam, all three segments are activated simultaneously by signals with substantially the same frequencies, phases and amplitudes.
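  • The segment-selection logic can be illustrated with the small sketch below: each segment is assigned a nominal pointing angle, and a controller activates whichever segments fall within the requested sector (all segments, driven identically, giving the widest beam). The segment names and angles are assumptions.

```python
# Illustrative segment-selection logic for a segmented emitting surface: each
# segment has a nominal pointing angle, and the controller activates the segments
# whose angles fall inside the requested sector. Angles and names are assumptions.
SEGMENT_ANGLES_DEG = {"left": -30.0, "centre": 0.0, "right": 30.0}

def select_segments(direction_deg, width_deg):
    half = width_deg / 2.0
    return [name for name, ang in SEGMENT_ANGLES_DEG.items()
            if direction_deg - half <= ang <= direction_deg + half]

print(select_segments(0.0, 90.0))   # widest request -> all three segments
print(select_segments(30.0, 20.0))  # narrow, off-axis request -> ['right']
```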
  • FIG. 45B shows another example of segmenting the emitting surface according to the present invention.
  • the transducer surface 6140 has a curved configuration 6142 that includes four controllable segments 6144, 6146, 6148 and 6150.
  • Each ofthe segments ofthe curved configuration 6142 can be selectably activated to control the direction and/or width ofthe ultrasonic output.
  • the ultrasonic output from the segment 6144 resides within the constrained region 6152.
  • the ultrasonic output by the segment 6146 resides within the constrained area 6154.
  • the ultrasonic output by the segment 6148 resides within the constrained area 6156.
  • the ultrasonic output from the segment 6150 resides within the constrained area 6158.
  • Segmenting the transducer surface shown in FIG. 45B can be done by turning on elements in the different segments.
  • a subset ofthe ultrasonic elements 6004 can be activated.
  • the spherical emitter is shown as having sixty-four (64) ultrasonic elements 6004, which can be bimorph devices. A smaller beam could be emitted if, for example, only the interior sixteen (16) ultrasonic elements were utilized.
  • the propagation direction of the ultrasonic beam can be changed by electrical and/or mechanical mechanisms. To illustrate based on the spherical-shaped ultrasonic speaker shown in FIG. 44A, a user can physically reposition the spherical surface 6002 to change its beam's orientation or direction.
  • a motor can be mechanically coupled to the spherical surface 6002 to change its orientation or the propagation direction ofthe ultrasonic output.
  • the direction ofthe beam can be changed electronically based on phase array techniques.
  • the movement ofthe spherical surface 6002 to adjust the delivery direction can track user movement. This tracking can be performed dynamically. This can be done through different mechanisms, such as by GPS or other triangulation techniques.
  • the user's position is fed back to or calculated by the directional audio apparatus. The position can then become a beam attribute input.
  • the beam-attribute control unit would convert the input into the appropriate control signals to adjust the delivery direction of the audio output.
  • the movement of the spherical surface 6002 can also be in response to a user input. In other words, the movement or positioning ofthe beam 1006 can be done automatically or at the instruction ofthe user.
  • FIGs. 46A and 46B are perspective diagrams of one embodiment of a directional audio apparatus that provides directional audio output to interested users.
  • FIG. 46A illustrates a directional audio apparatus 6200 that includes an entertainment center, such as a television 6202, a set-top box 6204 and a directional speaker 6206.
  • the television 6202 displays video that is supplied, for example, by a satellite link or a cable line via the set-top box 6204.
  • the set-top box 6204 operates to decode the encoded video and audio content transmitted over the satellite link or cable line. Once decoded, the appropriate audio and video signals are delivered to the television 6202.
  • the television 6202 may include conventional or normal speakers to provide audio output. These speakers typically do not produce audio output through generating ultrasonic signals to be converted into the audio frequency range by air. Nevertheless, the audio apparatus 6200 includes the directional speaker 6206.
  • the directional speaker 6206 provides delivery of audio signals in a constrained direction. Further, the directionally-constrained audio output can be controlled as to the target distance for its users as well as the width of the resulting audio beam.
  • the directional speaker 6206 generates ultrasonic output by way of an emitter surface 6208.
  • the emitter surface 6208 can include a single or multiple segments of groups of ultrasonic or speaker elements.
  • the directional speaker 6206 is mounted to the set-top box 6204 such that its position can be adjusted with respect to the set-top box 6204 as well as the television 6202. For example, the directional speaker 6206 can be rotated to cause a change in the direction in which the directionally-constrained audio output is delivered.
  • a user of the audio system 6200 can manually position (e.g., rotate) the directional speaker 6206 to adjust the delivery direction.
  • the directional speaker 6206 can be positioned (e.g., rotated) by way of an electrical motor provided within the set-top box 6204 or the directional speaker 6206.
  • an electrical motor can be controlled by a conventional control circuit and can be instructed by one or more buttons provided on the set-top box 6204, the directional speaker 6206 or a remote control device.
  • FIG. 46B is a diagram of another directional audio apparatus 6220 in a set-top box environment according to another embodiment ofthe invention.
  • the audio apparatus 6220 includes an entertainment system, such as a television 6222, a set-top box 6224 and a directional speaker 6226.
  • the set-top box 6224 is typically coupled to a satellite link or a cable line to receive audio and video signals.
  • the set-top box 6224 decodes the audio and video signals and supplies the resulting audio and video signals to the television 6222.
  • the television 6222 displays the video signals and may use its conventional speakers to output audio sound. However, when directional delivery of audio sound is desired, the conventional speakers ofthe television 6222 are not utilized. Instead, the directional speaker 6226 is utilized.
  • the directional speaker 6226 can be activated by a button, switch or other means. Once activated, the directional speaker 6226 outputs the audio signals in a directionally constrained manner.
  • the television 6222 has an audio-output connection that is connected to the set-top box 6224. If conventional speakers are preferred, the signal line from the audio-output connection is electrically disconnected, and normal audio output is directly from the television 6222. However, if directionally-constrained audio output is desired, audio signals from the television 6222 are channeled to the set-top box 6224, and normal audio output from the television 6222 is de-activated.
  • the volume control in the television 6222 can also be turned down if directionally-constrained audio outputs are preferred.
  • the set-top box 6224 and/or the directional speaker 6226 can permit control over the distance and/or width of the audio output to be transmitted to the one or more interested users.
  • the position ofthe directional speaker 6226 is fixed relative to the set-top box 6224.
  • the directional speaker 6226 is affixed to the set-top box 6224.
  • the directional speaker 6226 is integral with the set-top box 6224.
  • the direction for the directionally-constrained audio output can be electrically controlled through a variety of different techniques. One technique is to activate only certain segments of the emitting surface 6228 of the directional speaker 6226. Another technique is to utilize beam-steering operations based on phase control inputs.
  • the directional audio apparatuses 6200 and 6220 illustrated in FIGs. 46A and 46B can utilize the various methods and processes discussed above.
  • the set-top boxes with directional speakers shown in FIGs. 46A and 46B are able to transform conventional audio systems in televisions into audio systems having directional audio delivery as explained in the present invention.
  • the directional speaker with the emitting surface 6140 shown in FIG. 45B can be used as the emitting surface 6228 for the directional speaker 6226 illustrated in FIG. 46B.
  • the segment 6146 is in operation.
  • the user signals the set-top box that its beam width should be increased.
  • the segment 6148 can be additionally activated, thereby increasing the width or area associated with the ultrasonic output (and thus resulting audio outputs).
  • non-adjacent segments can be simultaneously activated to generate multiple separate beams.
  • a user can signal the set-top box to activate the two outer most beams, 6152 and 6158. This will generate two separate beams for two separate users. Then, a person located in the middle between the two users would only hear a substantially reduced output level.
  • more than one user may be sitting close to the television 6200 in FIG. 46A. It would be advantageous to have a wider beam that covers a shorter distance.
  • One embodiment uses a directional speaker 6206 that operates at a higher frequency, such as the one shown in FIG. 43G, working at 200 kHz.
  • the beam width is broader than the version shown in FIG. 43E, but the beam covers a shorter distance due to higher attenuation.
  • FIG. 47 is a perspective diagram of a remote control device 6300 according to one embodiment ofthe invention.
  • the remote control device 6300 is one embodiment for a directional audio apparatus.
  • the remote control device 6300 has a top surface 6302 with a plurality of buttons 6304 as is common with remote controllers. Some of these buttons 6304 can correspond to various options a user might request of a directional audio apparatus via a remote control device. Examples of these options include start, stop, play, channels, volume, etc.
  • the remote control device 6300 also includes options for the beam attribute inputs, such as three discrete beam widths (large, medium and small), and three discrete distance coverages (long, medium and short).
  • the remote control device 6300 can also include a directional speaker 6306 that produces directional audio delivery to one or at most a few users desirous of hearing the audio output.
  • the directional speaker 6306 can be substantially flush or recessed with respect to the top surface 6302.
  • a grating 6308 can optionally be provided over the directional speaker 6306.
  • the directional speaker can be mounted at an angle with respect to the top surface 6302, or can be movably mounted with respect to the top surface 6302 so that the direction of delivery can be manipulated.
  • a wireless link window 6310 provides a window through which the remote control device 6300 is able to communicate in a wireless manner (e.g., radio or optical) with an audio system, which may or may not have directional audio capability. Audio signals can then be received and directed to one or at most a few users proximate to the remote control device 6300 via the directional speaker 6306.
  • FIGs. 48A-48B show two such embodiments that can be employed, for example, for such a purpose.
  • FIG. 48A illustrates a directional speaker with a planar emitting surface 6404 of ultrasonic output.
  • the dimension ofthe planar surface can be much bigger than the wavelength ofthe ultrasonic signals.
  • the ultrasonic frequency is 100 kHz and the planar surface dimension is 15 cm, which is about 50 times larger than the wavelength.
  • the ultrasonic waves emitting from the surface are controlled so that they do not diverge significantly within the enclosure 6402.
  • the directional audio delivery device 6400 includes an enclosure 6402 with at least two reflecting surfaces for the ultrasonic waves.
  • the emitting surface 6404 generates the ultrasonic waves, which propagate in a beam 6406.
  • the beam reflects within the enclosure 6402 back and forth at least once by reflecting surfaces 6408.
  • the beam emits from the enclosure at an opening 6410 as the output audio 6412.
  • the dimensions ofthe opening 6410 can be similar to the dimensions ofthe emitting surface 6404.
  • the last reflecting surface can be a concave or convex surface 6414, instead of a planar reflector, to generate, respectively, a converging or diverging beam for the output audio 6412.
  • FIG. 48B shows another embodiment of a directional audio delivery device 6450 that allows the ultrasonic waves to bounce back and forth at least once between ultrasonic reflecting surfaces before emitting into free space.
  • the directional speaker has a concave emitting surface 6460.
  • the concave surface first focuses the beam and then diverges the beam.
  • the focal point 6464 ofthe concave surface 6460 is at the mid-point ofthe beam path within the enclosure.
  • the beam width at the opening 6466 of the enclosure can be not much larger than the beam width right at the concave emitting surface 6460.
  • before reaching the focal point 6464, the beam is converging, while at the opening 6466, the beam is diverging.
  • the curvatures ofthe emitting and reflecting surfaces can be computed according to the desired focal length or beam divergence angle similar to techniques used in optics, such as in telescopic structures.
  • FIG. 49 shows one such embodiment as illustrated by a building layout 6500.
  • An audio system 1506 is coupled to two directional audio delivery devices 6502 and 6504 that are spaced apart.
  • the audio system transmits different types of audio signals, either by wireline or wirelessly, to the two directional audio delivery devices 6502 and 6504.
  • the different types of audio signals can represent a left channel and a right channel.
  • the two directional audio delivery devices 6502 and 6504 generate two directionally-constrained audio output beams 6510 and 6512 that are directed towards and received by a user 6508.
  • the number of directional audio delivery devices does not have to be limited to two.
  • a surround sound arrangement can be achieved through more than two directional audio delivery devices.
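  • The stereo arrangement can be sketched as routing the two channels of a stereo source to the two spaced-apart delivery devices, as shown below. The transport (wired or wireless) is abstracted away and the function names are hypothetical.

```python
# Hypothetical routing sketch for the stereo arrangement: the left and right
# channels of a stereo block are sent to the two spaced-apart directional audio
# delivery devices, each of which then modulates its channel onto an ultrasonic
# carrier. The transport mechanism is deliberately left as a stub.
import numpy as np

def split_stereo(stereo_block):
    """stereo_block: array of shape (n_samples, 2) -> (left, right)."""
    return stereo_block[:, 0], stereo_block[:, 1]

def send(device_id, channel_block):
    """Placeholder for the wired or wireless link to a delivery device."""
    pass

stereo = np.zeros((480, 2))
left, right = split_stereo(stereo)
send("device_6502", left)
send("device_6504", right)
```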
  • a number of attributes ofthe constrained audio outputs can be adjusted, either by a user or automatically and dynamically based on certain monitored or tracked measurements, such as the position of the user.
  • One adjustable attribute is the direction ofthe constrained audio outputs. It can be controlled, for example, by (a) activating different segments of a planar or curved speaker surface, (b) using a motor, (c) manually moving the directional speaker, or (d) through phase array beam steering techniques.
  • Another adjustable attribute is the width of the beam ofthe constrained audio outputs. It can be controlled, for example, by (a) modifying the frequency ofthe ultrasonic signals, (b) activating one or more segments ofthe speaker surface, (c) using phase array beam forming techniques, (d) employing curved speaker surfaces to diverge the beam, (e) changing the focal point ofthe beam, or (f) de-focusing the beam.
  • the degree of isolation or privacy can also be controlled independent of the beam width.
  • Isolation or privacy can also be controlled through, for example, (a) phase array beam forming techniques, (b) adjusting the focal point ofthe beam, or (c) de-focusing the beam.
  • the volume ofthe audio output can be modified through, for example, (a) changing the amplitude ofthe ultrasonic signals driving the directional speakers, (b) modifying the ultrasonic frequency to change its distance coverage, or (c) activating more segments of a planar or curved speaker surface.
  • the audio output can also be personalized or adjusted based on the audio conditions of the areas surrounding the directional audio apparatus.
  • Signal pre-processing techniques can be applied to the audio signals for such personalization and adjustment.
  • Ultrasonic hazards can be minimized by increasing the path lengths of the ultrasonic waves from the directional speakers before the ultrasonic waves emit into free space.
  • Another way to reduce potential hazard, if any, is to increase the frequency of the ultrasonic signals to reduce their distance coverage.
  • Stereo effects can also be introduced by using more than one directional audio delivery device, spaced apart. This will generate multiple and different constrained audio outputs to create stereo effects for a user.
  • Directionally-constrained audio outputs are not limited to be generated by set-top boxes. They can also be generated from a remote control.
  • Numerous embodiments ofthe present invention have been applied to an indoor environment, using building layouts. However, many embodiments ofthe present invention are perfectly suitable for outdoor applications also. For example, a user can be sitting inside a patio reading a book, while listening to music from a directional audio apparatus ofthe present invention. The apparatus can be in the outside, 10 meters away from the user. Due to the directionally constiained nature ofthe audio output, sound can still be localized within the direct vicinity ofthe user. As a result, the degree of noise pollution to the user's neighbors is significantly reduced.
  • an existing audio system can be modified with one ofthe described set-top boxes to generate directionally-constrained audio output outputs.
  • a user can select either directionally constiained or normal audio outputs from the audio system, as desired.
  • a number of embodiments ofthe invention pertain to techniques for providing wireless delivery of audio sounds from audio systems, which can be stationary, to personal audio devices, which, typically, are portable. These techniques can permit users ofthe personal audio device to be mobile yet still acquire the audio sounds. Based on different embodiments, audio systems can be readily adapted to provide the wireless delivery of audio sounds. These techniques can also optionally provide customization (or personalization) ofthe audio sounds to user's hearing and/or modification ofthe audio sounds in view of environmental conditions.
  • audio output from an audio system can be delivered to one or more persons desirous of hearing the audio output.
  • Each person has a personal audio device.
  • the device causes audio sound corresponding to audio output from the audio system to be output personally, in a directionally constrained manner. Consequently, other persons not desirous of hearing the audio output do not receive substantial amounts of the audio sounds. Thus, they are less disturbed by the unwanted audio sounds.
  • a wireless adapter can serve as an after-market modification to an audio system.
  • the wireless adapter enables audio signals output by the audio system to be wirelessly transmitted to one or more personal audio devices. Each personal audio device produces audio sound for its user.
  • FIG. 50 is a block diagram of a remote audio delivery system 7100 according to one embodiment of the invention.
  • the remote audio delivery system 7100 includes an audio system 7102 that produces an audio output.
  • the audio system 7102 is, for example, a television, a Compact Disc (CD) player, a Digital Versatile Disk (DVD) player, a stereo, a computer with speakers, etc.
  • the audio system 7102 can also be referred to as an entertainment system.
  • the audio system 7102 is stationary.
  • the audio output from the audio system 7102 is supplied to a wireless transmission apparatus 7104.
  • the wireless transmission apparatus 7104 is coupled to an audio output port (e.g., terminal, connector, receptacle, etc.) of the audio system 7102.
  • the coupling can be made directly to the audio output port of the audio system 7102 or by way of a cable.
  • the wireless transmission apparatus 7104 can also be referred to as a wireless audio adapter because it is able to adapt the audio system 7102 for wireless audio delivery without requiring changes to the audio system 7102.
  • the wireless transmission apparatus 7104 receives the audio output from the audio system 7102 and transmits the audio output over a wireless channel 7105 (or wireless link) to a wireless receiver 7106 of a personal audio device 7107.
  • the wireless channel 7105 is typically a short-range wireless link that is not in the audio frequency ranges, for example, such as available using Bluetooth, WiFi or other dedicated frequency (e.g., 900 MHz, 2.4 GHz) techniques.
  • the wireless receiver 7106 receives the audio output that is transmitted by the wireless transmission apparatus 7104 over the wireless channel 7105.
  • the received audio output is then supplied to control circuitry 7108.
  • the control circuitry 7108 converts the received audio output into speaker drive signals.
  • the speaker drive signals are then used to activate a directional speaker 7110 which produces output sound.
  • the output sound from the directional speaker 7110 is directionally confined for enhanced privacy.
  • the control circuitry 7108 can also provide customization or personalization to the person and/or the environment.
  • the directionally confined output sound produced by the directional speaker 7110 allows the user of the personal audio device 7107 to hear the audio sound even though neither of the user's ears touches or is coupled against the directional speaker 7110.
  • the directional nature of the output sound is towards the user (e.g., user's ear(s)) and thus provides privacy by restricting the output sound to a confined directional area.
  • the directional speaker 7110 is an ultrasonic speaker.
  • the control circuitry 7108 converts the received audio output into ultrasonic drive signals that are used to drive the ultrasonic speaker.
  • the ultrasonic drive signals are supplied to the ultrasonic speaker to generate ultrasonic output.
  • the ultrasonic output is subsequently transformed, for example, by air, into audio output.
  • the frequency spectrum of the resulting audio output (after such transformation) is similar to the audio output from the audio system 7102.
  • the frequency spectrum of the resulting audio output is altered so as to provide customized hearing (e.g., enhanced hearing), or to adapt to environmental conditions or physical conditions of the user.
  • FIG. 51 is a block diagram of a remote audio delivery system 7200 according to another embodiment of the invention.
  • the remote audio delivery system 7200 includes an audio system 7202 and a wireless transmitter 7204.
  • the wireless transmitter 7204 can also be referred to as a wireless audio adapter. It is able to adapt the audio system 7202 for wireless audio delivery without requiring physical changes to the audio system 7202.
  • the wireless transmitter 7204 is coupled to the audio system 7202 via an audio output port of the audio system 7202. Such coupling can be achieved by a connector alone or in combination with a cable.
  • the wireless transmitter 7204 is integral with, and thus part of, the audio system 7202 so that no connector or cable is necessary. The audio system 7202 and the wireless transmitter 7204 together form a wireless audio delivery system.
  • Audio output from the audio system 7202 is supplied to the wireless transmitter 7204 via the audio output port of the audio system 7202 or other means. Then, the wireless transmitter 7204 transmits the audio output over a wireless channel (wireless link) 7205 to a wireless receiver 7206 of a personal audio device 7207. The received audio output at the wireless receiver 7206 is then supplied to control circuitry 7208.
  • the control circuitry 7208 can receive user information pertaining to the user from a data storage device 7210. For example, the user information can pertain to an audio profile associated with the user. An audio profile contains or is based on hearing characteristics of an associated user. The user information can be stored in the data storage device 7210.
  • the data storage device 7210 can be a dedicated or removable data storage medium. Examples of a removable data storage medium include a memory card (e.g., a Flash memory card, a memory stick, a credit card with data storage, or a PC card (PCMCIA)).
  • the control circuitry 7208 produces speaker drive signals that are used to drive a speaker 7212.
  • the speaker drive signals are produced by the control circuitry 7208 based upon not only the received audio output but also the user information.
  • the control circuitry 7208 can modify the drive signals being supplied to the speaker 7212 based upon the user information.
  • the audio sound being produced by the speaker 7212 can be customized for (or personalized to) the user.
  • the control circuitry 7208 is able to produce customized drive signals for the speaker 7212 such that the resulting audio output by the speaker 7212 is customized for the hearing characteristics and/or user preferences of the user.
  • the personal audio device 7207 can include the wireless receiver 7206, the control circuitry 7208, the data storage device 7210 and the speaker 7212. Nevertheless, it should be noted that the customization could also be performed elsewhere.
  • the audio system 7202 or the wireless transmitter 7204 can further include control circuitry (not shown) that would obtain user information and then customize audio output prior to its transmission to the personal audio device 7207. Such an implementation could provide centralized customization of the audio output for one or more personal audio devices.
  • FIG. 52 is a block diagram of a remote audio delivery system 7300 according to yet another embodiment of the invention.
  • the remote audio delivery system 7300 includes an audio system 7302, a wireless network 7304, and personal audio devices 7306 and 7308.
  • the wireless network 7304 can be a wireless local area network, such as a Bluetooth or WiFi network.
  • the remote audio delivery system 7300 illustrates that the audio system 7302 can supply audio output to one or more personal audio devices 7306 and 7308 over a wireless network 7304.
  • the wireless network 7304 can, for example, be used in the vicinity of a home or business.
  • the audio output from the audio system 7302 can be broadcast, multicast or unicast over the wireless network 7304.
  • the audio output from the audio system 7302 can be directed to one or more of the personal audio devices 7306 and 7308.
  • a different network address is associated with each of the personal audio devices, and thus the audio output can be transmitted to the appropriate one or more of the personal audio devices via the wireless network 7304 using the associated network addresses.
  • Although FIG. 52 illustrates only the personal audio devices 7306 and 7308, it should be understood that the remote audio delivery system 7300 can support many personal audio devices, and such personal audio devices can be of the same type or of different types.
  • the wireless audio adapter 7204 can be matched to the personal audio device 7207. In other words, each wireless audio adapter can have a corresponding personal audio device.
  • wireless signals from a wireless audio adapter 7204 can be received by multiple personal audio devices. This can be done, for example, by broadcasting the signal and requesting all the personal audio devices to tune to the broadcast wireless channel.
  • each personal audio device 7207 can be first initialized with the wireless audio adapter 7204. The initializing process can be performed by requiring each audio device to transmit, wirelessly or through a wired connection, an identifier to the adapter. Then the adapter transmits the personalization information to the corresponding personal audio device according to the identifier. After the personalization information is received, the personal audio device can be configured accordingly and then start to receive the audio output.
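  • As an illustration of the initialization exchange just described, a minimal sketch is given below in which a personal audio device sends its identifier to the wireless audio adapter and receives personalization information in return. The control port number, the newline-delimited JSON framing, and the reply fields are illustrative assumptions and not part of the original disclosure.

```python
# Hypothetical sketch of the device/adapter initialization exchange.
# The port, framing, and field names are assumptions for illustration only.
import json
import socket

INIT_PORT = 5000          # assumed control port on the wireless audio adapter


def initialize_device(adapter_host: str, device_id: str) -> dict:
    """Send this device's identifier to the adapter and return the
    personalization information that the adapter sends back."""
    with socket.create_connection((adapter_host, INIT_PORT)) as sock:
        sock.sendall(json.dumps({"device_id": device_id}).encode() + b"\n")
        reply = sock.makefile().readline()
    return json.loads(reply)   # e.g. {"audio_port": 6000, "profile": {...}}


if __name__ == "__main__":
    settings = initialize_device("192.168.1.20", "device-0042")
    print("configured; expecting audio on port", settings.get("audio_port"))
```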
  • The broadcast can be performed at Layer 3 (e.g., IP multicast) or at Layer 2 (e.g., IEEE 802.11).
  • a personal audio device can be configured to be selected by a specific wireless audio adapter or an audio system.
  • Such configurations would be applicable for after-market sales. They can be achieved through a number of approaches. For example, there can be switches on both the device and the adapter, or both can have a number of channels. These switches or channels can be changed by users. When both sets of switches or channels are matched, the device is configured for the wireless audio adapter.
  • Another approach is based on the media access control (MAC) layer address, IP address or TCP or UDP port numbers.
  • the personal audio device and the wireless audio adapter can agree on a specific TCP or UDP port number. They can then be configured to receive packets or signals from that port only.
  • the personal audio device and the wireless audio adapter can also be identified by their specific IP addresses, or MAC layer addresses.
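  • As a sketch of the port- and address-based matching just described, the fragment below has a personal audio device accept audio packets only on the agreed UDP port and only from the IP address of its matched adapter. The specific address and port number are illustrative assumptions.

```python
# Hypothetical sketch: accept audio datagrams only on the agreed port and
# only from the matched adapter's IP address; both values are assumptions.
import socket

ADAPTER_IP = "192.168.1.20"   # assumed address of the matched wireless adapter
AGREED_PORT = 6000            # assumed port agreed during initialization


def receive_audio_packets(handle_payload):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", AGREED_PORT))            # listen only on the agreed port
    while True:
        payload, (src_ip, _src_port) = sock.recvfrom(2048)
        if src_ip != ADAPTER_IP:            # ignore packets from other adapters
            continue
        handle_payload(payload)             # hand the audio frame to the decoder
```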
  • FIG. 53 is a diagram of a building layout 7400 illustrating use of different embodiments of the present invention.
  • the building layout 7400 illustrates a representative floor plan having a first room 7402, second room 7404 and a third room 7406.
  • the first room 7402 includes an audio system (AS) 7408 that includes a wireless transmission apparatus 7410, or a wireless audio adapter, coupled to the audio system 7408.
  • the audio system 7408 can use a traditional speaker and/or a directional speaker to direct audio sound to one or more of a first user (u-1) and a second user (u-2) located within the first room.
  • the audio output from the audio system 7408 can also be transmitted over a wireless channel (link) to one or more other users that are relatively near the wireless transmission apparatus 7410.
  • the type of the wireless channel sets the range.
  • the range is relatively short, such as less than 400 meters.
  • any one or more of the third user (u-3), a fourth user (u-4) and a fifth user (u-5) are able to hear the audio output by way of a personal audio device that receives the audio output over a wireless channel.
  • the fifth user (u-5) has a personal audio device 7412 attached or proximate thereto.
  • the fifth user (u-5) wears the portable audio device, and is able to hear the audio output from the audio system 7408 even though the fifth user (u-5) is, for example, outside of the building, such as in the backyard.
  • the personal audio device 7412 thus allows a remote user (e.g., u-5) to hear the audio output from the audio system 7408 even though they are not within the same room or building as the audio system 7408. So long as the remote user is within communication range of the wireless channel, the user can hear the audio output even as the remote user moves around.
  • FIG. 54 is a flow diagram of a remote audio delivery process 7500 according to one embodiment of the invention.
  • the remote audio delivery process 7500 is, for example, performed by a remote audio delivery system, such as the remote audio delivery system 7100, 7200, or 7300.
  • the remote audio delivery process 7500 begins with audio signals being received 7502 at a wireless audio adapter or a wireless transmission apparatus. Typically, however, prior to receiving 7502 the audio signals, the wireless audio adapter would have been attached to the audio system that initially provides the audio signals. In any case, the audio signals that are received 7502 are thereafter wirelessly transmitted 7504 to a personal audio device. Typically, the audio signals are wirelessly received by a predetermined personal audio device. In other words, the wireless audio adapter can be configured to transmit audio signals to be wirelessly received by a predetermined personal audio device. However, the audio signals may be transmitted to a plurality of predetermined personal audio devices.
  • the audio signals are received 7506 at the personal audio device.
  • additional processing can be performed to enhance the resulting audio sound that will eventually be delivered to a user of the personal audio device.
  • a decision 7508 determines whether user personalization is to be performed. When the decision 7508 determines that user personalization is to be performed, then the audio signals are modified 7510 based on user information.
  • the user information can be provided by a data storage device, such as the data storage device 7210 as illustrated in FIG. 51.
  • the user information is related to an audio profile that pertains to the hearing characteristics of the user.
  • the user information is related to the physical conditions of the user.
  • Such physical conditions can be detected by a sensor, which can be embedded in the personal audio device, or wirelessly coupled to the personal audio device. As an example, if the user is sleeping, the volume of the output sound should be reduced or even turned off. Determining physical conditions can be performed dynamically. For example, a sensor can keep track of the user's heart beat and identify patterns accordingly.
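  • A minimal sketch of such a sensor-driven adjustment is shown below: the output volume is muted when a heart-rate reading suggests the user is asleep. The threshold value and the sensor/volume interfaces are assumptions chosen only to make the idea concrete.

```python
# Hypothetical sketch of volume control driven by a sensed physical condition.
# The sleep threshold and the interfaces are illustrative assumptions.
SLEEP_BPM_THRESHOLD = 55      # assumed resting heart rate indicating sleep


def select_volume(heart_rate_bpm: float, normal_volume: float) -> float:
    """Return a muted volume when the user appears to be asleep."""
    if heart_rate_bpm <= SLEEP_BPM_THRESHOLD:
        return 0.0                          # turn the output sound off
    return normal_volume


print(select_volume(heart_rate_bpm=52, normal_volume=0.8))   # -> 0.0 (asleep)
print(select_volume(heart_rate_bpm=72, normal_volume=0.8))   # -> 0.8 (awake)
```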
  • a decision 7512 determines whether environmental adjustments are to be performed.
  • the audio signals are modified 7514 based on environmental characteristics.
  • environmental characteristics can be detected or sensed by the personal audio device, which can include one or more environmental sensors.
  • the environmental sensor(s) can measure ambient or background noise.
  • the environmental characteristics could also be wirelessly transmitted to the personal audio device.
  • the audio signals are converted 7516 to ultrasonic drive signals.
  • the ultrasonic drive signals are then used to drive 7518 a directional speaker that, in turn, outputs ultrasonic sound in a directionally constrained manner.
  • the ultrasonic sound is directed to the user of the personal audio device and interacts with air such that audio sound is present when the acoustic output from the directional speaker is in the vicinity of the head (or ears) of the user.
  • Because the ultrasonic (and resulting audio) sound produced is directionally constrained, it is delivered in a targeted way to the user. Thus, other users in the vicinity of the user will not hear any substantial amount of the audio sound, and therefore will not be disturbed thereby.
  • FIG. 55A is a flow diagram of an environmental accommodation process 7600 according to one embodiment of the invention.
  • the environmental accommodation process 7600 determines 7602 environmental characteristics.
  • the environmental characteristics can pertain to measured sound (e.g., noise) levels in the vicinity of the user.
  • the sound levels can be measured by a pickup device (e.g., a microphone) in the vicinity of the user.
  • the pickup device can be incorporated in the personal audio device.
  • the environmental characteristics can pertain to estimated sound (e.g., noise) levels in the vicinity of the user.
  • the sound levels in the vicinity of the user can be estimated based on a position of the user/device and a linking of position with an estimated sound level for the particular environment.
  • the position of the user can, for example, be determined by GPS or network triangulation.
  • the audio signals are modified based on the environmental characteristics. For example, if the user were in an area with a lot of noise (e.g., ambient noise), such as a confined space with various persons or where construction noise is present, the audio signals could be processed to attempt to suppress (or cancel) the unwanted noise and/or the audio signals (e.g., in a desired frequency range) could be amplified. In the case of amplification, if noise levels are excessive, the amplification might not occur as the user might not be able to safely hear the desired audio signals.
  • Noise suppression and amplification can be achieved through conventional digital signal processing, amplification and/or filtering.
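  • The sketch below illustrates one simple form of such an adjustment: the ambient level is estimated from microphone samples and the playback gain is raised in moderately noisy surroundings but left unchanged when the noise is excessive, as discussed above. The RMS reference and the threshold values are illustrative assumptions.

```python
# Hypothetical sketch of an environmental adjustment: estimate ambient noise
# from microphone samples and choose a bounded playback gain. Thresholds and
# the 0 dB reference are assumptions for illustration only.
import numpy as np


def ambient_level_db(mic_samples: np.ndarray) -> float:
    """Rough ambient level in dB relative to full scale (amplitude 1.0)."""
    rms = np.sqrt(np.mean(np.square(mic_samples)))
    return 20.0 * np.log10(max(rms, 1e-12))


def environmental_gain(noise_db: float) -> float:
    """Map an ambient noise estimate to a playback gain factor."""
    if noise_db > -10.0:                 # assumed excessive level: do not amplify
        return 1.0
    if noise_db > -30.0:                 # moderately noisy: boost up to 2x
        return 1.0 + (noise_db + 30.0) / 20.0
    return 1.0                           # quiet surroundings: leave signal alone


mic = 0.05 * np.random.randn(4800)       # stand-in for a microphone capture
audio = np.random.randn(4800)            # stand-in for the received audio output
adjusted = environmental_gain(ambient_level_db(mic)) * audio
```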
  • the environmental accommodation process 7600 can, for example, be performed periodically or for every new audio stream.
  • a user might have a hearing profile that contains the user's hearing characteristics.
  • the audio sound provided to the user can optionally be customized or personalized to the user by altering or modifying the audio signals in view of the user's hearing characteristics.
  • the audio output can be enhanced for the benefit of the user. Additional details on hearing enhancement are described in other sections of this patent application.
  • FIG. 55B is a flow diagram of an audio personalization process 7620 according to one embodiment of the invention.
  • the audio personalization process 7620 retrieves 7622 an audio profile associated with the user.
  • the hearing profile contains information that specifies the user's hearing characteristics. For example, the hearing characteristics may have been acquired by the user taking a hearing test. Then, the audio signals are modified 7624 based on the audio profile associated with the user.
  • the hearing profile can be supplied to a personal audio device or to a directional audio delivery system that performs the personalization process 7620 in a variety of different ways.
  • the audio profile can be electronically provided to the device or the directional audio delivery system through a network.
  • the audio profile can be provided by way of a removable data storage device (e.g., memory card). Additional details on audio profiles and personalization can be found in other sections of this patent application.
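  • To make the personalization step concrete, the sketch below applies per-band gains taken from a stored audio profile (for example, one derived from a hearing test and read from a memory card) to a block of audio samples. The band edges, gain values, and sample rate are illustrative assumptions rather than values from this disclosure.

```python
# Hypothetical sketch of personalizing audio with a stored hearing profile:
# per-frequency-band gains are applied in the frequency domain. The profile
# format, band edges, and gains are assumptions for illustration only.
import numpy as np

SAMPLE_RATE = 16000
PROFILE = {                    # assumed profile: frequency band (Hz) -> gain
    (0, 1000): 1.0,
    (1000, 3000): 1.6,         # boost bands where this user's hearing is weaker
    (3000, 8000): 2.0,
}


def personalize(audio: np.ndarray) -> np.ndarray:
    spectrum = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / SAMPLE_RATE)
    for (lo, hi), gain in PROFILE.items():
        spectrum[(freqs >= lo) & (freqs < hi)] *= gain
    return np.fft.irfft(spectrum, n=len(audio))


personalized = personalize(np.random.randn(SAMPLE_RATE))   # one second of audio
```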
  • the environmental accommodation process 7600 and/or the audio personalization process 7620 can optionally be performed together with any of the processes to produce the directionally confined output sound, as discussed above.
  • FIG. 56A is a perspective diagram of an ultrasonic transducer 7700 according to one embodiment of the invention.
  • the ultrasonic transducer 7700 can implement a directional speaker as discussed herein.
  • the ultrasonic transducer 7700 produces the ultrasonic sound utilized as noted above.
  • FIG. 56B is a diagram that illustrates the ultrasonic transducer 7700 with its beam 7704 being produced to output ultrasonic sound.
  • the beam 7704 can have its attributes, such as its beam width, varied in a variety of different ways. Additional details on the ultrasonic transducer 7700 can be found in other sections of this patent application.
  • An audio system of the present invention can include or couple to a set-top box that includes the wireless audio adapter or permits attachment thereto.
  • a set-top box enables a television set to receive and decode digital television broadcasts.
  • the set-top box is positioned proximate to the television set.
  • FIG. 57 is a perspective diagram of an audio system that provides directional audio delivery to interested users.
  • the figure illustrates an audio system 7800 that includes a television 7802, a set-top box 7804 and a directional speaker 7806.
  • the directional speaker 7806 provides delivery of audio signals in a constrained direction. Further, the directionally constrained audio signals can be controlled as to the target distance for their users as well as for the width of the resulting audio signals.
  • the directional speaker 7806 outputs ultrasonic sound by way of an emitter surface 7808.
  • the emitter surface 7808 can be comprised of a single or multiple ultrasonic transducers.
  • the directional speaker 7806 is mounted to the set-top box 7804 such that it is able to be rotated with respect to the set-top box 7804 as well as the television 7802.
  • the rotation of the directional speaker 7806 causes a change in the direction in which the directionally constrained audio signals are delivered. Additional details on such or different set-top boxes can be found in other sections of this patent application.
  • Besides optionally including the directional speaker 7806, the audio system 7800 illustrated in FIG. 57 can utilize the various methods and processes discussed above to provide wireless audio delivery to personal audio devices.
  • the set-top box 7804 can also include a wireless audio adapter as discussed above.
  • the set-top box 7804 can include the wireless transmission apparatus 7104 (and possibly the audio system 7102).
  • the set-top box 7804 can include the wireless transmitter 7204 (and possibly the audio system 7202) of the remote audio delivery system 7200.
  • the set-top box with directional speakers shown in FIG. 57 is able to transform conventional televisions into televisions whose audio systems have directional audio delivery (as well as wireless delivery to personal audio devices).
  • the ultrasonic beam is considered directed towards the ear as long as any portion of the beam, or the cone of the beam, is immediately proximate to, such as within 7 cm of, the ear.
  • the direction of the beam does not have to be directed at the ear. It can even be orthogonal to the ear, such as propagating up from one's shoulder, substantially parallel to the face of the person.
  • the audio system 7102 is stationary - meaning that the audio system 7102, although movable, generally remains in a fixed location.
  • the invention can be implemented in software, hardware or a combination of hardware and software.
  • a number of embodiments of the invention can also be embodied as computer readable code on a computer readable medium.
  • the computer readable medium is any data storage device that can store data which can thereafter be read by a computer system. Examples of the computer readable medium include read-only memory, random-access memory, CD-ROMs, magnetic tape, optical data storage devices, and carrier waves.
  • the computer readable medium can also be distributed over network-coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.
  • references to “one embodiment” or “an embodiment” mean that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention.
  • the appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
  • the order of blocks in process flowcharts or diagrams representing one or more embodiments of the invention does not inherently indicate any particular order nor imply any limitations in the invention.

Abstract

Audio signals from a directional speaker (16) are generated by transforming ultrasonic signals in air.

Description

DIRECTIONAL SPEAKERS
BACKGROUND OF THE INVENTION
Field of the Invention
The present invention relates generally to electronic devices with audio output, and more particularly, to directional speakers.
Description of the Related Art
Cell phones and other wireless communication devices have become an integral part of our lives. However, the proliferation of such devices has brought on its share of headaches and challenges.
For example, there is still a need for improved ways to enable a wireless communication device, such as a cellular phone, to be used hands-free so that its user can participate in conversations with greater ease of use, without an earpiece placed against the user's ear, while maintaining a certain degree of privacy.
A significant portion of our population has a certain degree of hearing loss. There is still a need for improved techniques to assist those who are mildly or moderately hearing impaired. Audio systems, such as stereo systems, DVD players, VCRs, and televisions, typically provide audio sounds to one or more users. There is also a need for improved approaches for audio systems to provide audio sounds to desirous persons while reducing disturbance to other persons in the same environment who are not desirous of hearing the audio sounds.
In addition, there is a need for improved approaches to providing wireless delivery of audio sounds from audio systems to personal audio devices that are not in the immediate neighborhood of the audio systems.
SUMMARY
A number of embodiments of the present invention are based on a directional speaker. The audio signals from the speaker can be generated by transforming ultrasonic signals in air. Different embodiments can be applied to a number of different areas, such as a cell phone, a hearing aid, a portable electronic device, and an entertainment system. The embodiments can be personalized to the hearing characteristics of the user, or to the ambient noise level of the environment.
One embodiment is applicable to a wireless communication system, such as a cell phone. The system can include an interface unit and a base unit. The audio signals from the speaker can be heard hands-free, while privacy protection is enhanced. The interface unit can be attached or integrated to a piece of clothing at the shoulder of the user, with the audio signals from the speaker directed towards one of the user's ears.
Another embodiment provides a hearing enhancement system that enhances a user's hearing based on a directional speaker. The system can include an interface unit that has the directional speaker and a microphone. The microphone captures input audio signals, which are transformed into ultrasonic signals. The speaker transmits the ultrasonic signals, which are transformed into output audio signals by air. At least a portion of the output audio signals has higher power than the input audio signals to enhance the hearing of the user. Based on the system, the user's ear remains free from any inserted objects and thus is free from the annoying occlusion effects. Compared to existing hearing aids, the system is relatively inexpensive. For example, the system does not require an individually-fitted ear mold.
Yet another embodiment uses a directional speaker in a portable electronic device, such as a handheld game console, to direct audio output in a directionally constrained manner. A certain degree of privacy with respect to the audio output is achieved, yet the user need not wear a headset or an ear phone, or have to hold a speaker against one's ear, while freeing up both of the user's hands. The directional speaker can be integral with the portable electronic device. Alternatively, the directional speaker can be attached or coupled to the portable electronic device. One embodiment pertains to a directional audio apparatus, such as an entertainment system, that provides directional delivery of audio output targeted to the one or more persons desirous of hearing the audio output. Consequently, other persons not desirous of hearing the audio output do not receive substantial amounts of the audio output and thus are less disturbed by the unwanted audio sounds. The directional audio apparatus includes a directional speaker. A number of the attributes of the audio output can be controlled, either by a user or by monitored measurements. Such attributes include the beam width, the beam direction, the degree of isolation or privacy, and the volume of the audio outputs. The audio output can also be personalized or modified according to the audio conditions of the surroundings of the apparatus. To control these attributes or characteristics, a number of approaches can be used. For example, the surface of the speaker can be segmented or curved, the ultrasonic frequencies can be changed, the phases to individual speaker elements can be adjusted, or the path lengths of the ultrasonic waves from the emitting surface of the speaker can be elongated before the audio output emits into free space. Also, more than one directional speaker can be used to generate stereo effects.
Yet another embodiment of the invention includes techniques to provide wireless delivery of audio sounds from audio systems to personal audio devices. These techniques permit users of the personal audio device to be mobile yet still acquire the audio sounds. According to one aspect of the invention, a wireless adapter can serve as an after-market modification to an audio system.
Other aspects and advantages of the present invention will become apparent from the following detailed description, which, when taken in conjunction with the accompanying drawings, illustrates by way of example the principles of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention will be readily understood by the following detailed description in conjunction with the accompanying drawings, wherein like reference numerals designate like structural elements, and in which:
Fig. 1 shows one embodiment of the invention with a base unit coupled to a directional speaker and a microphone.
Fig. 2 shows examples of characteristics of the directional speaker of the present invention.
Fig. 3 shows examples of mechanisms to set the direction of the audio signals of the present invention.
Fig. 4A shows one embodiment of a blazed grating for the present invention.
Fig. 4B shows an example of a wedge to direct the propagation angle of the audio signals in the present invention.
Fig. 5 shows an example of a steerable phase array of devices to generate the directional audio signals in the present invention.
Fig. 6 shows one example of an interface unit attached to a piece of clothing of a user in the present invention. Fig. 7 shows examples of mechanisms to couple the interface unit to a piece of clothing in the present invention.
Fig. 8 shows examples of different coupling techniques between the interface unit and the base unit in the present invention.
Fig. 9 shows examples of additional attributes ofthe wireless communication system in the present invention.
Fig. 10 shows examples of attributes of a power source for use with the present invention. Fig. 11A shows the phone being a hands-free or a normal mode phone according to one embodiment of the present invention.
Fig. 11B shows examples of different techniques to automatically select the mode of a dual mode phone in the present invention. Fig. 12 shows examples of different embodiments of the interface unit of the present invention.
Fig. 13 shows examples of additional applications for the present invention.
FIG. 14 shows another embodiment of the present invention. FIG. 15 shows a person wearing one embodiment of the present invention.
FIG. 16 shows different embodiments regarding frequency-dependent amplification of the present invention.
FIG. 17 shows a number of embodiments regarding calibration of the present invention.
FIG. 18A shows a number of embodiments regarding power management of the present invention.
FIG. 18B shows an embodiment of the interface unit with an electrical connection.
FIGS. 19A-19C show different embodiments regarding microphones in the present invention.
FIG. 20 shows embodiments of the present invention, which can also function as a phone. FIG. 21 is a flow diagram of call processing according to one embodiment of the invention.
FIG. 22 shows a number of embodiments regarding improving privacy of the present invention.
FIG. 23 shows a number of embodiments of the present invention accessing audio signals from other instruments wirelessly or through a wired connection.
FIG. 24A is a view of a mobile telephone with an integrated directional speaker according to one embodiment of the invention.
FIG. 24B is a perspective view of a flip-type mobile telephone with an integrated directional speaker according to another embodiment of the invention. FIG. 25 is a perspective view of a personal digital assistant with an integrated directional speaker according to one embodiment of the invention.
FIG. 26 is a block diagram of an electronic device with wireless communication capability according to one embodiment of the invention.
FIG. 27A is a block diagram of a directional audio conversion apparatus according to one embodiment of the invention. FIG. 27B is a block diagram of a pre-processor according to one embodiment of the invention.
FIG. 27C is a block diagram of an estimation circuit for a pre-processor according to one embodiment of the invention. FIG. 28 illustrates different embodiments of directional speaker characteristics according to the invention.
FIG. 29 is a flow diagram of audio signal processing according to one embodiment of the invention.
FIG. 30 is a flow diagram of speaker selection processing according to one embodiment of the invention.
FIG. 31 is a diagram indicating exemplary conditions that can be utilized to select the appropriate speaker.
FIG. 32A is a perspective view of a personal digital assistant with an attachable directional speaker according to another embodiment of the invention. FIG. 32B is a perspective view of a personal digital assistant with an attachable directional speaker according to another embodiment of the invention.
FIG. 33 is a perspective view of a mobile telephone with yet another attachable directional speaker according to one embodiment of the invention.
FIG. 34 is a diagram depicting examples of additional applications associated with the invention.
FIG. 35 is a block diagram of a directional audio delivery device coupled to an audio system according to one embodiment of the invention.
FIG. 36A is a block diagram of a directional audio delivery device according to one embodiment of the invention. FIG. 36B is a block diagram of a directional audio delivery device according to another embodiment of the invention.
FIG. 37A is a diagram illustrating a representative arrangement suitable for use by different embodiments of the invention.
FIG. 37B is a diagram of a representative building layout illustrating one application of the present invention. FIG. 38 is a flow diagram of directional audio delivery processing according to an embodiment of the invention.
FIG. 39 shows examples of attributes of the constrained audio output according to the invention. FIG. 40 is another representative building layout illustrating one application of the present invention.
FIG. 41 is a flow diagram of directional audio delivery processing according to another embodiment of the invention.
FIG. 42A is a flow diagram of directional audio delivery processing according to yet another embodiment of the invention.
FIG. 42B is a flow diagram of an environmental accommodation process according to one embodiment of the invention.
FIG. 42C is a flow diagram of an audio personalization process according to one embodiment of the invention. FIG. 43A is a perspective diagram of an ultrasonic transducer according to one embodiment of the invention.
FIG. 43B is a diagram that illustrates the ultrasonic transducer with its beam being produced for audio output according to an embodiment of the invention.
FIGs. 43C-43D illustrate two embodiments of the invention where the directional speakers are segmented.
FIGs. 43E-43G show changes in beam width based on different carrier frequencies according to an embodiment of the present invention.
FIGs. 44A-44B are diagrams of two embodiments of the invention where the directional speakers have curved surfaces to expand the beam. FIG. 44C shows beam expansion based on a convex reflector according to an embodiment of the invention.
FIGs. 45A-45B show two embodiments of the invention whose directional speakers have curved surfaces that are segmented. FIGs. 46A and 46B are perspective diagrams of audio systems with directional audio delivery devices in a set-top-box environment according to different embodiments of the present invention.
FIG. 47 is a perspective diagram of a remote control device according to one embodiment of the invention.
FIGs. 48A-48B show two embodiments of the invention with directional audio delivery devices that allow ultrasonic signals to bounce back and forth before emitting into free space.
FIG. 49 shows two directional audio delivery devices spaced apart to generate stereo effects according to one embodiment of the present invention. FIG. 50 is a block diagram of a remote audio delivery system according to one embodiment of the invention.
FIG. 51 is a block diagram of a remote audio delivery system according to another embodiment of the invention.
FIG. 52 is a block diagram of a remote audio delivery system according to yet another embodiment of the invention.
FIG. 53 is a diagram of a building layout illustrating use of different embodiments of the present invention.
FIG. 54 is a flow diagram of a remote audio delivery process according to one embodiment of the invention. FIG. 55A is a flow diagram of an environmental accommodation process according to one embodiment of the invention.
FIG. 55B is a flow diagram of an audio personalization process according to one embodiment of the invention.
FIGs. 56A-B illustrate ultrasonic transducers according to one embodiment of the invention.
FIG. 57 is a perspective diagram of audio systems that provide directional audio delivery to interested users.
DETAILED DESCRIPTION OF THE INVENTION
Embodiments of the invention are discussed below with reference to FIGs. 1-57. However, those skilled in the art will readily appreciate that the detailed description given herein with respect to these figures is for explanatory purposes as the invention extends beyond these limited embodiments.
One embodiment of the present invention is a wireless communication system that provides improved hands-free usage. The wireless communication system can, for example, be a mobile phone. Fig. 1 shows a block diagram of a wireless communication system 10 according to one embodiment of the invention. The wireless communication system 10 has a base unit 12 that is coupled to an interface unit 14. The interface unit 14 includes a directional speaker 16 and a microphone 18. The directional speaker 16 generates directional audio signals.
From basic aperture antenna theory, the angular beam width θ of a source, such as the directional speaker, is roughly λ / D, where θ is the angular full width at half-maximum (FWHM), λ is the wavelength and D is the diameter of the aperture. For simplicity, assume the aperture to be circular.
For ordinary audible signals, the frequency is from a few hundred hertz, such as 500 Hz, to a few thousand hertz, such as 5000 Hz. With the speed of sound in air c being 340 m/s, λ of ordinary audible signals is roughly between 70 cm and 7 cm. For personal or portable applications, the dimension of a speaker can be in the order of a few cm. Given that the acoustic wavelength is much larger than a few cm, such a speaker is almost omni-directional. That is, the sound source is emitting energy almost uniformly in all directions. This can be undesirable if one needs privacy because an omni-directional sound source means that anyone in any direction can pick up the audio signals. To increase the directivity of the sound source, one approach is to decrease the wavelength of sound, but this can put the sound frequency out of the audible range. Another technique is known as parametric acoustics.
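The following short calculation, a sketch rather than part of the original description, applies the θ ≈ λ/D estimate to the numbers above: audible wavelengths of tens of centimeters make a few-centimeter aperture effectively omni-directional, whereas a 40 kHz ultrasonic carrier gives a beam on the order of 20 degrees wide from a 2.5 cm aperture.

```python
# Numerical check of theta ~ lambda / D for a 2.5 cm aperture, comparing
# audible frequencies with a 40 kHz ultrasonic carrier (speed of sound 340 m/s).
import math

C = 340.0      # speed of sound in air, m/s
D = 0.025      # aperture diameter, m

for f_hz in (500.0, 5000.0, 40000.0):
    wavelength = C / f_hz
    if wavelength >= D:
        desc = "effectively omni-directional (lambda >= D)"
    else:
        desc = f"theta ~ {math.degrees(wavelength / D):4.1f} deg FWHM"
    print(f"{f_hz:7.0f} Hz: lambda = {100 * wavelength:5.2f} cm, {desc}")
```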
Parametric acoustic operation has previously been discussed, for example, in the following publications: "Parametric Acoustic Array," by P. J. Westervelt, in J. Acoust. Soc. Am., Vol. 35 (4), pp. 535-537, 1963; "Possible exploitation of Non-Linear Acoustics in Underwater Transmitting Applications," by H. O. Berktay, in J. Sound Vib. Vol. 2 (4):435-461 (1965); and "Parametric Array in Air," by Bennett et al., in J. Acoust. Soc. Am., Vol. 57 (3), pp. 562-568, 1975.
In one embodiment, assume that the audible acoustic signal is f(t) where f(t) is a band-limited signal, such as from 500 to 5,000 Hz. A modulated signal f(t) sin ωc t is created to drive an acoustic transducer. The carrier frequency ωc/2π should be much larger than the highest frequency component of f(t). In an example, the carrier wave is an ultrasonic wave. The acoustic transducer should have a sufficiently wide bandwidth at ωc to cover the frequency band of the incoming signal f(t). After this signal f(t) sin ωc t is emitted from the transducer, non-linear demodulation occurs in air, creating an audible signal, E(t), where
E(t) ∝ d²/dt² [ f²(τ) ]
with τ = t − L / c, and L being the distance between the source and the receiving ear. In this example, the demodulated audio signal is proportional to the second time derivative of the square of the modulating envelope f(t).
To retrieve the audio signal f(t) more accurately, a number of approaches pre-process the original audio signals before feeding them into the transducer. Each has its specific attributes and advantages. One pre-processing approach is disclosed in "Acoustic Self-demodulation of Pre-distorted Carriers," by B. A. Davy, Master's Thesis submitted to U. T. Austin in 1972. The disclosed technique integrates the signal f(t) twice, and then square-roots the result before multiplying it with the carrier sin ωc t. The resultant signals are applied to the transducer. In doing so, infinite harmonics of f(t) could be generated, and a finite transmission bandwidth can create distortion. Another pre-processing approach is described in "The audio spotlight: An application of nonlinear interaction of sound waves to a new type of loudspeaker design," by Yoneyama et al., Journal of the Acoustical Society of America, Vol. 73 (5), pp. 1532-1536, May 1983. The pre-processing scheme depends on double side-band (DSB) modulation: Let S(t) = 1 + m f(t), where m is the modulation index. S(t) sin ωc t is used to drive the acoustic transducer instead of f(t) sin ωc t. Thus, E(t) ∝ d²/dt² [ S²(τ) ] ∝ 2m d²/dt² [ f(τ) ] + m² d²/dt² [ f²(τ) ].
The first term provides the original audio signal. But the second term can produce undesirable distortions as a result of the DSB modulation. One way to reduce the distortions is by lowering the modulation index m. However, lowering m may also reduce the overall power efficiency of the system.
In "Development of a parametric loudspeaker for practical use," Proceedings of 10th International Symposium on Non-linear Acoustics, pp. 147-150, 1984, Kamakura et al. introduced a pre-processing approach to remove the undesirable terms. It uses a modified amplitude modulation (MAM) technique by defining S(t) = [1 + m f(t) ]1 2. That is, the demodulated signal E(t) oc m f(t). The square-rooted envelope operation ofthe MAM signal can broaden the bandwidth of S(t), and can require an infinite transmission bandwidth for distortion- free demodulation. In "Suitable Modulation of the Carrier Ultrasound for a Parametric Loudspeaker,"
Acoustica, Vol. 23, pp. 215-217, 1991, Kamakura et al. introduced another pre-processing scheme, known as "envelope modulation". In this scheme, S(t) = [e(t) + m f(t)]^(1/2), where e(t) is the envelope of f(t). The transmitted power was reduced by over 64% using this scheme and the distortion was better than the DSB or single-side band (SSB) modulation, as described in "Self-demodulation of a plane-wave - Study on primary wave modulation for wideband signal transmission," by Aoki et al., J. Acoust. Soc. Jpn., Vol. 40, pp. 346-349, 1984.
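As a concrete and purely illustrative sketch of the two pre-processing schemes discussed above, the fragment below builds the DSB drive signal S(t) sin ωc t with S(t) = 1 + m f(t) and the MAM drive signal with S(t) = [1 + m f(t)]^(1/2) for a band-limited test tone. The sample rate, carrier frequency, and modulation index are assumptions chosen for the example; a real transducer driver would use its own values.

```python
# Sketch of DSB and MAM pre-processing applied to a band-limited test signal.
# Sample rate, carrier, and modulation index are illustrative assumptions.
import numpy as np

FS = 192_000                    # sample rate high enough for a 40 kHz carrier
FC = 40_000                     # ultrasonic carrier frequency, Hz
M = 0.8                         # modulation index m

t = np.arange(FS) / FS                            # one second of samples
f = 0.5 * np.sin(2 * np.pi * 1000 * t)            # band-limited audio signal f(t)
carrier = np.sin(2 * np.pi * FC * t)

dsb_drive = (1.0 + M * f) * carrier               # DSB: S(t) = 1 + m f(t)
mam_drive = np.sqrt(np.clip(1.0 + M * f, 0.0, None)) * carrier   # MAM: S(t) = [1 + m f(t)]^(1/2)
```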
Back to directivity, the modulated signals, S(t) sin ωc t or f(t) sin ωc t, have a better directivity than the original acoustic signal f(t), because ωc is higher than the audible frequencies. As an example, ωc can be 2π*40 kHz, though experiment has shown that ωc can range from 2π*20 kHz to well over 2π*1 MHz. Typically, ωc is chosen not to be too high because of the higher acoustic absorption at higher carrier frequencies. In any case, with ωc being 2π*40 kHz, the modulated signals have frequencies that are approximately ten times higher than the audible frequencies. This makes an emitting source with a small aperture, such as 2.5 cm in diameter, a directional device for a wide range of audio signals. In one embodiment, choosing a proper working carrier frequency ωc takes into consideration a number of factors, such as:
• To reduce the acoustic attenuation, which is generally proportional to ωc², the carrier frequency ωc should not be high. • The FWHM of the ultrasonic beam should be large enough, such as 25 degrees, to accommodate head motions of the person wearing the portable device and to reduce the ultrasonic intensity through beam expansion.
• To avoid the near-field effect, which may cause amplitude fluctuations, the distance r between the emitting device and the receiving ear should be greater than 0.3*R0, where R0 is the Rayleigh distance, defined as (the area of the emitting aperture / λ). As an example, with the FWHM being 20 degrees, θ = λ / D = (c 2π / ωc) / D ≈ 1/3. Assuming D is 2.5 cm, ωc becomes 2π*40 kHz. From this relation, it can be seen that the directivity of the ultrasonic beam can be adjusted by changing the carrier frequency ωc. If a smaller aperture acoustic transducer is preferred, the directivity may decrease. Note also that the power generated by the acoustic transducer is typically proportional to the aperture area. In the above example, the Rayleigh distance R0 is about 57 mm.
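The worked numbers in the last item can be reproduced directly, as the short sketch below shows; with a 20-degree FWHM and a 2.5 cm aperture it returns a carrier near 40 kHz and a Rayleigh distance near 57 mm (the small differences come from the text rounding θ to 1/3 radian).

```python
# Reproduce the quoted example: FWHM ~ 20 degrees and D = 2.5 cm imply a
# carrier near 40 kHz and a Rayleigh distance R0 = (aperture area)/lambda
# of roughly 57 mm.
import math

C = 340.0                        # speed of sound in air, m/s
D = 0.025                        # aperture diameter, m
theta = math.radians(20.0)       # desired FWHM, radians

wavelength = theta * D                       # from theta ~ lambda / D
fc = C / wavelength                          # carrier frequency, Hz
r0 = (math.pi * (D / 2) ** 2) / wavelength   # Rayleigh distance, m

print(f"carrier ~ {fc / 1000:.0f} kHz, Rayleigh distance ~ {1000 * r0:.0f} mm")
```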
Accordingly, in one embodiment, directional audio signals can be generated by the speaker 16 even with a relatively small aperture through modulated ultrasonic signals. The modulated signals can be demodulated in air to regenerate the audio signals. The speaker can then generate directional audio signals even when emitted from an aperture that is in the order of a few centimeters. This allows the directional audio signals to be pointed in desired directions.
Note that a number of examples have been described on generating audio signals through demodulating ultrasonic signals. However, the audio signals can also be generated through mixing two ultrasonic signals whose difference frequencies are the audio signals.
Fig. 2 shows examples of characteristics of a directional speaker. The directional speaker can, for example, be the directional speaker 16 illustrated in Fig. 1. The directional speaker can use a piezoelectric thin film. The piezoelectric thin film can be deposited on a plate with many cylindrical tubes. An example of such a device is described in US Patent No. 6,011,855, which is hereby incorporated by reference. The film can be a polyvinylidene di-fluoride (PVDF) film, and can be biased by metal electrodes. The film can be attached or glued to the perimeter of the plate of tubes. The total emitting surfaces of all of the tubes can have a dimension in the order of a few wavelengths of the carrier or ultrasonic signals. Appropriate voltages applied through the electrodes to the piezoelectric thin film create vibrations of the thin film to generate the modulated ultrasonic signals. These signals cause resonance of the enclosed tubes. After being emitted from the film, the ultrasonic signals self-demodulate through non-linear mixing in air to produce the audio signals.
As one example, the piezoelectric film can be about 28 microns in thickness; and the tubes can be 9/64" in diameter and spaced apart by 0.16", from center to center of the tube, to create a resonating frequency of around 40 kHz. With the ultrasonic signals being centered around 40 kHz, the emitting surface of the directional speaker can be around 2 cm by 2 cm. A significant percentage of the ultrasonic power generated by the directional speaker can, in effect, be confined in a cone. To calculate the amount of ultrasonic power within the cone, for example, as a rough estimation, assume that (a) the emitting surface is a uniform circular aperture with the diameter of 2.8 cm, (b) the wavelength of the ultrasonic signals is 8.7 mm, and (c) all power goes to the forward hemisphere; then the ultrasonic power contained within the FWHM of the main lobe is about 97%, and the power contained from null to null of the main lobe is about 97.36%. Similarly, again as a rough estimation, if the diameter of the aperture drops to 1 cm, the power contained within the FWHM of the main lobe is about 97.2%, and the power contained from null to null of the main lobe is about 99%.
Referring back to the example of the piezoelectric film, the FWHM of the signal beam is about 24 degrees. Assume that such a directional speaker 16 is placed on the shoulder of a user. The output from the speaker can be directed in the direction of one of the ears of the user, with the distance between the shoulder and the ear being, for example, 8 inches. More than 75% of the power of the audio signals generated by the emitting surface of the directional speaker can, in effect, be confined in a cone. The tip of the cone is at the speaker, and the mouth of the cone is at the location of the user's ear. The diameter of the mouth of the cone, or the diameter of the cone in the vicinity of the ear, is less than about 4 inches. In another embodiment, the directional speaker can be made of a bimorph piezoelectric transducer. The transducer can have a cone of about 1 cm in diameter. In yet another embodiment, the directional speaker can be a magnetic transducer. In a further embodiment, the directional speaker does not generate ultrasonic signals, but generates audio signals directly; and the speaker includes, for example, a physical horn or cone to direct the audio signals.
In yet another embodiment, the power output from the directional speaker is increased by increasing the transformation efficiency (e.g. demodulation or mixing efficiency) of the ultrasonic signals. According to Berktay's formula, as disclosed, for example, in "Possible exploitation of Non-Linear Acoustics in Underwater Transmitting Applications," by H. O. Berktay, in J. Sound Vib. Vol. 2 (4):435-461 (1965), the output audio power is proportional to the coefficient of non-linearity of the mixing or demodulation medium.
As explained, in one embodiment, based on parametric acoustic techniques, directional audio signals can be generated. Fig. 3 shows examples of mechanisms to direct the ultrasonic signals. They represent different approaches, which can utilize, for example, a grating, a malleable wire, or a wedge.
Fig. 4A shows one embodiment of a directional speaker 50 having a blazed grating. The speaker 50 is, for example, suitable for use as the directional speaker 16. Each emitting device, such as 52 and 54, of the speaker 50 can be a piezoelectric device or another type of speaker device located on a step of the grating. In one embodiment, the sum of all of the emitting surfaces of the emitting devices can have a dimension in the order of a few wavelengths of the ultrasonic signals.
In another embodiment, each of the emitting devices can be driven by a replica of the ultrasonic signals with an appropriate delay to cause constructive interference of the emitted waves at the blazing normal 56, which is the direction orthogonal to the grating. This is similar to the beam steering operation of a phase array, and can be implemented by a delay matrix. The delay between adjacent emitting surfaces can be approximately h/c, with the height of each step being h. One approach to simplify signal processing is to arrange the height of each grating step to be an integral multiple of the ultrasonic or carrier wavelength, and all the emitting devices can be driven by the same ultrasonic signals. Based on the grating structure, the array direction of the virtual audio sources can be the blazing normal 56. In other words, the structure of the steps can set the propagation direction of the audio signals. In the example shown in Fig. 4A, there are three emitting devices or speaker devices, one on each step. The total emitting surfaces are the sum of the emitting surfaces of the three devices. The propagation direction is approximately 45 degrees from the horizontal plane. The thickness of each speaker device can be less than half the wavelength of the ultrasonic waves. If the frequency of the ultrasonic waves is 40 kHz, the thickness can be about 4 mm.
Another approach to direct the audio signals to specific directions is to position a directional speaker of the present invention at the end of a malleable wire. The user can bend the wire to adjust the direction of propagation of the audio signals. For example, if the speaker is placed on the shoulder of a user, the user can bend the wire such that the ultrasonic signals produced by the speaker are directed towards the ear adjacent to the shoulder of the user.
Still another approach is to position the speaker device on a wedge. Fig. 4B shows an example of a wedge 75 with a speaker device 77. The angle of the wedge from the horizontal can be about 40 degrees. This sets the propagation direction 79 of the audio signals to be about 50 degrees from the horizon.
In one embodiment, the ultrasonic signals are generated by a steerable phase array of individual devices, as illustrated, for example, in Fig. 5. They generate the directional signals by constructive interference of the devices. The signal beam is steerable by changing the relative phases among the array of devices.
One way to change the phases in one direction is to use a one-dimensional array of shift registers. Each register shifts or delays the ultrasonic signals by the same amount. This array can steer the beam by changing the clock frequency of the shift registers. These can be known as "x" shift registers. To steer the beam independently in an orthogonal direction as well, one approach is to have a second set of shift registers controlled by a second variable rate clock.
This second set of registers, known as "y" shift registers, is separated into a number of subsets of registers. Each subset can be an array of shift registers, and each array is connected to one "x" shift register. The beam can be steered in the orthogonal direction by changing the frequency of the second variable rate clock. For example, as shown in Fig. 5, the acoustic phase array is a 4 by 4 array of speaker devices. The devices in the acoustic phase array are the same. For example, each can be a bimorph device or transmitter 7 mm in diameter. The overall size of the array can be around 2.8 cm by 2.8 cm. The carrier frequency can be set to 100 kHz. Each bimorph is driven at less than 0.1 W. The array is planar, but each bimorph is pointed at the ear, such as at about 45 degrees to the array normal. The FWHM main lobe of each individual bimorph is about 0.5 radian.
There can be 4 "x" shift registers. Each "x" shift register can be connected to an array of 4 "y" shift registers to create a 4 by 4 array of shift registers. The clocks can run at approximately 10 MHz (100 ns per shift). The ultrasonic signals can be transmitted in digital format and delayed by the shift registers by the specified amounts.
Assuming the distance of the array from an ear is approximately 20 cm, the main lobe of each array device covers an area of roughly 10 cm x 10 cm around the ear. As the head can move over an area of 10 cm x 10 cm, the beam can be steerable roughly by a phase of 0.5 radian over each direction. This is equivalent to a maximum relative time delay of 40 µs across one direction of the phase array, or 5 µs of delay per device.
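To make the delay bookkeeping concrete, the sketch below is a hypothetical illustration using the representative numbers above, not the patented circuit: it converts a desired steering angle into per-element delays and the corresponding number of 100 ns register shifts.

```python
# Minimal sketch (illustrative numbers from the example above) of how steering
# delays map onto shift-register clock ticks. Names are hypothetical.
import numpy as np

c = 343.0            # speed of sound in air, m/s
n = 4                # 4 x 4 array
pitch = 7e-3         # element spacing, ~7 mm (bimorph diameter)
clock = 10e6         # shift-register clock, 10 MHz -> 100 ns per shift

def steering_delays(theta_rad):
    """Per-element delays (seconds) to tilt the beam by theta in one axis."""
    element_index = np.arange(n)
    delays = element_index * pitch * np.sin(theta_rad) / c
    ticks = np.round(delays * clock).astype(int)   # number of register shifts
    return delays, ticks

delays, ticks = steering_delays(0.25)   # steer ~0.25 rad off the array normal
print(delays * 1e6)   # ~5 us per adjacent element, consistent with the text
print(ticks)          # e.g. [0, 50, 101, 151] clock shifts
```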
For an n by n array, the ultrasonic beams from the array elements interfere with one another to produce a final beam that is narrower in beam width by a factor of 1/n. In the above example, n is equal to 4, and the beam shape of the phase array is narrowed by a factor of 4 in each direction. That is, the FWHM is less than 8 degrees, covering an area of roughly 2.8 cm x 2.8 cm around the ear.
With power focused into a smaller area, the power requirement is reduced by a factor of 1/n², significantly improving power efficiency. In one embodiment, the above array can give an acoustic power of over 90 dB SPL.
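The numbers quoted above can be checked with a few lines of arithmetic; the sketch below is only an illustrative consistency check under a small-angle assumption, not part of the disclosure.

```python
# Quick arithmetic check (illustrative, small-angle assumption) of the
# beam-narrowing claim for the 4 x 4 example above.
import math

fwhm_element = 0.5            # FWHM of a single bimorph, radians (~28.6 degrees)
n = 4
fwhm_array = fwhm_element / n # narrowed by a factor of n in each direction
print(math.degrees(fwhm_array))        # ~7.2 degrees, i.e. less than 8 degrees

distance = 0.20               # array-to-ear distance, m
spot = 2 * distance * math.tan(fwhm_array / 2)
print(f"{spot*100:.1f} cm")   # ~2.5 cm footprint, of the order of the 2.8 cm quoted

power_saving = 1 / n**2       # power needed scales with the illuminated area
print(power_saving)           # 1/16, consistent with the 1/n^2 reduction
```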
Instead of using the bimorph devices, the above example can use an array of piezoelectric thin film devices.
In one embodiment, the interface unit can also include a pattern recognition device that identifies and locates the ear, or the ear canal. Then, if the ear or the canal can be identified, the beam is steered more accurately to the opening of the ear canal. Based on closed loop control, the propagation direction of the ultrasonic signals can be steered by the results of the pattern recognition approach.
One pattern recognition approach is based on thermal mapping to identify the entrance to the ear canal. Thermal mapping can be through infrared sensors. Another pattern recognition approach is based on a pulsed-infrared LED, and a reticon or CCD array for detection. The reticon or CCD array can have a broadband interference filter on top to filter light, which can be a piece of glass with coating.
Note that if the system cannot identify the location of the ear or the ear canal, the system can expand the cone, or decrease its directivity. For example, all array elements can emit the same ultrasonic signals, without delay, but with the frequency decreased.
Privacy is often a concern for users of cell phones. Unlike music or video players, where users passively receive information or entertainment, with cell phones there is two-way communication. In most circumstances, cell phone users have become accustomed to people hearing what they have to say. At least, they can control or adjust their part of the communication. However, cell phone users typically do not want others to be aware of their entire dialogue. Hence, for many applications, at least the voice output portion of the cell phone should provide some level of privacy. With the directional speaker as discussed herein, the audio signals are directional, and thus the wireless communication system provides a certain degree of privacy protection.
Fig. 6 shows one example of the interface unit 100 attached to a jacket 102 of the user. The interface unit 100 includes a directional speaker 104 and a microphone 106. The directional speaker 104 emits ultrasonic signals in the general direction of an ear of the user. The ultrasonic signals are transformed by mixing or demodulating in the air between the speaker and the ear. The directional ultrasonic signals confine most of the audio energy within a cone 108 that is pointed towards the ear of the user. The surface area of the cone 108 when it reaches the head of the user can be tailored to be smaller than the head of the user. Hence, the directional ultrasonic signals are able to provide a certain degree of privacy protection.
In one embodiment, one or more additional speaker devices are provided within, proximate to, or around the directional speaker. The user's head can scatter a portion of the received audio signals. Others in the vicinity of the user may be able to pick up these scattered signals. The additional speaker devices, which can be piezoelectric devices, transmit random signals to interfere with or corrupt the scattered signals, or other signals that may be emitted outside the cone 108 of the directional signals, to reduce the chance of others comprehending the scattered signals.
Fig. 7 shows examples of mechanisms to couple an interface unit to a piece of clothing. For example, the interface unit can be integrated into a user's clothing, such as located between the outer surface of the clothing and its inner lining. To receive power or other information from the outside, the interface unit can have an electrical protrusion from the inside of the clothing. Instead of being integrated into the clothing, in another embodiment, the interface unit can be attachable to the user's clothing. For example, a user can attach the interface unit to his clothing, and then turn it on. Once attached, the unit can be operated hands-free. The interface unit can be attached to a strap on the clothing, such as the shoulder strap of a jacket. The attachment can be through a clip, a pin or a hook. There can be a small pocket, such as at the collar bone area or the shoulder of the clothing, with a mechanism (e.g., a button) to close the opening of the pocket. The interface unit can be located in the pocket. In another example, Velcro can be on both the interface unit and the clothing for attachment purposes. The interface unit can also be attached by a band, which can be elastic (e.g., an elastic armband). Or, the interface unit can hang from the neck of the user on a piece of string, like an ornamental design on a necklace. In yet another example, the interface unit can have a magnet, which can be magnetically attached to a magnet on the clothing. Note that one or more of these mechanisms can be combined to further secure the attachment. In yet another example, the interface unit can be disposable. For example, the interface unit could be disposed of once it runs out of power.
Regarding the coupling between the interface unit and the base unit, Fig. 8 shows examples of a number of coupling techniques. The interface unit may be coupled wirelessly, or tethered to the base unit through a wire. In the wireless embodiment, the interface unit may be coupled through Bluetooth, WiFi, Ultrawideband (UWB) or another wireless network/protocol.
Fig. 9 shows examples of additional attributes of the wireless communication system of the present invention. The system can include additional signal processing techniques. Typically, single-sideband (SSB) or lower-sideband (LSB) modulation can be used, with or without compensation for fidelity reproduction. If compensation is used, a processor (e.g., a digital signal processor) can be deployed based on known techniques. Other components/functions can also be integrated with the processor. These can include local oscillation for down- or up-conversion and impedance matching circuitry. Echo cancellation techniques may also be included in the circuitry. However, since the speaker is directional, the echo cancellation circuitry may not be necessary. These other functions can also be performed by software (e.g., firmware or microcode) executed by the processor.
The base unit can have one or more antennae to communicate with base stations or other wireless devices. Additional antennae can improve antenna efficiency. In the case where the interface unit wirelessly couples to the base unit, the antenna on the base unit can also be used to communicate with the interface unit. In this situation, the interface unit may also have more than one antenna.
The antenna can be integrated into the clothing. For example, the antenna and the base unit can both be integrated into the clothing. The antenna can be located at the back of the clothing.
The system can have a maximum power controller that controls the maximum amount of power delivered from the interface unit. For example, the average output audio power can be set to be around 60 dB, and the maximum power controller limits the maximum output power to below 70 dB. In one embodiment, this maximum power controller is in the interface unit and the limit is adjustable.
The wireless communication system may be voice activated. For example, a user can enter phone numbers using voice commands. Information, such as phone numbers, can also be entered into a separate computer and then downloaded to the communication system. The user can then use voice commands to make connections to other phones.
The wireless communication system can have an in-use indicator. For example, if the system is in operation as a cell phone, and if the user is talking on the phone, there can be a light-emitting diode blinking at the interface unit. The in-use indicator allows others to be aware that the user is, for example, on the phone. In yet another embodiment, the base unit of the wireless communication system can also be integrated into the piece of clothing. The base unit can have a data port to exchange information and a power plug to receive power. Such port or ports can protrude from the clothing.
Fig. 10 shows examples of attributes of the power source. The power source may be a rechargeable battery or a non-rechargeable battery. As an example, a bimorph piezoelectric device, such as the AT/R40-12P from Nicera, Nippon Ceramic Co., Ltd., can be used as a speaker device to form the speaker. It has a resistance of 1,000 ohms. Its power dissipation can be in the milliwatt range. A coin-type battery that can store a few hundred mAh has sufficient capacity to run the unit for a limited duration of time. Other types of batteries are also applicable.
The power source can be from a DC supply. The power source can be attachable to, or integrated or embedded in, a piece of clothing worn by the user. The power source can be a rechargeable battery. In one embodiment, a rechargeable battery can be integrated into the piece of clothing, with its charging port exposed. The user can charge the battery on the road. For example, if the user is driving, the user can use a cigarette-lighter type charger to recharge the battery. In yet another embodiment, the power source is a fuel cell. The cell can use a cartridge of fuel, such as methanol.
A number of embodiments have been described where the wireless communication system is a phone, particularly a cell phone that can be operated hands-free. In one embodiment, this can be considered a hands-free mode phone. Fig. 11A shows one embodiment where the phone can alternatively be a dual-mode phone. In a normal-mode phone, the audio signals are produced directly from a speaker integral with the phone (e.g., within its housing). Such a speaker is normally substantially non-directional, or does not generate audio signals through transforming ultrasonic signals in air. In a dual-mode phone, one mode is the hands-free mode phone as described above, and the other mode is the normal-mode phone.
The mode selection process can be set by a switch on the phone. In one embodiment, mode selection can be automatic. Fig. 11B shows examples of different techniques to automatically select the mode of a dual-mode phone. For example, if the phone is attached to the clothing, the directional speaker of the interface unit can be automatically activated, and the phone becomes the hands-free mode phone. In one embodiment, automatic activation can be achieved through a switch integrated into the phone. The switch can be a magnetically-activated switch. For example, when the interface unit is attached to clothing (for hands-free usage), a magnet or a piece of magnetizable material in the clothing can cause the phone to operate in the hands-free mode. When the phone is detached from clothing, the magnetically-activated switch can cause the phone to operate as a normal-mode phone. In another example, the switch can be mechanical. For example, an on/off button on the unit can be mechanically activated when the unit is attached. This can be done, for example, by a lever such that when the unit is attached, the lever is automatically pressed. In yet another example, activation can be based on orientation. If the interface unit is substantially in a horizontal orientation (e.g., within 30 degrees from the horizontal), the phone will operate in the hands-free mode. However, if the unit is substantially in a vertical orientation (e.g., within 45 degrees from the vertical), the phone will operate as a normal-mode phone. A gyro in the interface unit can be used to determine the orientation of the interface unit.
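A minimal sketch of the automatic mode selection just described is shown below; the function name and thresholds are hypothetical, chosen only to match the example angles above, and a real design would combine whichever of the switch and orientation inputs are actually present.

```python
# Minimal sketch (hypothetical names and thresholds) of automatic mode
# selection: an attachment switch takes priority, otherwise the orientation
# reported by a gyro decides the mode.
def select_mode(attached_switch_closed: bool, tilt_from_horizontal_deg: float) -> str:
    """Return 'hands-free' or 'normal' for a dual-mode phone."""
    if attached_switch_closed:
        # unit is clipped to clothing -> directional speaker active
        return "hands-free"
    if tilt_from_horizontal_deg <= 30:
        # substantially horizontal orientation
        return "hands-free"
    if tilt_from_horizontal_deg >= 45:
        # substantially vertical orientation (within 45 degrees of vertical)
        return "normal"
    return "normal"  # in-between band; a real design might add hysteresis

print(select_mode(False, 20))   # -> hands-free
print(select_mode(False, 80))   # -> normal
```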
A number of embodiments have been described where the wireless communication system is a phone with a directional speaker and a microphone. However, the present invention can be applied to other areas. Fig. 12 shows examples of other embodiments of the interface unit, and Fig. 13 shows examples of additional applications.
The interface unit can have two speakers, each propagating its directional audio signals towards one of the ears of the user. For example, one speaker can be on one shoulder of the user, and the other speaker on the other shoulder. The two speakers can provide a stereo effect for the user.
A number of embodiments have been described where the microphone and the speaker are integrated together in a single package. In another embodiment, the microphone can be a separate component and can be attached to the clothing as well. For wired connections, the wires from the base unit can connect to the speaker and at least one wire can split off and connect to the microphone at a location close to the head of the user.
The interface unit does not need to include a microphone. Such a wireless communication system can be used as an audio unit, such as an MP3 player, a CD player or a radio. Such wireless communication systems can be considered one-way communication systems. In another embodiment, the interface unit can be used as the audio output, such as for a stereo system, a television or a video game player. For example, the user can be playing a video game. Instead of having the audio signals transmitted by a normal speaker, the audio signals, or a representation of the audio signals, are transmitted wirelessly to a base unit or an interface unit. Then, the user can hear the audio signals in a directional manner, reducing the chance of annoying or disturbing people in his immediate environment. In another embodiment, the base unit and the interface unit are integrated together in a package, which again can be attached to the clothing by techniques previously described for the interface unit.
In yet another embodiment, the interface unit can include a monitor or a display. A user can watch television or video signals in public, again with reduced possibility of disturbing people in the immediate surroundings because the audio signals are directional. For wireless applications, video signals can be transmitted from the base unit to the interface unit through UWB signals.
The base unit can also include the capability to serve as a computation system, such as in a personal digital assistant (PDA) or a notebook computer. For example, as a user is working on the computation system for various tasks, the user can simultaneously communicate with another person in a hands-free manner using the interface unit, without the need to take her hands off the computation system. Data generated by a software application the user is working on using the computation system can be transmitted digitally with the voice signals to a remote device (e.g., another base station or unit). In this embodiment, the directional speaker does not have to be integrated or attached to the clothing of the user. Instead, the speaker can be integrated or attached to the computation system, and the computation system can function as a cell phone. Directional audio signals from the phone call can be generated for the user while the user is still able to manipulate the computation system with both hands. The user can simultaneously make phone calls and use the computation system. In yet another approach for this embodiment, the computation system is also enabled to be connected wirelessly to a local area network, such as a WiFi or WLAN network, which allows high-speed data as well as voice communication with the network. For example, the user can make voice over IP calls. In one embodiment, the high-speed data and voice communication permits signals to be transmitted wirelessly at frequencies beyond 1 GHz. In yet another embodiment, the wireless communication system can be a personalized wireless communication system. The audio signals can be personalized to the hearing characteristics of the user of the system. The personalization process can be done periodically, such as once every year, similar to periodic re-calibration. Such re-calibration can be done by another device, and the results can be stored in a memory device. The memory device can be a removable media card, which can be inserted into the wireless communication system to personalize the amplification characteristics of the directional speaker as a function of frequency. The system can also include an equalizer that allows the user to personalize the amplitude of the speaker audio signals as a function of frequency.
The system can also be personalized based on the noise level in the vicinity of the user. The device can sense the noise level in its immediate vicinity and change the amplitude characteristics of the audio signals as a function of noise level.
The form factor of the interface unit can be quite compact. In one embodiment, it is rectangular in shape. For example, it can have a width of about "x", a length of about "2x", and a thickness that is less than "x". "x" can be 1.5 inches, or less than 3 inches. In another example, the interface unit has a thickness of less than 1 inch. In yet another example, the interface unit does not have to be flat. It can have a curvature to conform to the physical profile of the user.
A number of embodiments have been described with the speaker being directional. In one embodiment, a speaker is considered directional if the FWHM of its ultrasonic signals is less than about 1 radian, or around 57 degrees. In another embodiment, a speaker is considered directional if the FWHM of its ultrasonic signals is less than about 30 degrees. In yet another embodiment, a speaker is transmitting from, for example, the shoulder of the user, or a speaker is transmitting signals towards a user's ear. The speaker is considered directional if, in the vicinity of the user's ear or in the vicinity 6-8 inches away from the speaker, 75% of the power of its audio signals is within an area of less than 50 square inches. In a further embodiment, a speaker is considered directional if, in the vicinity of the ear or in the vicinity a number of inches, such as 8 inches, away from the speaker, 75% of the power of its audio signals is within an area of less than 20 square inches. In yet a further embodiment, a speaker is considered directional if, in the vicinity of the ear or in the vicinity a number of inches, such as 8 inches, away from the speaker, 75% of the power of its audio signals is within an area of less than 13 square inches. Also, referring back to Fig. 6, in one embodiment, a speaker can be considered a directional speaker if most of the power of its audio signals is propagating in one general direction, confined within a virtual cone, such as the cone 108 in Fig. 6, and the angle between the two sides or edges of the cone shown in Fig. 6, or the cross-sectional angle of the cone, is less than 60 degrees. In another embodiment, the angle between the two sides or edges of the cone, or the cross-sectional angle of the cone, is less than 45 degrees. In a number of embodiments described above, the directional speaker generates ultrasonic signals in the range of 40 kHz. One of the reasons to pick such a frequency is power efficiency. However, to reduce leakage or cross talk, or to enhance privacy, in one embodiment the ultrasonic signals are between 200 kHz and 1 MHz. They can be generated by multilayer piezoelectric thin films, or other types of solid state devices. Since the carrier frequency is in a higher frequency range than 40 kHz, the absorption/attenuation coefficient of air is considerably higher. For example, at 500 kHz, in one calculation, the attenuation coefficient α can be about 4.6, implying that the ultrasonic wave will be attenuated by exp(-α*z), or 40 dB/m. As a result, the waves are more quickly attenuated, reducing the range of operation of the speaker in the propagation direction of the ultrasonic waves. On the other hand, privacy is enhanced and audible interference to others is reduced.
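For illustration, the attenuation figure quoted above can be reproduced with simple arithmetic, treating α as the pressure-amplitude attenuation coefficient in nepers per metre (the interpretation consistent with the 40 dB/m value in the text); the loss budget in the sketch is an assumed, hypothetical number.

```python
# Illustrative arithmetic only: converting the amplitude attenuation
# coefficient alpha (Np/m) into dB per metre, and estimating how far the
# carrier travels before a given loss.
import math

alpha = 4.6                        # Np/m at ~500 kHz, value from the example above
db_per_m = 20 * alpha * math.log10(math.e)   # ~8.686 * alpha, about 40 dB/m
print(f"{db_per_m:.0f} dB/m")

loss_budget_db = 20                # tolerate e.g. 20 dB of carrier attenuation (assumed)
useful_range = loss_budget_db / db_per_m
print(f"~{useful_range:.2f} m")    # ~0.5 m, i.e. a deliberately short-range speaker
```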
A number of embodiments of directional speakers have also been described where the resultant propagation direction of the ultrasonic waves is not orthogonal to the horizontal, but at, for example, 45 degrees. The ultrasonic waves can be at an angle so that the main beam of the waves is approximately pointed at an ear of the user. In one embodiment, the propagation direction of the ultrasonic waves is approximately orthogonal to the horizontal. Such a speaker does not have to be on a wedge or a step. It can be on a surface that is substantially parallel to the horizontal. For example, the speaker can be on the shoulder of a user, and the ultrasonic waves propagate upwards, instead of at an angle pointed at an ear of the user. If the ultrasonic power is sufficient, the waves would have sufficient acoustic power even when the speaker is not pointing exactly at the ear.
One approach to explain the sufficiency in acoustic power is that the ultrasonic speaker generates virtual sources in the direction of propagation. These virtual sources generate secondary acoustic signals in numerous directions, not just along the propagation direction. This is similar to an antenna pattern, which gives non-zero intensity in numerous directions away from the direction of propagation. In one such embodiment, the acoustic power is calculated to be from 45 to 50 dB SPL if (a) the ultrasonic carrier frequency is 500 kHz; (b) the audio frequency is 1 kHz; (c) the emitter size of the speaker is 3 cm x 3 cm; (d) the emitter power (peak) is 140 dB SPL; (e) the emitter is positioned 10 to 15 cm away from the ear, such as located on the shoulder of the user; and (f) with the ultrasonic beam pointing upwards, not towards the ear, the center of the ultrasonic beam is about 2 - 5 cm away from the ear.
In one embodiment, the ultrasonic beam is considered directed towards the ear as long as any portion of the beam, or the cone of the beam, is immediately proximate to, such as within 7 cm of, the ear. The direction of the beam does not have to be directed at the ear. It can even be orthogonal to the ear, such as propagating up from one's shoulder, substantially parallel to the face of the person.
In yet another embodiment, the emitting surface of the ultrasonic speaker does not have to be flat. It can be designed to be concave or convex to eventually create a diverging ultrasonic beam. For example, if the focal length of a convex surface is f, the power of the ultrasonic beam would be 6 dB down at a distance of f from the emitting surface. To illustrate numerically, if f is equal to 5 cm, then after 50 cm, the ultrasonic signal would be attenuated by about 20 dB.
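The quoted figures are consistent with a simple spherical-spreading picture in which the convex surface behaves roughly like a virtual point source a distance f behind it; the sketch below only checks that assumed model against the numbers above and is not part of the disclosure.

```python
# Illustrative check (assumed spherical-spreading model): pressure falls as
# f / (f + z), so the level drops 20*log10((f + z)/f) dB at distance z from
# the emitting surface.
import math

def divergence_loss_db(z_m, f_m):
    return 20 * math.log10((f_m + z_m) / f_m)

print(divergence_loss_db(0.05, 0.05))   # ~6 dB down at z = f
print(divergence_loss_db(0.50, 0.05))   # ~20.8 dB, i.e. about 20 dB after 50 cm
```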
A number of embodiments have been described where a device is attachable to the clothing worn by a user. In one embodiment, being attachable to the clothing worn by a user includes being wearable by the user. For example, the user can wear a speaker on his neck, like a pendant on a necklace. This also would be considered attachable to the clothing worn by the user. From another perspective, the necklace can be considered the "clothing" worn by the user, and the device is attachable to the necklace.
One or more of the above-described embodiments can be combined. For example, two directional speakers can be positioned one on each side of a notebook computer. As the user is playing games on the notebook computer, the user can communicate with other players using the microphone on the notebook computer and the directional speakers, again without taking his hands off a keyboard or a game console. Since the speakers are directional, the audio signals are more confined and directed to the user in front of the notebook computer.
Enhanced Hearing
A number of embodiments of the present invention pertain to a hearing enhancement system that enhances an individual's hearing, particularly for those with mild or moderate hearing loss. FIG. 14 shows one embodiment of a hearing enhancement system 2010 of the present invention. The hearing enhancement system 2010 includes an interface unit 2014, which includes a directional speaker 2016 and a microphone 2018. The embodiment may also include a base unit 2012, which has, or can couple to, a power source. The interface unit 2014 can electrically couple to the base unit 2012. In one embodiment, the base unit 2012 can be integrated within the interface unit 2014. The coupling can be in a wired (e.g., cable) or a wireless (e.g., Bluetooth technologies) manner.
FIG. 15 shows a person wearing an interface unit 2100 of the present invention on his jacket 2102. The interface unit 2100 can, for example, be the interface unit 2014 shown in FIG. 14. Again, the interface unit 2100 includes a directional speaker 2104 and a microphone 2106. The speaker 2104 can be in a line of sight of an ear of the user.
Consider the scenario where a friend is speaking to the user. In one approach, the microphone 2106 picks up the friend's speech, namely, her audio signals. A hearing enhancement system according to the invention can then use the audio signals to modulate ultrasound signals. Then, the directional speaker 2104 transmits the modulated ultrasonic signals in air towards the ear of the user. The transmitted signals are demodulated in air to create the output audio signals. Based on ultrasound transmission, the speaker 2104 generates directional audio signals and sends them as a cone (virtual cone) 108 to the user's ear. In another approach, the directional speaker 2104 includes a physical cone or a horn that directly transmits directional audio signals. In yet another approach, the audio signals from the speaker can be steered to the ear or the ear canal, whose location can be identified through mechanisms such as pattern recognition. A number of different embodiments of the directional speakers have been previously described in this application.
Typically, hearing in both ears decreases together. In a sense, this is similar to our need to wear glasses. Rarely does one eye of a person need glasses while the other eye has 20/20 vision. As a result, there can be two interface units, one for the left ear and the other for the right ear. The left ear unit can be on the left shoulder, and the right ear unit can be on the right shoulder. These two interface units can be electrically coupled, or can be coupled to one base unit. Again, the coupling can be wired or wireless. In another approach, the interface unit can be worn by the user as a pendant on a necklace in front of the user. Output audio signals can then be propagated to both ears.
In one embodiment, the system is designed to operate in the frequency range between 500 Hz and 8 kHz. Typically, the decrease in hearing is not the same across all audio frequencies. For example, in English, the user might be able to easily pick up the sound of vowels, but not the sound of consonants, such as the "S" and the "P". FIG. 16 shows different embodiments of the invention regarding frequency-dependent amplification of the received audio signals. Note that amplification is not limited to amplifying the received audio signals directly. For example, in the embodiments using ultrasonic signals to generate output audio signals, amplification can mean the power level of the output audio signals being higher than that of the received audio signals. This can be achieved through increasing the power of the ultrasonic signals.
One approach for frequency-dependent amplification assumes that the decrease in hearing typically starts at high frequencies, such as above 2 to 3 kHz. So, hearing may need more assistance at the high frequency range. In this approach, the embodiment amplifies the audio signals so that around the entrance of the ear, the signals can have a sound pressure level ("SPL") of about 80 dB from 2 kHz to 4 kHz. For frequencies below 2 kHz, the SPL is lower; for example, for frequencies lower than 500 Hz, the maximum SPL can be below 55 dB. In one embodiment, the SPL of the output audio signals can be 70 dB from 1.5 kHz to 4 kHz, and the 3 dB cutoff is also at 1.5 kHz. With a roll-off of 12 dB/octave, at 750 Hz the SPL becomes about 58 dB.
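A minimal sketch of such a frequency-dependent target level is shown below; the helper function is hypothetical, but the numbers are those of the example just given, and the printed values reproduce the 70 dB passband and the 58 dB level one octave below the cut-off.

```python
# Minimal sketch (hypothetical helper, example numbers from the text) of a
# frequency-dependent target level: ~70 dB SPL from 1.5 kHz to 4 kHz, with a
# 12 dB/octave roll-off below the 1.5 kHz cut-off.
import math

def target_spl(freq_hz, passband_db=70.0, f_lo=1500.0, f_hi=4000.0,
               rolloff_db_per_octave=12.0):
    if f_lo <= freq_hz <= f_hi:
        return passband_db
    if freq_hz < f_lo:
        octaves_below = math.log2(f_lo / freq_hz)
        return passband_db - rolloff_db_per_octave * octaves_below
    # above the passband: hold the passband level in this simple sketch
    return passband_db

print(target_spl(2000))   # 70 dB
print(target_spl(750))    # 58 dB, one octave below the cut-off
```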
Another frequency-dependent amplification approach assumes that most information in the audio signals resides within a certain frequency band. For example, about 70% of the information in the audio signals can be within the frequency range of 1 to 2 kHz. Since the ear canal remains open and the user may only be mildly or moderately hearing impaired, the user can be hearing the audio signals directly from his sender (i.e., without assistance provided by the hearing enhancement system). In this approach, the system filters audio signals in the identified frequency range, such as the 1 to 2 kHz range, and processes them for amplification and transmission to the user. Frequencies not within the frequency band are not processed for amplification. The user can pick them up directly from the sender. Low to mid frequencies, such as those below 2 kHz, are typically louder. Since the hearing enhancement system does not require having any hearing aid inserted into the ear, the low to mid frequencies can enter the ear unaltered. Frequencies in the mid to high range, such as from 2000 to 3000 Hz, fall within the natural resonance of the ear canal, which is typically around 2700 Hz. As a result, these frequencies can be increased by about 15 dB. With no hearing aid inserted into the ear, the audio signals do not experience any insertion loss, and there is also no occlusion effect due to the user's own voice.
In a third approach, amplification across frequencies is directly tailored to the hearing needs of the user. This can be done through calibration. This third approach can also be used in conjunction with either the first approach or the second approach.
FIG. 17 shows a number of embodiments regarding calibration of a user's hearing across various frequencies. Calibration enables the system to determine (e.g., estimate) the hearing sensitivity of the user. Through calibration, the user's hearing profile is generated. The user can perform calibration by himself. For example, the audio frequencies are separated into different bands. The system generates different SPLs at each band to test the user's hearing. The specific power level that the user feels most comfortable with becomes the power level for that band for the user. After testing is done for all of the bands, based on the power levels for each band, the system creates the user's personal hearing profile. In this calibration process, the system can prompt the user and lead the user through an interactive calibration process. In another embodiment, calibration can be done remotely through a web site. The web site can guide the user through the calibration process. This can be done, for example, by the user being positioned proximate to a computer terminal that is connected through the Internet to the web site. The terminal has a speaker or headset that produces audio sounds as part of the calibration process. Instead of the user, this calibration process can also be performed by a third party, such as an audiologist.
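A minimal sketch of the per-band self-calibration loop is shown below; the band edges, test levels, and helper functions are all hypothetical placeholders for whatever playback and prompting mechanism a given embodiment uses.

```python
# Minimal sketch (hypothetical interface) of the per-band self-calibration:
# present each band at several levels, record the level the user finds most
# comfortable, and store the result as a hearing profile.
BANDS_HZ = [(500, 1000), (1000, 2000), (2000, 4000), (4000, 8000)]
TEST_LEVELS_DB = [50, 60, 70, 80]

def play_band_tone(band, level_db):
    """Placeholder: drive the directional speaker with band-limited sound."""
    pass

def ask_user_most_comfortable(levels_db):
    """Placeholder: prompt the user (e.g. by voice) and return the chosen level."""
    return levels_db[1]

def calibrate():
    profile = {}
    for band in BANDS_HZ:
        for level in TEST_LEVELS_DB:
            play_band_tone(band, level)
        profile[band] = ask_user_most_comfortable(TEST_LEVELS_DB)
    return profile   # e.g. {(500, 1000): 60, ...}; stored in the unit or a media card

hearing_profile = calibrate()
```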
The user's hearing profile can be stored in the hearing enhancement system. If the calibration is done through a computer terminal, the hearing profile can be downloaded into the hearing enhancement system wirelessly, such as through Bluetooth or infrared technology. The hearing profile can alternatively be stored in a portable media storage device, such as a memory stick. The memory stick could be inserted into the hearing enhancement system, or some other audio generating device, which desires to access the hearing profile and personalize the system's amplification across frequencies for the user.
The system can also periodically alert the user for re-calibration. The period can be, for example, once a year. The calibration can also be done in stages so that it is less onerous and less obvious that a hearing evaluation is being performed.
Frequency-dependent amplification has the added advantage of power conservation because certain frequency bands may not need or may not have amplification.
In one embodiment, the user has the option of manually changing the amplification of the system. The system can also have a general volume controller that allows the user to adjust the output power of the speaker. This adjustment can also be across certain frequency bands.
Since the ear canal is open, the user can be hearing the audio signals both from the sender and from the system. In one embodiment, to prevent an echoing effect, the signal processing delay of the system cannot be too long. Typically, the user would not be able to distinguish two identical sets of audio signals if the difference in their arrival times is below a certain delay time, such as 10 milliseconds. In one embodiment, the system's signal processing delay is kept shorter than that delay time. One approach to transform the input audio signals to ultrasonic signals depends on analog signal processing.
Since the system might be on continuously for a long duration of time, and can be amplifying across a broad range of the audio frequencies, power consumption can be an issue. FIG. 18A shows a number of embodiments for managing power consumption of the system. One embodiment includes a manual on/off switch, which allows the user to manually turn the system off as he desires. The on/off switch can be on a base unit, an interface unit, or a remote device. This on/off switch can also be voice activated. For example, the system is trained to recognize specific recitations, such as specific sentences or phrases, and/or the user's voice. To illustrate, when the user says sentences like any of the following, the system would be automatically turned on: "What did you say?" "What?" "Louder." "You said what?"
The system can be on-demand. In one embodiment, the system can identify noise (e.g., background noise), as opposed to audio signals carrying information. To illustrate, if the audio signals across broad audio frequency ranges are flat, the system could assume that the input audio signals are noise. In another approach, if the average SPL of the input audio signals is below a certain level, such as 40 dB, the system would assume that there are no audio signals worth amplifying. In any case, when the system recognizes that signals are not to be amplified, the system can be deactivated, such as by being placed into a sleep mode, a reduced power mode or a standby mode.
With the system operating on-demand, when the sender stops talking for a duration of time, the system can be deactivated. This duration of time can be adjustable, and can be, for example, 10 seconds or 10 minutes. In another approach, only when the signal-to-noise ratio of the audio signals is above a preset threshold would the system be activated (i.e., awakened from the sleep mode, the reduced power mode or the standby mode).
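A minimal sketch of this on-demand behaviour is shown below; the thresholds and function name are hypothetical illustrations of the 40 dB floor, the signal-to-noise test, and the idle timeout described above.

```python
# Minimal sketch (hypothetical thresholds) of on-demand operation: stay asleep
# unless the input looks like speech rather than noise, and go back to sleep
# after a configurable period of silence.
SPL_FLOOR_DB = 40.0        # below this average SPL, assume nothing worth amplifying
SNR_THRESHOLD_DB = 6.0     # wake only if the signal-to-noise ratio exceeds this
IDLE_TIMEOUT_S = 10.0      # deactivate after this long without speech (adjustable)

def should_be_active(avg_spl_db, snr_db, seconds_since_speech):
    if seconds_since_speech > IDLE_TIMEOUT_S:
        return False                       # sender stopped talking -> sleep
    if avg_spl_db < SPL_FLOOR_DB:
        return False                       # too quiet to bother amplifying
    return snr_db >= SNR_THRESHOLD_DB      # looks like speech, not flat noise

print(should_be_active(65.0, 12.0, 1.0))   # True: active
print(should_be_active(35.0, 2.0, 30.0))   # False: sleep / standby
```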
Another approach to manage power consumption makes use of a directional microphone. This approach can improve the signal-to-noise ratio. The gain in specific directions of such a microphone can be 20 dB higher than that of an omni-directional microphone. The direction of the directional microphone can vary with the application. However, in one embodiment, the direction of the directional microphone can point forward or outward from the front of the user. The assumption is that the user typically faces the person talking to him, and thus it is the audio signals from the person in front of him that are to be enhanced.
The system, namely, the interface unit, can have more than one directional microphone, each pointing in a different direction. FIG. 19A shows an interface unit 2202 with four directional microphones pointing in four orthogonal directions. With the microphones in symmetry, the user does not have to think about the orientation of the microphones when attaching the interface unit to a specific location on his clothing.
FIGS. 19B-19C show interface units 2204 and 2206, each with two directional microphones pointing in two orthogonal directions. For the two interface units 2204 and 2206 shown in FIGS. 19B-19C, one unit can be on the left shoulder and the other unit on the right shoulder of the user, with the user's head in between the interface units in FIG. 19B and FIG. 19C.
The amplification of the system can also depend on the ambient power level, or the noise level of the environment of the system. One approach to measure the noise level is to measure the average SPL in the gaps of the audio signals. For example, a person asks the user the following question: "Did you leave your heart in San Francisco?" Typically, there are gaps between every two words, or between sentences or phrases. The system measures, for example, the root mean square ("rms") value of the power in each of the gaps, and can calculate another average among all of the rms values to determine the noise level. In one embodiment, the system increases the gain of the system so as to ensure that the average power of the output audio signals is higher than the noise level by a certain degree. For example, the average SPL of the output audio signals can be 10 dB above the noise level.
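A minimal sketch of the gap-based noise estimate and the "output about 10 dB above the noise" rule is given below; the helper names, dB reference, and speech-detection flags are hypothetical placeholders.

```python
# Minimal sketch (hypothetical helpers) of the gap-based noise estimate and the
# gain rule described above.
import numpy as np

def rms_db(frame):
    rms = np.sqrt(np.mean(np.square(frame))) + 1e-12
    return 20 * np.log10(rms)              # dB relative to full scale (illustrative)

def noise_level_db(frames, speech_flags):
    """Average the RMS level of frames flagged as gaps between words/phrases."""
    gap_levels = [rms_db(f) for f, is_speech in zip(frames, speech_flags)
                  if not is_speech]
    return float(np.mean(gap_levels)) if gap_levels else -80.0

def target_output_db(noise_db, margin_db=10.0):
    return noise_db + margin_db             # keep output ~10 dB above the noise floor

# frames: list of short audio buffers; speech_flags: True where speech was detected
```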
In another embodiment, if the average power level of the environment or the ambient noise level is higher than a threshold value, signal amplification is reduced. This average power level can include the audio signals of the person talking to the user. The rationale is that if the environment is very noisy, it would be difficult for the user to hear the audio signals from the other person anyway. As a result, the system should not keep on amplifying the audio signals independent of the environment. For example, if the average power level of the environment is more than 75 dB, the amplification of the system is reduced, such as to 0 dB.

Another power management approach is to increase the power of the audio signals. One way to create more power is to increase the surface area of the medium responsible for generating the output audio signals. For example, if audio signals are generated by a piezoelectric film, one can increase the surface area of the film to increase the power of the signals. A number of embodiments are based on ultrasonic demodulation or mixing. To increase the output power of such embodiments, one can again increase the surface area of the medium generating the ultrasonic signals. As an example, a 1-cm diameter bimorph can give 140 dB ultrasonic SPL. The device may need about 0.1 W of input power. Ten such devices would increase output power by about 20 dB. Another approach to increase power is to increase the demodulation or mixing efficiency of the ultrasonic signals by having at least a portion of the transformation performed in a medium other than air. Depending on the medium, this may make the directional speaker more power efficient. Such approaches have previously been described in this application.

The system (interface unit and/or the base unit) can include one or more rechargeable batteries. These batteries can be recharged by coupling the system to a battery re-charger. Another feature of the system that may be provided is one or more electrical connections on the system so as to facilitate electrical connection with a battery charger. For example, when the power source for the system is a rechargeable battery, the ability to charge the battery without removing the battery from the system is advantageous. Hence, in one embodiment, the system includes at least one connector or conductive element (e.g., terminal, pin, pad, trace, etc.) so that the electrical coupling between the rechargeable battery and the charger can be achieved. In this regard, the electrical connector or conductive element is provided on the system and electrically connected to the battery. The placement of the electrical connector or conductive element on the system serves to allow the system to be simply placed within a charger. Consequently, the electrical connector or conductive element can be in electrical contact with a counterpart or corresponding electrical connector or conductive element of the charger.
FIG. 18B shows an embodiment of the interface unit 2150 with an electrical connection 2152 and a cover 2154. The interface unit 2150 can be the interface unit 2014 shown in FIG. 14. The electrical connection 2152 can be a USB connector. With the cover 2154 removed, the connection 2152 can be used, for example, to couple to a battery charger to recharge the interface unit 2150.
In one embodiment, the charger can be considered a docking station, upon which the system is docked so that the battery within the system can be charged. Hence, the system can likewise include an electrical connector or conductive element that facilitates electrical connection to the docking station when docked.
With the ear canal remaining open, the user can still use a phone directly. However, in one embodiment, the system, which can include the base unit, can also have the electronics to serve as a cell phone. FIG. 20 shows such an embodiment. When there is an incoming phone call, the system can change its mode of operation and function as a cell phone. The system can alert the user of an incoming call. This can be through, for example, ringing, vibration or a blinking light. The user can pick up the call by, for example, pushing a button on the interface unit. Picking up the call can also be through an activation mechanism on the base unit or a remote control device. FIG. 21 is a flow diagram of call processing 2400 according to one embodiment of the invention. The call processing 2400 is performed using the system. For example, the system can be the system shown in FIG. 14.
The call processing 2400 begins with a decision 2402 that determines whether a call is incoming. When the decision 2402 determines that there is no incoming call, the call processing 2400 waits for such a call. Once the decision 2402 determines that a call is incoming, the system is activated 2408. Here, the wireless communications capability of the system is activated (e.g., powered-up, enabled, or woken-up). The user of the system is then notified 2410 of the incoming call. In one embodiment, the notification to the user of the incoming call can be achieved by an audio sound produced by the system (via a speaker). Alternatively, the user of the system could be notified by a vibration of the system, or a visual (e.g., light) indication provided by the system. Alternatively, the base unit could include a ringer that provides an audio sound and/or vibration indication to signal an incoming call.
Next, a decision 2412 determines whether the incoming call has been answered. When the decision 2412 determines that the incoming call has not been answered, the base unit can activate 2414 a voice message informing the caller to leave a message or instructing the caller as to the unavailability of the recipient.
On the other hand, when the decision 2412 determines that the incoming call is to be answered, the call can be answered 2416 at the base unit. Then, a wireless link is established 2418 between the interface unit and the base unit. The wireless link is, for example, a radio communication link such as utilized with Bluetooth or WiFi networks. Thereafter, communication information associated with the call can be exchanged 2420 over the wireless link. Here, the base unit receives the incoming call, and communicates wirelessly with the interface unit such that communication information is provided to the user via the system. The user of the system is accordingly able to communicate with the caller by way of the system and, thus, in a hands-free manner.
A decision 2422 then determines whether the call is over (completed). When the decision 2422 determines that the call is not over, the call processing 2400 returns to repeat the operation 2420 and subsequent operations so that the call can continue. On the other hand, when the decision 2422 determines that the call is over, then the system is deactivated 2424, and the wireless link and the call are ended 2426. The deactivation 2424 of the system can place the system in a reduced-power mode. For example, the deactivation 2424 can power-down, disable, or sleep the wireless communication capabilities (e.g., circuitry) of the system. Following the operation 2426, as well as following the operations 2406 and 2414, the call processing 2400 for the particular call ends.
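Purely as an illustration of the control flow of FIG. 21, the sketch below restates the numbered operations as pseudocode-style Python; every object, method, and timeout is hypothetical and not an API of any actual product.

```python
# Minimal sketch (hypothetical API names) of the call-processing flow 2400:
# wait for a call, notify the user, bridge the base unit and interface unit
# over a wireless link while the call is up, then tear everything down.
def call_processing(base_unit, interface_unit):
    call = base_unit.wait_for_incoming_call()      # decision 2402
    base_unit.activate_wireless()                  # operation 2408
    interface_unit.notify_user(call)               # 2410: ring, vibrate or blink

    if not base_unit.wait_for_answer(timeout_s=20):    # decision 2412
        base_unit.play_voice_message(call)             # 2414: voicemail prompt
        return

    base_unit.answer(call)                         # 2416
    link = base_unit.open_link(interface_unit)     # 2418: e.g. Bluetooth/WiFi link
    while not call.is_over():                      # decision 2422
        link.exchange_audio(call)                  # 2420
    base_unit.deactivate_wireless()                # 2424: reduced-power mode
    link.close()                                   # 2426: end link and call
    call.end()
```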
If the system also functions as a phone, the system can have a directional microphone pointing at the head of the user. One such embodiment is shown in FIG. 19A.
Operating the system as a phone can create different concerns than operating the unit as a hearing enhancement system. Since the audio signals are transmitted in an open environment, people in the user's immediate neighborhood might pick up some of the audio signals. If the SPL is 80 dB when the signals reach the user's head, signals reflected from the head can be 60 dB. Such a level may be heard by people in the immediate vicinity of the user. The user might not want people to pick up what he is hearing. In other words, the user may prefer more privacy. FIG. 22 shows a number of embodiments regarding improving privacy according to the present invention. The audio signal propagation angle can inherently improve privacy. The cone of the audio signals typically propagates from low to high in order to get to an ear of the user. For example, from the user's shoulder to an ear of the user, the elevation angle can be 45 degrees. One advantage of such a propagation direction is that most of the audio signals reflected from the head radiate towards the sky above the head. This reduces the chance of the audio signals being eavesdropped, particularly since the signal power falls off as the square of the propagation distance.
Privacy can be enhanced based on frequency-dependent amplification. Since certain audio frequencies may not be amplified, and may be relatively low in SPL, their reflected signals can be very low. This reduces the probability of the entire audio signal being heard by others.
Another approach to improve privacy is to reduce the highest power level of the output audio signals to below a certain threshold, such as 70 dB. This level may still be sufficient to improve the hearing of those who have mild hearing loss.
Yet another approach to enhance privacy is to further focus the beam of the audio signals. For the embodiment based on transforming ultrasonic frequencies, narrowing the cone can be done, for example, by increasing the carrier frequency of the audio signals. Typically, the higher the carrier frequency, the narrower the cone; for example, a cone created by 100 kHz signals is typically narrower than a cone created by 40 kHz signals. Not only can the cone be narrowed, but sidelobes can also be suppressed. Another approach to narrow the cone is to increase the gain of the cone or the horn that generates the audio signals.
A focused beam has the added advantage of better power conservation. With the audio signals restricted to a smaller cone, less power is needed to generate the audio signals.
In private, such as at home, hearing impaired people sometimes have a tendency to set the sound level of audio or video instruments a bit too high. On the other hand, in public, hearing impaired people sometimes have difficulty hearing. In one embodiment, the system is further designed to pick up, capture or access audio signals from portable or non-portable instruments, with the interface unit serving as a personalized listening unit.
Audio signals from these instruments can be transmitted through a wire to the system. The interface unit can provide an electrical input for connecting to the instrument by wires. If transmission is wireless, the system can be designed to include the electronics to capture wireless signals from the instruments through a wireless local area network, such as WiFi or Bluetooth. The audio signals from these instruments can be up-converted and transmitted as a WiFi signal to be picked up by the system. The system then down-converts the WiFi signal to re-generate the audio signals for the user. FIG. 23 shows examples of such other portable or non-portable instruments. The instruments can be used in a private environment, such as at home, or attached to the user. These can include entertainment units, such as televisions, stereo systems, CD players, or radios. As an example, assume the user is working in the backyard and the stereo system is in the living room. Based on this technique, the user can enjoy the music without the need to crank up its volume. Private use can include a phone, which can be a desktop phone with a conference speaker or a cell phone. As yet another example, the system can function as the headset of a phone, and can be coupled to the phone in a wireless manner, such as through Bluetooth. Regarding public use, the user can be at a conference or a theater. The system can be coupled to the conference microphone or the theater speaker wirelessly, and thus be capable of capturing and enhancing the audio signals therefrom.
In a number of embodiments described, the directional speaker generates ultrasonic signals in the range of 40 kHz. One of the reasons to pick such a frequency is power efficiency. However, to reduce leakage or cross talk, or to enhance privacy, in one embodiment the ultrasonic signals are between 200 kHz and 1 MHz. They can be generated by multilayer piezoelectric thin films, or other types of solid state devices. Since the carrier frequency is in a higher frequency range than 40 kHz, the absorption/attenuation coefficient of air is considerably higher. On the other hand, privacy is enhanced and audible interference to others is reduced. A number of embodiments of directional speakers have also been described where the resultant propagation direction of the ultrasonic waves is not orthogonal to the horizontal, but at, for example, 45 degrees. The ultrasonic waves can be at an angle so that the main beam of the waves is approximately pointed at an ear of the user. In one embodiment, the propagation direction of the ultrasonic waves is approximately orthogonal to the horizontal. Such a speaker does not have to be on a wedge or a step. It can be on a surface that is substantially parallel to the horizontal. For example, the speaker can be on the shoulder of a user, and the ultrasonic waves propagate upwards, instead of at an angle towards an ear of the user. If the ultrasonic power is sufficient, the waves would have sufficient acoustic power even when the speaker is not pointing exactly at the ear.
In one embodiment, the ultrasonic beam is considered directed towards the ear as long as any portion of the beam, or the cone of the beam, is immediately proximate to, such as within 7 cm of, the ear. The direction of the beam does not have to be directed at the ear. It can even be orthogonal to the ear, such as propagating up from one's shoulder, substantially parallel to the face of the person.
Portable Add-On
A number of embodiments of the present invention pertain to a directional speaker for a portable electronic device. The directional speaker can be used with the electronic device to direct audio output in a directionally constrained manner. As a result, a certain degree of privacy with respect to the audio output is achieved for the user of the electronic device, yet the user need not wear a headset or ear phone, or have to hold a speaker against one's ear. The directional speaker can be integral with the electronic device. Alternatively, the directional speaker can be an attachment (or peripheral) to the electronic device.
The electronic device can be a computing device, such as a personal computer, a portable computer, or a personal digital assistant. The device can be a CD player, a portable radio, a communications device, or an electric musical instrument, such as an electric piano. One example of a communications device is a mobile telephone, such as a 2G, 2.5G or 3G phone.
FIG. 24A illustrates a mobile telephone 3100 with an integrated directional speaker according to one embodiment of the invention. The mobile telephone 3100 is, for example, a cellular phone. The mobile telephone 3100 includes a housing 3102 that provides an overall body for the mobile telephone 3100. The mobile telephone 3100 includes a display 3104. The mobile telephone 3100 also includes a plurality of buttons 3106 that allow user input of alphanumeric characters or functional requests, and a navigational control 3108 that allows directional navigation with respect to the display 3104. To support wireless communications, the mobile telephone 3100 also includes an antenna 3110. In addition, the mobile telephone 3100 includes a microphone 3112 for voice pickup and an ear speaker 3114 for audio output. The ear speaker 3114 can also be referred to as an earpiece.
Additionally, according to the invention, the mobile telephone 3100 also includes a directional speaker 3116. The directional speaker 3116 provides directional audio sound for the user of the mobile telephone 3100. The directional audio sound produced by the directional speaker 3116 allows the user of the mobile telephone 3100 to hear the audio sound even though neither of the user's ears is proximate to the mobile telephone 3100. The directional nature of the sound output is towards the user and thus provides privacy by restricting the audio sound to a confined directional area. In other words, bystanders in the vicinity of the user but not within the confined directional area would not be able to directly hear the audio sound produced by the directional speaker 3116. The bystanders might be able to hear a degraded version of the audio sound after it reflects from a surface. The reflected audio sound, if any, that reaches a bystander would be at a reduced decibel level (e.g., at least a 20 dB reduction), making it difficult for bystanders to hear and understand the audio sound.
FIG. 24B is a perspective view of a flip-type mobile telephone 3150 with an integrated directional speaker according to another embodiment of the invention. The mobile telephone 3150 is, for example, a cellular phone. The mobile telephone 3150 shown in FIG. 24B is similar to the mobile telephone 3100 illustrated in FIG. 24A. More particularly, the mobile telephone 3150 includes a housing 3152 that provides a body for the mobile telephone 3150. The mobile telephone 3150 includes a display 3154, a plurality of keys 3156, and a navigation control 3158. To support wireless communications, the mobile telephone 3150 also includes an antenna 3160. In addition, the mobile telephone 3150 includes a microphone 3162 for voice pickup and an ear speaker 3164 for audio output.
Moreover, according to the invention, the mobile telephone 3150 includes a directional speaker 3166. In this embodiment, the directional speaker 3166 is provided in a lower region of a lid portion 3168 of the housing 3152 of the mobile telephone 3150. The directional speaker 3166 directs audio output to the user of the mobile telephone 3150 in a directional manner. The directional sound output is directed towards the user and thus provides privacy by restricting the audio sound to a confined directional area.
The direction for the audio output by the directional speaker 3116, 3166 can be estimated and thus fixed in advance. Hence, in one embodiment, the directional speakers 3116, 3166 shown in FIGs. 24A and 24B can be structurally fixed with respect to their directional audio output. For example, the angle and direction can be set such that the directional speaker 3116, 3166 outputs audio in the direction of the user's ears, assuming that the user holds the mobile telephone 3100, 3150 in front of them so as to view information on the display 3104, 3154.
In another embodiment, the directional speakers 3116, 3166 can be structurally movable so that a user is able to alter the direction of the directional audio output to suit the user's needs. The directional speakers 3116, 3166 can, for example, be repositionable to allow repositioning of the output direction for the directional speakers 3116, 3166. The directional speakers 3116, 3166 can, for example, be repositionable by being mounted on a pivot, flexible wire or other rotatable or flexible member.
In yet another embodiment, the mobile telephones 3100, 3150 include a knob or a switch that electronically controls the direction of the audio output. For example, assume the plurality of keys on the phone 3150 shown in FIG. 24B establishes the x-y plane, with x being approximately along the direction of the hinge of the phone. By turning the knob, a user can adjust the output direction of the audio signals from the directional speaker 3166 in the y-z plane. Furthermore, the placement of the directional speaker 3116, 3166 with respect to its housing 3102, 3152, respectively, can vary with implementation. Typically, however, the placement is designed to facilitate directing the output audio in the direction of a person that is to hear the audio sounds. In any case, the placement of the directional speaker 3116 with respect to the housing 3102 shown in FIG. 24A and the placement of the directional speaker 3166 with respect to the housing 3152 shown in FIG. 24B are merely representative placements, as various other placements are possible. For example, a directional speaker could be placed near the ear speaker, near the display, on the outer or back surface of the housing, etc.
FIG. 25 is a perspective view of a personal digital assistant 3200 with an integrated directional speaker according to one embodiment of the invention. The personal digital assistant 3200 includes a housing 3202 that provides a body for the personal digital assistant 3200. The personal digital assistant 3200 includes a display 3204, an input pad 3206, navigation buttons 3208, and other buttons 3210. The display 3204 presents information to be viewed by the user of the personal digital assistant 3200. The input pad 3206, for example, allows the user to select soft buttons or enter characters through gestures. The navigation buttons 3208 allow a user to interact with information displayed by the display 3204. The buttons 3210 can provide various functions, such as initiating a particular operation, data entry, or item selection. Still further, the personal digital assistant 3200 includes a directional speaker 3212. The directional speaker 3212 provides directional audio output for the user of the personal digital assistant 3200. The audio output by the directional speaker 3212 is not only directed in a predetermined direction but also substantially confined to that predetermined direction. As a result, the audio output by the directional speaker 3212 is not easily heard by anyone but the user of the personal digital assistant 3200. The positioning of the directional speaker 3212 can be fixed or adjustable, as noted above with respect to FIGs. 24A and 24B. If adjustable, the direction of the audio output can be altered. Still further, the placement of the directional speaker 3212 shown in FIG. 25 is one possible embodiment; therefore, it should be recognized that the directional speaker 3212 can be positioned in any of a wide variety of places on the personal digital assistant 3200. However, in preferred embodiments, the directional speaker 3212 is placed on the front side of the housing 3202.
The personal digital assistant 3200 may or may not have wireless communication capabilities. However, if the personal digital assistant 3200 does have wireless communication capabilities, the personal digital assistant 3200 may also include one or more of a microphone and a traditional speaker. In yet another embodiment, the personal digital assistant 3200 also includes a camera. If the personal digital assistant 3200 has these components, then the user of the personal digital assistant 3200 can, for example, use the personal digital assistant 3200 as a video phone or participate in video conferences using the personal digital assistant 3200. By using the directional speaker 3212 instead of a traditional speaker, the audio output from the personal digital assistant 3200 can be directed primarily to the user of the personal digital assistant 3200. Hence, the audio output enjoys a certain level of privacy without requiring the user of the personal digital assistant 3200 to hold the personal digital assistant 3200 to her ear or to wear a headset. As a result, the user of the personal digital assistant 3200 would be able to view the display 3204 while also listening to audio output in a relatively private manner.
FIG. 26 is a block diagram of a wireless communication device 3300 according to one embodiment of the invention. The wireless communication device 3300 is, more generally, an electronic device with wireless communication capability. The wireless communication device 3300 can, for example, represent the mobile telephone 3100 shown in FIG. 24A, the mobile telephone 3150 shown in FIG. 24B, or the personal digital assistant 3200 shown in FIG. 25 (with such supporting wireless communication circuitry).
The wireless communication device 3300 includes a controller 3302 that controls overall operation of the wireless communication device 3300. A user input device 3304 can represent one or more buttons or a keypad that enables the user to interact with the wireless communication device 3300. A display device 3306 allows the controller 3302 to visually present information to the user of the wireless communication device 3300. The controller 3302 also couples to read-only memory (ROM) 3308 and random access memory (RAM) 3310. The wireless communication device 3300 also includes a wireless communication interface 3312 that enables the wireless communication device 3300 to couple to a wireless link 3314 so that information can be transmitted between the wireless communication device 3300 and another communication device.
The wireless communication device 3300 also includes a microphone 3316 and a directional speaker 3318. The microphone 3316 may be designed to pick up incoming audio signals with respect to a particular direction. The directional speaker 3318 is specifically designed to output audio sound in a confined direction. In one embodiment, the directional speaker 3318 outputs ultrasonic sound that becomes audio sound so that a user of the wireless communication device 3300 can hear the audio output. However, by using the directional speaker 3318, other persons (besides the user) in the vicinity of the wireless communication device 3300 would have difficulty hearing the audio output produced by the wireless communication device 3300.
Still further, the wireless communication device 3300 can also include a traditional speaker 3320 and a camera 3322. The traditional speaker 3320 can be used when the user of the wireless communication device 3300 is not concerned about privacy, desires others to hear the audio output, or is holding the device right next to one of her ears. The camera 3322 can allow the wireless communication device 3300 to transmit video (or at least still images) to other devices over the wireless link 3314.
As shown in FIG. 26, the microphone 3316, the directional speaker 3318, the traditional speaker 3320 and the camera 3322, to the extent provided, are a part of or integral to the wireless communication device 3300. However, it should be recognized that any of the microphone 3316, the directional speaker 3318, the traditional speaker 3320 or the camera 3322 could be provided external to the wireless communication device 3300 and coupled thereto in a wired or wireless manner.
FIG. 27A is a block diagram of a directional audio conversion apparatus 3400 according to one embodiment of the invention. The directional audio conversion apparatus 3400 transforms audio input signals into directional audio output signals. The directional audio conversion apparatus 3400 includes a pre-processor 3402 and an ultrasonic speaker 3406. The pre-processor 3402 can be implemented by hardware or software. In one embodiment, at least a portion of the pre-processor 3402 can be internal to and thus part of the controller 3302 shown in FIG. 26. In another embodiment, the pre-processor 3402 can be separate circuitry, either within or external to the wireless communication device 3300. The separate circuitry can be an integrated circuit.
The ultrasonic speaker 3406 is one type of directional speaker (e.g., the directional speaker 3318). The pre-processor 3402 receives audio input signals 3408 and converts the audio input signals 3408 into ultrasonic drive signals 3410. The ultrasonic drive signals 3410 are supplied to the ultrasonic speaker 3406 to generate ultrasonic output 3412. The ultrasonic output 3412 is subsequently transformed, for example by air, into audio output 3414. Often it is desirable to make the frequency spectrum of the audio output 3414 as similar to that of the audio input 3408 as possible.
In one embodiment, to represent the different operations of the audio conversion apparatus 3400 mathematically, assume that the audio input is represented by f(t), the ultrasonic carrier by cos ωct, the drive signals by f1(t), the impulse response of the ultrasonic speaker or transducer by h(t), the ultrasonic output by g(t), and the audio output by y(t). Then,

f1(t) = (∫∫ f(t) dt²)^(1/2) * cos ωct

represents one embodiment of the pre-processing operations performed by the pre-processor to generate f1(t). This can be known as the basic pre-processing performed by a basic pre-processing circuit. Further,

g(t) = f1(t) ⊗ h(t)

represents the operation performed by the ultrasonic speaker to generate g(t), with the symbol ⊗ denoting the signal convolution operation. Finally,

y(t) ∝ d²/dt² [ g²(t) ]

represents self-demodulation of the ultrasonic output g(t) by air to generate the audio output y(t).
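To give a concrete sense of the basic pre-processing and the air self-demodulation described above, the following Python/NumPy fragment is a minimal numerical sketch, not the patented circuit itself. The sample rate, carrier frequency, DC offset used to keep the square root real, idealized transducer response, and the crude low-pass filter are all illustrative assumptions.

import numpy as np

fs = 1_000_000          # sample rate in Hz (assumed), high enough to represent a 40 kHz carrier
t = np.arange(0, 0.02, 1 / fs)
fc = 40_000             # assumed ultrasonic carrier frequency (omega_c = 2*pi*fc)

f_t = np.sin(2 * np.pi * 1_000 * t)               # audio input f(t): a 1 kHz test tone

# Basic pre-processing: double-integrate f(t), take the square root of the
# (offset) envelope, and modulate the ultrasonic carrier.
double_integral = np.cumsum(np.cumsum(f_t)) / fs**2
envelope = 1.0 + double_integral / np.max(np.abs(double_integral))   # offset keeps the radicand positive
f1_t = np.sqrt(envelope) * np.cos(2 * np.pi * fc * t)                # drive signal f1(t)

# Idealized transducer: h(t) treated as an impulse, so g(t) = f1(t).
g_t = f1_t

# Self-demodulation in air modeled as the second time-derivative of g^2(t),
# followed by a simple moving-average low-pass filter to recover the audible band.
y_t = np.gradient(np.gradient(g_t**2, 1 / fs), 1 / fs)
kernel = np.ones(200) / 200
y_audio = np.convolve(y_t, kernel, mode='same')
print("peak of recovered audio-band signal:", np.max(np.abs(y_audio)))

The offset added before the square root is a practical concession of this sketch; a hardware pre-processor can handle the sign issue differently, for example with one of the modulation schemes listed below.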
The pre-processor can further perform a number of additional operations to modify the drive signals 3410 before feeding them to the speaker. One objective of such additional pre-processing is to make the frequency spectrum of the audio output signals 3414 as similar to that of the audio input 3408 as possible.
FIG. 27B is a block diagram of the pre-processor 3402 according to one embodiment of the invention. The pre-processor 3402, in this embodiment, includes a basic pre-processing circuit 3450 and an estimation circuit 3452. The estimation circuit 3452 is in a feedback loop formed with the basic pre-processing circuit 3450. In FIG. 27B, D(t - T) represents delaying the audio input 3408 by T, which is the total loop delay.
FIG. 27C shows one embodiment of the estimation circuit 3452. In this example, H(t) represents the estimated impulse response of the ultrasonic speaker, and G(t) represents the estimated ultrasonic output, both subject to the finite transmission bandwidth of the system. LPF1 and LPF2 represent low-pass filter 1 and low-pass filter 2, respectively.
The basic pre-processing circuit 3450 can be implemented in different embodiments. Assume F(t) represents the audio input f(t) shifted by 90 degrees. For an amplitude modulated signal pre-processing scheme, various embodiments of the basic pre-processing circuit 3450 can perform any one of the following operations:
(1 + m * f(t)) * cos ωct, for double sideband with large carrier;
f(t) * cos ωct, for double sideband suppressed carrier;
(1 + m * f(t)) * cos ωct - m * F(t) * sin ωct, for single sideband with large carrier;
f(t) * cos ωct - F(t) * sin ωct, for single sideband suppressed carrier;
(1 + m * f(t))^(1/2) * cos ωct, for modified amplitude modulation; and
(e(t) + m * f(t))^(1/2) * cos ωct, for envelope modulation, where e(t) = LPF(f(t)), or the envelope of f(t).
For a phase modulated signal pre-processing scheme, various embodiments of the basic pre-processing circuit 3450 can perform any one of the following operations:

cos ωct + cos(ωct + ∫∫ f(t) dt²), for phase modulation with carrier; and
cos(ωct + ∫∫ f(t) dt²), for phase modulation with suppressed carrier.
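For illustration, several of the amplitude-modulation options above can be sketched in a few lines of NumPy. The modulation index m, carrier frequency and test tone are arbitrary choices, and F(t), the 90-degree shifted audio, is obtained here with a Hilbert transform; none of this is prescribed by the text.

import numpy as np
from scipy.signal import hilbert

fs = 500_000
t = np.arange(0, 0.01, 1 / fs)
f_t = np.sin(2 * np.pi * 800 * t)          # audio input f(t): an 800 Hz test tone
F_t = np.imag(hilbert(f_t))                # F(t): f(t) shifted by 90 degrees
m = 0.5                                    # assumed modulation index
wc_t = 2 * np.pi * 40_000 * t              # assumed 40 kHz ultrasonic carrier phase

dsb_lc = (1 + m * f_t) * np.cos(wc_t)                           # double sideband, large carrier
dsb_sc = f_t * np.cos(wc_t)                                     # double sideband, suppressed carrier
ssb_lc = (1 + m * f_t) * np.cos(wc_t) - m * F_t * np.sin(wc_t)  # single sideband, large carrier
ssb_sc = f_t * np.cos(wc_t) - F_t * np.sin(wc_t)                # single sideband, suppressed carrier
mod_am = np.sqrt(1 + m * f_t) * np.cos(wc_t)                    # modified amplitude modulation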
FIG. 28 illustrates different embodiments of directional speaker characteristics according to the present invention. The directional speaker can, for example, be any of the directional speakers 3116, 3166, 3212, 3318 and 3406 illustrated in FIGs. 24A, 24B, 25, 26 and 27A, respectively.
According to one embodiment, the directional speaker can be implemented using a piezoelectric thin film. The piezoelectric thin film can be deposited on a plate with many cylindrical tubes, for example, as previously described. A significant percentage of the power of the ultrasonic/audio output generated by the emitting surface of the directional speaker can, in effect, be confined in a cone (virtual or physical). Referring back to examples of the piezoelectric film previously described, the FWHM of the signal beam can be about 24 degrees. Assume that such a directional speaker is held by the user, such as in front of the user in one of the user's hands. The output from the speaker can be directed in the anticipated direction of the user's head, with the distance between the hand and the head being, for example, 10-30 inches. More than 75% of the power of the audio output generated by the emitting surface of the directional speaker is, in effect, confined in a virtual cone. The tip of the cone is at the speaker, and the mouth of the cone is at the location of the user's head. The diameter of the mouth of the cone, or the diameter of the cone in the vicinity of the user's head, can be about 4 to 12 inches. In another embodiment, the ultrasonic frequency is at 100 kHz, with convex surfaces to expand the beam, for example, as will be described below. The emitting surface of the directional speaker is around 5 cm by 1 cm.
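The quoted cone diameters follow directly from the beam's half-angle. The short calculation below is only a sketch assuming the roughly 24-degree full-width-half-max figure mentioned above and a point-like emitter.

import math

fwhm_deg = 24.0                     # assumed full-width-half-max beam angle
half_angle = math.radians(fwhm_deg / 2)

for distance_in in (10, 20, 30):    # hand-to-head distance in inches
    diameter_in = 2 * distance_in * math.tan(half_angle)
    print(f"{distance_in:2d} in away -> cone diameter ~{diameter_in:.1f} in")
# Prints roughly 4.3, 8.5 and 12.8 inches, consistent with the 4-to-12-inch range above.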
In one embodiment, the direction of the audio output from the directional speaker can be adjusted electronically. One approach is to attach the speaker to a base that can be rotated electronically. The orientation of the base can be set by turning a knob on, for example, the phone 3150. In another embodiment, the speaker is composed of a number of directional speakers. The phase among the signals from the directional speakers can be modified to adjust the direction of the resultant beam. This is similar to techniques used in a phased-array antenna to adjust the direction of the beam. In another embodiment, the directional speaker can make use of a curved emitting surface (e.g., a convex emitting surface) or a curved reflector. The curved emitting surface or reflector enables the width of the beam to be increased.
FIG. 29 is a flow diagram of audio signal processing 3600 according to one embodiment of the invention. Here, it is assumed that the wireless communication device contains not only a directional speaker but also a traditional speaker (e.g., ear speaker). The audio signal processing 3600 is, for example, performed by a wireless communication device. As an example, the controller 3302 of the wireless communication device 3300 illustrated in FIG. 26 can perform the audio signal processing 3600.
The wireless communication device can be a mobile telephone. Such a mobile telephone can have dual modes of operation, namely, a normal or traditional mode, and a two-way or directional-speaker mode. In the normal mode, the audio sound is produced directly by a traditional (or standard) speaker, such as an ear speaker integral with the mobile telephone (e.g., within its housing). Such a speaker is substantially non-directional (and further does not generate audio sound through transforming ultrasonic signals in air). In the two-way mode, the audio sound is produced by a directional speaker. In the two-way mode, the mobile telephone is, for example, operating as a walkie-talkie, a dispatch type communicator, or a video phone. The mobile telephone may also have a speakerphone mode in which audio output is produced by a speaker that allows those in the vicinity of the mobile telephone to hear the audio output. The speaker in this case is more powerful than the ear speaker but also substantially non-directional. Mode selection, whether manual or automatic as described below, can also be used to select a speakerphone mode.
Referring back to FIG. 29, the audio signal processing 3600 initially receives 3602 incoming audio signals over a wireless communication path. Next, a decision 3604 determines whether a directional speaker is active. When the decision 3604 determines that the directional speaker is not active, then the incoming audio signals are output 3606 to the traditional speaker of the wireless communication device. When the wireless communication device is a mobile telephone, the traditional speaker is, for example, an ear speaker (earpiece). On the other hand, when the wireless communication device is a personal digital assistant or portable computer, the traditional speaker could simply be a standard audio speaker. When the decision 3604 determines that the directional speaker is active, then the incoming audio signals can be pre-processed 3608. As an example, the pre-processing can utilize the techniques described under FIGs. 27A-C. After the incoming audio signals are pre-processed 3608, the pre-processed signals are converted 3610 to ultrasound drive signals. Then, the directional speaker is driven 3612 in accordance with the ultrasound drive signals.
Following the operations 3606 and 3612, a decision 3614 determines whether there are more incoming audio signals to be processed at this time. When the decision 3614 determines that there are more incoming audio signals to be processed, then the audio signal processing 3600 returns to repeat the operation 3602 and subsequent operations so that the additional incoming audio signals can be similarly processed. Alternatively, when the decision 3614 determines that there are no more audio signals to be processed at this time, then the audio signal processing 3600 is complete and ends.
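The routing decision of FIG. 29 can be summarized schematically in Python. This is only a sketch of the control flow; the device object and its helpers (receive_audio, preprocess, to_ultrasound_drive, and the speaker drivers) are hypothetical placeholders, not interfaces defined in this document.

def process_incoming_audio(device):
    """Schematic of the audio signal processing 3600 (FIG. 29)."""
    while True:
        frame = device.receive_audio()                  # operation 3602 (hypothetical helper)
        if frame is None:                               # decision 3614: no more audio for now
            break
        if device.directional_speaker_active:           # decision 3604
            shaped = device.preprocess(frame)           # operation 3608 (e.g., per FIGs. 27A-C)
            drive = device.to_ultrasound_drive(shaped)  # operation 3610
            device.directional_speaker.drive(drive)     # operation 3612
        else:
            device.traditional_speaker.play(frame)      # operation 3606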
Other than the operations 3604 and 3606 (which are not necessary when speaker selection is not available), the directional audio conversion apparatus 3400 illustrated in FIG. 27A can also perform the audio signal processing 3600.
FIG. 30 is a flow diagram of speaker selection processing 3700 according to one embodiment of the invention. The speaker selection processing 3700 is, for example, performed by a wireless communication device. As an example, the controller 3302 of the wireless communication device 3300 illustrated in FIG. 26 can perform the speaker selection processing 3700.
The speaker selection processing 3700 begins with a decision 3702 that determines whether a manual speaker selection has been made. When the decision 3702 determines that a manual speaker selection has been made, then the selected speaker is activated 3704 in accordance with the manual request. The manual speaker selection can, for example, be made by a user in a variety of ways, such as by (a) a button on the device, (b) a user selection with respect to a user interface presented on a display, (c) a sensor in accordance with certain sensing conditions, or (d) other means.
On the other hand, when the decision 3702 determines that a manual speaker selection has not been made, then device condition information is obtained 3706. The device condition information can result from one or more sensors integral with or coupled to the device. The appropriate speaker to be selected is then determined 3708 based upon the device condition information. For example, if the wireless communication device were placed against the user's ear, then a sensor could detect (e.g., estimate) such placement and, as a result, an earpiece-type speaker would be used. On the other hand, if the device is determined (e.g., estimated) to be at least a certain distance away from an object (such as the user's head or ear), then the directional speaker can be utilized. In any case, the appropriate speaker is then activated 3710. Following the operation 3704 or 3710, the speaker selection processing 3700 is complete and ends.
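A minimal sketch of this selection logic follows. The manual-choice argument, the proximity-sensor interface and the 5 cm threshold are illustrative assumptions rather than values or APIs taken from the text.

def select_speaker(device, manual_choice=None, near_threshold_cm=5.0):
    """Schematic of the speaker selection processing 3700 (FIG. 30)."""
    if manual_choice is not None:                   # decision 3702 / operation 3704
        device.activate(manual_choice)
        return manual_choice

    distance_cm = device.proximity_sensor.read()    # operation 3706 (hypothetical sensor)
    if distance_cm < near_threshold_cm:             # device held against the ear
        choice = "ear_speaker"
    else:                                           # device held away from the user
        choice = "directional_speaker"
    device.activate(choice)                         # operation 3710
    return choice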
FIG. 31 is a diagram indicating exemplary conditions that can be utilized to select the appropriate speaker. The speaker selection processing 3700 and the exemplary conditions shown in FIG. 31 assume that the wireless communication device has multiple speakers to select from, at least one of which is a directional speaker and at least another of which is a traditional speaker.
Assume again that the wireless communication device is a mobile phone. The mode selection between the normal or traditional mode and the two-way or directional-speaker mode can be achieved manually or automatically. FIG. 31 shows examples of different techniques to select the mode for the mobile telephone. In one embodiment, mode selection can be achieved through a switch integrated into the mobile telephone. The switch can be electrical, mechanical or electro-mechanical. For example, a mechanical switch can be located right next to the traditional speaker. When the traditional speaker is against the user's ear, the switch will be pressed and the traditional speaker will be activated.
In another example, mode selection can be determined based on a distance. The mobile telephone can include a sensor to sense the distance the mobile telephone (e.g., its ear speaker region) is from a surface. For example, such a sensor can use a light beam (e.g., infrared beam) to sense the distance. When the distance is very short, then the normal mode can be automatically selected, and when the distance is greater than that short distance, then the mobile telephone is deemed not to be against the user's ear, so the two-way mode is automatically selected. One way to detect distance based on an infrared beam is to measure the intensity of the reflected beam. If the reflecting surface is very close to the infrared source, the intensity of the reflected beam would be high. However, if the reflecting surface is 12" or more away, the intensity would be relatively much lower. As a result, by measuring the intensity of the reflected beam, distances can be inferred.
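One way to turn the reflected-intensity reading into a distance estimate is to assume an inverse-square fall-off and calibrate against a known reference distance. The sketch below makes that assumption explicit; the reference reading, reference distance and mode threshold are illustrative, not measured values from the text.

import math

def estimate_distance_cm(measured_intensity, ref_intensity=1.0, ref_distance_cm=2.0):
    """Infer distance from reflected IR intensity, assuming intensity ~ 1/d^2.

    ref_intensity is a hypothetical calibrated reading taken at ref_distance_cm.
    """
    if measured_intensity <= 0:
        return float("inf")
    return ref_distance_cm * math.sqrt(ref_intensity / measured_intensity)

def choose_mode(measured_intensity, near_cm=8.0):
    """Normal mode when the phone is near the ear, two-way mode otherwise (threshold assumed)."""
    return "normal" if estimate_distance_cm(measured_intensity) < near_cm else "two-way"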
In yet another example, mode selection can be based on orientation. If the mobile telephone is substantially in a vertical orientation (e.g., within 45 degrees from the vertical), the mobile telephone will operate in the two-way mode. However, if the mobile telephone is substantially in a horizontal orientation (e.g., within 30 degrees from the horizontal), the mobile telephone will operate in the normal mode. A gyro (gyroscope) in the mobile telephone can be used to determine the orientation of the mobile telephone. In still another example, mode selection can be based on usage. For example, if the mobile telephone is receiving user input via its integral keypad, acting as a video phone, or playing a video, then the mobile telephone can be set to operate in the two-way mode.

FIG. 32A is a perspective view of a personal digital assistant 3900 according to another embodiment of the invention. The personal digital assistant 3900 is generally similar to the personal digital assistant 3200 shown in FIG. 25. However, the personal digital assistant 3900 further includes a card 3902 that is inserted into a card slot of the personal digital assistant 3900. The card 3902 is an add-on card that provides wireless communication capabilities as well as audio and video capabilities for the personal digital assistant 3900. More particularly, the card 3902 includes a directional speaker 3904, a camera 3906, a microphone 3908 and an antenna 3910. The directional speaker 3904 provides confined audio output in a particular direction as noted above with respect to other embodiments. The camera 3906 provides video input capabilities to the personal digital assistant 3900. The microphone 3908 allows audio input. The antenna 3910 is used for wireless communications. Hence, the card 3902 allows the personal digital assistant 3900, which otherwise does not support wireless communication or audio-video features, to operate as a video phone or participate in video conferences. In this regard, the user's audio output (voice) can be picked up by the microphone 3908, and the user's face or other desired picture or video can be acquired by the camera 3906. The user of the personal digital assistant 3900 can then hear incoming audio by way of the directional speaker 3904, which through its directional characteristics provides a certain degree of privacy to the user. Further, video input can be displayed on the display 3204 for the benefit of the user.
The card 3902 can include circuitry within the housing of the card 3902 to support the functionality offered by the card 3902. The circuitry can pertain to various discrete electronic devices and/or integrated circuits. The circuitry can thus supplement the circuitry of the personal digital assistant 3900.
Although the card 3902 includes wireless communication capabilities, a microphone, a directional speaker and a camera, it should be understood that other cards that can be used in a similar manner need not support each of these items. For example, in one embodiment, the add-on card could simply provide a directional speaker 3904 and its associated circuitry (e.g., audio conversion apparatus).
FIG. 32B is a perspective view of a personal digital assistant 3920 according to another embodiment of the invention. The personal digital assistant 3920 is also generally similar to the personal digital assistant 3200 shown in FIG. 25. However, the personal digital assistant 3920 further includes a card 3922 that is inserted into a card slot of the personal digital assistant 3920.
The card 3922 is an add-on card that provides directional audio capabilities for the personal digital assistant 3920. The card 3922 includes a directional speaker 3904. The directional speaker 3904 provides confined audio output in a particular direction as noted above with respect to other embodiments. The personal digital assistant 3920 may or may not already support various other communications capabilities such as audio or video input, wireless voice communications, and wireless data transfer. The card 3922 can include circuitry within the housing of the card 3922 to support the directional speaker 3924. The circuitry can pertain to various discrete electronic devices and/or integrated circuits. The circuitry can thus supplement the circuitry of the personal digital assistant 3920. Alternatively, the card 3922 may rely significantly on circuitry within the personal digital assistant 3920.
The card 3902, 3922 can also take various forms. In one example, the card 3902, 3922 is a rectangular card often known as a PC-CARD or PCMCIA card. In another example, the card 3902, 3922 is of a smaller scale than a PC-CARD or PCMCIA card, such as a mini-card. In yet another example, the card 3902, 3922 is a peripheral device that plugs directly into a peripheral port (e.g., USB or FireWire), or is a peripheral device that is tethered to the personal digital assistant through a wire such as shown in FIG. 33.
FIG. 33 is a perspective view of a mobile telephone 4000 and a peripheral attachment 4002. The mobile telephone 4000 includes a microphone 4004 and an ear speaker 4006. The peripheral attachment 4002 is an add-on to the mobile telephone 4000 that provides an external speaker arrangement for use by the user of the mobile telephone 4000. More particularly, the peripheral attachment 4002 includes a base 4008 that supports and positions a directional speaker 4010. The directional speaker 4010 has characteristics as noted above, namely, directionally constrained audio sound output. The base 4008 supports the directional speaker 4010; by repositioning the base 4008, the particular direction in which the constrained audio output is directed can be altered. The direction of the audio output can also be adjusted electronically by the techniques described above.
The base 4008 is also connected to a cord 4012 that, in turn, has a connector 4014. The connector 4014 can plug into a receptacle 4016 of the mobile telephone 4000. In one example, the receptacle 4016 pertains to a headset jack or external speaker connector associated with the mobile telephone 4000. The base 4008 contains electronics to convert the standard audio signals that would be delivered to the base 4008 via the receptacle 4016 of the mobile telephone 4000. The electronic circuitry (e.g., the pre-processing circuits in FIG. 27A) would then convert the audio signals to ultrasonic drive signals that would be used to drive the directional speaker 4010. The power necessary for the electronic circuitry within the base 4008 can be supplied by a battery or by a connection to a power source. The connection can be to a separate power source or to the power source associated with the mobile telephone 4000. Such a connection can be through the cord 4012 or another cord. In another example, the receptacle 4016 can pertain to a peripheral port (e.g., Universal Serial Bus (USB) or FireWire, etc.). If the port provides both data and power, the electronics within the base 4008 can be powered via the cable of the peripheral port. Still further, such ports can transmit data signals to the base 4008, which can produce the drive signals for the directional speaker 4010. In other words, at least a portion of the pre-processing operations can be performed by the mobile telephone 4000. In such an embodiment, the electronics required in the base 4008 can be reduced as compared to other embodiments because electronic capabilities (e.g., circuitry) in the mobile telephone 4000 can be used to perform some of the operations needed to operate the directional speaker 4010 of the peripheral attachment 4002.
FIG. 34 is a diagram depicting additional applications associated with the present invention.
A number of embodiments have been described where the portable electronic device with a directional speaker is a mobile telephone. However, the invention can be applied to various other applications, with a number of examples shown in FIG. 34. These various embodiments can be used separately or in combination. In one embodiment, the device can be an audio unit, such as an MP3 player, a CD player or a radio. Such systems can be considered one-way communication systems.
In another embodiment, the device can be an audio output device, such as for a stereo system, television or a video game player. In this embodiment, the device may not be portable. For example, the user can be playing a video game and, instead of having the audio signals transmitted by a normal speaker, the audio signals, or a representation of the audio signals, are directed to a directional speaker. The user can then hear the audio signals in a directional manner, reducing the chance of annoying or disturbing people in the user's immediate environment.
In another embodiment, the device can, for example, be used as a hearing aid. Different embodiments of hearing enhancement through personalizing or tailoring to the hearing of the user have been described in this application.
In one embodiment, the wireless communication device can function both as a hearing aid and a cell phone. When there is no incoming call, the system functions as a hearing aid. On the other hand, when there is an incoming call, instead of capturing audio signals in its vicinity, the system transmits the incoming call through the directional speaker to be received by the user. In yet another embodiment, the device can include a monitor or a display. A user can watch television or video signals in public, again with reduced possibility of disturbing people in the immediate surroundings because the audio signals are directional.
The device can also include the capability to serve as a computation system, such as in a personal digital assistant (PDA) or a notebook computer. For example, as a user is working on the computation system for various tasks, the user can simultaneously communicate with another person in a hands-free manner. Data generated by a software application the user is working on using the computation system can be transmitted digitally with the voice signals to a remote device.
In yet another embodiment, the device can be a personalized system. The system can selectively amplify different audio frequencies by different amounts based on user preference or user hearing characteristics. In other words, the audio output can be tailored to the hearing of the user. The personalization process can be done periodically, such as once every year, similar to periodic re-calibration. Such re-calibration can be done by another device, and the results can be stored in a memory device. The memory device can be a removable media card, which can be inserted into the system to personalize the amplification characteristics of the directional speaker as a function of frequency. The system can also include an equalizer that allows the user to personalize the amplitude of the speaker audio signals as a function of frequency.
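One way to picture this frequency-dependent personalization is as a set of per-band gains applied to the audio before pre-processing. The sketch below assumes a stored profile of (band, gain) entries and uses a simple FFT-domain filter; the band edges and gain values are arbitrary examples, not values from the text.

import numpy as np

def personalize(audio, fs, profile):
    """Apply user-specific gain (in dB) per frequency band via a simple FFT filter.

    profile: list of (low_hz, high_hz, gain_db) tuples, e.g. a stored hearing profile.
    """
    spectrum = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(len(audio), 1 / fs)
    for low_hz, high_hz, gain_db in profile:
        band = (freqs >= low_hz) & (freqs < high_hz)
        spectrum[band] *= 10 ** (gain_db / 20)
    return np.fft.irfft(spectrum, n=len(audio))

# Example: boost the 2-4 kHz band by 12 dB for a user with reduced sensitivity there.
fs = 16_000
tone = np.sin(2 * np.pi * 3000 * np.arange(fs) / fs)
boosted = personalize(tone, fs, [(2000, 4000, 12.0)])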
The device can also be personalized based on the noise or sound level in the vicinity of the user. The device can sense the noise or sound level in its immediate vicinity and change the amplitude characteristics of the audio signals as a function of the noise or sound level.

A number of embodiments have been described with the speaker being directional. In one embodiment, a speaker is considered directional if it is driven by ultrasonic signals. Such a directional speaker is also referred to herein as an ultrasonic speaker. Typically, the ultrasonic speaker produces an ultrasonic output that is converted into an audio output by mixing in air. For example, the ultrasonic output results from modulating the audio output with an ultrasonic carrier wave, and the ultrasonic output is thereafter self-demodulated through non-linear mixing in air to produce the audio signals.
The device is also applicable in a moving vehicle, such as a car, a boat or a plane. Again, a directional audio conversion apparatus can be integrated into or attachable to the moving vehicle. As an example, the moving vehicle can be a car. At the front panel or dashboard of the car, there can be a USB, PCMCIA or other type of interface port. The apparatus can be inserted into the port to generate directional audio signals.
In yet another embodiment, one or more directional speakers are incorporated into a moving vehicle. The speakers can be used for numerous applications, such as personal entertainment and communication applications, in the vehicle.
In one embodiment, the directional speaker emits ultrasonic beams. The frequency of the ultrasonic beams can be, for example, in the 40 kHz range, and the beams can be diverging. For example, a 3-cm (diameter) emitter generates an ultrasonic beam that diverges to a 30-cm (diameter) cone after propagating for a distance of 20 to 40 cm. With the diameter of the beams increased by 10 dB, the ultrasonic intensity is reduced by around 20 dB. In another embodiment, the frequency of the beams is in a higher range, such as the 200 to 500 kHz range. Such higher frequency ultrasonic beams experience higher attenuation in air, such as in the 8 to 40 dB/m range depending on the frequency. In yet another embodiment, the beams with higher ultrasonic frequencies, such as 500 kHz, are diverging beams also. Such embodiments with higher frequencies and diverging beams are also suitable for other applications, such as in areas where the distance of travel is short, for example, 20 cm between the speaker and the ear.
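The relationship quoted above between beam diameter and intensity is simply area spreading: if the diameter grows by a factor of ten (10 dB), the beam area grows by a factor of one hundred, so the intensity drops by about 20 dB. A two-line check with the 3 cm and 30 cm diameters above:

import math

d1_cm, d2_cm = 3.0, 30.0                              # emitter and diverged beam diameters
diameter_ratio_db = 10 * math.log10(d2_cm / d1_cm)    # 10 dB increase in diameter
intensity_drop_db = 20 * math.log10(d2_cm / d1_cm)    # beam area ~ diameter^2, so ~20 dB drop
print(diameter_ratio_db, intensity_drop_db)           # 10.0 20.0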
Regarding the location of the speaker, it can be mounted directly above where a user should be, such as on the rooftop of the vehicle above the seat. The speaker can be located closer to the back than the front of the seat because when a person sits, the person typically leans on the back of the seat. In another embodiment, the directional speaker is mounted slightly further away, such as at the dome light of a car, with ultrasonic beams directed approximately at the headrest of a user's seat inside the car. For example, one speaker is located in the vicinity of the corner of the dome light that is closest to the driver, with the direction of the signals pointing towards the approximate location of the head of the driver. Signals not directly received by the intended recipient, such as the driver, can be scattered by the driver and/or the seat fabrics, thereby reducing the intensity of the reflected signals to be received by other passengers in the car.
Instead of emitting ultrasonic signals, in one embodiment, the speakers can emit audio beams, with a directivity that depends on the physical structure of the speaker. For example, the speaker is a horn or cone or other similar structure. The directivity of such a speaker depends on the aperture size of the structure. For example, a 10-cm horn has a λ/D of about 1 at 3 kHz, and a λ/D of about 0.3 at 10 kHz. Thus, at low frequency, such an acoustic speaker offers relatively little directivity. Still, the intensity of the beams goes as 1/R², with R being the distance measured from, for example, the apex of the horn. To achieve isolation, proximity becomes more relevant. In such an embodiment, the speaker is positioned close to the user. Assume that the speaker is placed directly behind the passenger's ears, such as around 10 to 15 cm away. The speaker can be in the headrest or head cushion of the user's seat. Or, the speaker can be in the user's seat, with the beam directed towards the user. If other passengers in the vehicle are spaced at least 1 meter away from the user, based on propagation attenuation (or attenuation as the signals travel in air), the sound isolation effect is around 16 to 20 dB. The structure of the horn or cone can provide an additional isolation effect, such as another 6 to 10 dB.
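The λ/D and isolation figures above can be checked with a short calculation. This sketch assumes a speed of sound of roughly 343 m/s, a listener at about 12.5 cm, and another passenger at 1 m, matching the distances discussed in the text.

import math

c = 343.0                       # approximate speed of sound in air, m/s
D = 0.10                        # 10 cm horn aperture

for f_hz in (3_000, 10_000):
    lam = c / f_hz
    print(f"lambda/D at {f_hz} Hz: {lam / D:.2f}")   # ~1.1 at 3 kHz, ~0.34 at 10 kHz

# Spherical-spreading (1/R^2) isolation between a listener at ~12.5 cm and a passenger at ~1 m.
r_user, r_other = 0.125, 1.0
isolation_db = 20 * math.log10(r_other / r_user)
print(f"spreading isolation: ~{isolation_db:.0f} dB")    # ~18 dB, within the 16-20 dB range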
In one embodiment, the user can control one or more attributes of the beams. For example, the user can control the power, direction, distance or coverage of the beams. Regarding the location of the controls, if the vehicle is a car, the controls can be on the dashboard of the vehicle. In another embodiment, the controls are in the armrest of the seat the user is sitting on.
The controls can be mechanical. For example, the speaker is at the dome light, and there can be a rotational mechanism at the dome light area. The rotational mechanism allows the user to adjust the direction of the beam as desired. In one embodiment, the rotational mechanism allows two-dimensional rotations. For example, the beams are emitted at a 30-degree angle from the rooftop, and the rotational mechanism allows the beams to be rotated 180 degrees around the front side of the vehicle. In another embodiment, the elevation angle can also be adjusted, such as in the range of 20 to 70 degrees from the rooftop.
Another mechanical control can be used to turn the speaker off. For example, when the user stands up from the user's seat, after a preset amount of time, such as 3 seconds, the speaker is automatically turned off.
The controls can also be in a remote controller. The remote controller can use Bluetooth, WiFi, ultrasonic, infrared or other wireless technologies. The remote controller can also include a fixed or detachable display. The remote controller can be a portable device.
Regarding other attributes of the beam, as to the power level of the signals, the sound level does not have to be very high. For example, the sound level can be about 60 dB SPL at 5 cm away from the speaker. The content of the signals from the speaker can be accessed in a number of ways. In one embodiment, the content, which can be from a radio station, is wirelessly received by the speaker. For example, the content can be received through the Internet, a WiFi network, a WiMax network, a cell-phone network or other types of networks.
The speaker does not have to receive the content directly from the broadcaster, or the source. In one embodiment, the vehicle receives the content wirelessly from the source, and then, through a wired or a wireless connection, the vehicle transmits the content to the speaker.
In yet another embodiment, the content can be selected from a multimedia player, such as a CD player, in the vehicle. The multimedia player can receive from multiple channels to support multiple users in the vehicle. Again, the contents or channels can be received from a broadcast station and selected locally. Or, the content can be created on-demand and streamed to the user demanding it by a wireless server station. In yet another embodiment, the content can be downloaded to a multimedia player from a high-speed wireless network in its entirety before being played. Another type of control is to select the radio station or a piece of music on a multimedia player. Again, these types of selection control can be from a fixed location in the vehicle, such as control knobs at the dashboard, console, armrest, door or seat of the vehicle. Or, as another example, the selection controller can be in a portable device. A number of embodiments have been described regarding one speaker. In yet another embodiment, there can be more than one speaker for a user. The multiple speakers allow the creation of stereo or surround sound effects.
As described regarding the multimedia player, the player can receive from multiple channels to support multiple users in the vehicle. If there is more than one user in the vehicle, each user can have a directional speaker or a set of directional speakers. Regarding the locations of the speakers for multiple users, in one embodiment, they are centralized. All of the speakers are, for example, at the dome light of a vehicle. Each user has a corresponding set of directional beams, radiating from the dome towards the user. Or, the speakers can be distributed. Each user can have a speaker mounted, for example, on the rooftop above where the user should be seated, or in the user's headrest. Regarding control, each user can independently control the signals to that user. For example, a user's controller can control the user's own set of beams, or select the content of what the user wants to hear. Each user can have a remote controller. In another embodiment, the controller for a user is located at the armrest, seat or door for that user.
Set Top Box
A number of embodiments of the invention pertain to a directional audio delivery device for an audio system. The audio system can be a stereo system, a DVD player, a compact disc player, a music amplifier or a musical instrument, a VCR, a television, a home-entertainment system, or other audio system. It typically delivers audio output based on, or pertaining to, certain audio signals. These audio signals can be generated by the audio system, or they can be transmitted to and received by the audio system. The reception by the audio system can be wireless or wireline, such as through cables. Without the directional audio delivery device, the audio system produces audio sound for the benefit of any persons in its general vicinity. The delivery device converts the audio signals into directional audio output that is substantially confined within a beam having a beam width. The directional audio output is targeted to one or more persons who would like to hear the audio output. In one embodiment, these one or more persons can also control a number of attributes of the beam. Other persons in the same vicinity who are not desirous of hearing the audio output would only hear a substantially lower level of the audio output. Hence, they are less disturbed by the unwanted audio sounds.
The audio system with its corresponding directional audio delivery device can be known as a directional audio apparatus. The directional device can be incorporated into the audio system, or can be confined in a separate housing, such as in a set-top box. The set-top box can be tethered or wirelessly coupled to the audio system. In this embodiment, if the corresponding audio signals are not generated by the audio system but are received externally, the audio signals can be received either by the set-top box or by the audio system.
FIG. 35 is a block diagram of a directional audio apparatus 5100 with an audio system 5102 and a directional audio delivery device 5104, according to one embodiment of the invention.
FIG. 36A is a block diagram of a directional audio delivery device 5200 according to one embodiment of the invention. The directional audio delivery device 5200 is, for example, suitable for use as the directional audio delivery device 5104 illustrated in FIG. 35.
The directional audio delivery device 5200 includes audio conversion circuitry 5202 and a directional speaker 5204. The audio conversion circuitry 5202 receives audio signals (Audio-In). The reception can be from the audio system 5102, or can be from another device. The audio signals can be, for example, electrical signals from the audio system 5102, or audio waves wirelessly transmitted to be received by the audio conversion circuitry. The received audio signals can then be pre-processed, and are then converted into ultrasonic signals that are supplied to the directional speaker 5204. In one embodiment, the directional speaker 5204 is an ultrasonic speaker that produces ultrasonic output to generate audio output. The ultrasonic output carries the audio output to be delivered in a directionally constrained manner. The directional speaker 5204 thus allows the audio output to be directionally constrained and delivered to desired areas.

FIG. 36B is a block diagram of a directional audio delivery device 5220 according to another embodiment of the invention. The directional audio delivery device 5220 is, for example, suitable for use as the directional audio delivery device 5104 illustrated in FIG. 35. The directional audio delivery device 5220 includes audio conversion circuitry 5222, a beam-attribute control unit 5224 and a directional speaker 5226. The audio conversion circuitry 5222 converts the received audio signals into ultrasonic signals. The beam-attribute control unit 5224 controls one or more attributes of the audio output.
One attribute can be the beam direction. The beam-attribute control unit 5224 receives a beam attribute input, which in this example is related to the direction of the beam. This can be known as a direction input. The direction input provides information to the beam-attribute control unit 5224 pertaining to a propagation direction of the ultrasonic output produced by the directional speaker 5226. The direction input can be a position reference, such as a position for the directional speaker 5226 (relative to its housing), the position of a person desirous of hearing the audio sound, or the position of an external electronic device (e.g., remote controller). Hence, the beam-attribute control unit 5224 receives the direction input and determines the direction of the audio output.
Another attribute can be the desired distance traveled by the beam. This can be known as a distance input. In one embodiment, the ultrasonic frequency of the ultrasonic output can be adjusted. By controlling the ultrasonic frequency, the desired distance traveled by the beam can be adjusted. This will be further explained below. Thus, with the appropriate control signals, the directional speaker 5226 generates the desired audio output accordingly.
FIG. 37A is a diagram illustrating a representative arrangement 5300 suitable for use with the invention. The representative arrangement 5300 uses a directional audio apparatus 5302 to deliver audio output as an ultrasonic cone 5304 (or beam) of ultrasonic output directed towards a first user (user-1). The directional audio apparatus 5302 can, for example, be the directional audio apparatus 5100, using any implementation of a directional audio delivery device. Note that in the representative arrangement 5300, a second user (user-2) and a third user (user-3) are also in the vicinity of the directional audio apparatus 5302. However, in this example, it is assumed that only the first user (and not the second and third users) is desirous of hearing the audio sound. As a result, the directional audio apparatus 5302 produces the ultrasonic output in a directionally constrained manner such that its cone 5304 (or beam) is directed towards the first user (user-1). After the ultrasonic output is mixed or demodulated in air, the resultant audio sound is delivered to the first user (user-1). Only a resultant audio sound of significantly lower level is received by the second user (user-2) and the third user (user-3). Consequently, they are not disturbed by the audio output that is being heard by the first user (user-1).
Another way to control the audio output level to be received by other users is through the distance input. By controlling the distance the ultrasonic output travels, the directional audio apparatus 5302 can minimize the audio output that might reach other persons (i) positioned behind the first user (user-1) and not shown in the figure, or (ii) positioned at a location that would receive the audio output upon its reflection from surfaces behind the first user (user-1).

FIG. 37B is a diagram of a representative building layout 5320 illustrating one application of the present invention. The representative building layout 5320 is used to illustrate how a directional audio apparatus 5328 according to the invention can be utilized. The representative building layout 5320 includes a first room 5322, a second room 5324 and a third room 5326. The first room 5322 can, for example, be a family room. The first room 5322 includes a directional audio apparatus 5328. A first user (u-1), a second user (u-2) and a third user (u-3) are in the first room 5322. The directional audio apparatus 5328 can deliver audio sound in a directionally confined manner. The directional audio apparatus 5328 can, for example, be the directional audio apparatus 5100, using any implementation of a directional audio delivery device in the present invention.
As shown in FIG. 37B, the directional audio apparatus 5328 delivers a constrained cone 5330 (beam) of audio output or sound towards the first user (u-1). Note that the audio output is substantially constrained within the cone 5330. As a result, the second user (u-2) and the third user (u-3) do not hear the audio output produced by the directional audio apparatus 5328 in any significant way. Some of the sound from the cone 5330 might be reflected or dispersed off a rear wall and received by the second and third users. If so, the sound would have attenuated to a substantially lower level. In one embodiment, the distance covered by the cone 5330 of sound can be adjusted.

FIG. 38 is a flow diagram of directional audio delivery processing 5400 according to an embodiment of the invention. The directional audio delivery processing 5400 is, for example, performed by a directional audio delivery device, such as the directional audio delivery device 5104 illustrated in FIG. 35. More particularly, the directional audio delivery processing 5400 is particularly suitable for use by the directional audio delivery device 5220 illustrated in FIG. 36B. The directional audio delivery processing 5400 initially receives 5402 audio signals for directional delivery. The audio signals can be supplied by an audio system. In addition, a beam attribute input is received 5404. As previously noted, the beam attribute input is a reference or indication of one or more attributes regarding the audio output to be delivered. After the beam attribute input has been received 5404, one or more attributes of the beam are determined 5406 based on the attribute input. If the attribute is the direction of the beam, the input can set the constrained delivery direction of the beam. The constrained delivery direction is the direction in which the output is delivered. The audio signals that were received are converted 5408 to ultrasonic signals with appropriate attributes, which may include one or more of the determined attributes. Finally, the directional speaker is driven 5410 to generate ultrasonic output, again with appropriate attributes. In the case where the direction of the beam is set, the ultrasonic output is directed in the constrained delivery direction. Following the operation 5410, the directional audio delivery processing 5400 is complete and ends. Note that the constrained delivery direction can be altered dynamically or periodically, if so desired.

FIG. 39 shows examples of attributes 5500 of the constrained audio output according to the invention. The attributes can be for the beam-attribute control unit 5224. One attribute, which has been previously described, is the direction 5502 of the beam. Another attribute can be the beam width 5504. In other words, the width of the ultrasonic output can be controlled. In one embodiment, the beam width is the width of the beam at the desired position. For example, if the desired location is 10 feet directly in front of the directional audio apparatus, the beam width can be the width of the beam at that location. In another embodiment, the width 5504 of the beam is defined as the width of the beam at its full-width-half-max (FWHM) position.
The desired distance 5506 to be covered by the beam can be set. In one embodiment, the rate of attenuation of the ultrasonic output/audio output can be controlled to set the desired distance. In another embodiment, the volume or amplification of the beam can be changed to control the distance to be covered. Through controlling the desired distance, other persons in the vicinity of the person receiving the audio signals (but not adjacent thereto) would hear little or no sound. If sound were heard by such other persons, its sound level would have been substantially attenuated (e.g., any sound heard would be faint and likely non-discernable). There can be more than one beam. Hence, one attribute of the beam is the number 5512 of beams present. Multiple beams can be utilized, such that multiple persons are able to receive the audio signals via the ultrasonic output by the directional speaker (or a plurality of directional speakers). Each beam can have its own attributes.
These attribute inputs can be provided either automatically, such as periodically, or manually, such as at the request of a user.
There can also be a dual mode operation 5514, with a directional mode and a normal mode. The directional audio apparatus can include a normal speaker. There are situations where a user would prefer the audio output to be heard by everyone in a room, for example. In such a situation, the user can deactivate the directional delivery mechanism of the apparatus, or can allow the directional audio apparatus to channel the audio signals to the normal speaker to generate the audio output. In one embodiment, a normal speaker generates its audio output based on audio signals, without the need for generating ultrasonic outputs. However, a directional speaker requires ultrasonic signals to generate its audio output.
There are also other types of beam attribute inputs. For example, the inputs can be the position 5508 and the size 5510 of the beam. The position input can pertain to the position of a person desirous of hearing the audio sound, or the position of an electronic device (e.g., remote controller). Hence, the beam-attribute control unit 5224 receives the beam position input and the beam size input, and then determines how to drive the directional speaker 5226 to output the audio sound to a specific position with the appropriate beam width. Then, the beam-attribute control unit 5224 produces drive signals, such as ultrasonic signals and other control signals.
The drive signals controls the directional speaker 5506 to generate the ultrasonic output towards a certain position with a particular beam size.
FIG. 40 is another representative building layout 5600 illustrating an application ofthe present invention. The representative building layout 5600 is generally similar to the representative building layout 5320 illustrated in FIG. 37B. In this example, the representative building layout 5600 includes a first room 5602, a second room 5604 and a third room 5606. Although a first user (u-1), a second user (u-2) and a third user (u-3) are all within the first room 5602, only the first user (u-1) and the second user (u-2) want to hear the audio sound from an audio system. Accordingly, the first room 5602 includes a directional audio apparatus 5608 to output a cone 5610 (or beam) of ultrasonic output towards the first user (u-1) and the second user (u-2). Note that the cone 5610 can have a greater width or footprint than does the cone 5330 illustrated in FIG. 37B so that it substantially encompasses both the first user (u-1) and the second user (u-2). Nevertheless, the third user (u-3) is not significantly disturbed by the audio sound that the first and second users hear by way ofthe ultrasonic output from the directional audio apparatus 5608.
Note that the cone 5610 or the beam does not have to propagate directly to the first (u-1) and the second user (u-2). In one embodiment, the beam can propagate towards the ceiling of the building, which reflects the beam back towards the floor to be received by the users. One advantage of such an embodiment is to lengthen the propagation distance to broaden the width of the beam when it reaches the users. Another feature of this embodiment is that the users do not have to be in the line-of-sight ofthe directional audio apparatus.
FIG. 41 is a flow diagram of directional audio delivery processing 5700 according to another embodiment ofthe invention. The directional audio delivery processing 5700 is, for example, performed by the directional audio delivery device 5104 illustrated in FIG. 35. More particularly, the directional audio delivery processing 5700 is particularly suitable for use by the directional audio delivery device 5220 illustrated in FIG. 36B.
The directional audio delivery processing 5700 receives 5702 audio signals for directional delivery. The audio signals are provided by an audio system. In addition, two beam attribute inputs are received, and they are a position input 5704, and a beam size input 5706. Next, the directional audio delivery processing 5700 determines 5708 a delivery direction and a beam size based on the position input and the beam size input. The desired distance to be covered by the beam can also be determined. The audio signals are then converted 5710 to ultrasonic signals, with the appropriate attributes. For example, the frequency and/or the power level ofthe ultrasonic signals can be generated to set the desired travel distance ofthe beam. Thereafter, a directional speaker (e.g., ultrasonic speaker) is driven 5712 to generate ultrasonic output in accordance with, for example, the delivery direction and the beam size. In other words, when driven 5712, the directional speaker produces ultrasonic output (that carries the audio sound) towards a certain position, with a certain beam size at that position. In one embodiment, the ultrasonic signals are dependent on the audio signals, and the delivery direction and the beam size are used to control the directional speaker. In another embodiment, the ultiasonic signals can be dependent on not only the audio signals but also the delivery direction and the beam size. Following the operation 5712, the directional audio delivery processing 5700 is complete and ends.
FIG. 42A is a flow diagram of directional audio delivery processing 5800 according to yet another embodiment ofthe invention. The directional audio dehvery processing 5800 is, for example, suitable for use by the directional audio delivery device 5104 illustrated in FIG. 35. More particularly, the directional audio delivery processing 5800 is particularly suitable for use by the directional audio delivery device 5220 illustrated in FIG. 36B, with the beam attribute inputs being beam position and beam size received from a remote device. The directional audio delivery processing 5800 initially activates a directional audio apparatus that is capable of constrained directional delivery of audio sound. A decision 5804 determines whether a beam attribute input has been received. Here, the audio apparatus has associated with it a remote control device, and the remote control device can provide the beam attributes. Typically, the remote control device enables a user positioned remotely (e.g., but in line-of-sight) to change settings or characteristics ofthe audio apparatus. One beam attribute is the desired location ofthe beam. Another attribute is the beam size. According to the invention, a user ofthe audio apparatus might hold the remote contiol device and signal to the directional audio apparatus a position reference. This can be done by the user, for example, through selecting a button on the remote control device. This button can be the same button for setting the beam size because in transmitting beam size information, location signals can be relayed as well. The beam size can be signaled in a variety of ways, such as via a button, dial or key press, using the remote control device. When the decision 5804 determines that no attributes have been received from the remote contiol device, the decision 5804 can just wait for an input.
When the decision 5804 determines that a beam attribute input has been received from the remote control device, control signals for the directional speaker are determined 5806 based on the attribute received. If the attribute is a reference position, a delivery direction can be determined based on the position reference. If the attribute is for a beam size adjustment, control signals for setting a specific beam size are determined. Then, based on the control signals determined, the desired ultiasonic output that is constrained is produced 5812. Next, a decision 5814 determines whether there are additional attribute inputs. For example, an additional attribute input can be provided to incrementally increase or decrease the beam size. The user can adjust the beam size, hear the effect and further adjust it, in an iterative manner. When the decision 5814 determines that there are additional attribute inputs, appropriate contiol signals are determined 5806 to adjust the ultrasonic output accordingly. When the decision 5814 determines that there are no additional inputs, the directional audio apparatus can be deactivated. When the decision 5816 determines that the audio system is not to be deactivated, then the directional audio delivery processing 5800 returns to continuously output the constrained audio output. On the other hand, when the decision 5816 determines that the directional audio apparatus is to be deactivated, then the directional audio delivery processing 5800 is complete and ends.
Besides directionally constraining audio sound that is to be delivered to a user, the audio sound can optionally be additionally altered or modified in view ofthe user's hearing characteristics or preferences, or in view ofthe audio conditions in the vicinity ofthe user.
FIG. 42B is a flow diagram of an environmental accommodation process 5840 according to one embodiment ofthe invention. The environmental accommodation process 5840 determines 5842 environmental characteristics. In one implementation, the environmental characteristics can pertain to measured sound (e.g., noise) levels at the vicinity ofthe user. The sound levels can be measured by a pickup device (e.g., microphone) at the vicinity ofthe user. The pickup device can be at the remote device held on by the user. In another implementation, the environmental characteristics can pertain to estimated sound (e.g., noise) levels at the vicinity ofthe user. The sound levels at the vicinity ofthe user can be estimated, based on a position of the user/device and the estimated sound level for the particular environment. For example, sound level in a department store is higher than the sound level in the wilderness. The position ofthe user can, for example, be determined by Global Positioning System (GPS) or other triangulation techniques, such as based on infrared, radio-frequency or ultrasound frequencies with at least three non-collinear receiving points. There can be a database with information regarding typical sound levels at different locations. The database can be retrieved to access the estimated sound level based on the specific location.
After the environmental accommodation process 5840 deteπnines 5842 the environmental characteristics, the audio signals are modified based on the environmental characteristics. For example, if the user were in an area with a lot of noise (e.g., ambient noise), such as at a confined space with various persons or where construction noise is present, the audio signals could be processed to attempt to suppress the unwanted noise, and/or the audio signals (e.g., in a desired frequency range) could be amplified. One approach to suppress the unwanted noise is to intioduce audio outputs that are opposite in phase to the unwanted noise so as to cancel the noise. In the case of amplification, if noise levels are excessive, the audio output might not be amplified to cover the noise because the user might not be able to safely hear the desired audio output. In other words, there can be a limit to the amount of amplification and there can be negative amplification on the audio output (even complete blockage) when excessive noise levels are present. Noise suppression and amplification can be achieved through conventional digital signal processing, amplification and/or filtering techniques. The environmental accommodation process 5840 can, for example, be performed periodically or if there is a break in audio signals for more than a preset amount of time. The break may signify that there is a new audio stream. A user might have a hearing profile that contains the user's hearing characteristics. The audio sound provided to the user can optionally be customized or personalized to the user by altering or modifying the audio signals in view ofthe user's hearing characteristics. By customizing or personalizing the audio signals to the user, the audio output can be enhanced for the benefit or enjoyment ofthe user. FIG. 42C is a flow diagram of an audio personalization process 5860 according to one embodiment ofthe invention. The audio personalization process 5860 retrieves 5862 an audio profile associated with the user. The hearing profile contains information that specifies the user's hearing characteristics. For example, the hearing characteristics may have been acquired by the user taking a hearing test. Then, the audio signals are modified 5864 or pre-processed based on the audio profile associated with the user. The hearing profile can be supplied to a directional audio delivery device performing the personalization process 5860 in a variety of different ways. For example, the audio profile can be electronically provided to the directional audio delivery device inrough a network. As another example, the audio profile can be provided to the directional audio delivery device by way of a removable data storage device (e.g., memory card). Additional details on audio profiles and personalization to enhance hearing can be found in other sections of this patent application.
The environmental accommodation process 5840 and/or the audio personalization process 5860 can optionally be performed together with any ofthe directional audio delivery devices or processes discussed above. For example, the environmental accommodation process 5840 and/or the audio personalization process 5860 can optionally be performed together with any ofthe directional audio delivery processes 5400, 5700 or 5800 embodiments discussed above with respect to FIGs. 38, 41 and 42. The environmental accommodation process 5840 and/or the audio personalization process 5860 typically would precede the operation 5408 in FIG. 38, the operation 5710 in FIG. 41 and/or the operation 5812 in FIG. 42A. FIG. 43 A is a perspective diagram of an ultrasonic transducer 5900 according to one embodiment ofthe invention. The ultrasonic transducer 5900 can implement the directional speakers discussed herein. The ultrasonic transducer 5900 produces the ultrasonic output utilized as noted above. In one embodiment, the ultrasomc transducer 5900 includes a plurality of resonating tubes 5902 covered by a piezoelectric thin-film, such as PVDF, that is under tension, as described in other part of this application.
Mathematically, the resonance frequency f of each eigen mode (n,s) of a circular membrane can be represented by: f(n,s) = o n,s)/(27ia) * ^S/m) where a is the radius of the circular membrane,
S is the uniform tension per unit length of boundary, and
M is the mass ofthe membrane per unit area.
For different eigen modes ofthe tube structure shown in FIG. 43 A, o(0,0) = 2.4 α(0,l) = 5.52 o(0,2) = 8.65
Assume α(0,0) to be the fundamental resonance frequency, and is set to be at 50 kHz. Then, α(0,l) is 115 kHz, and α(0,2) is 180 kHz etc. The n = 0 modes are all axisymmetric modes. In one embodiment, by driving the thin-film at the appropriate frequency, such as at any ofthe axisymmetric mode frequencies, the structure resonates, generating ultrasonic waves at that frequency.
Instead of using a membrane over the resonating tubes, in another embodiment, the ultiasonic transducer is made of a number of speaker elements, such as unimorph, bimorph or other types of multilayer piezoelectric emitting elements. The elements can be mounted on a solid surface to form an array. These emitters can operate at a wide continuous range of frequencies, such as from 40 to 200 kHz.
One embodiment to contiol the distance of propagation ofthe ultrasonic output is by changing the carrier frequency, such as from 40 to 200 kHz. Frequencies in the range of 200 kHz have much higher acoustic attenuation in air than frequencies around 40 kHz. Thus, the ultrasonic output can be attenuated at a much faster rate at higher frequencies, reducing the potential risk of ultrasonic hazard to health, if any. Note that the degree of attenuation can be changed continuously, such as based on multi-layer piezoelectric thin-film devices by continuously changing the carrier frequency. In another embodiment, the degree of isolation can be changed more discreetly, such as going from one eigen mode to another eigen mode ofthe tube resonators with piezoelectric membranes.
FIG. 43B is a diagram that illustrates the ultrasomc transducer 5900 generating its beam 5904 of ultrasonic output.
The width ofthe beam 5904 can be varied in a variety of different ways. For example, a reduced area or one segment ofthe transducer 5900 can be used to decrease the width ofthe beam 5904. In the case of a membrane over resonating tubes, there can be two concentric membranes, an inner one 5910 and an outer one 5912, as shown in FIG. 43C. One can turn on the inner one only, or both at the same time with the same frequency, to contiol the beam width. FIG. 43D illustrates another embodiment 5914, with the transducer segmented into four quadrants. The membrane for each quadrant can be individually controlled. They can be turned on individually, or in any combination to contiol the width ofthe beam. In the case of directional speakers using an array of bimorph elements, reduction ofthe number of elements can be used to reduce the size ofthe beam width. Another approach is to activate elements within specific segments to control the beam width. In yet another embodiment, the width ofthe beam can be broadened by increasing the frequency ofthe ultrasonic output. To illustrate this embodiment, the dimensions ofthe directional speaker are made to be much larger than the ultrasonic wavelengths. As a result, beam divergence based on aperture diffraction is relatively small. One reason for the increase in beam width in this embodiment is due to the increase in attenuation as a function ofthe ultiasonic frequency. Examples are shown in FIGs. 43E-43G, with the ultrasonic frequencies being 40 kHz, 100 kHz and 200 kHz, respectively. These figures illustrate the audio output beam patterns computed by integrating the non-linear KZK equation based on an audio frequency at 1 kHz. The emitting surface ofthe directional speaker is assumed to be a planar surface of 20 cm by 10 cm. Such equations are described, for example, in "Quasi-plane waves in the nonlinear acoustics of confined beams," by E. A. Zabolotskaya and R. V. Khokhov, which appeared in Sov. Phys. Acoust., Vol.15, pp.35-40, 1969; and "Equations of nonlinear acoustics," by V. P. Kuznetsov, which appeared in Sov. Phys. Acoust., Vol.16, pp.467-470, 1971.
In the examples shown in FIGs. 43E-43G, the acoustic attenuations are assumed to be 0.2 per meter for 40 kHz, 0.5 per meter for 100 kHz and 1.0 per meter for 200 kHz. The beam patterns are calculated at a distance of 4 m away from the emitting surface and normal to the axis of propagation. The x-axis ofthe figures indicates the distance ofthe test point from the axis (from -2 m to 2 m), while the y-axis ofthe figures indicates the calculated acoustic pressure in dB SPL ofthe audio output at the +eεt point. The emitted power for the three examples are normalized so that the received power for the three audio outputs on-axis are roughly the same (e.g. at 56 dB SPL 4 m away). Comparing the figures, one can see that the lowest carrier frequency (40 kHz in FIG. 43E) gives the narrowest beam and the highest carrier frequency (200 kHz in FIG. 43G) gives the widest beam. One explanation can be that higher acoustic attenuation reduces the length ofthe virtual array of speaker elements, which tends to broaden the beam pattern. Anyway, in this embodiment, a lower carrier frequency provides better beam isolation, with privacy enhanced. As explained, the audio output is in a constrained beam for enhanced privacy. Sometimes, although a user would not want to disturb other people in the immediate neighborhood, the user may want the beam to be wider or more divergent. A couple may be sitting together to watch a movie. Their enjoyment would be reduced if one of them cannot hear the movie because the beam is too narrow. In a number of embodiments to be described below, the width ofthe beam can be expanded in a controlled manner based on curved structural surfaces or other phase-modifying beam forming techniques.
FIG. 44A illustrates one approach to diverge the beam based on an ultrasonic speaker with a convex emitting surface. The surface can be structurally curved in a convex manner to produce a diverging beam. The embodiment shown in FIG.44A has a spherical-shaped ultrasonic speaker 6000, or an ultrasonic speaker whose emitting surface of ultrasonic output is spherical in shape. In the spherical arrangement 6000, a spherical surface 6002 has a plurality of ultrasonic elements 6004 affixed (e.g. bimorphs) or integral thereto. The ultrasonic speaker with a spherical surface 6002 forms a spherical emitter that outputs an ultiasonic output within a cone (or beam) 6006. Although the cone will normally diverge due to the curvature ofthe spherical surface 6002, the cone 6006 remains directionally constrained.
In an embodiment where speaker elements are affixed or coupled to a spherical surface, each ultrasonic element 6004 is oriented to point towards the center of a sphere of which the spherical surface 6002 is a part of. In one embodiment where elements are integral to a spherical or curved surface, there can be a plurality of resonating tubes 6026, as shown in FIG. 44B. The length-wise axis of each resonating cavity 6026 points to the center ofthe sphere of which the spherical surface 6002 is a part of. The resonating tubes 6026 can be formed in a single fabrication step so as to ensure their uniformity. This can be done, for example, by form- pressing all ofthe holes at the same time. In the embodiment where the ultrasonic speaker includes resonating tubes, there is a thin- film piezoelectric membrane mounted on one side ofthe tubes. It can be either on the convex side 6034 or the concave side 6036, as shown in FIG. 44B. In the embodiment 6010 shown in FIG. 44B, the membrane is assumed to be mounted on the concave side. After the membrane is mounted, vacuum can be formed to have the membrane press onto the tubes. Voltages can be applied to the membrane to generate the ultrasonic output. This creates an emitting surface that is structurally curved in a concave manner. As shown in FIG. 44B, the beam produced 6040 initially converges and then diverges.
The degree of divergence is determined, for example, by the curvature ofthe surface 6002 or 6036. In one embodiment, referring back to FIG. 44A, the radius ofthe spherical surface is about 40 cm, its height 6006 is about 10 cm and its width 6008 is about 20 cm.
Diverging beams can also be generated even if the emitting surface ofthe ultiasonic speaker is a planar surface. For example, as shown in FIG. 44C, a convex reflector 6050 can be used to reflect the beam 5904 into a diverging beam 5918 (and thus with an increased beam width). In this embodiment, the ultrasonic speaker can be defined to include the convex reflector 6050.
Another way to modify the shape of a beam, such as diverging or converging the beam, is through controlling phases. In one embodiment, the directional speaker includes a number of speaker elements, such as bimorphs. The phase shifts to individual elements ofthe speaker can be individually controlled. With the appropriate phase shift, one can generate ultiasonic outputs with a quadratic phase wave-front to produce a converging or diverging beam. For example, the phase of each emitting element is modified by k*^/ (2F0), where (a) r is the radial distance of the emitting element from the point where the diverging beam seems to originate from, (b) F0 is the desired focal distance, (c) k— the propagation constant ofthe audio frequency f~is equal to 27rf / Co, where Co is the acoustic velocity. In yet another example, beam width can be changed by modifying the focal length or the focus ofthe beam, or by de-focusing the beam. This can be done electronically through adjusting the relative phases ofthe ultrasonic signals exciting different directional speaker elements.
Curved surfaces can also be segmented to contiol the beam width or beam propagating direction. FIG. 45 A illustrates a cylindrical-shaped ultiasonic speaker 6100 according to an embodiment ofthe invention. In this embodiment, the emitting surface ofthe directional speaker is cylindrical in shape and is segmented. In the cylindrical arrangement 6100, a cylindrical surface 6102 has a plurality of ultrasonic elements 6104 affixed (e.g., bimorphs) or integral thereto (e.g., tubes covered by a membrane). Each ultrasonic element 6104 is oriented horizontally on, but pointed towards the center line of, a cylinder of which the cylindrical surface 6102 is a part of. In the case of elements being resonating tubes, the length-wise axis of each tube is horizontal and points towards the center line ofthe cylinder of which the cylindrical surface is a part of. Again, although the cone of ultrasonic output 6106 will normally diverge, the cone remains directionally constrained. In one embodiment, the radius 6108 ofthe cylindrical surface is about 40 cm, its height 6110 is about 10 cm and its width 6112 is about 20 cm.
In the speaker embodiment shown in FIG. 45 A, the transducer surface 6102 can be segmented, such as into three separate controllable segments 6102, 6104 and 6106. Each ofthe segments can be selectably activated to control the direction and or width ofthe ultrasonic output. For the embodiment where the speaker is made of tubes covered by membranes, each segment can have its own membrane. To generate the widest beam, all three segments are activated simultaneously by signals with substantially the same frequencies, phases and amplitudes.
FIG. 45B shows another example of segmenting the emitting surface according to the present invention. The transducer surface 6140 has a curved configuration 6142 that includes four controllable segments 6144, 6146, 6148 and 6150. Each ofthe segments ofthe curved configuration 6142 can be selectably activated to control the direction and/or width ofthe ultrasonic output. For example, the ultrasonic output from the segment 6144 resides within the constrained region 6152. The ultrasonic output by the segment 6146 resides within the constrained area 6154. The ultrasonic output by the segment 6148 resides within the constrained area 6156. The ultrasonic output from the segment 6150 resides within the constrained area 6158. By selectively controlling the selectable segments ofthe curved configuration 6142, the width ofthe ultrasonic output (and thus the resulting audio output) can be controlled.
Segmenting the transducer surface shown in FIG. 45B can be done by turning on elements in the different segments. To illustrate, referring to FIG. 44A, a subset ofthe ultrasonic elements 6004 can be activated. For example, the spherical emitter is shown as having sixty-four (64) ultiasonic elements 6004, which can be bimorph devices. A smaller beam could be emitted if, for example, only the interior sixteen (16) ultrasonic elements were utilized.
Still further, the propagation direction ofthe ultiasonic beam, such as the beam 6006 in FIG. 44A, the beam 6040 in FIG. 44B or the beam 6106 in FIG. 45 A, can be changed by electrical and/or mechanical mechanisms, o illustrate based on the spherical-shaped ultiasonic speaker shown in FIG. 44A, a user can physically reposition the spherical surface 6002 to change its beam's orientation or direction. Alternatively, a motor can be mechanically coupled to the spherical surface 6002 to change its orientation or the propagation direction ofthe ultrasonic output. Li yet another embodiment, the direction ofthe beam can be changed electronically based on phase array techniques.
The movement ofthe spherical surface 6002 to adjust the delivery direction can track user movement. This tracking can be performed dynamically. This can be done through different mechanisms, such as by GPS or other triangulation techniques. The user's position is fed back to or calculated by the directional audio apparatus. The position can then become a beam attribute input. The beam-attribute contiol unit would convert the input into the appropriate control signals to adjust the dehvery direction ofthe audio output. The movement of the spherical surface 6002 can also be in response to a user input. In other words, the movement or positioning ofthe beam 1006 can be done automatically or at the instruction ofthe user. FIGs.46 A and 46B are perspective diagrams of one embodiment of directional audio apparatus that provides directional audio output to interested users. FIG. 46A illustrates a directional audio apparatus 6200 that includes an entertainment center, such as a television 6202, a set-top box 6204 and a directional speaker 6206. The television 6202 displays video that is supplied, for example, by a satellite link or a cable line via the set-top box 6204. Typically, the set-top box 6204 operates to decode the encoded video and audio content transmitted over the satellite link or cable line. Once decoded, the appropriate audio and video signals are delivered to the television 6202. The television 6202 may include conventional or normal speakers to provide audio output. These speakers typically do not produce audio output through generating ultrasonic signals to be converted into the audio frequency range by air. Nevertheless, the audio apparatus 6200 includes the directional speaker 6206. The directional speaker 6206 provides delivery of audio signals in a constrained direction. Further, the directionally-constiained audio outputs can be controlled as to the target distance for its users as well as for the width ofthe resulting audio beam. The directional speaker 6206 generates ultrasonic output by way of an emitter surface 6208. The emitter surface 6208 can include a single or multiple segments of groups of ultrasonic or speaker elements. Furthermore, the directional speaker 6206 is mounted to the set-top box 6204 such that its position can be adjusted with respect to the set-top box 6204 as well as the television 6202. For example, the directional speaker 6206 can be rotated to cause a change in the direction in which the directionally-constiained audio output outputs are delivered. In one embodiment, a user of the audio system 6200 can manually position (e.g., rotate) the directional speaker 6206 to adjust the delivery direction. In another embodiment, the directional speaker 6206 can be positioned (e.g., rotated) by way of an electrical motor provided within the set-top box 6204 or the directional speaker 6206. Such an electrical motor can be controlled by a conventional control circuit and can be instructed by one or more buttons provided on the set-top box 6204, the directional speaker 6206 or a remote contiol device.
FIG. 46B is a diagram of another directional audio apparatus 6220 in a set-top box environment according to another embodiment ofthe invention. The audio apparatus 6220 includes an entertainment system, such as a television 6222, a set-top box 6224 and a directional speaker 6226. The set-top box 6224 is typically coupled to a satellite link or a cable line to receive audio and video signals. The set-top box 6224 decodes the audio and video signals and supplies the resulting audio and video signals to the television 6222. The television 6222 displays the video signals and may use its conventional speakers to output audio sound. However, when directional delivery of audio sound is desired, the conventional speakers ofthe television 6222 are not utilized. Instead, the directional speaker 6226 is utilized. The directional speaker 6226, for example, can be activated by a button, switch or other means. Once activated, the directional speaker 6226 outputs the audio signals in a directionally constrained manner. In one approach, the television 6222 has an audio-output connection that is connected to the set-top box 6224. If conventional speakers are preferred, the signal line from the audio-output connection is electrically disconnected, and normal audio output is directly from the television 6222. However, if directionally-constiained audio output is desired, audio signals from the television 6222 is channeled to the set-top box 6224, and normal audio output from the television 6222 is de-activated. In yet another embodiment, the volume control in the television 6222 can be turned down also if directionally-constiained audio outputs are preferred.
Still further, the set-top box 6224 and/or the directional speaker 6226 can permit control over the distance and/or width of the audio output to be tiansmitted to the one or more interested users. In this embodiment, the position ofthe directional speaker 6226 is fixed relative to the set-top box 6224. In one embodiment, the directional speaker 6226 is affixed to the set-top box 6224. In another embodiment, the directional speaker 6226 is integral with the set-top box 6224. In any case, the direction for the directionally-constrained audio output outputs can be electrically controlled through a variety of different techniques. One technique is to activate only certain segments ofthe emitting surface 6228 ofthe directional speaker 6226. Another technique is to utilize beam-steering operations based on phase control inputs.
The directional audio apparatuses 6200 and 6220 illustrated in FIGs. 46A and 46B can utilize the various methods and processes discussed above. The set-top boxes with directional speakers shown in FIGs. 46 A and 46B are able to transform conventional audio systems in televisions into audio systems having directional audio delivery as explained in the present invention.
To illustrate, the directional speaker with the emitting surface 6140 shown in FIG. 45B can be used as the emitting surface 6228 for the directional speaker 6226 illustrated in FIG. 46B. For example, initially only the segment 6146 is in operation. The user signals the set-top box that its beam width should be increased. Then the segment 6148 can be additionally activated, thereby increasing the width or area associated with the ultrasonic output (and thus resulting audio outputs). In yet another application, non-adjacent segments can be simultaneously activated to generate multiple separate beams. For example, a user can signal the set-top box to activate the two outer most beams, 6152 and 6158. This will generate two separate beams for two separate users. Then, a person located in the middle between the two users would only hear a substantially reduced output level.
In another example, more than one user are sitting close to the television 6200 in FIG. 46A. It would be advantageous to have a wider beam that covers a shorter distance. One embodiment uses a directional speaker 6206 that operates at a higher frequency, such as the one shown in FIG. 43G, working at 200 kHz. The beam width is broader than the version shown in FIG. 43 E, but the beam covers a shorter distance due to higher attenuation.
FIG. 47 is a perspective diagram of a remote control device 6300 according to one embodiment ofthe invention. The remote contiol device 6300 is one embodiment for a directional audio apparatus. The remote control device 6300 has a top surface 6302 with a plurality of buttons 6304 as is common with remote controllers. Some of these buttons 6304 can correspond to various options a user might request of a directional audio apparatus via a remote control device. Examples of these options include start, stop, play, channels, volume, etc. In one embodiment, the remote contiol device 6300 also includes options for the beam attribute inputs, such as 3 discrete sizes of beam width (large, medium and small), and 3 discrete distance coverage (long, medium and short).
The remote control device 6300 can also include a directional speaker 6306 that produces directional audio delivery to one or at most a few users desirous of hearing the audio output. The directional speaker 6306 can be substantially flush or recessed with respect to the top surface 6302. In any case, a grating 6308 can optionally be provided over the directional speaker 6306. Still further, the directional speaker can be mounted at an angle with respect to the top surface 6302, or can be movably mounted with respect to the top surface 6302 so that the direction of delivery can be manipulated. Alternatively, a thin layer of material (e.g., plastic housing) can cover the directional speaker 6306 to provide protection, if required, yet still allow sound to pass through. Additional details on the directional speaker 6306 can be found in other areas in this application. A wireless link window 6310 provides a window through which the remote control device 6300 is able to communicate in a wireless manner (e.g., radio or optical) with an audio system, which may or may not have directional audio capability. Audio signals can then be received and directed to one or at most a few users proximate to the remote control device 6300 via the directional speaker 6306.
Depending on the power level ofthe ultrasonic signals, sometimes, it might be beneficial to reduce its level in free space to prevent any potential health hazards, if any. FIGs. 48A-48B show two such embodiments that can be employed, for example, for such a purpose. FIG. 43 A illustrates a directional speaker with a planar emitting surface 6404 of ultiasonic output. The dimension ofthe planar surface can be much bigger than the wavelength ofthe ultrasonic signals. For example, the ultiasonic frequency is 100 kHz and the planar surface dimension is 15 cm, which is 50 times larger than the wavelength. With a much bigger dimension, the ultiasonic waves emitting from the surface are controlled so that they do not diverge significantly within the enclosure 6402. In the example shown in FIG. 48A, the directional audio delivery device 6400 includes an enclosure 6402 with at least two reflecting surfaces for the ultrasonic waves. The emitting surface 6404 generates the ultiasonic waves, which propagate in a beam 6406. The beam reflects within the enclosure 6402 back and forth at least once by reflecting surfaces 6408. After the multiple reflections, the beam emits from, the enclosure at an opening 6410 as the output audio 6412. The dimensions ofthe opening 6410 can be similar to the dimensions ofthe emitting surface 6404. In one embodiment, the last reflecting surface can be a concave or convex surface 6414, instead of a planar reflector, to generate, respectively, a converging or diverging beam for the output audio 6412. Also, at the opening 6410, there can be an ultrasonic absorber to further reduce the power level ofthe ultrasonic output in free space.
FIG. 48B shows another embodiment of a directional audio dehvery device 6450 that allows the ultiasonic waves to bounce back and forth at least once by ultrasonic reflecting surfaces before emitting into free space. In FIG. 48B, the directional speaker has a concave emitting surface 6460. As explained by FIG. 44B, the concave surface first focuses the beam and then diverges the beam. For example, the focal point 6464 ofthe concave surface 6460 is at the mid-point ofthe beam path within the enclosure. Then with the last reflecting surface 6462 being flat, convex or concave, the beam width at the opening 6466 ofthe enclosure can be not much larger than the beam width right at the concaved emitting surface 6460. However, at the emitting surface 6460, the beam is converging. While at the opening 6466, the beam is diverging. The curvatures ofthe emitting and reflecting surfaces can be computed according to the desired focal length or beam divergence angle similar to techniques used in optics, such as in telescopic structures.
More than one directional audio delivery device can be employed to provide stereo effects. FIG. 49 shows one such embodiment as illustrated by a building layout 6500. An audio system 1506 is coupled to two directional audio delivery devices 6502 and 6504 that are spaced apart. In one approach, the audio system transmits different types of audio signals, either wireline or wireiessiy, to the two directional audio delivery devices 6502 and 6504. For example, the different types of audio signals can represent a left channel and a right channel. The two directional audio delivery devices 6502 and 6504 generate two directionally- constiained audio output beams 6510 and 6512 that are directed towards and received by a user 6508. Note that the number of directional audio delivery devices does not have to be limited to two. For example, a surround sound arrangement can be achieved through more than two directional audio delivery devices.
A number of attributes ofthe constrained audio outputs can be adjusted, either by a user or automatically and dynamically based on certain monitored or tracked measurements, such as the position of the user.
One adjustable attribute is the direction ofthe constrained audio outputs. It can be controlled, for example, by (a) activating different segments of a planar or curved speaker surface, (b) using a motor, (c) manually moving the directional speaker, or (d) through phase array beam steering techniques. Another adjustable attribute is the width of the beam ofthe constrained audio outputs. It can be controlled, for example, by (a) modifying the frequency ofthe ultrasonic signals, (b) activating one or more segments ofthe speaker surface, (c) using phase array beam forming techniques, (d) employing curved speaker surfaces to diverge the beam, (e) changing the focal point ofthe beam, or (f) de-focusing the beam. The degree of isolation or privacy can also be controlled independent of the beam width.
For example, one can have a wider beam that covers a shorter distance through increasing the frequency ofthe ultrasonic signals. Isolation or privacy can also be controlled through, for example, (a) phase array beam forming techniques, (b) adjusting the focal point ofthe beam, or (c) de-focusing the beam. The volume ofthe audio output can be modified through, for example, (a) changing the amplitude ofthe ultrasonic signals driving the directional speakers, (b) modifying the ultrasonic frequency to change its distance coverage, or (c) activating more segments of a planar or curved speaker surface.
The audio output can also be personalized or adjusted based on the audio conditions of the areas surrounding the directional audio apparatus. Signal pre-processing techniques can be applied to the audio signals for such personalization and adjustment.
Ultiasonic hazards, if any, can be minimized by increasing the path lengths ofthe ultrasonic waves from the directional speakers before the ultrasonic waves emit into free space. There can also be an ultiasonic absorber to attenuate the ultrasonic waves before they emit into free space. Another way to reduce potential hazard, if any, is to increase the frequency ofthe ultiasonic signals to reduce their distance coverage.
Stereo effects can also be introduced by using more than one directional audio delivery devices that are spaced apart. This will generate multiple and different constiained audio outputs to create stereo effects for a user.
Directionally-constrained audio output outputs are not limited to be generated by set-top boxes. They can also be generated from a remote control.
Numerous embodiments ofthe present invention have been applied to an indoor environment, using building layouts. However, many embodiments ofthe present invention are perfectly suitable for outdoor applications also. For example, a user can be sitting inside a patio reading a book, while listening to music from a directional audio apparatus ofthe present invention. The apparatus can be in the outside, 10 meters away from the user. Due to the directionally constiained nature ofthe audio output, sound can still be localized within the direct vicinity ofthe user. As a result, the degree of noise pollution to the user's neighbors is significantly reduced.
Also, an existing audio system can be modified with one ofthe described set-top boxes to generate directionally-constrained audio output outputs. A user can select either directionally constiained or normal audio outputs from the audio system, as desired.
Wireless Audio
A number of embodiments ofthe invention pertain to techniques for providing wireless delivery of audio sounds from audio systems, which can be stationary, to personal audio devices, which, typically, are portable. These techniques can permit users ofthe personal audio device to be mobile yet still acquire the audio sounds. Based on different embodiments, audio systems can be readily adapted to provide the wireless delivery of audio sounds. These techniques can also optionally provide customization (or personalization) ofthe audio sounds to user's hearing and/or modification ofthe audio sounds in view of environmental conditions.
According to one aspect ofthe invention, audio output from an audio system can be delivered to one or more persons desirous of hearing the audio output. Each person has a personal audio device. The device causes audio sound corresponding to audio output from the audio system to be output personally, in a directionally constrained manner. Consequently, other persons not desirous of hearing the audio output do not receive substantial amounts ofthe audio sounds. Thus, they are less disturbed by the unwanted audio sounds. According to another aspect ofthe invention, a wireless adapter can serve as an after market modification to an audio system. The wireless adapter enables audio signals output by the audio system to be wireiessiy transmitted to one or more personal audio devices. Each personal audio device produces audio sound for its user.
FIG. 50 is a block diagram of a remote audio delivery system 7100 according to one embodiment ofthe invention. The remote audio delivery system 7100 includes an audio system 7102 that produces an audio output. The audio system 7102 is, for example, a television, a Compact Disc (CD) player, Digital Versatile Disk (DVD) player, a stereo, a computer with speakers etc. In one embodiment, the audio system 7102 can also be referred to as an entertainment system. In another embodiment, the audio system 7102 is stationary. In any case, the audio output from the audio system 7102 is supplied to a wireless transmission apparatus 7104. In one implementation, the wireless transmission apparatus 7104 is coupled to an audio output port (e.g., terminal, connector, receptacle, etc.) ofthe audio system 7102. The coupling can be directly to the audio output port ofthe audio system 7102 or can be coupled to the audio output port by way of a cable. In one embodiment, the wireless transmission apparatus 7104 can also be referred to as a wireless audio adapter because it is able to adapt the audio system 7102 for wireless audio delivery without requiring changes to the audio system 7102.
The wireless transmission apparatus 7104 receives the audio output from the audio system 7102 and transmits the audio output over a wireless channel 7105 (or wireless link) to a wireless receiver 7106 of a personal audio device 7107. The wireless channel 105 is typically a short range wireless link that is not in the audio frequency ranges, for example, such as available using Bluetooth, WiFi or other dedicated frequency (e.g., 900 MHz, 2.4 GHz) techniques. The wireless receiver 7106 receives the audio output that is tiansmitted by the wireless transmission apparatus 7104 over the wireless channel 7105. The received audio output is then supplied to control circuitry 7108. The contiol circuitry 7108 converts the received audio output into speaker drive signals. The speaker drive signals are then used to activate a directional speaker 7110 which produces output sound. The output sound from the directional speaker 7110 is directionally confined for enhanced privacy. Optionally, as discussed in detail below, the control circuitry 7108 can also provide customization or personalization to the person and/or the environment. The directionally confined output sound produced by the directional speaker 7110 allows the user ofthe personal audio device 7107 to hear the audio sound even though neither ofthe user's ears touches or coupled against the directional speaker 7110. However, the directional nature ofthe output sound is towards the user (e.g., user's ear(s)) and thus provides privacy by restricting the output sound to a confined directional area. In other words, bystanders in the vicinity of the personal audio device but not within the confined directional area would not be able to directly hear the output sound, or to hear a significant portion ofthe output sound, produced by the directional speaker 7110. The bystanders might be able to hear a degraded version ofthe output sound after it reflects from a surface. The reflected output sound, if any, that reaches the bystander would be at a reduced decibel level (e.g., at least a 20 dB reduction) making it difficult for bystanders to hear and understand the output sound.
In one embodiment, the directional speaker 7110 is an ultiasonic speaker, and the control circuitry 7208 converts the received audio output into ultrasonic drive signals that are used to drive the ultrasonic speaker. The ultrasonic drive signals are supplied to the ultrasonic speaker to generate ultrasonic output. The ultrasonic output is subsequently transformed, for example, by air, into audio output. In one embodiment, the frequency spectrum ofthe resulting audio output (after such transformation) is similar to the audio output from the audio system 7102. In another embodiment, the frequency spectrum ofthe resulting audio output is altered so as to provide customized hearing (e.g., enhanced hearing), or to adapt to environmental conditions or physical conditions ofthe user. FIG. 51 is a block diagram of a remote audio delivery system 7200 according to another embodiment ofthe invention. The remote audio delivery system 7200 includes an audio system 7202 and a wireless tiansmitter 7204. In one embodiment, the wireless tiansmitter 7204 can also be referred to as a wireless audio adapter. It is able to adapt the audio system 7202 for wireless audio delivery without requiring physical changes to the audio system 7202. In one implementation, the wireless transmitter 7204 is coupled to the audio system 7202 via an audio output port ofthe audio system 7202. Such coupling can be achieved by a connector alone or in combination with a cable. In another embodiment, the wireless tiansmitter 7204 is integral and thus part ofthe audio system so that no connector or cable is necessary. The audio system 7202 and the wireless transmitter 7204 together form a wireless audio delivery system. Audio output from the audio system 7202 is supplied to the wireless transmitter 7204 via the audio output port ofthe audio system 7202 or other means. Then, the wireless transmitter 7204 transmits the audio output over a wireless channel (wireless link) 7205 to a wireless receiver 7206 of a personal audio device 7207. The received audio output at the wireless receiver 7206 is then supplied to control circuitry 7208. The contiol circuitry 7208 can receive user information pertaining to the user from a data storage device 7202. For example, the user information can pertain to an audio profile associated with the user. An audio profile contains or is based on hearing characteristics of an associated user. The user information can be stored in a data storage device 7210. The data storage device 7210 can be a dedicated or removable data storage medium. Examples of removable data storage medium include a memory card (Flash memory card, memory stick, credit card with data storage, PC card (PCMCIA), etc.).
The control circuitry 7208 produces speaker drive signals that are used to drive a speaker 7212. In this embodiment, the speaker drive signals are produced by the control circuitry 7208 based upon not only the received audio output but also the user information. In other words, the control circuitry 7208 can modify the drive signals being supplied to the speaker 7212 based upon the user information. As such, the audio sound being produced by the speaker 7212 can be customized for (or personalized to) the user. For example, when the user information pertains to hearing characteristics and or user preferences ofthe user, the control circuitry 7208 is able to produce customized drive signals for the speaker 7212 such that the resulting audio output by the speaker 7212 is customized for the hearing characteristics and/or user preferences ofthe user. The remote audio delivery system 7200 shown in FIG. 51 makes use of customization of the audio output at the personal audio device 7207. Note that, as shown in FIG. 51, the personal audio device 7207 can include the wireless receiver 7206, the contiol circuitry 7208, the data storage device 7210 and the speaker 7212. Nevertheless, it should be noted that the customization could also be performed elsewhere. For example, the audio system 7202 or the wireless transmitter 7204 can further include control circuitry (not shown) that would obtain user information and then customize audio output prior to its transmission to the personal audio device 7207. Such an implementation could provide centralized customization ofthe audio output for one or more personal audio devices.
FIG. 52 is a block diagram of a remote audio delivery system 7300 according to yet another embodiment ofthe invention. The remote audio delivery system 7300 includes an audio system 7302, a wireless network 7304, and personal audio devices 7306 and 7308. The wireless network 7304 can be a wireless local area network, such as a Bluetooth or WiFi network. Here, the remote audio delivery system 7300 illustrates that the audio system 7302 can supply audio output to one or more personal audio devices 7306 and 7308 over a wireless network 7304. The wireless network 7304 can, for example, be used in the vicinity of a home or business. The audio output from the audio system 7302 can be broadcast, multicast or unicast over the wireless network 7304. In other words, the audio output from the audio system 7302 can be directed to one or more ofthe personal audio devices 7306 and 7308. In one implementation, a different network address is associated with each ofthe personal audio devices, and thus the audio output can be transmitted to the appropriate one or more ofthe personal audio devices via the wireless network 7304 using the associated network addresses. Although FIG. 52 illustrates only the personal audio devices 7306 and 7308, it should be understood that the remote audio delivery system 7300 can support many personal audio devices, and such personal audio devices can be ofthe same type or of different types. As described above, the wireless audio adapter 7204 can be matched to the personal audio device 7207. In other words, each wireless audio adapter can have a corresponding personal audio device.
In other embodiments, wireless signals from a wireless audio adapter 7204 can be received by multiple personal audio devices. This can be done, for example, by broadcasting the signal and requesting all the personal audio devices to tune to the broadcast wireless channel.
The broadcast can be performed in the analog domain or in the digital domain. For the latter case, the broadcast can be performed in Layer 3 (e.g. IP multicast) or Layer 2 (e.g. IEEE 802.11). If personal customization ofthe receiver is desired, each personal audio device 7207 can be first initialized with the wireless audio adapter 7204. The initializing process can be performed by requiring each audio device to transmit, wireiessiy or through a wired connection, an identifier to the adapter. Then the adaptor transmits the personalization information to the corresponding personal audi device according to the identifier. After the personalization information is received, the personal audio device can be configured accordingly and then start to receive the audio output. In yet another embodiment, a personal audio device can be configured to be selected by a specific wireless audio adapter or an audio system. Such configurations would be applicable for after-market sales. They can be achieved through a number of approaches. For example, there can be switches on both the device and the adapter, or both can have a number of channels. These switches or channels can be changed by users. When both set of switches or channels are matched, then the device is configured for the wireless audio adapter. Another approach is based on the media address control (MAC) layer address, IP address or TCP or UDP port numbers. For example, the personal audio device and the wireless audio adapter can agree on a specific TCP or UDP port number. They can then be configured to receive packets or signals from that port only. The personal audio device and the wireless audio adapter can also be identified by their specific IP addresses, or MAC layer addresses.
FIG. 53 is a diagram of a building layout 7400 illustrating use of different embodiments ofthe present invention. The building layout 7400 illustrates a representative floor plan having a first room 7402, second room 7404 and a third room 7406. The first room 7402 includes an audio system (AS) 7408 that includes a wireless transmission apparatus 7410, or a wireless audio adapter, coupled to the audio system 7408. The audio system 7408 can use a traditional speaker and/or a directional speaker to direct audio sound to one or more of a first user (u-1) and a second user (u-2) located within the first room. Further, using the wireless audio adapter 7410, the audio output from the audio system 7408 can also be tiansmitted over a wireless channel (link) to one or more other users that are relatively nearby the wireless transmission apparatus 7410. In other words, the type ofthe wireless channel sets the range. Typically, the range is relatively short, such as less than 400 meters. Hence, using the wireless channel, any one or more ofthe third user (u-3), a fourth user (u-4) and a fifth user (u-5) are able to hear the audio output by way of a personal audio device that receives the audio output over a wireless channel. As shown in FIG. 4, the fifth user (u-5) has a personal audio device 7412 attached or proximate thereto. In one embodiment, the fifth user (u-5) wears the portable audio device, and is able to hear the audio output from the audio system 7408 even though the fifth user (u-5) is, for example, outside ofthe building, such as in the backyard. The personal audio device 7412 thus allows a remote user (e.g., u-5) to hear the audio output from the audio system 7408 even though they are not within the same room or building as the audio system 7408. So long as the remote user is within communication range ofthe wireless channel, the user can hear the audio output even as the remote user moves around. Since the third user (u-3) and the fourth user (u-5) do not have personal audio devices, these users will not hear the audio output from the audio system 7408 unless the audio output from the traditional speaker (if any) at the audio system 408 permeates the entire building layout 7400 shown in FIG. 53. In one embodiment, the personal audio devices can be wearable by users. Additional details on personal audio devices have been described in other sections of this patent application. Besides directionally constraining audio sound that is to be delivered to a user, the audio sound can optionally be additionally altered or modified in view ofthe user's hearing characteristics or preferences, or in view ofthe environment in the vicinity ofthe user. FIG. 54 is a flow diagram of a remote audio delivery process 7500 according to one embodiment ofthe invention. The remote audio delivery process 7500 is, for example, performed by a remote audio delivery system, such as the remote audio delivery system 7100, 7200, or 7300.
The remote audio delivery process 7500 begins with audio signals being received 7502 at a wireless audio adapter or a wireless transmission apparatus. Typically, however, prior to receiving 7502 the audio signals, the wireless audio adapter would have been attached to the audio system that initially provides the audio signals. In any case, the audio signals that are received 7502 are thereafter wirelessly transmitted 7504 to a personal audio device. Typically, the audio signals are wirelessly received by a predetermined personal audio device. In other words, the wireless audio adapter can be configured to transmit audio signals to be wirelessly received by a predetermined personal audio device. However, the audio signals may be transmitted to a plurality of predetermined personal audio devices. To direct the audio signals to be received by the appropriate one or more personal audio devices, a number of methods can be used, for example, predetermined frequencies, encoding and/or network identifiers (e.g., addresses). After the audio signals are wirelessly transmitted 7504, the audio signals are received 7506 at the personal audio device. At this point, additional processing can be performed to enhance the resulting audio sound that will eventually be delivered to a user of the personal audio device. A decision 7508 determines whether user personalization is to be performed. When the decision 7508 determines that user personalization is to be performed, then the audio signals are modified 7510 based on user information. For example, the user information can be provided by a data storage device, such as the data storage device 7212 as illustrated in FIG. 51. In one implementation, the user information is related to an audio profile that pertains to the hearing characteristics of the user. In another implementation, the user information is related to the physical conditions of the user. Such physical conditions can be detected by a sensor, which can be embedded in the personal audio device, or wirelessly coupled to the personal audio device. As an example, if the user is sleeping, the volume of the output sound should be reduced or even turned off. Determining physical conditions can be dynamically performed. For example, a sensor can keep track of the user's heartbeat and identify patterns accordingly. Following the modifying 7510 or directly following the decision 7508 when user personalization is not to be performed, a decision 7512 determines whether environmental adjustments are to be performed. When the decision 7512 determines that environmental adjustments are to be performed, the audio signals are modified 7514 based on environmental characteristics. Such environmental characteristics can be detected or sensed by the personal audio device, which can include one or more environmental sensors. As an example, the environmental sensor(s) can measure ambient or background noise. The environmental characteristics could also be wirelessly transmitted to the personal audio device.
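For illustration only, the sketch below captures the receive-side decision flow of process 7500 described above (decisions 7508 and 7512, modifications 7510 and 7514). The function names, gain values, and noise threshold are hypothetical placeholders, not the patent's implementation.

```python
# Sketch of the receive-side branch of process 7500 (all names hypothetical).
from typing import List, Optional

def personalize(samples: List[float], gain: float) -> List[float]:
    # Placeholder for the profile-based modification (operation 7510).
    return [s * gain for s in samples]

def adjust_for_environment(samples: List[float], noise_db: float) -> List[float]:
    # Placeholder for the environment-based modification (operation 7514):
    # boost the audio slightly when ambient noise exceeds an arbitrary threshold.
    boost = 1.5 if noise_db > 60.0 else 1.0
    return [s * boost for s in samples]

def process_received_audio(samples: List[float],
                           profile_gain: Optional[float] = None,
                           noise_db: Optional[float] = None) -> List[float]:
    """Handle audio on the personal audio device after reception 7506."""
    if profile_gain is not None:          # decision 7508: personalize?
        samples = personalize(samples, profile_gain)
    if noise_db is not None:              # decision 7512: environmental adjustment?
        samples = adjust_for_environment(samples, noise_db)
    return samples
```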
Following the modifying 7514 based on environmental characteristics or directly following the decision 7512 when no environmental adjustments are to be made, the audio signals are converted 7516 to ultrasonic drive signals. The ultrasonic drive signals are then used to drive 7518 a directional speaker that, in turn, outputs ultrasonic sound in a directionally constrained manner. The ultrasonic sound is directed to the user of the personal audio device and interacts with air such that audio sound is present when the acoustic output from the directional speaker is in the vicinity of the head (or ears) of the user. However, since the ultrasonic (and resulting audio) sound produced is directionally constrained, it is delivered in a targeted way to the user. Thus, other users in the vicinity of the user will not hear any substantial amount of the audio sound, and therefore will not be disturbed thereby.
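One common way to implement a conversion of audio into ultrasonic drive signals, such as the conversion 7516, is to amplitude-modulate the audio onto an ultrasonic carrier, which nonlinear interaction with the air then demodulates into audible sound near the listener. The sketch below assumes that approach; the carrier frequency, sample rate, modulation depth, and function name are illustrative assumptions rather than details taken from the patent.

```python
# Sketch, assuming simple amplitude modulation onto a 40 kHz carrier.
import math
from typing import List

def to_ultrasonic_drive(audio: List[float],
                        sample_rate: float = 192_000.0,
                        carrier_hz: float = 40_000.0,
                        depth: float = 0.8) -> List[float]:
    """Amplitude-modulate the normalized audio onto an ultrasonic carrier."""
    peak = max((abs(s) for s in audio), default=1.0) or 1.0
    drive = []
    for n, s in enumerate(audio):
        carrier = math.sin(2.0 * math.pi * carrier_hz * n / sample_rate)
        drive.append((1.0 + depth * s / peak) * carrier)   # (1 + m(t)) * carrier(t)
    return drive
```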
FIG. 55A is a flow diagram of an environmental accommodation process 7600 according to one embodiment of the invention. The environmental accommodation process 7600 determines 7602 environmental characteristics. In one implementation, the environmental characteristics can pertain to measured sound (e.g., noise) levels in the vicinity of the user. The sound levels can be measured by a pickup device (e.g., microphone) in the vicinity of the user. The pickup device can be incorporated in the personal audio device. In another implementation, the environmental characteristics can pertain to estimated sound (e.g., noise) levels in the vicinity of the user. The sound levels in the vicinity of the user can be estimated based on a position of the user/device and a linking of position with an estimated sound level for the particular environment. The position of the user can, for example, be determined by GPS or network triangulation. After the environmental accommodation process 7600 determines 7602 the environmental characteristics, the audio signals are modified based on the environmental characteristics. For example, if the user were in an area with a lot of noise (e.g., ambient noise), such as a confined space with various persons or where construction noise is present, the audio signals could be processed to attempt to suppress (or cancel) the unwanted noise and/or the audio signals (e.g., in a desired frequency range) could be amplified. In the case of amplification, if noise levels are excessive, the amplification might not occur as the user might not be able to safely hear the desired audio signals. In other words, there can be a limit to the amount of amplification and there can be negative amplification (even complete blockage) when excessive noise levels are present. Noise suppression and amplification can be achieved through conventional digital signal processing, amplification and/or filtering. The environmental accommodation process 7600 can, for example, be performed periodically or for every new audio stream.
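The gain policy described above (amplify with ambient noise, but cap the gain and block output entirely when noise is excessive) can be sketched as follows. All thresholds and the function name are hypothetical values chosen for illustration.

```python
# Sketch of a capped, noise-dependent gain policy for process 7600.
def environmental_gain(noise_db: float,
                       quiet_db: float = 45.0,
                       max_gain: float = 4.0,
                       excessive_db: float = 95.0) -> float:
    """Linear gain for the audio signals given measured ambient noise (dB SPL)."""
    if noise_db >= excessive_db:
        return 0.0              # complete blockage when noise levels are unsafe
    if noise_db <= quiet_db:
        return 1.0              # no amplification needed in a quiet environment
    gain = 1.0 + (noise_db - quiet_db) / 10.0
    return min(gain, max_gain)  # amplification is capped at max_gain
```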
A user might have a hearing profile that contains the user's hearing characteristics. Hence, the audio sound provided to the user can optionally be customized or personalized to the user by altering or modifying the audio signals in view of the user's hearing characteristics. By customizing or personalizing the audio signals to the user, the audio output can be enhanced for the benefit of the user. Additional details on hearing enhancement are described in other sections of this patent application.
FIG. 55B is a flow diagram of an audio personalization process 7620 according to one embodiment of the invention. The audio personalization process 7620 retrieves 7622 an audio profile associated with the user. The hearing profile contains information that specifies the user's hearing characteristics. For example, the hearing characteristics may have been acquired by the user taking a hearing test. Then, the audio signals are modified 7624 based on the audio profile associated with the user.
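As one hedged reading of operation 7624, the audio profile could be expressed as a per-band gain in dB derived from the user's hearing test, and the audio modified band by band. The profile format, band edges, and function names in the sketch below are assumptions, not the patent's specified representation.

```python
# Sketch: apply per-band gains from a hypothetical hearing profile to a spectrum.
from typing import Dict, List, Tuple

Band = Tuple[float, float]          # (low Hz, high Hz)

def profile_gain(freq_hz: float, bands: List[Band], gains_db: List[float]) -> float:
    """Linear gain for a frequency, looked up from the user's per-band profile."""
    for (lo, hi), gain_db in zip(bands, gains_db):
        if lo <= freq_hz < hi:
            return 10.0 ** (gain_db / 20.0)
    return 1.0                      # frequencies outside the profile are unchanged

def apply_hearing_profile(spectrum: Dict[float, float],
                          bands: List[Band],
                          gains_db: List[float]) -> Dict[float, float]:
    """Scale each spectral component according to the user's hearing profile."""
    return {f: a * profile_gain(f, bands, gains_db) for f, a in spectrum.items()}
```

For instance, a profile of [0, 6, 12] dB over bands of (250, 1000), (1000, 4000) and (4000, 8000) Hz would boost the higher bands where such a user hears less well.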
The hearing profile can be supplied to a personal audio device or to a directional audio delivery system that performs the personalization process 7620 in a variety of different ways. For example, the audio profile can be electronically provided to the device or the directional audio delivery system through a network. As another example, the audio profile can be provided by way of a removable data storage device (e.g., memory card). Additional details on audio profiles and personalization can be found in other sections of this patent application. The environmental accommodation process 7600 and/or the audio personalization process 7620 can optionally be performed together with any of the processes to produce the directionally confined output sound, as discussed above. For example, the environmental accommodation process 7600 and/or the audio personalization process 7620 can optionally be performed together with any of the remote audio delivery systems 7100, 7200 or 7300 embodiments discussed above with respect to FIGs. 50, 51 or 52, or the remote audio delivery process 7500 discussed above in FIG. 54. With respect to the remote audio delivery process 7500 shown in FIG. 54, the environmental accommodation process 7600 or the audio personalization process 7620 can be performed at the operation 7514 or the operation 7510, respectively.

FIG. 56A is a perspective diagram of an ultrasonic transducer 7700 according to one embodiment of the invention. The ultrasonic transducer 7700 can implement a directional speaker as discussed herein. The ultrasonic transducer 7700 produces the ultrasonic sound utilized as noted above.
FIG. 56B is a diagram that illustrates the ultrasonic transducer 7700 with its beam 7704 being produced to output ultrasonic sound. The beam 7704 can have its attributes, such as its beam width, varied in a variety of different ways. Additional details on the ultrasonic transducer 7700 can be found in other sections of this patent application.
An audio system of the present invention can include or couple to a set-top box that includes the wireless audio adapter or permits attachment thereto. A set-top box enables a television set to receive and decode digital television broadcasts. Typically, the set-top box is positioned proximate to the television set.
FIG. 57 is a perspective diagram of an audio system that provides directional audio delivery to interested users. The figure illustrates an audio system 7800 that includes a television 7802, a set-top box 7804 and a directional speaker 7806. The directional speaker 7806 provides delivery of audio signals in a constrained direction. Further, the directionally constrained audio signals can be controlled as to the target distance for its users as well as for the width of the resulting audio signals. The directional speaker 7806 outputs ultrasonic sound by way of an emitter surface 7808. The emitter surface 7808 can be comprised of a single ultrasonic transducer or multiple ultrasonic transducers. Furthermore, in one embodiment, the directional speaker 7806 is mounted to the set-top box 7804 such that it is able to be rotated with respect to the set-top box 7804 as well as the television 7802. The rotation of the directional speaker 7806 causes a change in the direction in which the directionally constrained audio signals are delivered. Additional details on such or different set-top boxes can be found in other sections of this patent application. Besides the ability of the audio system 7800 to optionally include the directional speaker
7806, the audio system 7800 illustrated in FIG. 57 can utilize the various methods and processes discussed above to provide wireless audio delivery to personal audio devices. More particularly, the set-top box 7804 can also include a wireless audio adapter as discussed above. For example, in one embodiment, the set-top box 7804 can include the wireless transmission apparatus 7104 (and possibly the audio system 7102). In another embodiment, the set-top box 7804 can include the wireless transmitter 7204 (and possibly the audio system 7202) of the remote audio delivery system 7200. Optionally, the set-top box with directional speakers shown in FIG. 57 is able to transform conventional televisions into televisions whose audio systems have directional audio delivery (as well as wireless delivery to personal audio devices). In one embodiment, the ultrasonic beam is considered directed towards the ear as long as any portion of the beam, or the cone of the beam, is immediately proximate to, such as within 7 cm of, the ear. The beam does not have to be aimed directly at the ear. It can even be orthogonal to the ear, such as propagating up from one's shoulder, substantially parallel to the face of the person.
In another implementation, the audio system 7102 is stationary, meaning that the audio system 7102, although movable, generally remains in a fixed location.
The various embodiments, implementations and features of the invention noted above can be combined in various ways or used separately. Those skilled in the art will understand from the description that the invention can equally be applied to or used in various other settings with respect to the combinations, embodiments, implementations or features provided in the description herein.
The invention can be implemented in software, hardware or a combination of hardware and software. A number of embodiments of the invention can also be embodied as computer readable code on a computer readable medium. The computer readable medium is any data storage device that can store data which can thereafter be read by a computer system. Examples of the computer readable medium include read-only memory, random-access memory, CD-ROMs, magnetic tape, optical data storage devices, and carrier waves. The computer readable medium can also be distributed over network-coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.
The advantages of the invention are numerous. Different embodiments or implementations may yield different advantages.
Numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will become obvious to those skilled in the art that the invention may be practiced without these specific details. The description and representation herein are the common means used by those experienced or skilled in the art to most effectively convey the substance of their work to others skilled in the art. In other instances, well-known methods, procedures, components, and circuitry have not been described in detail to avoid unnecessarily obscuring aspects of the present invention.
In the foregoing description, reference to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Further, the order of blocks in process flowcharts or diagrams representing one or more embodiments of the invention does not inherently indicate any particular order nor imply any limitations in the invention.
The many features and advantages of the present invention are apparent from the written description and, thus, it is intended by the appended claims to cover all such features and advantages of the invention. Further, since numerous modifications and changes will readily occur to those skilled in the art, it is not desired to limit the invention to the exact construction and operation as illustrated and described. Hence, all suitable modifications and equivalents may be resorted to as falling within the scope of the invention.

Claims

1. An electronic device wherein the improvement comprises a directional speaker that produces directionally-constrained audio output signals and directs the audio output signals in a predetermined direction for a user.
2. An electronic device as recited in claim 1 wherein the directional speaker is attachable to the clothing worn by the user; the directional speaker generates ultrasonic signals that are transformed in air to produce the audio output signals; and the device further comprises: a microphone; and a base unit coupled to both the speaker and the microphone to allow the user to use the device to communicate wirelessly with a communication device; wherein the audio output signals from the speaker are directed towards the user's ear from the worn position of the speaker; the device can be operated hands-free; and the directionally-constrained audio output signals allow communication with enhanced privacy.
3. An electronic device as recited in claim 1 wherein the device is a hearing enhancement system for the user; and the device further includes a microphone; the microphone receives audio input signals, which are transformed into ultrasonic signals; the speaker transmits the ultrasonic signals; at least a portion of the ultrasonic signals is transformed in air to produce the audio output signals; the speaker directs the audio output signals towards the user's ear from the worn position of the speaker; and a portion of the audio input signals is amplified more than another portion to enhance the hearing of the user.
4. An electronic device as recited in claim 1 wherein the device is a peripheral device for a computing device; and the directionally-constrained audio output signals are directed in the predetermined direction for the user of the computing device.
5. An electronic device as recited in claim 1 further comprising a set-top box that receives incoming encoded signals and provides decoded audio signals; and audio conversion circuitry that produces ultrasonic signals based on the decoded audio signals provided by said set-top box; wherein the device is for a home entertainment system; the directional speaker outputs an ultrasonic output based on the ultrasonic signals; and at least a portion of the ultrasonic signals is transformed in air to produce the audio output signals.
6. An electronic device as recited in claim 1 further comprising: a conventional audio device that produces conventional audio output signals; wherein an attribute input is received by the device to select either the directional speaker or the conventional audio device to generate audio output signals.
7. An electronic device as recited in claim 1 wherein the audio output signals are in a beam; a beam attribute input is received by the device to determine an attribute of the audio output signals; and the beam attribute can be one of the beam width, the beam direction, the degree of isolation or privacy, and the volume of the audio output signals.
8. An electronic device as recited in claim 1 wherein an audio profile associated with the user is received, the audio profile including at least one attribute related to the hearing of the user; and the audio output signals produced are personalized for the user based on the audio profile.
9. An electronic device as recited in claim 1 wherein at least one characteristic that is related to the environment of the device is received; and the audio output signals produced are modified according to the at least one environmental characteristic.
10. An electronic device as recited in claim 1 wherein the device is in a remote control of an audio system; wireless signals from the audio system are received by the remote control; and at least one attribute of the directionally-constrained audio output signals depends on the wireless signals.
11. An electronic device as recited in claim 1 wherein the directionally-constrained audio output signals are in a diverging beam, and the beam diverges depending on the directional speaker having a curved surface, or the directional speaker including a plurality of speaker elements with different driving signals to control the phases of the outputs from the elements.
12. An electronic device as recited in claim 1 wherein the speaker has more than one segment to emit the audio output signals, which are in a beam; and the segments can be individually controlled for emitting the audio output signals to affect either the width or the direction of the beam.
13. An electronic device as recited in claim 1 wherein the audio output signals are based on ultrasonic signals; the audio output signals are in a beam; and the frequency of the ultrasonic signals can be modified to control the width of the beam.
14. An electronic device as recited in claim 1 wherein the audio output signals are based on ultrasonic signals; and the ultrasonic signals are reflected by at least two reflecting surfaces before being emitted into the free space as directionally-constrained audio output signals for the user.
15. An electronic device as recited in claim 1 further comprising another directional speaker that produces directionally constrained audio output signals and directs the audio output signals in a predetermined direction for the user; wherein the two directional speakers can create a stereo effect for the user.
16. An electronic device as recited in claim 1 wherein the device includes a wireless receiver configured to receive wireless signals from a wireless transmitter; the wireless transmitter is in an audio system; and the wireless signals are related to audio signals that the audio system can output directly.
17. An electronic device as recited in claim 1 wherein the device includes a wireless receiver configured to receive wireless signals from a wireless audio adapter; the wireless audio adapter is attached to an audio system; the wireless signals are related to audio signals that the audio system can output directly; and the wireless audio adapter is an after-market product for the audio system.
18. A system for enhancing an audio system, the audio system delivers audio outputs to an audio output terminal, said system comprising: a wireless transmitter that connects to the audio output terminal and wirelessly transmits the audio outputs provided by the audio system; and a personal electronic device usable by a user, said personal electronic device including at least: a wireless receiver capable of receiving the audio outputs transmitted by said wireless transmitter; a data store for storing information of the user; a controller operatively connected to said data store and said wireless receiver, said controller operates to customize the audio outputs by modifying the audio outputs received by said wireless receiver based on the stored user information; and a speaker operatively connected to said controller, said speaker produces customized audio output signals in accordance with the customization performed by the controller on the audio outputs.
PCT/US2004/011972 2003-04-15 2004-04-15 Directional speakers WO2004093488A2 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US46257003P 2003-04-15 2003-04-15
US60/462,570 2003-04-15
US46922103P 2003-05-12 2003-05-12
US60/469,221 2003-05-12
US49344103P 2003-08-08 2003-08-08
US60/493,441 2003-08-08

Publications (2)

Publication Number Publication Date
WO2004093488A2 true WO2004093488A2 (en) 2004-10-28
WO2004093488A3 WO2004093488A3 (en) 2005-03-24

Family

ID=33303910

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2004/011972 WO2004093488A2 (en) 2003-04-15 2004-04-15 Directional speakers

Country Status (2)

Country Link
US (8) US20040208325A1 (en)
WO (1) WO2004093488A2 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005076661A1 (en) 2004-02-10 2005-08-18 Mitsubishi Denki Engineering Kabushiki Kaisha Mobile body with superdirectivity speaker
GB2413917A (en) * 2004-05-06 2005-11-09 Gen Electric Reducing auditory perception of noise associated with a medical imaging process
WO2006049645A1 (en) * 2004-10-29 2006-05-11 Sony Ericsson Mobile Communications Ab Mobile terminals including compensation for hearing impairment and methods and computer program products for operating the same
WO2011091797A2 (en) 2010-01-27 2011-08-04 Micro Balle Aps Hearing aid device and method
US8588454B2 (en) 2011-02-09 2013-11-19 Blackberry Limited Module for containing an earpiece for an audio device
US8995683B2 (en) 2006-12-29 2015-03-31 Google Technology Holdings LLC Methods and devices for adaptive ringtone generation
WO2017003472A1 (en) * 2015-06-30 2017-01-05 Harman International Industries, Incorporated Shoulder-mounted robotic speakers
US10134416B2 (en) 2015-05-11 2018-11-20 Microsoft Technology Licensing, Llc Privacy-preserving energy-efficient speakers for personal sound
WO2024044835A1 (en) * 2022-08-30 2024-03-07 Zerosound Systems Inc. Directional sound apparatus and method

Families Citing this family (256)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8784211B2 (en) 2001-08-03 2014-07-22 Igt Wireless input/output and peripheral devices on a gaming machine
US7927212B2 (en) * 2001-08-03 2011-04-19 Igt Player tracking communication mechanisms in a gaming machine
US7112138B2 (en) 2001-08-03 2006-09-26 Igt Player tracking communication mechanisms in a gaming machine
US8210927B2 (en) 2001-08-03 2012-07-03 Igt Player tracking communication mechanisms in a gaming machine
JP3553916B2 (en) * 2001-10-19 2004-08-11 松下電器産業株式会社 Mobile phone
US8109629B2 (en) 2003-10-09 2012-02-07 Ipventure, Inc. Eyewear supporting electrical components and apparatus therefor
US20040208325A1 (en) * 2003-04-15 2004-10-21 Cheung Kwok Wai Method and apparatus for wireless audio delivery
US8849185B2 (en) * 2003-04-15 2014-09-30 Ipventure, Inc. Hybrid audio delivery system and method therefor
US20050037823A1 (en) * 2003-05-28 2005-02-17 Nambirajan Seshadri Modular wireless headset and/or headphones
US20050136839A1 (en) * 2003-05-28 2005-06-23 Nambirajan Seshadri Modular wireless multimedia device
US7129824B2 (en) * 2003-08-28 2006-10-31 Motorola Inc. Tactile transducers and method of operating
US7092002B2 (en) * 2003-09-19 2006-08-15 Applied Minds, Inc. Systems and method for enhancing teleconferencing collaboration
US8023984B2 (en) * 2003-10-06 2011-09-20 Research In Motion Limited System and method of controlling transmit power for mobile wireless devices with multi-mode operation of antenna
US11630331B2 (en) 2003-10-09 2023-04-18 Ingeniospec, Llc Eyewear with touch-sensitive input surface
US20050113115A1 (en) * 2003-10-31 2005-05-26 Haberman William E. Presenting broadcast received by mobile device based on proximity and content
US20050096042A1 (en) * 2003-10-31 2005-05-05 Habeman William E. Broadcast including content and location-identifying information for specific locations
KR200355341Y1 (en) * 2004-04-02 2004-07-06 주식회사 솔리토닉스 Mobile-communication terminal board with ultrasonic-speaker system
US8401212B2 (en) 2007-10-12 2013-03-19 Earlens Corporation Multifunction system and method for integrated hearing and communication with noise cancellation and feedback management
US11829518B1 (en) 2004-07-28 2023-11-28 Ingeniospec, Llc Head-worn device with connection region
US7867160B2 (en) 2004-10-12 2011-01-11 Earlens Corporation Systems and methods for photo-mechanical hearing transduction
US7668325B2 (en) 2005-05-03 2010-02-23 Earlens Corporation Hearing system having an open chamber for housing components and reducing the occlusion effect
US11644693B2 (en) 2004-07-28 2023-05-09 Ingeniospec, Llc Wearable audio system supporting enhanced hearing support
US8295523B2 (en) 2007-10-04 2012-10-23 SoundBeam LLC Energy delivery and microphone placement methods for improved comfort in an open canal hearing aid
US8456506B2 (en) 2004-08-03 2013-06-04 Applied Minds, Llc Systems and methods for enhancing teleconferencing collaboration
US7855726B2 (en) * 2004-08-03 2010-12-21 Applied Minds, Inc. Apparatus and method for presenting audio in a video teleconference
EP1779704A1 (en) 2004-08-18 2007-05-02 Micro Ear Technology, Inc. Wireless communications adapter for a hearing assistance device
KR20060022053A (en) * 2004-09-06 2006-03-09 삼성전자주식회사 Audio-visual system and tuning method thereof
DE102004047650B3 (en) * 2004-09-30 2006-04-13 W.L. Gore & Associates Gmbh Garment with inductive coupler and inductive garment interface
EP1800291B1 (en) * 2004-10-04 2012-09-05 Volkswagen Aktiengesellschaft Device for the acoustic communication and/or perception in a motor vehicle
US11852901B2 (en) 2004-10-12 2023-12-26 Ingeniospec, Llc Wireless headset supporting messages and hearing enhancement
US20060122504A1 (en) * 2004-11-19 2006-06-08 Gabara Thaddeus J Electronic subsystem with communication links
US20140240526A1 (en) * 2004-12-13 2014-08-28 Kuo-Ching Chiang Method For Sharing By Wireless Non-Volatile Memory
WO2006104887A2 (en) * 2005-03-25 2006-10-05 Schulein Robert B Audio and data communications system
US8081964B1 (en) 2005-03-28 2011-12-20 At&T Mobility Ii Llc System, method and apparatus for wireless communication between a wireless mobile telecommunications device and a remote wireless display
US20060221233A1 (en) * 2005-04-01 2006-10-05 Freimann Felix Audio Modifications in Digital Media Decoders
US20060236354A1 (en) * 2005-04-18 2006-10-19 Sehat Sutardja Wireless audio for entertainment systems
US20060239474A1 (en) * 2005-04-20 2006-10-26 Stephen Simms Gigbox: a music mini-studio
JP2006304165A (en) * 2005-04-25 2006-11-02 Yamaha Corp Speaker array system
US20070104334A1 (en) * 2005-05-26 2007-05-10 Dallam Richard F Ii Acoustic landscape
US20060270373A1 (en) * 2005-05-27 2006-11-30 Nasaco Electronics (Hong Kong) Ltd. In-flight entertainment wireless audio transmitter/receiver system
US9774961B2 (en) 2005-06-05 2017-09-26 Starkey Laboratories, Inc. Hearing assistance device ear-to-ear communication using an intermediate device
US8041066B2 (en) * 2007-01-03 2011-10-18 Starkey Laboratories, Inc. Wireless system for hearing communication devices providing wireless stereo reception modes
US7931537B2 (en) * 2005-06-24 2011-04-26 Microsoft Corporation Voice input in a multimedia console environment
EP1753210A3 (en) 2005-08-12 2008-09-03 LG Electronics Inc. Mobile communication terminal providing memo function
US20090202096A1 (en) * 2005-08-29 2009-08-13 William Frederick Ryann Wireless earring assembly
US11733549B2 (en) 2005-10-11 2023-08-22 Ingeniospec, Llc Eyewear having removable temples that support electrical components
US8014542B2 (en) * 2005-11-04 2011-09-06 At&T Intellectual Property I, L.P. System and method of providing audio content
US9190069B2 (en) * 2005-11-22 2015-11-17 2236008 Ontario Inc. In-situ voice reinforcement system
US20070135091A1 (en) * 2005-12-08 2007-06-14 Wassingbo Tomas K Electronic equipment with call key lock and program for providing the same
US7660602B2 (en) * 2005-12-22 2010-02-09 Radioshack Corporation Full-duplex radio speaker system and associated method
SG134198A1 (en) * 2006-01-11 2007-08-29 Sony Corp Display unit with sound generation system
SG134188A1 (en) * 2006-01-11 2007-08-29 Sony Corp Display unit with sound generation system
US8284713B2 (en) * 2006-02-10 2012-10-09 Cisco Technology, Inc. Wireless audio systems and related methods
TW200731743A (en) * 2006-02-15 2007-08-16 Asustek Comp Inc Mobile device capable of adjusting volume dynamically and related method
US8027638B2 (en) * 2006-03-29 2011-09-27 Micro Ear Technology, Inc. Wireless communication system using custom earmold
US8199919B2 (en) 2006-06-01 2012-06-12 Personics Holdings Inc. Earhealth monitoring system and method II
US8917876B2 (en) 2006-06-14 2014-12-23 Personics Holdings, LLC. Earguard monitoring system
US8208642B2 (en) 2006-07-10 2012-06-26 Starkey Laboratories, Inc. Method and apparatus for a binaural hearing assistance system using monaural audio signals
KR100796623B1 (en) * 2006-07-12 2008-01-22 네오피델리티 주식회사 Necklace type detachable three dimensional sound reproduction apparatus
US7800482B1 (en) * 2006-07-25 2010-09-21 Costin Darryl J High intensity small size personal alarm
US8041025B2 (en) * 2006-08-07 2011-10-18 International Business Machines Corporation Systems and arrangements for controlling modes of audio devices based on user selectable parameters
US8396229B2 (en) * 2006-08-07 2013-03-12 Nuvo Group Ltd. Musical maternity belt
US20080109404A1 (en) * 2006-11-03 2008-05-08 Sony Ericsson Mobile Communications Ab Location dependent music search
US20080153537A1 (en) * 2006-12-21 2008-06-26 Charbel Khawand Dynamically learning a user's response via user-preferred audio settings in response to different noise environments
US8000479B2 (en) * 2007-01-19 2011-08-16 Edward H. Suber, III Wireless speaker adapter
US20080240477A1 (en) * 2007-03-30 2008-10-02 Robert Howard Wireless multiple input hearing assist device
WO2008135887A1 (en) * 2007-05-03 2008-11-13 Koninklijke Philips Electronics N.V. Stereo sound rendering system
WO2008153589A2 (en) * 2007-06-01 2008-12-18 Personics Holdings Inc. Earhealth monitoring system and method iv
US8457617B2 (en) * 2007-08-30 2013-06-04 Centurylink Intellectual Property Llc System and method for a wireless device locator
GB0718362D0 (en) * 2007-09-20 2007-10-31 Armour Home Electronics Ltd Wireless communication device and system
US8145277B2 (en) 2007-09-28 2012-03-27 Embarq Holdings Company Llc System and method for a wireless ringer function
US8538345B2 (en) * 2007-10-09 2013-09-17 Qualcomm Incorporated Apparatus including housing incorporating a radiating element of an antenna
US8224305B2 (en) * 2007-10-31 2012-07-17 Centurylink Intellectual Property Llc System and method for extending conference communications access to local participants
JP5171220B2 (en) * 2007-11-15 2013-03-27 キヤノン株式会社 Recording system, recording method, and host device
US20090156249A1 (en) * 2007-12-12 2009-06-18 John Ruckart Devices and computer readable media for use with devices having audio output within a spatially controlled output beam
TWM337942U (en) * 2007-12-26 2008-08-01 Princeton Technology Corp Audio generating module
US8008564B2 (en) * 2008-02-01 2011-08-30 Sean Asher Wilens Harmony hat
US20090257603A1 (en) * 2008-04-09 2009-10-15 Raymond Chan Clip-on recording device
US20090312849A1 (en) * 2008-06-16 2009-12-17 Sony Ericsson Mobile Communications Ab Automated audio visual system configuration
US8396239B2 (en) 2008-06-17 2013-03-12 Earlens Corporation Optical electro-mechanical hearing devices with combined power and signal architectures
KR101568451B1 (en) 2008-06-17 2015-11-11 이어렌즈 코포레이션 Optical electro-mechanical hearing devices with combined power and signal architectures
DK2301261T3 (en) 2008-06-17 2019-04-23 Earlens Corp Optical electromechanical hearing aids with separate power supply and signal components
WO2010022456A1 (en) * 2008-08-31 2010-03-04 Peter Blamey Binaural noise reduction
WO2010033933A1 (en) 2008-09-22 2010-03-25 Earlens Corporation Balanced armature devices and methods for hearing
JPWO2010041394A1 (en) * 2008-10-06 2012-03-01 パナソニック株式会社 Sound playback device
US8818466B2 (en) * 2008-10-29 2014-08-26 Centurylink Intellectual Property Llc System and method for wireless home communications
US20100304795A1 (en) * 2009-05-28 2010-12-02 Nokia Corporation Multiple orientation apparatus
US20100303265A1 (en) * 2009-05-29 2010-12-02 Nvidia Corporation Enhancing user experience in audio-visual systems employing stereoscopic display and directional audio
CN102598712A (en) 2009-06-05 2012-07-18 音束有限责任公司 Optically coupled acoustic middle ear implant systems and methods
US9544700B2 (en) 2009-06-15 2017-01-10 Earlens Corporation Optically coupled active ossicular replacement prosthesis
EP2443843A4 (en) 2009-06-18 2013-12-04 SoundBeam LLC Eardrum implantable devices for hearing systems and methods
CN102640435B (en) 2009-06-18 2016-11-16 伊尔莱茵斯公司 Optical coupled cochlea implantation system and method
WO2011005500A2 (en) 2009-06-22 2011-01-13 SoundBeam LLC Round window coupled hearing systems and methods
CN102598715B (en) 2009-06-22 2015-08-05 伊尔莱茵斯公司 optical coupling bone conduction device, system and method
US8666088B2 (en) * 2009-06-24 2014-03-04 Ford Global Technologies Tunable, sound enhancing air induction system for internal combustion engine
WO2010151636A2 (en) 2009-06-24 2010-12-29 SoundBeam LLC Optical cochlear stimulation devices and methods
WO2010151647A2 (en) 2009-06-24 2010-12-29 SoundBeam LLC Optically coupled cochlear actuator systems and methods
GB0912774D0 (en) 2009-07-22 2009-08-26 Sensorcom Ltd Communications system
EP2462752B1 (en) 2009-08-03 2017-12-27 Imax Corporation Systems and method for monitoring cinema loudspeakers and compensating for quality problems
US20110096941A1 (en) * 2009-10-28 2011-04-28 Alcatel-Lucent Usa, Incorporated Self-steering directional loudspeakers and a method of operation thereof
US9420385B2 (en) 2009-12-21 2016-08-16 Starkey Laboratories, Inc. Low power intermittent messaging for hearing assistance devices
WO2011117903A2 (en) * 2010-03-24 2011-09-29 Raniero, Ilaria Directional-sound-diffusion alarm clock and further applications
US8503708B2 (en) 2010-04-08 2013-08-06 Starkey Laboratories, Inc. Hearing assistance device with programmable direct audio input port
WO2011139772A1 (en) * 2010-04-27 2011-11-10 James Fairey Sound wave modification
EP2580922B1 (en) 2010-06-14 2019-03-20 Turtle Beach Corporation Improved parametric signal processing and emitter systems and related methods
KR101702330B1 (en) * 2010-07-13 2017-02-03 삼성전자주식회사 Method and apparatus for simultaneous controlling near and far sound field
WO2012088187A2 (en) 2010-12-20 2012-06-28 SoundBeam LLC Anatomically customized ear canal hearing apparatus
US8854985B2 (en) * 2010-12-31 2014-10-07 Yossef TSFATY System and method for using ultrasonic communication
US10039672B2 (en) * 2011-03-23 2018-08-07 Ali Mohammad Aghamohammadi Vibro-electro tactile ultrasound hearing device
CN102762074A (en) * 2011-04-25 2012-10-31 昆山广兴电子有限公司 Heat radiation system for portable electronic device
US8918197B2 (en) 2012-06-13 2014-12-23 Avraham Suhami Audio communication networks
US8849791B1 (en) 2011-06-29 2014-09-30 Amazon Technologies, Inc. Assisted shopping
US8630851B1 (en) 2011-06-29 2014-01-14 Amazon Technologies, Inc. Assisted shopping
DE102011079609A1 (en) * 2011-07-22 2013-01-24 Schaeffler Technologies AG & Co. KG Phaser
US9271068B2 (en) * 2011-09-13 2016-02-23 Tara Chand Singhal Apparatus and method for a wireless extension collar device for altering operational mode of mobile and fixed end-user wireless devices by voice commands
WO2013042316A1 (en) * 2011-09-22 2013-03-28 パナソニック株式会社 Directional loudspeaker
TWI457008B (en) * 2011-10-13 2014-10-11 Acer Inc Stereo device, stereo system and method of playing stereo sound
CN103108197A (en) 2011-11-14 2013-05-15 辉达公司 Priority level compression method and priority level compression system for three-dimensional (3D) video wireless display
CN103138807B (en) * 2011-11-28 2014-11-26 财付通支付科技有限公司 Implement method and system for near field communication (NFC)
US20130177164A1 (en) * 2012-01-06 2013-07-11 Sony Ericsson Mobile Communications Ab Ultrasonic sound reproduction on eardrum
US9036831B2 (en) 2012-01-10 2015-05-19 Turtle Beach Corporation Amplification system, carrier tracking systems and related methods for use in parametric sound systems
US9829715B2 (en) 2012-01-23 2017-11-28 Nvidia Corporation Eyewear device for transmitting signal and communication method thereof
FR2986897A1 (en) * 2012-02-10 2013-08-16 Peugeot Citroen Automobiles Sa Method for adapting sound signals to be broadcast by sound diffusion system of e.g. smartphone, in passenger compartment of car, involves adapting sound signals into sound diffusion system as function of sound correction filter
US20140364171A1 (en) * 2012-03-01 2014-12-11 DSP Group Method and system for improving voice communication experience in mobile communication devices
US9264791B1 (en) 2012-03-28 2016-02-16 Ari W. Polivy Portable audio speaker system that attaches to clothing or other structures via magnet
US8958580B2 (en) 2012-04-18 2015-02-17 Turtle Beach Corporation Parametric transducers and related methods
US20130322674A1 (en) * 2012-05-31 2013-12-05 Verizon Patent And Licensing Inc. Method and system for directing sound to a select user within a premises
US9268522B2 (en) 2012-06-27 2016-02-23 Volkswagen Ag Devices and methods for conveying audio information in vehicles
US9119012B2 (en) 2012-06-28 2015-08-25 Broadcom Corporation Loudspeaker beamforming for personal audio focal points
US8934650B1 (en) 2012-07-03 2015-01-13 Turtle Beach Corporation Low profile parametric transducers and related methods
US8428665B1 (en) * 2012-07-27 2013-04-23 Signal Essence, LLC Holder for portable communication device
CN103634720A (en) * 2012-08-21 2014-03-12 联想(北京)有限公司 Playing control method and electronic equipment
US9491548B2 (en) * 2012-08-24 2016-11-08 Convey Technology, Inc. Parametric system for generating a sound halo, and methods of use thereof
US9529431B2 (en) 2012-09-06 2016-12-27 Thales Avionics, Inc. Directional sound systems including eye tracking capabilities and related methods
US8879760B2 (en) * 2012-09-06 2014-11-04 Thales Avionics, Inc. Directional sound systems and related methods
US9578224B2 (en) 2012-09-10 2017-02-21 Nvidia Corporation System and method for enhanced monoimaging
EP2897379A4 (en) * 2012-09-14 2016-04-27 Nec Corp Speaker device and electronic equipment
KR102006734B1 (en) * 2012-09-21 2019-08-02 삼성전자 주식회사 Method for processing audio signal and wireless communication device
US9678713B2 (en) 2012-10-09 2017-06-13 At&T Intellectual Property I, L.P. Method and apparatus for processing commands directed to a media center
US9232310B2 (en) 2012-10-15 2016-01-05 Nokia Technologies Oy Methods, apparatuses and computer program products for facilitating directional audio capture with multiple microphones
US8750541B1 (en) * 2012-10-31 2014-06-10 Google Inc. Parametric array for a head-mountable device
US9137314B2 (en) 2012-11-06 2015-09-15 At&T Intellectual Property I, L.P. Methods, systems, and products for personalized feedback
US8774855B2 (en) 2012-11-09 2014-07-08 Futurewei Technologies, Inc. Method to estimate head relative handset location
US9466872B2 (en) 2012-11-09 2016-10-11 Futurewei Technologies, Inc. Tunable dual loop antenna system
US9277321B2 (en) * 2012-12-17 2016-03-01 Nokia Technologies Oy Device discovery and constellation selection
US9807495B2 (en) * 2013-02-25 2017-10-31 Microsoft Technology Licensing, Llc Wearable audio accessories for computing devices
EP2965312B1 (en) * 2013-03-05 2019-01-02 Apple Inc. Adjusting the beam pattern of a speaker array based on the location of one or more listeners
US20140269196A1 (en) * 2013-03-15 2014-09-18 Elwha Llc Portable Electronic Device Directed Audio Emitter Arrangement System and Method
US20140269207A1 (en) * 2013-03-15 2014-09-18 Elwha Llc Portable Electronic Device Directed Audio Targeted User System and Method
US10181314B2 (en) * 2013-03-15 2019-01-15 Elwha Llc Portable electronic device directed audio targeted multiple user system and method
US10291983B2 (en) * 2013-03-15 2019-05-14 Elwha Llc Portable electronic device directed audio system and method
US10575093B2 (en) 2013-03-15 2020-02-25 Elwha Llc Portable electronic device directed audio emitter arrangement system and method
US20140269214A1 (en) * 2013-03-15 2014-09-18 Elwha LLC, a limited liability company of the State of Delaware Portable electronic device directed audio targeted multi-user system and method
US9886941B2 (en) 2013-03-15 2018-02-06 Elwha Llc Portable electronic device directed audio targeted user system and method
US9877135B2 (en) 2013-06-07 2018-01-23 Nokia Technologies Oy Method and apparatus for location based loudspeaker system configuration
US9332344B2 (en) 2013-06-13 2016-05-03 Turtle Beach Corporation Self-bias emitter circuit
US20140369538A1 (en) * 2013-06-13 2014-12-18 Parametric Sound Corporation Assistive Listening System
US8988911B2 (en) 2013-06-13 2015-03-24 Turtle Beach Corporation Self-bias emitter circuit
KR102109739B1 (en) * 2013-07-09 2020-05-12 삼성전자 주식회사 Method and apparatus for outputing sound based on location
US8761431B1 (en) 2013-08-15 2014-06-24 Joelise, LLC Adjustable headphones
US9059669B2 (en) * 2013-09-05 2015-06-16 Qualcomm Incorporated Sound control for network-connected devices
DE102013219636A1 (en) * 2013-09-27 2015-04-02 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. DEVICE AND METHOD FOR TRANSFERRING A SOUND SIGNAL
US10063982B2 (en) * 2013-10-09 2018-08-28 Voyetra Turtle Beach, Inc. Method and system for a game headset with audio alerts based on audio track analysis
NL2011583C2 (en) * 2013-10-10 2015-04-13 Wwinn B V Module, system and method for detecting acoustical failure of a sound source.
FR3012007B1 (en) * 2013-10-11 2017-02-10 Matthieu Gomont ACCOUSTIC DEVICE FOR USE BY A USER USING DIRECTIVE TRANSDUCERS
US10477327B2 (en) 2013-10-22 2019-11-12 Gn Hearing A/S Private audio streaming at point of sale
EP2866470B1 (en) 2013-10-22 2018-07-25 GN Hearing A/S Private audio streaming at point of sale
EP2882203A1 (en) * 2013-12-06 2015-06-10 Oticon A/s Hearing aid device for hands free communication
KR102111708B1 (en) * 2014-01-10 2020-06-08 삼성전자주식회사 Apparatus and method for reducing power consuption in hearing aid
US9560449B2 (en) 2014-01-17 2017-01-31 Sony Corporation Distributed wireless speaker system
US10935788B2 (en) 2014-01-24 2021-03-02 Nvidia Corporation Hybrid virtual 3D rendering approach to stereovision
US9866986B2 (en) 2014-01-24 2018-01-09 Sony Corporation Audio speaker system with virtual music performance
US9779593B2 (en) 2014-08-15 2017-10-03 Elwha Llc Systems and methods for positioning a user of a hands-free intercommunication system
US20160118036A1 (en) * 2014-10-23 2016-04-28 Elwha Llc Systems and methods for positioning a user of a hands-free intercommunication system
US9565284B2 (en) 2014-04-16 2017-02-07 Elwha Llc Systems and methods for automatically connecting a user of a hands-free intercommunication system
US9131068B2 (en) 2014-02-06 2015-09-08 Elwha Llc Systems and methods for automatically connecting a user of a hands-free intercommunication system
US9232335B2 (en) 2014-03-06 2016-01-05 Sony Corporation Networked speaker system with follow me
US10034103B2 (en) 2014-03-18 2018-07-24 Earlens Corporation High fidelity and reduced feedback contact hearing apparatus and methods
US10003379B2 (en) 2014-05-06 2018-06-19 Starkey Laboratories, Inc. Wireless communication with probing bandwidth
US20170098350A1 (en) 2015-05-15 2017-04-06 Mick Ebeling Vibrotactile control software systems and methods
US9679546B2 (en) * 2014-05-16 2017-06-13 Not Impossible LLC Sound vest
US9786201B2 (en) * 2014-05-16 2017-10-10 Not Impossible LLC Wearable sound
US9900723B1 (en) 2014-05-28 2018-02-20 Apple Inc. Multi-channel loudspeaker matching using variable directivity
US9420362B1 (en) 2014-06-20 2016-08-16 Google Inc. Peripheral audio output device
US9392389B2 (en) 2014-06-27 2016-07-12 Microsoft Technology Licensing, Llc Directional audio notification
WO2016011044A1 (en) 2014-07-14 2016-01-21 Earlens Corporation Sliding bias and peak limiting for optical hearing devices
TWI544807B (en) * 2014-07-18 2016-08-01 緯創資通股份有限公司 Displayer device having speaker module
US9232366B1 (en) 2014-10-15 2016-01-05 Motorola Solutions, Inc. Dual-watch collar-wearable communication device
US9648419B2 (en) 2014-11-12 2017-05-09 Motorola Solutions, Inc. Apparatus and method for coordinating use of different microphones in a communication device
US9924276B2 (en) 2014-11-26 2018-03-20 Earlens Corporation Adjustable venting for hearing instruments
CN104703107B (en) * 2015-02-06 2018-06-08 哈尔滨工业大学深圳研究生院 A kind of adaptive echo cancellation method in digital deaf-aid
US10142271B2 (en) * 2015-03-06 2018-11-27 Unify Gmbh & Co. Kg Method, device, and system for providing privacy for communications
US9973561B2 (en) * 2015-04-17 2018-05-15 International Business Machines Corporation Conferencing based on portable multifunction devices
US9508336B1 (en) * 2015-06-25 2016-11-29 Bose Corporation Transitioning between arrayed and in-phase speaker configurations for active noise reduction
US9640169B2 (en) 2015-06-25 2017-05-02 Bose Corporation Arraying speakers for a uniform driver field
DK3139627T3 (en) * 2015-09-02 2019-05-20 Sonion Nederland Bv Hearing device with multi-way sounders
KR102429409B1 (en) 2015-09-09 2022-08-04 삼성전자 주식회사 Electronic device and method for controlling an operation thereof
US10264383B1 (en) 2015-09-25 2019-04-16 Apple Inc. Multi-listener stereo image array
US10034081B2 (en) 2015-09-28 2018-07-24 Samsung Electronics Co., Ltd. Acoustic filter for omnidirectional loudspeaker
US10469942B2 (en) 2015-09-28 2019-11-05 Samsung Electronics Co., Ltd. Three hundred and sixty degree horn for omnidirectional loudspeaker
US20170095202A1 (en) 2015-10-02 2017-04-06 Earlens Corporation Drug delivery customized ear canal apparatus
RU2711094C2 (en) 2015-12-08 2020-01-15 ФОРД ГЛОУБАЛ ТЕКНОЛОДЖИЗ, ЭлЭлСи Vehicle driver attention system and method
US9900735B2 (en) 2015-12-18 2018-02-20 Federal Signal Corporation Communication systems
US11350226B2 (en) 2015-12-30 2022-05-31 Earlens Corporation Charging protocol for rechargeable hearing systems
US10178483B2 (en) 2015-12-30 2019-01-08 Earlens Corporation Light based hearing systems, apparatus, and methods
US10492010B2 (en) 2015-12-30 2019-11-26 Earlens Corporations Damping in contact hearing systems
US9693168B1 (en) 2016-02-08 2017-06-27 Sony Corporation Ultrasonic speaker assembly for audio spatial effect
US9826332B2 (en) 2016-02-09 2017-11-21 Sony Corporation Centralized wireless speaker system
US9924291B2 (en) 2016-02-16 2018-03-20 Sony Corporation Distributed wireless speaker system
US9906981B2 (en) 2016-02-25 2018-02-27 Nvidia Corporation Method and system for dynamic regulation and control of Wi-Fi scans
US9826330B2 (en) 2016-03-14 2017-11-21 Sony Corporation Gimbal-mounted linear ultrasonic speaker assembly
US9693169B1 (en) 2016-03-16 2017-06-27 Sony Corporation Ultrasonic speaker assembly with ultrasonic room mapping
CN105874538A (en) * 2016-04-05 2016-08-17 张阳 Household music control method and system
CN110170175A (en) * 2016-04-15 2019-08-27 深圳市大疆创新科技有限公司 Remote controler
US10273141B2 (en) * 2016-04-26 2019-04-30 Taiwan Semiconductor Manufacturing Co., Ltd. Rough layer for better anti-stiction deposition
CN108605067B (en) * 2016-04-29 2021-06-08 华为技术有限公司 Method for playing audio and mobile terminal
CN106101350B (en) * 2016-05-31 2019-05-17 维沃移动通信有限公司 A kind of mobile terminal and its call method
US9794724B1 (en) 2016-07-20 2017-10-17 Sony Corporation Ultrasonic speaker assembly using variable carrier frequency to establish third dimension sound locating
CN106357348B (en) * 2016-08-16 2019-02-12 北京小米移动软件有限公司 Adjust the method and device of ultrasonic wave transmission power
CN109952771A (en) 2016-09-09 2019-06-28 伊尔兰斯公司 Contact hearing system, device and method
US9924286B1 (en) 2016-10-20 2018-03-20 Sony Corporation Networked speaker system with LED-based wireless communication and personal identifier
US10075791B2 (en) 2016-10-20 2018-09-11 Sony Corporation Networked speaker system with LED-based wireless communication and room mapping
US9854362B1 (en) 2016-10-20 2017-12-26 Sony Corporation Networked speaker system with LED-based wireless communication and object detection
WO2018093733A1 (en) 2016-11-15 2018-05-24 Earlens Corporation Improved impression procedure
US10271132B2 (en) 2016-11-28 2019-04-23 Motorola Solutions, Inc. Method to dynamically change the directional speakers audio beam and level based on the end user activity
US10110982B2 (en) * 2017-01-20 2018-10-23 Bose Corporation Fabric cover for flexible neckband
CN107071119B (en) * 2017-04-26 2019-10-18 维沃移动通信有限公司 A kind of sound removing method and mobile terminal
US10535360B1 (en) * 2017-05-25 2020-01-14 Tp Lab, Inc. Phone stand using a plurality of directional speakers
CN107105369A (en) * 2017-06-29 2017-08-29 京东方科技集团股份有限公司 Sound orients switching device and display system
CN107580289A (en) * 2017-08-10 2018-01-12 西安蜂语信息科技有限公司 Method of speech processing and device
US10629190B2 (en) 2017-11-09 2020-04-21 Paypal, Inc. Hardware command device with audio privacy features
EP3522568B1 (en) * 2018-01-31 2021-03-10 Oticon A/s A hearing aid including a vibrator touching a pinna
US10625669B2 (en) * 2018-02-21 2020-04-21 Ford Global Technologies, Llc Vehicle sensor operation
WO2019173470A1 (en) 2018-03-07 2019-09-12 Earlens Corporation Contact hearing device and retention structure materials
WO2019199680A1 (en) 2018-04-09 2019-10-17 Earlens Corporation Dynamic filter
US10777048B2 (en) 2018-04-12 2020-09-15 Ipventure, Inc. Methods and apparatus regarding electronic eyewear applicable for seniors
CN108631884B (en) * 2018-05-15 2021-02-26 浙江大学 Sound wave communication method based on nonlinear effect
CN108981131B (en) * 2018-06-06 2020-07-03 珠海格力电器股份有限公司 Method for reducing noise of loudspeaker in air conditioner remote control process
US10510220B1 (en) 2018-08-06 2019-12-17 International Business Machines Corporation Intelligent alarm sound control
US11254542B2 (en) 2018-08-20 2022-02-22 Otis Elevator Company Car door interlock
US10587951B1 (en) * 2018-09-13 2020-03-10 Plantronics, Inc. Equipment including down-firing speaker
US10623859B1 (en) 2018-10-23 2020-04-14 Sony Corporation Networked speaker system with combined power over Ethernet and audio delivery
US10553194B1 (en) 2018-12-04 2020-02-04 Honeywell Federal Manufacturing & Technologies, Llc Sound-masking device for a roll-up door
US10728655B1 (en) 2018-12-17 2020-07-28 Facebook Technologies, Llc Customized sound field for increased privacy
US11140477B2 (en) * 2019-01-06 2021-10-05 Frank Joseph Pompei Private personal communications device
US10957299B2 (en) 2019-04-09 2021-03-23 Facebook Technologies, Llc Acoustic transfer function personalization using sound scene analysis and beamforming
US11212606B1 (en) 2019-12-31 2021-12-28 Facebook Technologies, Llc Headset sound leakage mitigation
US11743640B2 (en) 2019-12-31 2023-08-29 Meta Platforms Technologies, Llc Privacy setting for sound leakage control
US11443737B2 (en) 2020-01-14 2022-09-13 Sony Corporation Audio video translation into multiple languages for respective listeners
DE102020201320B3 (en) * 2020-02-04 2021-06-17 Volkswagen Aktiengesellschaft Device for generating acoustic signals selectively for certain people in a motor vehicle
KR102389356B1 (en) * 2020-09-01 2022-04-21 재단법인 대구경북첨단의료산업진흥재단 Non-wearing hearing device for the hearing-impaired person and method for operating thereof
CN112738335B (en) * 2021-01-15 2022-05-17 重庆蓝岸通讯技术有限公司 Sound directional transmission method and device of mobile terminal and storage medium
US11792565B2 (en) * 2021-04-27 2023-10-17 Advanced Semiconductor Engineering, Inc. Electronic module
CN113438548B (en) * 2021-08-30 2021-10-29 深圳佳力拓科技有限公司 Digital television display method and device based on video data packet and audio data packet
CN113747303B (en) * 2021-09-06 2023-11-10 上海科技大学 Directional sound beam whisper interaction system, control method, control terminal and medium
US20230206734A1 (en) * 2021-12-23 2023-06-29 Solmark International, Inc. Catalytic converter alarm system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4823908A (en) * 1984-08-28 1989-04-25 Matsushita Electric Industrial Co., Ltd. Directional loudspeaker system
US6169813B1 (en) * 1994-03-16 2001-01-02 Hearing Innovations Incorporated Frequency transpositional hearing aid with single sideband modulation
US6363139B1 (en) * 2000-06-16 2002-03-26 Motorola, Inc. Omnidirectional ultrasonic communication system
US6445804B1 (en) * 1997-11-25 2002-09-03 Nec Corporation Ultra-directional speaker system and speaker system drive method
US6643377B1 (en) * 1998-04-28 2003-11-04 Canon Kabushiki Kaisha Audio output system and method therefor

Family Cites Families (101)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CH517679A (en) * 1968-03-08 1972-01-15 Basf Ag Process for the production of 2,3,6-trimethylphenol
US3974335A (en) * 1974-06-06 1976-08-10 Richard Besserman Hearing test by telephone including recorded results
DE2435944C3 (en) * 1974-07-25 1985-07-18 Poensgen, Karl Otto, 8000 München Hi-Fi speaker box
US3942139A (en) * 1974-11-08 1976-03-02 Westinghouse Electric Corporation Broadband microwave bulk acoustic delay device
US4128738A (en) * 1976-09-28 1978-12-05 Gallery Thomas W Compact transmission line loudspeaker system
JPS5851514Y2 (en) * 1979-01-10 1983-11-24 松下電工株式会社 Variable direction mounting device
IT1125861B (en) * 1979-11-26 1986-05-14 Nuovo Pignone Spa PERFECTED DEVICE TO VARY THE COMBUSTION POSITION OF THE COMB IN TEXTILE MACHINES FOR SPONGE FABRICS
US4476571A (en) * 1981-06-15 1984-10-09 Pioneer Electronic Corporation Automatic sound volume control device
US6778672B2 (en) * 1992-05-05 2004-08-17 Automotive Technologies International Inc. Audio reception control arrangement and method for a vehicle
US4622440A (en) * 1984-04-11 1986-11-11 In Tech Systems Corp. Differential hearing aid with programmable frequency response
US4625318A (en) * 1985-02-21 1986-11-25 Wang Laboratories, Inc. Frequency modulated message transmission
US4955729A (en) * 1987-03-31 1990-09-11 Marx Guenter Hearing aid which cuts on/off during removal and attachment to the user
JPH01109898A (en) 1987-10-22 1989-04-26 Matsushita Electric Ind Co Ltd Remote controller position detector for stereo
US5111506A (en) * 1989-03-02 1992-05-05 Ensonig Corporation Power efficient hearing aid
US5495534A (en) * 1990-01-19 1996-02-27 Sony Corporation Audio signal reproducing apparatus
US5666424A (en) * 1990-06-08 1997-09-09 Harman International Industries, Inc. Six-axis surround sound processor with automatic balancing and calibration
US6279946B1 (en) * 1998-06-09 2001-08-28 Automotive Technologies International Inc. Methods for controlling a system in a vehicle using a transmitting/receiving transducer and/or while compensating for thermal gradients
US5313663A (en) * 1992-05-08 1994-05-17 American Technology Corporation Ear mounted RF receiver
US5835732A (en) * 1993-10-28 1998-11-10 Elonex Ip Holdings, Ltd. Miniature digital assistant having enhanced host communication
JP3306600B2 (en) * 1992-08-05 2002-07-24 三菱電機株式会社 Automatic volume control
US5526411A (en) * 1992-08-13 1996-06-11 Radio, Computer & Telephone Corporation Integrated hand-held portable telephone and personal computing device
US5682157A (en) * 1992-10-19 1997-10-28 Fasirand Corporation Frequency-alternating synchronized infrared
US5357578A (en) * 1992-11-24 1994-10-18 Canon Kabushiki Kaisha Acoustic output device, and electronic apparatus using the acoustic output device
JPH06197293A (en) * 1992-12-25 1994-07-15 Toshiba Corp Speaker system for television receiver
US5764782A (en) * 1993-03-23 1998-06-09 Hayes; Joseph Francis Acoustic reflector
US5481616A (en) * 1993-11-08 1996-01-02 Sparkomatic Corporation Plug-in sound accessory for portable computers
JPH07264280A (en) * 1994-03-24 1995-10-13 Matsushita Electric Ind Co Ltd Cordless telephone set
US5828768A (en) * 1994-05-11 1998-10-27 Noise Cancellation Technologies, Inc. Multimedia personal computer with active noise reduction and piezo speakers
US5819183A (en) * 1994-06-20 1998-10-06 Microtalk Technologies Low-feedback compact wireless telephone
US5802190A (en) * 1994-11-04 1998-09-01 The Walt Disney Company Linear speaker array
GB9425577D0 (en) * 1994-12-19 1995-02-15 Power Jeffrey Acoustic transducers with controlled directivity
US5588041A (en) * 1995-01-05 1996-12-24 Motorola, Inc. Cellular speakerphone and method of operation thereof
US5517257A (en) 1995-03-28 1996-05-14 Microsoft Corporation Video control user interface for interactive television systems and method for controlling display of a video movie
US5870484A (en) * 1995-09-05 1999-02-09 Greenberger; Hal Loudspeaker array with signal dependent radiation pattern
US5777665A (en) * 1995-09-20 1998-07-07 Videotronic Systems Image blocking teleconferencing eye contact terminal
US6058315A (en) 1996-03-13 2000-05-02 Motorola, Inc. Speaker assembly for a radiotelephone
US5793875A (en) * 1996-04-22 1998-08-11 Cardinal Sound Labs, Inc. Directional hearing system
US6034689A (en) 1996-06-03 2000-03-07 Webtv Networks, Inc. Web browser allowing navigation between hypertext objects using remote control
US5864671A (en) 1996-07-01 1999-01-26 Sun Microsystems, Inc. Hybrid memory access protocol for servicing memory access request by ascertaining whether the memory block is currently cached in determining which protocols to be used
US6577738B2 (en) * 1996-07-17 2003-06-10 American Technology Corporation Parametric virtual speaker and surround-sound system
US5819783A (en) * 1996-11-27 1998-10-13 Isi Norgren Inc. Modular 3-way valve with manual override, lockout, and internal sensors
US6275596B1 (en) * 1997-01-10 2001-08-14 Gn Resound Corporation Open ear canal hearing aid system
US6011855A (en) 1997-03-17 2000-01-04 American Technology Corporation Piezoelectric film sonic emitter
US7376236B1 (en) 1997-03-17 2008-05-20 American Technology Corporation Piezoelectric film sonic emitter
US6151398A (en) * 1998-01-13 2000-11-21 American Technology Corporation Magnetic film ultrasonic emitter
US6275231B1 (en) * 1997-08-01 2001-08-14 American Calcar Inc. Centralized control and management system for automobiles
US6243472B1 (en) * 1997-09-17 2001-06-05 Frank Albert Bilan Fully integrated amplified loudspeaker
US6959220B1 (en) * 1997-11-07 2005-10-25 Microsoft Corporation Digital audio signal filtering mechanism and method
JPH11164384A (en) 1997-11-25 1999-06-18 Nec Corp Super directional speaker and speaker drive method
US6163711A (en) * 1997-12-01 2000-12-19 Nokia Mobile Phones, Ltd Method and apparatus for interfacing a mobile phone with an existing audio system
US6041657A (en) * 1997-12-23 2000-03-28 Caterpillar, Inc. Outdoor noise testing system
GB9727357D0 (en) * 1997-12-24 1998-02-25 Watson Michael B Transducer assembly
CN101031162B (en) * 1998-01-16 2012-09-05 索尼公司 Speaker apparatus
JP3267231B2 (en) * 1998-02-23 2002-03-18 日本電気株式会社 Super directional speaker
US6671494B1 (en) * 1998-06-18 2003-12-30 Competitive Technologies, Inc. Small, battery operated RF transmitter for portable audio devices for use with headphones with RF receiver
US6259731B1 (en) * 1998-07-14 2001-07-10 Ericsson Inc. System and method for radio-communication using frequency modulated signals
US20030118198A1 (en) * 1998-09-24 2003-06-26 American Technology Corporation Biaxial parametric speaker
US6512826B1 (en) * 1998-11-30 2003-01-28 Westech Korea Inc. Multi-directional hand-free kit
US6535612B1 (en) * 1998-12-07 2003-03-18 American Technology Corporation Electroacoustic transducer with diaphragm securing structure and method
KR20000042498A (en) 1998-12-22 2000-07-15 노윤성 Method for testing the auditory acuity of person by using computer
US7391872B2 (en) * 1999-04-27 2008-06-24 Frank Joseph Pompei Parametric audio system
US6442278B1 (en) 1999-06-15 2002-08-27 Hearing Enhancement Company, Llc Voice-to-remaining audio (VRA) interactive center channel downmix
US6484040B1 (en) * 1999-07-20 2002-11-19 Ching Yuan Wang Wireless mobile phone combining with car hi-fi speakers
US6584205B1 (en) 1999-08-26 2003-06-24 American Technology Corporation Modulator processing for a parametric speaker system
US7016504B1 (en) 1999-09-21 2006-03-21 Insonus Medical, Inc. Personal hearing evaluator
US6594367B1 (en) * 1999-10-25 2003-07-15 Andrea Electronics Corporation Super directional beamforming design and implementation
US6322521B1 (en) * 2000-01-24 2001-11-27 Audia Technology, Inc. Method and system for on-line hearing examination and correction
US6453045B1 (en) * 2000-02-04 2002-09-17 Motorola, Inc. Telecommunication device with piezo-electric transducer for handsfree and private operating modes
KR20010091117A (en) 2000-03-13 2001-10-23 윤호섭 A volume control mechanism for audio
US6826117B2 (en) * 2000-03-22 2004-11-30 Summit Safety, Inc. Tracking, safety and navigation system for firefighters
US20060233404A1 (en) 2000-03-28 2006-10-19 American Technology Corporation Horn array emitter
US6631196B1 (en) * 2000-04-07 2003-10-07 Gn Resound North America Corporation Method and device for using an ultrasonic carrier to provide wide audio bandwidth transduction
DE10023585B4 (en) * 2000-05-13 2005-04-21 Daimlerchrysler Ag Display arrangement in a vehicle
AU2001273209A1 (en) * 2000-07-03 2002-01-30 Audia Technology, Inc. Power management for hearing aid device
US6895261B1 (en) * 2000-07-13 2005-05-17 Thomas R. Palamides Portable, wireless communication apparatus integrated with garment
JP3745602B2 (en) * 2000-07-27 2006-02-15 インターナショナル・ビジネス・マシーンズ・コーポレーション Body set type speaker device
JP2002057588A (en) * 2000-08-08 2002-02-22 Niles Parts Co Ltd Car audio system and plug transmitter used for the audio system
US20020048385A1 (en) * 2000-09-11 2002-04-25 Ilan Rosenberg Personal talking aid for cellular phone
US7200237B2 (en) * 2000-10-23 2007-04-03 Apherma Corporation Method and system for remotely upgrading a hearing aid device
US20020090099A1 (en) * 2001-01-08 2002-07-11 Hwang Sung-Gul Hands-free, wearable communication device for a wireless communication system
US20020090103A1 (en) * 2001-01-08 2002-07-11 Russell Calisto Personal wearable audio system
US20020141599A1 (en) * 2001-04-03 2002-10-03 Philips Electronics North America Corp. Active noise canceling headset and devices with selective noise suppression
US20020149705A1 (en) * 2001-04-12 2002-10-17 Allen Paul G. Contact list for a hybrid communicator/remote control
US6498970B2 (en) * 2001-04-17 2002-12-24 Koninklijke Philips Electronics N.V. Automatic access to an automobile via biometrics
US6913578B2 (en) * 2001-05-03 2005-07-05 Apherma Corporation Method for customizing audio systems for hearing impaired
US7013009B2 (en) * 2001-06-21 2006-03-14 Oakley, Inc. Eyeglasses with wireless communication features
US6795879B2 (en) * 2001-08-08 2004-09-21 Texas Instruments Incorporated Apparatus and method for wait state analysis in a digital signal processing system
DE10140646C2 (en) * 2001-08-18 2003-11-20 Daimler Chrysler Ag Method and device for directional sound radiation
WO2003032678A2 (en) 2001-10-09 2003-04-17 Frank Joseph Pompei Ultrasonic transducer for parametric array
US7027768B2 (en) * 2001-10-12 2006-04-11 Bellsouth Intellectual Property Corporation Method and systems using a set-top box and communicating between a remote data network and a wireless communication network
US20030174242A1 (en) * 2002-03-14 2003-09-18 Creo Il. Ltd. Mobile digital camera control
US20040114772A1 (en) * 2002-03-21 2004-06-17 David Zlotnick Method and system for transmitting and/or receiving audio signals with a desired direction
US7328151B2 (en) 2002-03-22 2008-02-05 Sound Id Audio decoder with dynamic adjustment of signal modification
US20040052387A1 (en) * 2002-07-02 2004-03-18 American Technology Corporation Piezoelectric film emitter configuration
US6591085B1 (en) * 2002-07-17 2003-07-08 Netalog, Inc. FM transmitter and power supply/charging assembly for MP3 player
IL152439A0 (en) 2002-10-23 2003-05-29 Membrane-less microphone capable of functioning in a very wide range of frequencies and with much less distortions
US20040114770A1 (en) 2002-10-30 2004-06-17 Pompei Frank Joseph Directed acoustic sound system
US20040204168A1 (en) * 2003-03-17 2004-10-14 Nokia Corporation Headset with integrated radio and piconet circuitry
US7945064B2 (en) * 2003-04-09 2011-05-17 Board Of Trustees Of The University Of Illinois Intrabody communication with ultrasound
US20040208325A1 (en) 2003-04-15 2004-10-21 Cheung Kwok Wai Method and apparatus for wireless audio delivery
US8849185B2 (en) 2003-04-15 2014-09-30 Ipventure, Inc. Hybrid audio delivery system and method therefor

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1715717A4 (en) * 2004-02-10 2009-04-08 Honda Motor Co Ltd Mobile body with superdirectivity speaker
WO2005076661A1 (en) 2004-02-10 2005-08-18 Mitsubishi Denki Engineering Kabushiki Kaisha Mobile body with superdirectivity speaker
EP1715717A1 (en) * 2004-02-10 2006-10-25 HONDA MOTOR CO., Ltd. Mobile body with superdirectivity speaker
US7268548B2 (en) 2004-05-06 2007-09-11 General Electric Company System and method for reducing auditory perception of noise associated with a medical imaging process
GB2413917B (en) * 2004-05-06 2007-10-03 Gen Electric System and method for reducing auditory perception of noise associated with a medical imaging process
GB2413917A (en) * 2004-05-06 2005-11-09 Gen Electric Reducing auditory perception of noise associated with a medical imaging process
WO2006049645A1 (en) * 2004-10-29 2006-05-11 Sony Ericsson Mobile Communications Ab Mobile terminals including compensation for hearing impairment and methods and computer program products for operating the same
US7613314B2 (en) 2004-10-29 2009-11-03 Sony Ericsson Mobile Communications Ab Mobile terminals including compensation for hearing impairment and methods and computer program products for operating the same
US8995683B2 (en) 2006-12-29 2015-03-31 Google Technology Holdings LLC Methods and devices for adaptive ringtone generation
WO2011091797A2 (en) 2010-01-27 2011-08-04 Micro Balle Aps Hearing aid device and method
US8588454B2 (en) 2011-02-09 2013-11-19 Blackberry Limited Module for containing an earpiece for an audio device
US10134416B2 (en) 2015-05-11 2018-11-20 Microsoft Technology Licensing, Llc Privacy-preserving energy-efficient speakers for personal sound
WO2017003472A1 (en) * 2015-06-30 2017-01-05 Harman International Industries, Incorporated Shoulder-mounted robotic speakers
US10257637B2 (en) 2015-06-30 2019-04-09 Harman International Industries, Incorporated Shoulder-mounted robotic speakers
WO2024044835A1 (en) * 2022-08-30 2024-03-07 Zerosound Systems Inc. Directional sound apparatus and method

Also Published As

Publication number Publication date
US20080279410A1 (en) 2008-11-13
US20090298430A1 (en) 2009-12-03
US20050009583A1 (en) 2005-01-13
US7587227B2 (en) 2009-09-08
US20040208325A1 (en) 2004-10-21
US7801570B2 (en) 2010-09-21
WO2004093488A3 (en) 2005-03-24
US20040208324A1 (en) 2004-10-21
US8208970B2 (en) 2012-06-26
US20040209654A1 (en) 2004-10-21
US20040208333A1 (en) 2004-10-21
US20070287516A1 (en) 2007-12-13
US7388962B2 (en) 2008-06-17
US8582789B2 (en) 2013-11-12
US7269452B2 (en) 2007-09-11

Similar Documents

Publication Publication Date Title
US11869526B2 (en) Hearing enhancement methods and systems
WO2004093488A2 (en) Directional speakers
US10817251B2 (en) Dynamic capability demonstration in wearable audio device
EP0563194B1 (en) Hearing aid system
US6738485B1 (en) Apparatus, method and system for ultra short range communication
US9756159B2 (en) Handphone
CN102355748A (en) Method for determining a processed audio signal and a handheld device
US10271132B2 (en) Method to dynamically change the directional speakers audio beam and level based on the end user activity
US10959009B2 (en) Wearable personal acoustic device having outloud and private operational modes
US20140233754A1 (en) Headphone system with retractable microphone
JP4170143B2 (en) Hearing aid system
WO2002078390A2 (en) A method and system for transmitting and/or receiving audio signals with a desired direction

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the EPO has been informed by WIPO that EP was designated in this application
WWE WIPO information: entry into national phase

Ref document number: 20048103888

Country of ref document: CN

122 Ep: PCT application non-entry in European phase