Publication number: US 20020037087 A1
Publication type: Application
Application number: US 09/755,412
Publication date: Mar 28, 2002
Filing date: Jan 5, 2001
Priority date: Jan 5, 2001
Also published as: US6895098, US6910013, US20020090098, WO2001020965A2, WO2001020965A3
Inventors: Sylvia Allegro, Michael Buchler
Original Assignee: Sylvia Allegro, Michael Buchler
Method for identifying a transient acoustic scene, application of said method, and a hearing device
US 20020037087 A1
Abstract
The invention relates first of all to a method for identifying a transient acoustic scene, said method including the extraction, during an extraction phase, of characteristic features from an acoustic signal captured by at least one microphone (2 a , 2 b), and the identification, during an identification phase, of the transient acoustic scene on the basis of the extracted characteristics. According to the invention, at least auditory-based characteristics are identified in the extraction phase. Also specified are an application of the method per this invention and a hearing device.
Claims(18)
1. Method for identifying a transient acoustic scene, said method including
the extraction, during an extraction phase, of characteristic features from an acoustic signal captured by at least one microphone (2 a, 2 b), and
the identification, during an identification phase, of the transient acoustic scene on the basis of the extracted characteristics,
whereby at least auditory-based characteristics are identified during the extraction phase.
2. Method as in claim 1, whereby, for the identification of the characteristic features during the extraction phase, Auditory Scene Analysis (ASA) techniques are employed.
3. Method as in claim 1 or 2, whereby, during the identification phase, Hidden Markov Model (HMM) techniques are employed for the identification of the transient acoustic scene.
4. Method as in one of the claims 1 to 3, whereby one or several of the following auditory characteristics are identified during the extraction of said characteristic features: volume, spectral pattern, harmonic structure, common build-up and decay processes, coherent amplitude modulations, coherent frequency modulations, coherent frequency transitions and binaural effects.
5. Method as in one of the preceding claims, whereby any other suitable characteristics are identified in addition to the auditory characteristics.
6. Method as in one of the preceding claims, whereby, for the purpose of creating auditory objects, the auditory and any other characteristics are grouped along the principles of the gestalt theory.
7. Method as in claim 6, whereby the extraction of characteristics and/or the grouping of the characteristics are/is performed either in context-free or in context-sensitive fashion in the sense of human auditory perception, taking into account additional information or hypotheses relative to the signal content and thus providing an adaptation to the respective acoustic scene.
8. Method as in one of the preceding claims, whereby, during the identification phase, data are accessed which were acquired in an off-line training phase.
9. Method as in one of the preceding claims, whereby the extraction phase and the identification phase take place in continuous fashion or at regular or irregular time intervals.
10. Application of the method per one of the claims 1 to 9 for tuning a hearing device (1) to a transient acoustic scene.
11. Application as in claim 10, whereby, on the basis of a detected transient acoustic scene, a program or a transmission function between at least one microphone (2 a, 2 b) and a receiver (6) in the hearing device (1) is selected.
12. Application as in claim 9 or 10, whereby any other available function can be triggered in the hearing device (1) on the basis of the identified transient acoustic scene.
13. Application of the method per one of the claims 1 to 9 for voice recognition.
14. Hearing device (1) with a transmission unit (4) whose input end is connected to at least one microphone (2 a, 2 b) and whose output end is functionally connected to a receiver (6), characterized in that the input signal of the transmission unit (4) is simultaneously fed to a signal analyzer (7) for the extraction of at least auditory characteristics, that the signal analyzer (7) is functionally connected to a signal identifier unit (8) in which the transient acoustic scene is identified, and that the signal identifier unit (8) is functionally connected to the transmission unit (4) for the selection of a program or a transmission function.
15. Hearing device (1) as in claim 14, characterized in that a user input unit (11) is provided which is functionally connected to the transmission unit (4).
16. Hearing device (1) as in claim 14 or 15, characterized in that a control unit (9) is provided and that the signal identifier unit (8) is functionally connected to said control unit (9).
17. Hearing device (1) as in claim 15 or 16, characterized in that the user input unit (11) is functionally connected to the control unit (9).
18. Hearing device (1) as in one of the claims 14 to 17, characterized in that it is provided with suitable means serving to transfer parameters from a training unit (10) to the signal identifier unit (8).
Description
  • [0001]
    This invention relates to a method for identifying a transient acoustic scene, an application of said method in conjunction with hearing devices, as well as a hearing device.
  • [0002]
    Modern-day hearing aids that employ different hearing programs, typically two to at most three, can be adapted to varying acoustic environments or scenes. The idea is to optimize the effectiveness of the hearing aid for its user in all situations.
  • [0003]
    The hearing program can be selected either via a remote control or by means of a selector switch on the hearing aid itself. For many users, however, having to switch program settings is a nuisance, or difficult, or even impossible. Nor is it always easy even for experienced wearers of hearing aids to determine at what point in time which program is most comfortable and offers optimal speech discrimination. An automatic recognition of the acoustic scene and corresponding automatic switching of the program setting in the hearing aid is therefore desirable.
  • [0004]
    There exist several different approaches to the automatic classification of acoustic surroundings. All of the methods concerned involve the extraction of different characteristics from the input signal which may be derived from one or several microphones in the hearing aid. Based on these characteristics, a pattern-recognition device employing a particular algorithm makes a determination as to the attribution of the analyzed signal to a specific acoustic environment. These various existing methods differ from one another both in terms of the characteristics on the basis of which they define the acoustic scene (signal analysis) and with regard to the pattern-recognition device which serves to classify these characteristics (signal identification).
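The two-stage scheme described in this paragraph, signal analysis followed by signal identification, can be sketched in a few lines of Python. The feature set (RMS level, zero-passage rate) and the nearest-prototype classifier are illustrative assumptions, not the patent's prescription:

```python
import math

def extract_features(frame):
    """Signal analysis: a toy feature vector of RMS level and
    zero-passage (zero-crossing) rate for one frame of samples."""
    rms = math.sqrt(sum(x * x for x in frame) / len(frame))
    zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / (len(frame) - 1)
    return (rms, zcr)

def classify(features, prototypes):
    """Signal identification: attribute the analyzed signal to the
    acoustic scene whose feature prototype is closest (a minimal
    pattern-recognition device)."""
    def dist(p):
        return sum((f - q) ** 2 for f, q in zip(features, p))
    return min(prototypes, key=lambda scene: dist(prototypes[scene]))

# A loud, rapidly alternating frame resembles the invented "noise" prototype.
frame = [(-1) ** n * 0.8 for n in range(160)]
scene = classify(extract_features(frame), {
    "speech": (0.1, 0.1),
    "noise":  (0.8, 1.0),
})
```

Real systems differ in exactly these two choices, as the paragraph notes: which characteristics they extract and which pattern-recognition device classifies them.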
  • [0005]
    For the extraction of characteristics in audio signals, J. M. Kates in his article titled “Classification of Background Noises for Hearing-Aid Applications” (1995, Journal of the Acoustical Society of America 97(1), pp 461-469), suggested an analysis of time-related sound-level fluctuations and of the sound spectrum. For its part, the European patent EP-B1-0 732 036 proposed an analysis of the amplitude histogram for obtaining the same result. Finally, the extraction of characteristics has been investigated and implemented based on an analysis of different modulation frequencies. In this connection, reference is made to the two papers by Ostendorf et al titled “Empirical Classification of Different Acoustic Signals and of Speech by Means of a Modulation-Frequency Analysis” (1997, DAGA 97, pp 608-609), and “Classification of Acoustic Signals Based on the Analysis of Modulation Spectra for Application in Digital Hearing Aids” (1998, DAGA 98, pp 402-403). A similar approach is described in an article by Edwards et al titled “Signal-processing algorithms for a new software-based, digital hearing device” (1998, The Hearing Journal 51, pp 44-52). Other possible characteristics include the sound level itself or the zero-passage rate as described for instance in the article by H. L. Hirsch, titled “Statistical Signal Characterization” (Artech House 1992). It is evident that the characteristics used to date for the analysis of audio signals are strictly based on system-specific parameters.
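As a concrete illustration of one system-specific characteristic cited above, the time-related sound-level fluctuation can be computed as the spread of short-term frame levels; the frame length and test signals are arbitrary illustration values:

```python
import math

def frame_levels(signal, frame_len=160):
    """Short-term RMS level per frame: a stand-in for the sound-level
    track analysed by Kates (frame length is an arbitrary choice)."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    return [math.sqrt(sum(x * x for x in f) / len(f)) for f in frames]

def level_fluctuation(signal):
    """Standard deviation of the frame levels: steady noise scores low,
    strongly modulated signals such as speech score high."""
    levels = frame_levels(signal)
    mean = sum(levels) / len(levels)
    return math.sqrt(sum((l - mean) ** 2 for l in levels) / len(levels))

steady = [0.5] * 1600                         # constant-level noise
modulated = ([0.9] * 160 + [0.05] * 160) * 5  # speech-like on/off bursts
```

The modulated signal yields a large fluctuation while the steady one yields essentially zero, which is exactly the discriminative behaviour these system-specific features rely on.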
  • [0006]
    It is fundamentally possible to use prior-art pattern identification methods for sound classification purposes. Particularly suitable pattern-recognition systems are the so-called distance classifiers, Bayes classifiers, fuzzy-logic systems and neural networks. Details of the first two of the methods mentioned are contained in the publication titled “Pattern Classification and Scene Analysis” by Richard O. Duda and Peter E. Hart (John Wiley & Sons, 1973). For information on neural networks, reference is made to the treatise by Christopher M. Bishop, titled “Neural Networks for Pattern Recognition” (1995, Oxford University Press). Reference is also made to the following publications: Ostendorf et al, “Classification of Acoustic Signals Based on the Analysis of Modulation Spectra for Application in Digital Hearing Aids” (Zeitschrift für Audiologie (Journal of Audiology), pp 148-150); F. Feldbusch, “Sound Recognition Using Neural Networks” (1998, Journal of Audiology, pp 30-36); European patent application, publication number EP-A1-0 814 636; and US patent, publication number U.S. Pat. No. 5,604,812. Yet all of the pattern-recognition methods mentioned share one deficiency: they model only the static properties of the sound categories of interest.
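A minimal Bayes classifier of the kind mentioned above can be sketched as a maximum-a-posteriori decision over per-class Gaussians; the one-dimensional feature and all class parameters are invented for illustration:

```python
import math

def gaussian_log_lik(x, mean, var):
    """Log-likelihood of a scalar feature under a 1-D Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def bayes_classify(x, classes):
    """MAP decision over per-class (prior, mean, variance) triples:
    a purely static classifier of the kind criticised in the text."""
    return max(classes, key=lambda c: math.log(classes[c][0]) +
               gaussian_log_lik(x, classes[c][1], classes[c][2]))

# Hypothetical classes separated along one feature axis.
classes = {"speech": (0.5, 4.0, 1.0), "noise": (0.5, 10.0, 1.0)}
```

Note that such a classifier sees each feature value in isolation; it has no notion of how the features evolve over time, which is the deficiency the paragraph points out.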
  • [0007]
    One shortcoming of these earlier sound-classification methods, involving characteristics extraction and pattern recognition, lies in the fact that, although unambiguous and solid identification of voice signals is basically possible, a number of different acoustic situations cannot be classified satisfactorily, if at all. While these earlier methods permit a distinction between pure voice or speech signals and “non-speech” sounds, meaning all other acoustic surroundings, that is not enough for selecting an optimal hearing program for a transient acoustic situation. It follows that either the number of possible hearing programs is limited to those two automatically recognizable acoustic situations, or the hearing-aid wearer himself has to recognize the acoustic situations that are not covered and manually select the appropriate hearing program.
  • [0008]
    It is therefore the objective of this invention to introduce, first of all, a method for identifying a transient acoustic scene which, compared to prior-art methods, is substantially more reliable and more precise.
  • [0009]
    This is accomplished by the measures specified in claim 1. Additional claims specify advantageous enhancements of the invention, an application of the method, as well as a hearing device.
  • [0010]
    The invention is based on an extraction of signal characteristics, a subsequent separation of different sound-sources as well as an identification of different sounds. In lieu of or in addition to system-specific characteristics, auditory characteristics are taken into account in the signal analysis for the extraction of characteristic features. These auditory characteristics are identified by means of Auditory Scene Analysis (ASA) techniques. In another form of implementation of the method per this invention, the characteristics are subjected to a context-free or a context-sensitive grouping process by applying the gestalt principle. The actual identification and classification of the audio signals derived from the extracted characteristics is preferably performed using Hidden Markov Models (HMM). One advantage of this invention is the fact that it allows for a larger number of identifiable sound categories and thus a greater number of hearing programs which results in enhanced sound classification and correspondingly greater comfort for the user of the hearing device.
  • [0011]
    The following will explain this invention in more detail by way of an example with reference to a drawing. The only FIGURE is a functional block diagram of a hearing device in which the method per this invention has been implemented.
  • [0012]
    In the FIGURE, the reference number 1 designates a hearing device. For the purpose of the following description, the term “hearing device” is intended to include hearing aids as used to compensate for the hearing impairment of a person, but also all other acoustic communication systems such as radio transceivers and the like.
  • [0013]
    The hearing device 1 incorporates in conventional fashion two electro-acoustic converters 2 a, 2 b and 6, these being one or several microphones 2 a, 2 b and a speaker 6, also referred to as a receiver. A main component of the hearing device 1 is a transmission unit 4 in which, in the case of a hearing aid, signal modification takes place in adaptation to the requirements of the user of the hearing device 1. However, the operations performed in the transmission unit 4 are not only a function of the nature of a specific purpose of the hearing device 1 but are also, and especially, a function of the momentary acoustic scene. There have already been hearing aids on the market where the wearer can manually switch between different hearing programs tailored to specific acoustic situations. There also exist hearing aids capable of automatically recognizing the acoustic scene. In that connection, reference is again made to the European patent documents EP-B1-0 732 036 and EP-A1-0 814 636 and to the U.S. Pat. No. 5,604,812, as well as to the “Claro Autoselect” brochure by Phonak Hearing Systems (28148 (GB)/0300, 1999).
  • [0014]
    In addition to the aforementioned components such as microphones 2 a, 2 b, the transmission unit 4 and the receiver 6, the hearing device 1 contains a signal analyzer 7 and a signal identifier 8. If the hearing device 1 is based on digital technology, one or several analog-to-digital converters 3 a, 3 b are interposed between the microphones 2 a, 2 b and the transmission unit 4 and one digital-to-analog converter 5 is provided between the transmission unit 4 and the receiver 6. While a digital implementation of this invention is preferred, it should be equally possible to use analog components throughout. In that case, of course, the converters 3 a, 3 b and 5 are not needed.
  • [0015]
    The signal analyzer 7 receives the same input signal as the transmission unit 4. The signal identifier 8, which is connected to the output of the signal analyzer 7, connects at the other end to the transmission unit 4 and to a control unit 9.
  • [0016]
    A training unit 10 serves to establish in off-line operation the parameters required in the signal identifier 8 for the classification process.
  • [0017]
    By means of a user input unit 11, the user can override the settings of the transmission unit 4 and the control unit 9 as established by the signal analyzer 7 and the signal identifier 8.
  • [0018]
    The method according to this invention is explained as follows:
  • [0019]
    It is essentially based on the extraction of characteristic features from an acoustic signal during an extraction phase, whereby, in lieu of or in addition to the system-specific characteristics—such as the above-mentioned zero-passage rates, time-related sound-level fluctuations, different modulation frequencies, the sound level itself, the spectral peak, the amplitude distribution etc.—auditory characteristics as well are employed. These auditory characteristics are determined by means of an Auditory Scene Analysis (ASA) and include in particular the volume, the spectral pattern (timbre), the harmonic structure (pitch), common build-up and decay times (on-/offsets), coherent amplitude modulations, coherent frequency modulations, coherent frequency transitions, binaural effects etc. Detailed descriptions of Auditory Scene Analysis can be found for instance in the articles by A. Bregman, “Auditory Scene Analysis” (MIT Press, 1990) and W. A. Yost, “Fundamentals of Hearing—An Introduction” (Academic Press, 1977). The individual auditory characteristics are described, inter alia, by A. Yost and S. Sheft in “Auditory Perception” (published in “Human Psychophysics” by W. A. Yost, A. N. Popper and R. R. Fay, Springer 1993), by W. M. Hartmann in “Pitch, Periodicity, and Auditory Organization” (Journal of the Acoustical Society of America, 100 (6), pp 3491-3502, 1996), and by D. K. Mellinger and B. M. Mont-Reynaud in “Scene Analysis” (published in “Auditory Computation” by H. L. Hawkins, T. A. McMullen, A. N. Popper and R. R. Fay, Springer 1996).
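One of the auditory cues listed above, common build-up (onset) processes, can be illustrated with a crude level-jump detector over a frame-level track; the thresholds are arbitrary assumptions, not values from the patent:

```python
def detect_onsets(levels, ratio=2.0, floor=0.05):
    """Flag frame indices whose level jumps by more than `ratio` over
    the previous frame: a rough stand-in for the common-onset cue of
    Auditory Scene Analysis."""
    onsets = []
    for i in range(1, len(levels)):
        if levels[i] > floor and levels[i] > ratio * max(levels[i - 1], floor):
            onsets.append(i)
    return onsets

# A level track with two abrupt build-ups (frames 2 and 5).
levels = [0.01, 0.02, 0.5, 0.55, 0.05, 0.6]
```

Frequency components sharing such an onset instant are candidates for belonging to the same sound source, which is what the grouping stage described below exploits.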
  • [0020]
    In this context, an example of the use of auditory characteristics in signal analysis is the characterization of the tonality of the acoustic signal by analyzing the harmonic structure, which is particularly useful in the identification of tonal signals such as speech and music.
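A possible sketch of the tonality analysis described above uses the normalised autocorrelation: periodic (harmonic) signals show a pronounced peak at the pitch lag, noise does not. The lag range loosely assumes 8 kHz sampled audio and is purely illustrative:

```python
import math
import random

def harmonicity(signal, min_lag=20, max_lag=200):
    """Peak of the normalised autocorrelation over candidate pitch
    lags: close to 1 for tonal signals (speech, music), low for noise."""
    energy = sum(x * x for x in signal)
    if energy == 0:
        return 0.0
    best = 0.0
    for lag in range(min_lag, max_lag + 1):
        acf = sum(signal[i] * signal[i + lag] for i in range(len(signal) - lag))
        best = max(best, acf / energy)
    return best

tone = [math.sin(2 * math.pi * n / 80) for n in range(800)]  # period-80 sinusoid
rng = random.Random(0)
noise = [rng.uniform(-1, 1) for _ in range(800)]             # aperiodic signal
```

The periodic tone scores near 0.9 while the noise stays far lower, so thresholding this value separates tonal from non-tonal scenes, which is the use case the paragraph names.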
  • [0021]
    Another form of implementation of the method according to this invention additionally provides for a grouping of the characteristics in the signal analyzer 7 by means of gestalt analysis. This process applies the principles of the gestalt theory, by which such qualitative properties as continuity, proximity, similarity, common fate, unity, good constancy and others are examined, to the auditory and perhaps system-specific characteristics for the creation of auditory objects. This grouping—and, for that matter, the extraction of characteristics in the extraction phase—can take place in context-free fashion, i.e. without any enhancement by additional knowledge (so-called “primitive” grouping), or in context-sensitive fashion in the sense of human auditory perception employing additional information or hypotheses regarding the signal content (so-called “schema-based” grouping). This means that the contextual grouping is adapted to any given acoustic situation. For a detailed explanation of the principles of the gestalt theory and of the grouping process employing gestalt analysis, reference is made to the publications titled “Perception Psychology” by E. B. Goldstein (Spektrum Akademischer Verlag, 1997), “Neural Fundamentals of Gestalt Perception” by A. K. Engel and W. Singer (Spektrum der Wissenschaft, 1998, pp 66-73), and “Auditory Scene Analysis” by A. Bregman (MIT Press, 1990).
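The context-free ("primitive") grouping described above can be caricatured as clustering time-frequency components by gestalt proximity: components with a common onset time and nearby frequency are assigned to the same auditory object. The component representation and both tolerances are invented for illustration:

```python
def group_components(components, freq_tol=50.0, time_tol=0.1):
    """Primitive grouping sketch: each component is an (onset time [s],
    frequency [Hz]) pair; components starting together and lying close
    in frequency join the same auditory object (group)."""
    groups = []
    for comp in sorted(components):
        for g in groups:
            t0, f0 = g[-1]
            if abs(comp[0] - t0) <= time_tol and abs(comp[1] - f0) <= freq_tol:
                g.append(comp)
                break
        else:
            groups.append([comp])
    return groups

# Two components share an onset and neighbouring frequencies; a third
# starts much later and is therefore attributed to a second source.
groups = group_components([(0.0, 100.0), (0.0, 140.0), (0.5, 1000.0)])
```

A schema-based variant would additionally bias these decisions with hypotheses about the signal content, as the paragraph explains.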
  • [0022]
    The advantage of applying this grouping process lies in the fact that it allows further differentiation of the characteristics of the input signals. In particular, signal segments are identifiable which originate in different sound-sources. The extracted characteristics can thus be mapped to specific individual sound sources, providing additional information on these sources and, hence, on the current, transient auditory scene.
  • [0023]
    The second aspect of the method according to this invention as described here relates to pattern recognition, i.e. the signal identification that takes place during the identification phase. The preferred form of implementation of the method per this invention employs the Hidden Markov Model (HMM) method in the signal identifier 8 for the automatic classification of the acoustic scene. This also permits the use of time changes of the computed characteristics for the classification process. Accordingly, it is possible to also take into account dynamic and not only static properties of the surrounding situation and of the sound categories. Equally possible is a combination of HMMs with other classifiers such as multi-stage recognition processes for identifying the acoustic scene.
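The HMM-based identification can be sketched with the standard forward algorithm: each acoustic scene gets its own small HMM, and the scene whose model assigns the observed feature sequence the highest likelihood is selected. All parameters below are toy values, not trained ones:

```python
def forward_likelihood(obs, pi, A, B):
    """Forward algorithm: probability of a discrete observation
    sequence under an HMM with initial probabilities pi, transition
    matrix A and emission matrix B."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[t] * A[t][s] for t in range(n)) * B[s][o]
                 for s in range(n)]
    return sum(alpha)

# Two toy scene models over a binary feature (0 = quiet, 1 = loud):
# "speech" alternates between states, "noise" stays loud (invented values).
speech = ([0.5, 0.5], [[0.1, 0.9], [0.9, 0.1]], [[0.9, 0.1], [0.1, 0.9]])
noise  = ([0.1, 0.9], [[0.5, 0.5], [0.1, 0.9]], [[0.5, 0.5], [0.1, 0.9]])

def classify_scene(obs):
    """Pick the scene model with the highest sequence likelihood."""
    scores = {"speech": forward_likelihood(obs, *speech),
              "noise":  forward_likelihood(obs, *noise)}
    return max(scores, key=scores.get)
```

Because the transition matrix scores the temporal evolution of the features, the alternating sequence is attributed to "speech" and the constant one to "noise"; this is precisely the dynamic modelling the static classifiers above lack.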
  • [0024]
    The output signal of the signal identifier 8 thus contains information on the nature of the acoustic surroundings (the acoustic situation or scene). That information is fed to the transmission unit 4 which selects the program, or set of parameters, best suited to the transmission of the acoustic scene discerned. At the same time, the information gathered in the signal identifier 8 is fed to the control unit 9 for further actions whereby, depending on the situation, any given function, such as an acoustic signal, can be triggered.
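A minimal sketch of how the identifier output might drive both sinks named above, the transmission unit 4 and the control unit 9; the program table, scene labels and notification action are all hypothetical:

```python
# Hypothetical scene-to-program table; the patent leaves the concrete
# mapping to the hearing-device designer.
PROGRAMS = {"speech": "speech-in-quiet", "speech+noise": "noise-suppression",
            "music": "wide-band", "noise": "comfort"}

def on_scene_identified(scene, transmission_unit, control_unit):
    """Feed the identifier output to both sinks: the transmission unit
    selects the best-suited parameter set, and the control unit records
    a further action (e.g. an acoustic notification) to trigger."""
    transmission_unit["program"] = PROGRAMS.get(scene, "default")
    control_unit.append(("notify", scene))

tx, ctrl = {}, []
on_scene_identified("music", tx, ctrl)
```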
  • [0025]
    If the identification phase involves Hidden Markov Models, it will require a complex process for establishing the parameters needed for the classification. This parameter estimation is therefore best done off-line, separately for each category or class. The actual identification of various acoustic scenes requires very little memory space and computational capacity. It is therefore recommended that a training unit 10 be provided which has enough computing power for parameter determination and which can be connected via appropriate means to the hearing device 1 for data transfer purposes. The connecting means mentioned may be simple wires with suitable plugs.
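The split between a powerful off-line training unit and a lightweight on-device identifier can be illustrated with a toy per-class parameter estimation; the discrete features, the smoothing and the transfer mechanism are assumptions for the sketch:

```python
def train_emissions(labelled_frames, n_symbols=2):
    """Off-line estimation (training unit 10): per-class relative
    frequency of each discrete feature symbol, computed with add-one
    smoothing. A stand-in for the costly HMM parameter estimation."""
    params = {}
    for label, frames in labelled_frames.items():
        counts = [1] * n_symbols            # add-one smoothing
        for f in frames:
            counts[f] += 1
        total = sum(counts)
        params[label] = [c / total for c in counts]
    return params

# "Transfer over the wire": the device-side identifier only stores the
# small parameter set produced off-line.
device_identifier = {}
device_identifier.update(train_emissions({"speech": [0, 1, 0, 1],
                                          "noise":  [1, 1, 1, 1]}))
```

The heavy counting happens off the device; what crosses the connecting means is just the compact parameter table, matching the paragraph's division of labour.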
  • [0026]
    The method according to this invention thus makes it possible to select, from among numerous available settings and automatically triggerable actions, the one best suited, without the need for the user of the device to make the selection. This makes the device significantly more comfortable to use, since upon recognition of a new acoustic scene it promptly and automatically selects the appropriate program or function in the hearing device 1.
  • [0027]
    The users of hearing devices often want to be able to switch off the automatic recognition of the acoustic scene and the corresponding automatic program selection described above. For this purpose a user input unit 11 is provided by means of which it is possible to override the automatic response or program selection. The user input unit 11 may be in the form of a switch on the hearing device 1 or a remote control which the user can operate. Other options also offer themselves, for instance a voice-activated user input device.
Patent Citations
Cited Patent | Filing date | Publication date | Applicant | Title
US5604812 * | Feb 8, 1995 | Feb 18, 1997 | Siemens Audiologische Technik GmbH | Programmable hearing aid with automatic adaption to auditory conditions
US6002116 * | May 5, 1999 | Dec 14, 1999 | Camco Inc. | Heater coil mounting arrangement
Classifications
U.S. Classification: 381/317, 381/312
International Classification: H04R25/00
Cooperative Classification: H04R25/407, H04R2225/41, H04R25/505
European Classification: H04R25/40F
Legal Events
Date | Code | Event
Jun 28, 2001 | AS | Assignment
Owner name: PHONAK AG, SWITZERLAND
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ALLEGRO, SYLVIA;BUCHLER, MICHAEL;REEL/FRAME:011933/0459;SIGNING DATES FROM 20010608 TO 20010612
Jun 6, 2002 | AS | Assignment
Owner name: PHONAK AG, SWITZERLAND
Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE NAME OF THE ASSIGNOR. FILED ON 06/28/2001, RECORDED ON REEL 011933 FRAME 0459;ASSIGNORS:ALLEGRO, SILVIA;BUCHLER, MICHAEL;REEL/FRAME:012972/0029;SIGNING DATES FROM 20010608 TO 20010612
Nov 20, 2007 | CC | Certificate of correction
Nov 26, 2008 | FPAY | Fee payment (year of fee payment: 4)
Nov 21, 2012 | FPAY | Fee payment (year of fee payment: 8)
Sep 24, 2015 | AS | Assignment
Owner name: SONOVA AG, SWITZERLAND
Free format text: CHANGE OF NAME;ASSIGNOR:PHONAK AG;REEL/FRAME:036674/0492
Effective date: 20150710
Dec 21, 2016 | FPAY | Fee payment (year of fee payment: 12)