WO1997028634A1 - Database access - Google Patents

Database access

Info

Publication number
WO1997028634A1
WO1997028634A1 (application PCT/GB1997/000233)
Authority
WO
WIPO (PCT)
Prior art keywords
representations
vocabulary
representation
combination
distinguishable
Prior art date
Application number
PCT/GB1997/000233
Other languages
French (fr)
Inventor
David John Attwater
Paul Andrew Olsen
Seamus Aodhain Bridgeman
Steven John Whittaker
Original Assignee
British Telecommunications Public Limited Company
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by British Telecommunications Public Limited Company filed Critical British Telecommunications Public Limited Company
Priority to NZ326441A priority Critical patent/NZ326441A/en
Priority to JP9527399A priority patent/JP2000504510A/en
Priority to EP97901199A priority patent/EP0878085B1/en
Priority to DE69729277T priority patent/DE69729277T2/en
Priority to AU36068/97A priority patent/AU707248C/en
Priority to CA002244116A priority patent/CA2244116C/en
Publication of WO1997028634A1 publication Critical patent/WO1997028634A1/en
Priority to NO983501A priority patent/NO983501L/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M3/00Automatic or semi-automatic exchanges
    • H04M3/42Systems providing special services or facilities to subscribers
    • H04M3/487Arrangements for providing information services, e.g. recorded voice services or time announcements
    • H04M3/493Interactive information services, e.g. directory enquiries ; Arrangements therefor, e.g. interactive voice response [IVR] systems or voice portals
    • H04M3/4931Directory assistance systems
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/226Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics
    • G10L2015/228Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics of application context
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2201/00Electronic components, circuits, software, systems or apparatus used in telephone systems
    • H04M2201/40Electronic components, circuits, software, systems or apparatus used in telephone systems using speech recognition
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10STECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00Data processing: database and file management or data structures
    • Y10S707/99931Database or file accessing
    • Y10S707/99933Query processing, i.e. searching
    • Y10S707/99936Pattern matching access

Definitions

  • the present invention relates to database access, particularly, though not exclusively, employing speech recognition input and synthesised speech output.
  • International patent publication number WO94/14270 describes a mechanised directory enquiry system in which a caller is first prompted to speak the name of the city required. The word spoken is then recognised and the word with the highest confidence level is selected as being the word spoken by a user. The caller is then prompted to speak the name of the sought party. When a satisfactory confidence level is obtained, a database is accessed and the number articulated to the caller.
  • If the confidence level fails to meet a preferred confidence level, the caller is prompted to spell all or part of the location or the name. If more than one match between the spoken input and the database is found, the user is asked to confirm each match one by one until a confirmed match is found. If no such match can be located, the automatic processing is terminated.
  • European patent application publication no. 433964 relates to a system which uses a text input. First of all an input word representing the surname is matched with the entries. If a match is found that is "comparable" but not exactly the same as the input, the initial characters of the input and the database entry are compared. If these match, a record of the data entry is made. The system then compares the required titles and the personal names required. The most likely entry is provided to the user.
  • US patent no. 5204894 relates to a personal electronic directory in which the names of the entries are stored in the user's voice and associated numbers are input using either a multitone (DTMF) telephone keypad or a spoken input.
  • When a user needs to access the directory, the user speaks the required name and the directory system compares the first word of the input with the stored words and provides all possibilities in sequence to the user until the user confirms one.
  • Thus these systems provide identified database entries to the user in a sequential manner until the user confirms a data entry as being that required.
  • a database access apparatus comprising: (a) a database containing entries each comprising a plurality of fields which contain machine representations of items of information pertaining to the entry, the said representations forming a first vocabulary;
  • announcement means responsive to machine representations falling within a second vocabulary of such representations to generate audio signals representing spoken announcements
  • translation means defining a relationship between the first vocabulary and the second vocabulary and between the first vocabulary and the third vocabulary
  • control means operable
  • the included word output by the announcement means may be in any suitable form, e.g. the included word may represent a whole word, a spelt word or alphanumerics.
  • the invention provides a method of speech recognition comprising
  • Figure 1 is an Entity Relationship Diagram showing an example of translations between phonetic, spoken and database representations
  • FIG. 2 is a block diagram of apparatus according to the invention.
  • Figure 3 is a flow chart illustrating the operation of the apparatus of Figure 2;
  • Figure 3a is a flow chart illustrating an alternative operation of the apparatus of Figure 2;
  • Figure 4 is a flow chart illustrating the process of identifying distinguishable tuples;
  • Figure 5 is an Entity Relationship Diagram showing an example of the translations between phonetic, spoken, spelt and database representations.
  • a voice interactive apparatus will be described, which generates questions to a user and recognises the user's responses in order to access the contents of the database.
  • a database of names, addresses and telephone numbers will be used as an example. Firstly, however, some basic concepts will be discussed which will be of value in understanding the operation of the apparatus.
  • the database will be supposed to contain a number of entries, with each entry containing a number of fields each containing an item of information about the entry; for example the forename, surname, location and telephone number of the person to whom the entry refers.
  • a complete entry is thus a tuple as also is a smaller set of fields extracted from one entry; thus a set of forename/surname pairs taken from the example database forms a set of extracted duples.
  • the items of information stored in the database fields may be in any convenient representation; generally this description will assume the use of a text representation such as, for the surname Jonson, character codes corresponding to the letters of the name, but with a stylised representation for some fields; for example one might, for geographical locations, identify several distinct places having the same name with different representations - e.g. Southend1, Southend2 and Southend3 for three places in England called Southend.
  • the words used in the dialogue between the apparatus and the user to represent field contents are conceptually distinct from the database representations and represent for each field a spoken vocabulary. If the database representations are text then there will be some overlap between them and the spoken vocabulary, but even then it may be desired to take account of the fact that the user might use, to describe an item of information, a different word from that actually contained in the database field; that is, some words may be regarded as synonyms.
  • Figure 1 is an "Entity Relationship Diagram", where we see a need for translation between representations as one moves from left to right or right to left.
  • Box A represents a set of database entries.
  • Box B represents a set of unique surnames, which have a 1:many relationship with the entries - i.e. one surname may appear in many entries but one entry will contain only one surname.
  • Boxes C, D and E correspond to sets of representations of forenames, towns and telephone numbers, where similar comments apply
  • Box F represents the spoken vocabulary corresponding to forenames i.e. the set of all words that are permitted by the apparatus to be used to describe this field.
  • A person living in Kesgrave (a village near Ipswich) might have his address recorded in the database either as Ipswich or as Kesgrave. Similarly an enquirer seeking the telephone number of such a person might give either name as the location.
  • Ipswich and Kesgrave may therefore be regarded as synonymous for the purposes of database retrieval. Note however that this geographical aliasing is complex: Ipswich may be regarded as synonymous with another local village such as Foxhall, but Kesgrave and Foxhall are not synonymous because they are different places.
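The non-transitive aliasing described above can be sketched in a few lines. This is an illustrative model only (the patent specifies no code or data format): storing the "may be spoken as" relation as explicit pairs, rather than merging aliases into equivalence classes, preserves the fact that Kesgrave and Foxhall are not interchangeable even though each is interchangeable with Ipswich.

```python
# Illustrative alias pairs: each pair means the two names may be used
# interchangeably, but the relation is deliberately NOT transitive.
ALIAS_PAIRS = [("Ipswich", "Kesgrave"), ("Ipswich", "Foxhall")]

def database_candidates(spoken):
    """Return the set of location names a spoken word may refer to."""
    cands = {spoken}
    for a, b in ALIAS_PAIRS:
        if spoken == a:
            cands.add(b)
        elif spoken == b:
            cands.add(a)
    return cands
```

A caller saying "Ipswich" thus matches entries recorded under Ipswich, Kesgrave or Foxhall, while a caller saying "Kesgrave" matches Kesgrave and Ipswich but never Foxhall.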
  • Box H represents, for completeness, a spoken vocabulary for surnames, though there is probably little scope for synonyms for this field.
  • Box J represents a pronunciation vocabulary for surnames, to take account of homophones and homonyms.
  • the surname Smith is generally pronounced with a short "i” as in the English word “pith”
  • the name Smythe is pronounced with a long "i” as in “lithe” .
  • Smyth may be pronounced either way.
  • Other instances of varying pronunciation may arise, for example due to variations in regional accents.
  • “primary” and “may be” links are shown, for reasons to be explained later.
  • Boxes K and L represent pronunciation vocabularies for forenames and geographical names respectively.
  • FIG. 2 is a block diagram of an apparatus for conducting a dialogue.
  • An audio signal input 1 is connected to a speech recogniser 2, whilst an audio signal output 3 is connected to a speech synthesiser 4.
  • a control unit in the form of a stored-program controlled processor 5 controls the operation of the recogniser and synthesiser and also has access to a program memory 6, a working memory (RAM) 7, a database 8, a spoken vocabulary translation table 9 and a pronunciation table 10.
  • the audio inputs and outputs are connected for two-way communication - perhaps via a telephone line - with a user.
  • the database 8 is assumed to contain telephone directory entries, as discussed above, in text form.
  • the spoken vocabulary translation table 9 is a store containing word pairs, each consisting of a database representation and a spoken vocabulary representation - e.g., for the Ipswich example, the pairs (database: Kesgrave, spoken: Kesgrave) and (database: Kesgrave, spoken: Ipswich).
  • the translation table 9 has a separate area for each type of field and may be accessed by the processor 5 to determine the database representation(s) corresponding to a given vocabulary word and vice versa. If desired (or if the database representations are not in text form) all items may be translated.
  • the pronunciation table 10 is a store containing a look-up table (and, if desired, a set of rules to reduce the number of entries in the look-up table) so that the processor 5 may access it (for synthesis purposes or for identifying homophones) to obtain, for a given spoken vocabulary word, a phonetic representation of one or more ways of pronouncing it, and, conversely (for recognition purposes), to obtain, for a given phonetic representation, one or more spoken vocabulary words which correspond to that pronunciation. A separate area for each type of field may be desirable.
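A minimal sketch of such a two-way pronunciation table. The word list and the phonetic notation ("S M I TH" etc.) are illustrative assumptions, not data from the patent: a forward dictionary serves synthesis, and a reverse index derived from it serves recognition, with homophones such as Smith/Smyth falling out of the reverse lookup automatically.

```python
# Hypothetical forward table: spoken vocabulary word -> pronunciations.
PRONUNCIATIONS = {
    "Smith":  ["S M I TH"],
    "Smythe": ["S M AI TH"],
    "Smyth":  ["S M I TH", "S M AI TH"],  # may be pronounced either way
}

def reverse_index(table):
    """Build the recognition-direction index: pronunciation -> words."""
    idx = {}
    for word, prons in table.items():
        for p in prons:
            idx.setdefault(p, []).append(word)
    return idx
```

In this toy table the pronunciation "S M I TH" maps back to both Smith and Smyth, which is exactly the homophone case the table is meant to expose.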
  • the operation of the apparatus is illustrated in the flow-chart of Figure 3 - which is implemented as a program stored in the memory 6.
  • the first steps involve the generation, using the synthesiser, of questions to the user, and recognition of the user's responses.
  • the processor 5 sends to the synthesiser 4 commands instructing it to play announcements requesting the user to speak, respectively the surname, forename and town of the person whose telephone number he seeks.
  • the processor sends to the recogniser 2 commands instructing it to recognise the user's responses by reference to phonetic vocabularies corresponding to those fields.
  • the recogniser may access the translation table 9, 10 to determine the vocabularies to be used for each recognition step, or may internally store or generate its own vocabularies; in the latter case the vocabularies used must correspond to those determined by the table 9, 10 (and, if appropriate, the database) so that it can output only words included in the phonetic vocabulary.
  • the recogniser is arranged so that it will produce as output, for each recognition step, as many phonetic representations as meet a predetermined criterion of similarity to the word actually spoken by the user. (The recogniser could of course perform a translation to spoken vocabulary representations, and many recognisers are capable of doing so.) It is possible that the recogniser may find that the word actually spoken by the user is too dissimilar to any of the phonetic representations in table 10 and indicate this to the processor 5. Preferably the recogniser also produces a "score" or confidence measure for each representation indicating the relative probability or likelihood of correspondence to the word actually spoken.
  • the preliminary steps 100-110 will not be discussed further as they are described elsewhere; for example reference may be made to our co-pending International patent application no. PCT/GB/02524.
  • steps will be described which involve the matching of a number of scored tuples against database entries. From these matching entries a scored set of unique (or distinguishable) tuples are derived which correspond to a different set of (possibly overlapping) fields to the tuples used for the match.
  • After step 110 the processor 5 has available to it, for each of the three fields, one or more phonetic representations deemed to have been recognised. What is required now is a translation to spoken vocabulary representations - i.e. the translation illustrated to the left of Figure 1.
  • the processor accesses the table 10 to determine, for each word, one or more corresponding spoken vocabulary representations, so that it now has three sets of spoken vocabulary representations, one for each field.
  • the score for each spoken vocabulary representation is the score for the phonetic representation from which it was translated. If two phonetic representations translate to the same vocabulary representation, the more confident of the two scores may be taken. This is a specific example of the generalised matching process described above, where the matching set of singles are pronunciations and the derived set of singles are spoken vocabulary items.
  • At step 114 the processor 5 now performs a translation to database representations - i.e. the translation illustrated in the centre of Figure 1 - using the table 9 to determine, for each spoken representation, one or more corresponding database representations, so that it now has three sets of database representations. Scores may be propagated as for the earlier translation.
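Both translation steps (phonetic to spoken vocabulary, then spoken vocabulary to database representation) follow the same pattern, so they can be sketched as one generic routine. The dictionaries and score values below are illustrative assumptions; the score-propagation rule (keep the more confident score when two sources map to the same target) is the one described above.

```python
def translate(scored, mapping):
    """Translate a scored set of representations through a one-to-many mapping.

    scored:  {representation: confidence score}
    mapping: {representation: [target representations]}
    When two sources reach the same target, the higher score is kept.
    """
    out = {}
    for rep, score in scored.items():
        for target in mapping.get(rep, []):
            out[target] = max(out.get(target, 0.0), score)
    return out
```

For example, translating hypothetical phonetic scores {"S M I TH": 0.9, "S M AI TH": 0.4} through a mapping in which "S M I TH" may be Smith or Smyth and "S M AI TH" may be Smythe or Smyth gives Smyth the more confident of its two scores, 0.9.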
  • the database representations represent a number of triples (the actual number being the product of the number of representations in each of the three sets).
  • the score for a triple is typically the product of the scores of the individual representations of which it is composed.
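Candidate-list construction can be sketched as the cross product of the three per-field sets, with each triple scored as the product of its component scores as stated above. The field names and score figures are illustrative assumptions.

```python
from itertools import product

def build_triples(surnames, forenames, towns):
    """Each argument is a {database_representation: score} dict.

    Returns every (surname, forename, town) triple with the product
    of its component scores, as passed to the database at step 116.
    """
    triples = {}
    for (s, ss), (f, fs), (t, ts) in product(
            surnames.items(), forenames.items(), towns.items()):
        triples[(s, f, t)] = ss * fs * ts
    return triples
```

With two candidate surnames, one forename and one town, this yields 2 × 1 × 1 = 2 triples, matching the "product of the number of representations in each set" rule.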
  • At step 116 the processor generates a list of these triples and passes it to the database, which returns a count K of the number of database entries corresponding to these triples. If (step 118) this number is zero, then the processor in step 120 sends a command to the synthesiser to play an announcement to the effect that no entry has been found, and terminates the program (step 122). Alternatively other action may be taken, such as transferring the user to a manual operator.
  • At step 124 the full entry tuples which matched at step 116 are retrieved in turn from the database to determine whether there are three or fewer distinguishable entries.
  • the tuples are retrieved in order of likelihood, most likely first. It is possible that more than one tuple may share the same score.
  • At step 126 the processor retrieves these entries from the database and forwards them to the synthesiser 4, which reads them to the user in confidence order, highest first, using the tables 9, 10 for translation from database representation to primary phonetic representation.
  • the process enters an iterative confirmation phase in which an attempt is made to identify lists of extracted tuples which contain three or fewer distinguishable tuples, and to offer the tuples in turn to the user for confirmation.
  • the tuples are the duple corresponding to the name (i.e. forename + surname), and the single corresponding to the town. Note that, although this is the case in this example, it is not in principle necessary that the constituent words of these tuples correspond to fields for which the user has already been asked.
  • At step 130 a check is made as to whether the name duples have already been offered for confirmation; on the first pass the answer will always be "no", and at step 132 a list of extracted name duples is prepared from the list of triples.
  • At step 134 the name duples from the list are examined in similar fashion to the triples in step 124, to determine whether there are three or fewer distinguishable duples. (If desired, the number of non-identical database representation duples in the list may be counted, and if this exceeds a predetermined limit, e.g. 30, the detailed examination process may be skipped (to step 144).)
  • Each of the scored duples is translated into a single primary phonetic representation and fed to the synthesiser in confidence order in step 136, so that the synthesiser speaks the question (e.g.) "Is the name John Smith? Please answer yes or no" one at a time, with the recogniser forwarding the reply to the processor (138) for testing for "yes" or "no". If the user replies "yes", then, in step 140: (a) the surname and forename fields are marked "confirmed" so that further offering of them for confirmation is bypassed by the test at step 130;
  • the process may then recommence from step 124. If a user replies "no" then the corresponding members in the list of triples are deleted. Which ones are deleted depends upon the defined relationship between the phonetic representations and the database representations, as chosen by the system designer. For instance, if the user is asked "Is the name John Smith?" and the user replies "no", all members of the list of triples including John and Smith/Smyth/Smythe may be deleted, or only those members including John and Smith/Smyth may be deleted, it having been decided by the system designer that Smythe is always pronounced differently from Smith or Smyth.
  • If (step 142) the user has answered no to all the offered tuples, this is considered a failure and the process is terminated via steps 120 and 122.
  • If in the test at step 134 the number of distinguishable name duples is too large for confirmation, or if at step 130 on a second or subsequent pass the name confirmation has already occurred, then, assuming (step 144) the town name has not yet been offered for confirmation, a town name confirmation process is commenced, comprising steps 146 to 154, which are in all respects analogous to the steps 132 to 142 already described. If these processes fail to reduce the number of distinguishable entries at the test 126, then the process eventually terminates with an announcement 156 that too many entries have been found for a response to be given. Alternatively, a further procedure may follow in which one or more further questions are asked (as in step 100) to obtain information on further fields.
  • Two tuples are considered indistinguishable if every field of one tuple is indistinguishable (as defined above) from the corresponding field of the other tuple.
  • Conversely, two representations are considered distinguishable if: (a) they are not identical; and (b) they do not translate to identical spoken vocabulary words (e.g. they are not synonyms or geographically confused); and
  • the list is ordered by score; i.e. the tuple having the highest confidence is D(1), the next D(2), and so on.
  • the process to be described is illustrated in the flowchart of Figure 4 and involves taking the first tuple from the list, and comparing it with each tuple below it in the list to ascertain whether the two are distinguishable. If they are not, the tuple occupying the lower position is deleted from the list. This is repeated until all tuples have been examined. The same steps are then performed for the tuple now occupying the second position in the list, and so on; eventually every tuple remaining in the list is distinguishable from every other. If desired, the process may be terminated as soon as it is certain that the number of distinguishable tuples exceeds that which can be handled by subsequent steps (i.e., in this example, 3).
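The sweep just described can be sketched compactly. This is a simplified illustration, not the index-based Figure 4 procedure itself: the `indistinguishable` predicate is left pluggable (in the patent it compares the fields' phonetic representations pairwise), the list is assumed to arrive ordered most likely first, and the early-exit check mirrors the termination once more than three distinguishable tuples are certain.

```python
def distinguishable_tuples(tuples, indistinguishable, limit=3):
    """Keep only mutually distinguishable tuples, best score first.

    A lower-ranked tuple indistinguishable from any already-kept
    (higher-ranked) tuple is dropped; the scan stops early once more
    than `limit` distinguishable tuples are known to exist.
    """
    kept = []
    for t in tuples:  # tuples are ordered most likely first
        if not any(indistinguishable(t, k) for k in kept):
            kept.append(t)
            if len(kept) > limit:  # too many to offer for confirmation
                break
    return kept
```

For instance, with a toy predicate treating Smith and Smyth as sharing a pronunciation, the list Smith, Smyth, Smythe, Brown reduces to Smith, Smythe, Brown.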
  • i points to a tuple in the list and j points to a tuple lower down the list.
  • I is the number of tuples in the list.
  • i is initialised to 1
  • I is set to N
  • D(i) is read from the database.
  • Step 204 sets j to point to the following tuple and in step 206 D(j) is read.
  • a field pointer m is then initialised to 1 in step 208 and this is followed by a loop in which each field of the two tuples is taken in turn.
  • B is the number of such representations, i.e. A multiplied by the number of homophones.
  • Each of the phonetic representations p1(b) is compared with each of the representations p2(d) (i.e. B × D comparisons in total). If equality is not found in any of these comparisons, then the two tuples are considered distinguishable. If (step 226) j has not reached the last tuple in the list, it is incremented (228) prior to reading a further tuple in a repeat of step 206; otherwise the tuple pointer i is tested at step 230 as to whether it has reached the penultimate member of the list and either (if it has not) is incremented (232) prior to a return to step 202, or (if it has) the process ends. At this point the list contains only mutually distinguishable tuples - I in number - and thus the result k is set to I in step 233 prior to exit from this part of the process at step 234.
  • If the comparison at 218 indicates identity between one of the phonetic representations generated for one field of one tuple and one of the phonetic representations generated for the same field of the other tuple, then it is necessary to increment m (step 236) and repeat steps 210 to 218 for a further field. If all fields of the two tuples have been compared and all are indistinguishable, then this is recognised at step 238 and the tuples are deemed to be indistinguishable.
  • The lower tuple D(j) is removed from the list and I is decremented so that it continues to represent the number of tuples remaining in the list (steps 240, 242). j is then tested at step 244 to determine whether it points beyond the end of the (now shortened) list and, if not, a further tuple is examined, continuing from step 206. Otherwise the process proceeds to step 230, already described.
  • When step 232 increments i to point to a tuple, it is known that there are at least i tuples which will not be removed from the list by step 240. Thus at this point i can be tested (step 246) to see if it has reached 3, and if so the process may be interrupted, k set to 4, and the process taken thence to the exit 234.
  • For step 124, the algorithm represents the execution of step 124, with the list at the conclusion of Figure 4 being used to access (from the database) the entries to be offered in step 128;
  • for step 132, the algorithm represents the execution of step 132, with the list at the conclusion of Figure 4 representing the list of name duples (in database representation) to be offered to the user in step 136;
  • for step 146, the algorithm represents the execution of step 146, with the list at the conclusion of Figure 4 representing the list of towns to be offered to the user in step 150;
  • In step 140 the principle followed is that, where the user has confirmed a tuple (in this case a duple) which is one of a pair (or group) of tuples deemed indistinguishable, this is considered to constitute confirmation also of the other tuple(s) of the pair or group. For example, if the list of name duples contains:
  • then only one entry (for example the first "Dave Smith") is offered to the user for confirmation in step 136.
  • Which tuple is presented is determined according to a choice made by the system designer. However, if the user says “yes”, then in step 140, all tuples containing "Dave Smith” and all tuples containing "David Smyth" are retained.
  • each field p of the confirmed duple in phonetic representation (i.e. the one generated in step 136)
  • the "IsPrimarilySpoken" / "IsPrimarilyPronounced" mappings are used to decide on the output (146).
  • For all entries found, the processor 5 then examines each representation contained in the selected field to identify distinguishable ones of those combinations. The distinguishable entries may then be presented to the user for confirmation. To achieve this, the processor 5, with reference to the spoken vocabulary store 10 and according to the defined relationships, translates the identified database representations into spoken vocabulary via the "may be spoken" route. Thus all entries of Dave are translated to "Dave" and "David", all entries for David are translated to "Dave" or "David", all entries for Mave are translated to "Mave" or "Mavis", and all entries of Mavis are translated to "Mave" or "Mavis". The processor 5 then translates the spoken vocabulary representations into phonetic representations ("may be pronounced") with reference to store 10.
  • phonetic representations which represent how "Dave", "David", "Mave" and "Mavis" are pronounced are determined as D AI V, D AI V I D, D AA V I D, M AI V and M AI V I S. These phonetic representations are then examined to identify distinguishable ones. For example, Dave and David are indistinguishable because they share at least one common pronunciation. However, Mave and Dave are distinguishable because they do not share any common phonetic representation. If two database representations are found to be indistinguishable, one of the representations is maintained and the other is discarded.
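The shared-pronunciation test described above can be sketched as a set intersection. The pronunciation sets below are illustrative assumptions modelled on the Dave/David/Mave/Mavis example (each name carries the pronunciations reachable via its "may be spoken" then "may be pronounced" translations):

```python
# Hypothetical pronunciation sets reached through the translation chain.
MAY_BE_PRONOUNCED = {
    "Dave":  {"D AI V", "D AI V I D"},             # Dave may be spoken as David
    "David": {"D AI V", "D AI V I D", "D AA V I D"},
    "Mave":  {"M AI V", "M AI V I S"},             # Mave may be spoken as Mavis
    "Mavis": {"M AI V", "M AI V I S"},
}

def indistinguishable(a, b):
    """Two representations are indistinguishable if any pronunciation is shared."""
    return bool(MAY_BE_PRONOUNCED[a] & MAY_BE_PRONOUNCED[b])
```

Under this model Dave and David are indistinguishable (they share "D AI V") while Mave and Dave share nothing and so remain distinguishable, matching the worked example in the text.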
  • Figure 1 shows a situation in which the vocabulary of the announcement means e.g. the speech synthesiser is the same as the vocabulary of the input means e.g. the speech recogniser.
  • spellings may also be used as an alternative input and/or confirmation medium to spoken forms.
  • the techniques required for spelling are directly analogous to spoken forms.
  • Figure 5 corresponds to Figure 1 with the inclusion of spelling (illustrated by box M) for town names (spelling may also be provided for surnames and forenames although, for simplicity, these mappings have not been shown in Figure 5). Translations of "may be spelt" and "is primarily spelt" must be provided in addition to the spoken recognition.
  • Spoken or spelt input or output is not essential, since the considerations concerning the offering and confirming of entries still arise with other input and output media.
  • keypad input could be used, which has ambiguity problems owing to the allocation of more than one letter to each button of a telephone keypad.
  • a further vocabulary - of keypad input codes - is required, with "may be keyed" translations analogous to the pronunciation and spelling translations described above.
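The keypad ambiguity mentioned above is easy to demonstrate: because a standard telephone keypad assigns several letters to each digit, different words can collapse to the same key sequence. This sketch uses the conventional letter-to-digit layout; it is an illustration of the "may be keyed" vocabulary idea, not code from the patent.

```python
# Standard telephone keypad letter groups: digit -> letters on that key.
KEYPAD = {c: d for d, letters in {
    "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ",
}.items() for c in letters}

def key_sequence(word):
    """Translate a word into the digit sequence a caller would key."""
    return "".join(KEYPAD[c] for c in word.upper())
```

For example "GOOD" and "HOME" both key as 4663, so a keyed input of 4663 is ambiguous between them; the same many-to-one translation machinery used for homophones therefore applies.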
  • the machine representations of the input vocabulary and the database may be generated according to the same technique, for instance the database entries may be stored in text form and the input also be in text form, with the machine representations of the database and the output being generated according to a different technique e.g. a spoken output.
  • Confusion may arise if a user is presented with an announcement which includes a synonym of the actual word said by the user. For instance, say a user asks for "Dave Smith" and the system generates an output as follows: "Did you say David Smith?" In order to avoid this confusion, a check may be carried out to ensure that the word corresponding to the identified distinguishable database entry corresponds also to the word recognised by the input means.

Abstract

A method and apparatus for accessing a database system, said database system comprising a database containing entries each comprising a plurality of fields which contain machine representations of items of information pertaining to the entry, the said representations forming a first vocabulary; output means responsive to machine representations falling within a second vocabulary of such representations to generate signals representing the machine representations; and input means operable to receive signals and to produce machine representations falling within a third vocabulary of such representations. The method of accessing the database system comprises (i) generating, in accordance with a defined relationship between the first vocabulary and the third vocabulary, for each representation produced by the input means, one or more representations according to the first vocabulary; (ii) identifying database entries containing the generated representation; (iii) examining each representation or combination of representations which is contained in a selected field or combination of fields of the identified entries to identify distinguishable one(s) of those representations or combinations, a distinguishable representation or combination being one which, when translated in accordance with the defined relationship into representations of the second vocabulary, differs from every other such distinguishable representation or combination when similarly translated; and (iv) controlling the output means to generate an output including at least one word or combination of words which correspond(s) to one of the distinguishable representations or combinations.

Description

DATABASE ACCESS
The present invention relates to database access, particularly, though not exclusively, employing speech recognition input and synthesised speech output. International patent publication number WO94/14270 describes a mechanised directory enquiry system in which a caller is first prompted to speak the name of the city required. The word spoken is then recognised and the word with the highest confidence level is selected as being the word spoken by a user. The caller is then prompted to speak the name of the sought party. When a satisfactory confidence level is obtained, a database is accessed and the number articulated to the caller. If the confidence level fails to meet a preferred confidence level, the caller is prompted to spell all or part of the location or the name. If more than one match between the spoken input and the database is found, the user is asked to confirm each match one by one until a confirmed match is found. If no such match can be located, the automatic processing is terminated.
European patent application publication no. 433964 relates to a system which uses a text input. First of all an input word representing the surname is matched with the entries. If a match is found that is "comparable" but not exactly the same as the input, the initial characters of the input and the database entry are compared. If these match, a record of the data entry is made. The system then compares the required titles and the personal names required. The most likely entry is provided to the user.
US patent no. 5204894 relates to a personal electronic directory in which the names of the entries are stored in the user's voice and associated numbers are input using either a multitone (DTMF) telephone keypad or a spoken input. When a user needs to access the directory, the user speaks the required name and the directory system compares the first word of the input with the stored words and provides all possibilities in sequence to the user until the user confirms one.
In all of the prior art systems discussed above, the systems provide to the user identified entries in a database in a sequential manner until a user confirms the data entry as being that required.
According to one aspect of the present invention there is provided a database access apparatus comprising: (a) a database containing entries each comprising a plurality of fields which contain machine representations of items of information pertaining to the entry, the said representations forming a first vocabulary;
(b) announcement means responsive to machine representations falling within a second vocabulary of such representations to generate audio signals representing spoken announcements;
(c) input means operable to receive signals and to produce machine representations thereof falling within a third vocabulary of such representations;
(d) translation means defining a relationship between the first vocabulary and the second vocabulary and between the first vocabulary and the third vocabulary; and
(e) control means operable
(i) to generate, in accordance with the defined relationship, for each representation produced by the input means, one or more representations according to the first vocabulary;
(ii) to identify database entries containing the generated representations; (iii) to examine each representation or combination of representations which is contained in a selected field or combination of fields of the identified entries to identify distinguishable one(s) of those representations or combinations, a distinguishable representation or combination being one which, when translated in accordance with the defined relationship into representations of the second vocabulary, differs from every other such distinguishable representation or combination when similarly translated; and (iv) to control the announcement means to generate an announcement including at least one word or combination of words which correspond(s) to one of the distinguishable representations or combinations.
The included word output by the announcement means may be in any suitable form, e.g. the included word may represent a whole word, a spelt word or alphanumerics.
In another aspect the invention provides a method of speech recognition comprising:
(a) generating at least one announcement requiring a response; (b) recognising the response(s);
(c) identifying database entries containing fields matching the recognised responses;
(d) in the event that the number of such entries exceeds a predetermined limit, generating an announcement containing at least one word corresponding to a selected field of an identified entry for a positive or negative response;
(e) upon receipt of a positive response, identifying database entries which contain fields matching the recognised responses and whose selected fields match the said word; and (f) repeating steps (d) and (e) at least once.
Some embodiments of the present invention will now be described, by way of example, with reference to the accompanying drawings in which:
Figure 1 is an Entity Relationship Diagram showing an example of translations between phonetic, spoken and database representations;
Figure 2 is a block diagram of apparatus according to the invention;
Figure 3 is a flow chart illustrating the operation of the apparatus of Figure 2;
Figure 3a is a flow chart illustrating an alternative operation of the apparatus of Figure 2;
Figure 4 is a flow chart illustrating the process of identifying distinguishable tuples;
Figure 5 is an Entity Relationship Diagram showing an example of the translations between phonetic, spoken, spelt and database representations.
A voice interactive apparatus will be described, which generates questions to a user and recognises the user's responses in order to access the contents of the database. A database of names, addresses and telephone numbers, as might be used for an automated telephone directory enquiry system, will be used as an example. Firstly, however, some basic concepts will be discussed which will be of value in understanding the operation of the apparatus. The database will be supposed to contain a number of entries, with each entry containing a number of fields each containing an item of information about the entry; for example the forename, surname, location and telephone number of the person to whom the entry refers. A set of fields from one entry is here referred to as a tuple, viz. a combination of N fields (when N = 1, 2 or 3 the terms single, duple and triple respectively are used). A complete entry is thus a tuple, as also is a smaller set of fields extracted from one entry; thus a set of forename/surname pairs taken from the example database forms a set of extracted duples.
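The tuple notion above can be sketched in a few lines of Python. This is purely illustrative (not part of the patent); the field names and sample entries are invented.

```python
# Sketch: extracting N-field tuples from database entries.
# Field names and sample data are illustrative, not from the patent.
entries = [
    {"forename": "DAVID", "surname": "SMITH", "town": "IPSWICH",  "number": "01473 000001"},
    {"forename": "DAVID", "surname": "SMYTH", "town": "KESGRAVE", "number": "01473 000002"},
    {"forename": "JAMES", "surname": "SMITH", "town": "IPSWICH",  "number": "01473 000003"},
]

def extract_tuples(entries, fields):
    """Return the set of tuples formed by the chosen fields of each entry."""
    return {tuple(e[f] for f in fields) for e in entries}

# A set of extracted name duples (forename + surname):
duples = extract_tuples(entries, ("forename", "surname"))
# → {("DAVID", "SMITH"), ("DAVID", "SMYTH"), ("JAMES", "SMITH")}
```

A single would be `extract_tuples(entries, ("town",))` and a complete entry is simply the tuple over all four fields.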
The items of information stored in the database fields may be in any convenient representation; generally this description will assume the use of a text representation such as, for the surname Jonson, character codes corresponding to the letters of the name, but with a stylised representation for some fields; for example one might, for geographical locations, identify several distinct places having the same name with different representations - e.g. Southend1, Southend2 and Southend3 for three places in England called Southend.
The words used in the dialogue between the apparatus and the user to represent field contents are conceptually distinct from the database representations and represent for each field a spoken vocabulary. If the database representations are text then there will be some overlap between them and the spoken vocabulary, but even then it may be desired to take account of the fact that the user might use, to describe an item of information, a different word from that actually contained in the database field; that is, some words may be regarded as synonyms.
Finally one needs also to note that more than one pronunciation may be associated with a word (homonyms), and conversely more than one word may have the same pronunciation (homophones) .
These concepts are illustrated in Figure 1, which is an "Entity Relationship Diagram", where we see a need for translation between representations as one moves from left to right or right to left. Box A represents a set of database entries. Box B represents a set of unique surnames, which have a 1:many relationship with the entries - i.e. one surname may appear in many entries but one entry will contain only one surname. Boxes C, D and E correspond to sets of representations of forenames, towns and telephone numbers, where similar comments apply. Box F represents the spoken vocabulary corresponding to forenames, i.e. the set of all words that are permitted by the apparatus to be used to describe this field. This can differ from the database vocabulary (or, even if it is the same, may not have a 1:1 correspondence with it) to take account of aliases such as synonyms; for example an abbreviated form of a forename such as Andy or Jim may be considered to have the same meaning as the full forms of Andrew and James. Two connecting paths are shown between boxes C and F, corresponding to a preferred form for the spoken vocabulary word and to alternative forms which "may possibly" be used. Similarly, Box G represents the spoken vocabulary corresponding to town names. Here again the possibility of aliasing arises since often a large town may contain smaller places or districts within it. For example, Ipswich is a town in the county of Suffolk, England. Nearby is a small district called Kesgrave. A person living in Kesgrave might have his address recorded in the database either as Ipswich or as Kesgrave. Similarly an enquirer seeking the telephone number of such a person might give either name as the location.
Thus Ipswich and Kesgrave may be regarded as synonymous for the purposes of database retrieval. Note however that this geographical aliasing is complex: Ipswich may be regarded as synonymous with another local village such as Foxhall, but Kesgrave and Foxhall are not synonymous because they are different places.
Box H represents, for completeness, a spoken vocabulary for surnames, though there is probably little scope for synonyms for this field.
Box J represents a pronunciation vocabulary for surnames, to take account of homophones and homonyms. For example the surname Smith is generally pronounced with a short "i" as in the English word "pith", whilst the name Smythe is pronounced with a long "i" as in "lithe" . Smyth, on the other hand, may be pronounced either way. Other instances of varying pronunciation may arise, for example due to variations in regional accents. Again, "primary" and "may be" links are shown, for reasons to be explained later. Boxes K and L represent pronunciation vocabularies for forenames and geographical names respectively.
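The homophone/homonym relationships just described can be sketched as a small lookup structure. This is an illustrative sketch only; the phoneme strings are invented placeholders, not real phonetic transcriptions.

```python
# Illustrative sketch of a pronunciation vocabulary for surnames: each
# spoken word maps to a primary pronunciation plus any alternatives
# (the "primary" and "may be" links of Figure 1). Phoneme strings are
# invented placeholders.
PRONUNCIATIONS = {
    "SMITH":  {"primary": "s m I T",  "alternatives": []},
    "SMYTHE": {"primary": "s m aI D", "alternatives": []},
    "SMYTH":  {"primary": "s m I T",  "alternatives": ["s m aI D"]},  # either way
}

def pronunciations_of(word):
    """All pronunciations of a word, primary first (a homonym has several)."""
    entry = PRONUNCIATIONS[word]
    return [entry["primary"]] + entry["alternatives"]

def words_with_pronunciation(phonetic):
    """Inverse lookup: spoken words sharing a phonetic form (homophones)."""
    return [w for w in PRONUNCIATIONS if phonetic in pronunciations_of(w)]

# Under the short-"i" pronunciation, Smith and Smyth are homophones:
# words_with_pronunciation("s m I T") → ["SMITH", "SMYTH"]
```

The forward lookup serves synthesis; the inverse lookup serves recognition, where one recognised pronunciation must expand to all words it may represent.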
Figure 2 is a block diagram of an apparatus for conducting a dialogue. An audio signal input 1 is connected to a speech recogniser 2, whilst an audio signal output 3 is connected to a speech synthesiser 4. A control unit in the form of a stored-program controlled processor 5 controls the operation of the recogniser and synthesiser and also has access to a program memory 6, a working memory (RAM) 7, a database 8, a spoken vocabulary translation table 9 and a pronunciation table 10. The audio inputs and outputs are connected for two-way communication - perhaps via a telephone line - with a user. The database 8 is assumed to contain telephone directory entries, as discussed above, in text form. The spoken vocabulary translation table 9 is a store containing word pairs consisting of a directory representation and a spoken vocabulary representation, e.g., for the Ipswich example:

Database representation   Spoken representation
IPSWICH                   IPSWICH
IPSWICH                   KESGRAVE
IPSWICH                   FOXHALL
KESGRAVE                  KESGRAVE
KESGRAVE                  IPSWICH
FOXHALL                   FOXHALL
FOXHALL                   IPSWICH
(If desired, any word used as a database representation which has a 1:1 correspondence with, and is the same as, a spoken vocabulary word may be omitted from the table, since no translation is required). The translation table 9 has a separate area for each type of field and may be accessed by the processor 5 to determine the database representation(s) corresponding to a given vocabulary word and vice versa. If desired (or if the database representations are not in text form) all items may be translated.
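The two directions of lookup on table 9 can be sketched as follows. This is an illustrative sketch only (the table is shown as a plain list of pairs; a real implementation would index it per field type).

```python
# Sketch of the spoken vocabulary translation table (table 9) for the
# Ipswich/Kesgrave/Foxhall example, stored as (database, spoken) pairs.
TABLE_9 = [
    ("IPSWICH", "IPSWICH"), ("IPSWICH", "KESGRAVE"), ("IPSWICH", "FOXHALL"),
    ("KESGRAVE", "KESGRAVE"), ("KESGRAVE", "IPSWICH"),
    ("FOXHALL", "FOXHALL"), ("FOXHALL", "IPSWICH"),
]

def to_database(spoken):
    """Database representation(s) corresponding to a spoken vocabulary word."""
    return [db for db, sp in TABLE_9 if sp == spoken]

def to_spoken(database):
    """Spoken vocabulary word(s) corresponding to a database representation."""
    return [sp for db, sp in TABLE_9 if db == database]

# A caller saying "Kesgrave" may match entries recorded under either name:
# to_database("KESGRAVE") → ["IPSWICH", "KESGRAVE"]
# ...but Kesgrave and Foxhall remain distinct places:
# to_database("FOXHALL") → ["IPSWICH", "FOXHALL"]
```

Note how the asymmetry of the geographical aliasing discussed earlier is captured: both Kesgrave and Foxhall alias to Ipswich, but not to each other.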
The pronunciation table 10 is a store containing a look-up table (and, if desired, a set of rules to reduce the number of entries in the look-up table) so that the processor 5 may access it (for synthesis purposes or for identifying homophones) to obtain, for a given spoken vocabulary word, a phonetic representation of one or more ways of pronouncing it, and, conversely (for recognition purposes), to obtain, for a given phonetic representation, one or more spoken vocabulary words which correspond to that pronunciation. A separate area for each type of field may be desirable.
The operation of the apparatus is illustrated in the flow-chart of Figure 3, which is implemented as a program stored in the memory 6. The first steps involve the generation, using the synthesiser, of questions to the user, and recognition of the user's responses. Thus in steps 100, 104, 108 the processor 5 sends to the synthesiser 4 commands instructing it to play announcements requesting the user to speak, respectively, the surname, forename and town of the person whose telephone number he seeks. In steps 102, 106 and 110 the processor sends to the recogniser 2 commands instructing it to recognise the user's responses by reference to phonetic vocabularies corresponding to those fields. The recogniser may access the tables 9, 10 to determine the vocabularies to be used for each recognition step, or may internally store or generate its own vocabularies; in the latter case the vocabularies used must correspond to those determined by the tables 9, 10 (and, if appropriate, the database) so that it can output only words included in the phonetic vocabulary. The recogniser is arranged so that it will produce as output, for each recognition step, as many phonetic representations as meet a predetermined criterion of similarity to the word actually spoken by the user.
(The recogniser could of course perform a translation to spoken vocabulary representations, and many recognisers are capable of doing so.) It is possible that the recogniser may find that the word actually spoken by the user is too dissimilar to any of the phonetic representations in table 10 and indicate this to the processor 5. Preferably the recogniser also produces a "score" or confidence measure for each representation indicating the relative probability or likelihood of correspondence to the word actually spoken. The preliminary steps 100-110 will not be discussed further as they are described elsewhere; for example, reference may be made to our co-pending International patent application no. PCT/GB/02524.
In the following text, steps will be described which involve the matching of a number of scored tuples against database entries. From these matching entries a scored set of unique (or distinguishable) tuples is derived which corresponds to a different set of (possibly overlapping) fields to the tuples used for the match.
Following step 110 the processor 5 has available to it, for each of the three fields, one or more phonetic representations deemed to have been recognised. What is required now is a translation to spoken vocabulary representations - i.e. the translation illustrated to the left of Figure 1. Thus in step 112 the processor accesses the table 10 to determine, for each word, one or more corresponding spoken vocabulary representations, so that it now has three sets of spoken vocabulary representations, one for each field. The score for each spoken vocabulary representation is the score for the phonetic representation from which it was translated. If two phonetic representations translate to the same vocabulary representation, the more confident of the two scores may be taken. This is a specific example of the generalised matching process described above, where the matching set of singles are pronunciations and the derived set of singles are spoken vocabulary items.
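The score propagation just described (keeping the more confident score when two phonetic hypotheses collapse onto one spoken word) can be sketched as follows. The lookup dictionary is an illustrative stand-in for the table-10 inverse lookup, with invented phoneme strings.

```python
# Sketch of the step-112 translation: scored phonetic hypotheses are
# mapped to spoken vocabulary words, keeping the more confident score
# when two phonetic forms translate to the same word.
PHONETIC_TO_SPOKEN = {
    "s m I T":  ["SMITH", "SMYTH"],   # homophones share this pronunciation
    "s m aI D": ["SMYTHE", "SMYTH"],  # SMYTH may be pronounced either way
}

def translate_scored(phonetic_hypotheses):
    """{phonetic form: score} → {spoken word: best score}."""
    spoken_scores = {}
    for phonetic, score in phonetic_hypotheses.items():
        for word in PHONETIC_TO_SPOKEN[phonetic]:
            spoken_scores[word] = max(spoken_scores.get(word, 0.0), score)
    return spoken_scores

scores = translate_scored({"s m I T": 0.8, "s m aI D": 0.5})
# SMYTH is reachable from both pronunciations; it keeps the higher score, 0.8
```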
In step 114, the processor 5 now performs a translation to database representations - i.e. the translation illustrated in the centre of Figure 1 - using the table 9 to determine, for each spoken representation, one or more corresponding database representations, so that it now has three sets of database representations. Scores may be propagated as for the earlier translation. The database representations represent a number of triples (the actual number being the product of the number of representations in each of the three sets). The score for a triple is typically the product of the scores of the individual representations of which it is composed. At step 116, the processor generates a list of these triples and passes it to the database, which returns a count K of the number of database entries corresponding to these triples. If (step 118) this number is zero, then the processor in step 120 sends a command to the synthesiser to play an announcement to the effect that no entry has been found, and terminates the program (step 122). Alternatively other action may be taken, such as transferring the user to a manual operator.
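The triple formation and scoring of steps 114-116 amounts to a Cartesian product with multiplied scores, which can be sketched briefly (field values and scores are illustrative):

```python
# Sketch of steps 114-116: candidate triples are the Cartesian product of
# the per-field database representations; each triple's score is the
# product of its field scores. Values are illustrative.
from itertools import product

surnames  = {"SMITH": 0.8, "SMYTH": 0.8}
forenames = {"DAVID": 0.9}
towns     = {"IPSWICH": 0.7, "KESGRAVE": 0.6}

triples = {
    (s, f, t): surnames[s] * forenames[f] * towns[t]
    for s, f, t in product(surnames, forenames, towns)
}
# 2 x 1 x 2 = 4 candidate triples; e.g. ("SMITH", "DAVID", "IPSWICH")
# scores 0.8 * 0.9 * 0.7 = 0.504
```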
If there are entries, then in step 124 the full entry tuples which matched at step 116 are retrieved in turn from the database to determine whether there are three or fewer distinguishable entries. The tuples are retrieved in order of likelihood, most likely first. It is possible that more than one tuple may share the same score; in this case an arbitrary ranking may be selected between them, or a priori knowledge may be used to determine the ranking. As the tuples are retrieved, an assessment is made as to whether they represent three or fewer distinguishable entries. The meaning of "distinguishable" and the method of its determination will be explained presently. Once a count of four is reached the test is terminated. If (step 126) the number of distinguishable entries is three or fewer, then in step 128 the processor retrieves these entries from the database and forwards them to the synthesiser 4, which reads them to the user in confidence order, highest first, using the tables 9, 10 for translation from database representation to primary phonetic representation.
If there are more than three distinguishable entries then the process enters an iterative confirmation phase in which an attempt is made to identify lists of extracted tuples which contain three or fewer distinguishable tuples, and to offer the tuples in turn to the user for confirmation. In this example the tuples are the duple corresponding to the name (i.e. forename + surname), and the single corresponding to the town. Note that, although this is the case in this example, it is not in principle necessary that the constituent words of these tuples correspond to fields for which the user has already been asked.
In step 130 a check is made as to whether the name duples have already been offered for confirmation; on the first pass the answer will always be "no", and at step 132 a list of extracted name duples is prepared from the list of triples. The name duples from the list are examined in similar fashion to that of the triples in step 124 to determine whether there are three or fewer distinguishable duples. (If desired, the number of non-identical database representation duples in the list may be counted, and if this exceeds a predetermined limit, e.g. 30, the detailed examination process may be skipped (to step 144).) If there are three or fewer distinguishable duples (step 134), then each of the scored duples is translated into a single primary phonetic representation and fed to the synthesiser in confidence order in step 136, so that the synthesiser speaks the question (e.g.) "Is the name John Smith? Please answer yes or no" one at a time, with the recogniser forwarding the reply to the processor (138) for testing for "yes" or "no". If the user replies "yes", then, in step 140: (a) the surname and forename fields are marked "confirmed" so that further offering of them for confirmation is bypassed by the test at step 130;
(b) all members of the list of triples, other than those which are related to the confirmed duple (see below), are deleted.
The process may then recommence from step 124. If a user replies "no" then the corresponding members in the list of triples are deleted. Which ones are deleted depends upon the defined relationship between the phonetic representations and the database representations, as chosen by the system designer. For instance, if the user is asked "Is the name John Smith?" and the user replies "no", all members of the list of triples including John and Smith/Smyth/Smythe may be deleted, or only those members including John and Smith/Smyth may be deleted, it having been decided by the system designer that Smythe is always pronounced differently to Smith or Smyth.
Equally, if the user is asked "Is the name Dave Smith?" and the user replies "no", the members including Dave Smith may be deleted and the user asked "Is the name David Smith?".
If (step 142) the user has answered no to all the offered tuples, this is considered a failure and the process is terminated via steps 120 and 122.
If in the test at step 134 the number of distinguishable name duples is too large for confirmation, or at step 130 on a second or subsequent pass the name confirmation has already occurred, and assuming (step 144) the town name has not yet been offered for confirmation, then a town name confirmation process is commenced, comprising steps 146 to 154 which are in all respects analogous to the steps 132 to 142 already described. If these processes fail to reduce the number of distinguishable entries at the test 126, then the process eventually terminates with an announcement 156 that too many entries have been found for a response to be given. Alternatively, a further procedure may follow in which one or more further questions are asked (as in step 100) to obtain information on further fields. The process shown in Figure 3 from step 116 onwards has, for clarity, been described in terms of confirmation of a duple and a single. A more generalised algorithm might proceed as follows:
Start:
  If there are no database entries still active: give "none" message; finish algorithm.
Jump:
  If there are three or fewer distinguishable database entries: offer them; finish algorithm.
  If there are more than three distinguishable database entries, then do the following for successive prioritised fields or combinations of fields that have not already been confirmed, until no such fields remain:
    If for this field there is a tuple list with 3 or fewer distinguishable tuples then:
      Attempt to confirm this list.
      If positive confirmation, confirm it and go to Jump.
      If negative confirmation, give "wrong entry" message and go back to "do the following".
  In a prioritised list, get the next vocabulary which may be asked.
  If there is an un-asked and un-confirmed vocabulary remaining:
    Ask for it; go to Start.
  If not:
    Give "too many" message; finish algorithm.
An alternative process for the whole of the enquiry process is shown in Figure 3a. This process proceeds as follows:
Start:
  If there are no database entries still active (300): give "none" message (301); finish algorithm.
  If there are three or fewer distinguishable database entries (302): offer them (303); finish algorithm.
  If there are more than three distinguishable database entries (302), consider each of a prioritised list of fields or combinations of fields that have not yet been confirmed (304):
    If there is a tuple list with three or fewer distinguishable tuples:
      Attempt to confirm this list (308).
      If positive confirmation (309): go to Start.
      If negative: give "wrong entry" message (310); finish algorithm.
    If not: consider the next field or combination of fields.
  If there are no tuple lists with three or fewer distinguishable tuples:
    In a prioritised list, if there is a remaining vocabulary which remains un-asked and un-confirmed (305): ask for it (307); go to Start.
    If not: give "too many" message (306); finish algorithm.
In the above procedures, it is required to examine a list of tuples in database representation to determine how many distinguishable tuples there are. The tuple in question may be an entire database entry (as in step 124 above), it may be an extracted tuple containing representations from two (or more) fields (as in step 132), or it may be an extracted single (as in step 146). Two representations are considered indistinguishable if:
(a) they are identical; or
(b) they translate to identical spoken vocabulary words (e.g. they are synonyms or are geographically confused); or
(c) they translate to spoken vocabulary words which are homophones (i.e. those words translate to identical phonetic representations).
Two tuples are considered indistinguishable if every field of one tuple is indistinguishable (as defined above) from the corresponding field of the other tuple.
Equally, two representations are considered distinguishable if:
(a) they are not identical; and
(b) they do not translate to identical spoken vocabulary words (e.g. they are not synonyms or geographically confused); and
(c) they do not translate to spoken vocabulary words which are homophones (i.e. the words do not translate to identical phonetic representations).
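The three-part test above can be sketched as a small function. This is illustrative only: the `SPOKEN` and `PHONETIC` dictionaries stand in for the table-9 and table-10 lookups, with invented phoneme strings.

```python
# Sketch of the distinguishability test for two field representations:
# indistinguishable if identical, or if any phonetic form of any spoken
# synonym of one equals a phonetic form of a synonym of the other.
SPOKEN = {"SMITH": ["SMITH"], "SMYTH": ["SMYTH"], "JONES": ["JONES"]}
PHONETIC = {"SMITH": ["s m I T"],
            "SMYTH": ["s m I T", "s m aI D"],   # may be pronounced either way
            "JONES": ["dZ @U n z"]}

def phonetic_forms(db_rep):
    """All phonetic representations reachable from a database representation."""
    return {p for word in SPOKEN[db_rep] for p in PHONETIC[word]}

def distinguishable(rep_a, rep_b):
    if rep_a == rep_b:
        return False
    # Indistinguishable if the phonetic sets overlap (synonyms/homophones).
    return not (phonetic_forms(rep_a) & phonetic_forms(rep_b))

# distinguishable("SMITH", "SMYTH") → False  (they share a pronunciation)
# distinguishable("SMITH", "JONES") → True
```

Two tuples would then be compared field by field, declaring them indistinguishable only if every pair of corresponding fields fails this test.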
Suppose that we have a list of tuples in database representation where the first tuple in the list is D(1) and the tuple currently occupying the n'th position in the list is D(n), where n = 1, ..., N, there being N tuples in the list. Each tuple consists of M fields, designated d, so that the m'th field of tuple D(n) is d(n,m) - i.e. D(n) = {d(n,m)}, m = 1, ..., M. Preferably the list is ordered by score; i.e. the tuple having the highest confidence is D(1), the next D(2) and so on. The process to be described is illustrated in the flowchart of Figure 4 and involves taking the first tuple from the list, and comparing it with each tuple below it in the list to ascertain whether the two are distinguishable. If they are not, the tuple occupying the lower position is deleted from the list. This is repeated until all tuples have been examined. The same steps are then performed for the tuple now occupying the second position in the list, and so on; eventually every tuple remaining in the list is distinguishable from every other. If desired, the process may be terminated as soon as it is certain that the number of distinguishable tuples exceeds that which can be handled by subsequent steps (i.e., in this example, 3). In Figure 4, i points to a tuple in the list and j points to a tuple lower down the list; I is the number of tuples in the list. In step 200, i is initialised to 1 and I is set to N, and in step 202 D(i) is read from the database. Step 204 sets j to point to the following tuple and in step 206 D(j) is read. A field pointer m is then initialised to 1 in step 208 and this is followed by a loop in which each field of the two tuples is taken in turn. Field m of tuple D(i) is (step 210) translated, with the aid of the table 9, into one or more spoken vocabulary words s1(a), where a = 1, ..., A and A is, effectively, the number of synonyms found.
The spoken vocabulary word(s) s1(a) is/are then translated (212), with the aid of the table 10, into a total of B phonetic representations p1(b) (b = 1, ..., B). B is the number of such representations, i.e. A multiplied by the number of homophones. Analogous steps 214, 216 perform a two-stage translation of the corresponding field of D(j) to produce one or more phonetic representations p2(d) (d = 1, ..., D).
In step 218, each of the phonetic representations p1(b) is compared with each of the representations p2(d) (i.e. BD comparisons in total). If equality is not found in any of these comparisons, then the two tuples are considered distinguishable. If (step 226) j has not reached the last tuple in the list, it is incremented (228) prior to reading a further tuple in a repeat of step 206; otherwise the tuple pointer i is tested at step 230 as to whether it has reached the penultimate member of the list and either (if it has not) is incremented (232) prior to a return to step 202, or (if it has) the process ends. At this point, the list now contains only mutually distinguishable tuples - I in number - and thus the result k is set to I in step 233 prior to exit from this part of the process at step 234.
If on the other hand the comparison at 218 indicates identity between one of the phonetic representations generated for one field of one tuple and one of the phonetic representations generated for the same field of the other tuple, then it is necessary to increment m (step 236) and repeat steps 210 to 218 for a further field. If all fields of the two tuples have been compared and all are indistinguishable, then this is recognised at step 238 and the tuples are deemed to be indistinguishable. In this case, the lower tuple D(j) is removed from the list and I is decremented so that it continues to represent the number of tuples remaining in the list (steps 240, 242). j is then tested at step 244 to determine whether it points beyond the end of the (now shortened) list and, if not, a further tuple is examined, continuing from step 206. Otherwise the process proceeds to step 230, already described.
Each time step 232 increments i to point to a tuple, it is known that there are at least i tuples which will not be removed from the list by step 240. Thus at this point i can be tested (step 246) to see if it has reached 3, and if so the process may be interrupted, k set to 4, and the process taken thence to the exit 234. In order to clarify the relationship between the algorithm of Figure 4 and the steps of Figure 3 or 3a, it should be mentioned that:
(a) the algorithm represents the execution of step 124, with the list at the conclusion of Figure 4 being used to access (from the database) the entries to be offered in step 128; (b) the algorithm represents the execution of step 132, with the list at the conclusion of Figure 4 representing the list of name duples (in database representation) to be offered to the user in step 136;
(c) the algorithm represents the execution of step 146, with the list at the conclusion of Figure 4 representing the list of towns to be offered to the user in step 150;
(d) the algorithm represents the execution of step 302.
It remains to explain the removal which occurs in steps 140 and 154 in Figure 3. Taking step 140 as an example, the principle followed is that where the user has confirmed a tuple (in this case a duple) which is one of a pair (or group) of tuples deemed indistinguishable, this is considered to constitute confirmation also of the other tuple(s) of the pair or group. For example, if the list of name duples contains:
Dave Smith
David Smyth
and these are considered by step 132 to be indistinguishable, only one entry (for example the first, "Dave Smith") is offered to the user for confirmation in step 136. Which tuple is presented is determined according to a choice made by the system designer. However, if the user says "yes", then in step 140 all tuples containing "Dave Smith" and all tuples containing "David Smyth" are retained.
Whilst this could be done using the results of the translations performed in step 132, we prefer to proceed as follows. Each field of the confirmed duple in phonetic representation (i.e. the one generated in step 136) is translated using the tables 9, 10 into one or more database representations. All duples represented by combinations of these representations are to be confirmed - i.e. any of the list of triples which contains one of these duples is retained, and the other triples are deleted.
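The retention rule of step 140 can be sketched as a filter over the triple list. This is illustrative only: the lookup dictionary stands in for the combined table-9/table-10 translation, and the phoneme strings and names are invented.

```python
# Sketch of the step-140 retention rule: translate the confirmed duple
# (in phonetic form) back to every matching database representation, then
# keep only triples containing one of those duples.
PHONETIC_TO_DB = {
    "d eI v":  ["DAVE", "DAVID"],    # synonyms of the confirmed forename
    "s m I T": ["SMITH", "SMYTH"],   # homophones of the confirmed surname
}

def retain_confirmed(triples, confirmed_phonetic):
    """Keep triples whose (forename, surname) matches any confirmed duple."""
    fore_p, sur_p = confirmed_phonetic
    allowed = {(f, s) for f in PHONETIC_TO_DB[fore_p]
                      for s in PHONETIC_TO_DB[sur_p]}
    return [t for t in triples if (t[0], t[1]) in allowed]

triples = [("DAVE", "SMITH", "IPSWICH"),
           ("DAVID", "SMYTH", "KESGRAVE"),
           ("JAMES", "SMITH", "IPSWICH")]
kept = retain_confirmed(triples, ("d eI v", "s m I T"))
# Confirming "Dave Smith" also retains the indistinguishable "David Smyth";
# the "James Smith" triple is deleted.
```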
It is perhaps worth clarifying the relationship between the Entity Relationship Diagram of Figure 1 and the processes set out in Figures 3 and 4. In these processes, translations occur from database representation to spoken vocabulary representation to phonetic representation (i.e. right to left in Figure 1) and in the opposite direction, viz. from phonetic representation to spoken vocabulary representation to database representation (i.e. left to right in Figure 1). The existence of alternative paths in the diagram (e.g. may be spoken/is primarily spoken) implies a choice of translation routes. For synthesis, the "primarily spoken" routes would normally be used; for other purposes, variations are possible according to one's desire to include or exclude synonyms or homophones in the translation. Different routes are tabulated below, with an example set of routes for forenames. Other mappings may be used.
Input to database (used in steps 112, 114): mappings used to convert a recognition result into all possible database representations. Typical route for forenames: MayBePronounced / MayBeSpoken (i.e. includes synonyms and homophones).
Database to output (used in steps 124, 132, 146, 302, 304): mappings used to decide on distinguishable database representations. Typical route for forenames: IsPrimarilySpoken / MayBePronounced (i.e. results in synonyms, but not homophones, being included in the resulting list of distinguishable tuples).
Database to output (used in steps 124, 132, 146, 302, 304): mappings used to decide on distinguishable database representations. Typical route for forenames: IsPrimarilySpoken / IsPrimarilyPronounced (i.e. excludes homophones from output but includes synonyms and homonyms).
Database to output (used in steps 128, 136, 150, 303, 308): mappings used for output of a result, e.g. synthesis (i.e. provides a primary output form for each database representation). Typical route for forenames: IsPrimarilySpoken / IsPrimarilyPronounced.
Output to database (used in steps 140, 154, 308): mappings used to confirm an output pronunciation back into database representation. Typical route for forenames: MayBePronounced / MayBeSpoken (i.e. includes both synonyms and homophones).
Output to database (used in steps 140, 154, 308): mappings used to exclude items rejected by the user. Typical route for forenames: MayBePronounced / IsPrimarilySpoken (i.e. excludes homophones but includes synonyms for subsequent database searches).
Thus when an input is received and machine representations representing the input signal are generated, the machine representations for the single input are converted to all possible database representations. To achieve this, the input is mapped to the spoken recognition vocabulary by the "May be pronounced" route.
Thus all possible spoken representations of the input are identified. These spoken vocabulary representations are then mapped onto all possible database representations which the spoken representations may represent ("May be spoken"). For example, say the forename "Dave" is input and the phonetic representations D AI V and M AI V are generated by the speech recogniser 2. These phonetic representations are then converted to spoken vocabulary representations, for instance "Dave" and "Mave". Each of these spoken vocabulary representations is then converted, by means of the store 9, into all possible database representations ("May be spoken"), e.g. Dave, David, Mave, Mavis. The processor 5 then searches the database 8 for all entries including any of these representations in their forename field.
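The two-stage translation just described (phonetic representation to spoken vocabulary, then spoken vocabulary to database representations) can be sketched as a chaining of one-to-many relations. The tables here are hypothetical fragments, not the actual contents of stores 9 and 10:

```python
# Hypothetical subsets of the relation tables in stores 9 and 10.
MAY_BE_PRONOUNCED = {"Dave": ["D AI V"], "Mave": ["M AI V"]}            # spoken -> phonetic
MAY_BE_SPOKEN = {"Dave": ["Dave", "David"], "Mave": ["Mave", "Mavis"]}  # spoken -> database

def invert(relation):
    """Reverse a one-to-many relation (e.g. phonetic -> spoken forms)."""
    inverse = {}
    for key, values in relation.items():
        for value in values:
            inverse.setdefault(value, []).append(key)
    return inverse

def chain(first, second, key):
    """Follow two relations in sequence, collecting every result."""
    results = []
    for intermediate in first.get(key, []):
        results.extend(second.get(intermediate, []))
    return results

# Input-to-database route: phonetic form -> spoken forms -> database forms.
print(chain(invert(MAY_BE_PRONOUNCED), MAY_BE_SPOKEN, "D AI V"))
# → ['Dave', 'David']
```

The same `chain` helper, fed different table pairs, realises each of the routes tabulated above.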
For all entries found, the processor 5 then examines each representation contained in the selected field to identify distinguishable ones of those combinations. The distinguishable entries may then be presented to the user for confirmation. To achieve this, the processor 5, with reference to the spoken vocabulary store 10 and according to the defined relationships, translates the identified database representations into spoken vocabulary via the "may be spoken" route. Thus all entries of Dave are translated to "Dave" and "David", all entries for David are translated to "Dave" or "David", all entries for Mave are translated to "Mave" or "Mavis", and all entries of Mavis are translated to "Mave" or "Mavis". The processor 5 then translates the spoken vocabulary representations into phonetic representations ("may be pronounced") with reference to store 10. Thus the phonetic representations which represent how "Dave", "David", "Mave" and "Mavis" are pronounced are determined as D AI V, D AI V I D, D AA V I D, M AI V and M AI V I S. These phonetic representations are then examined to identify distinguishable ones. For example, Dave and David are indistinguishable because they share at least one common pronunciation. However, Mave and Dave are distinguishable because they do not share any common phonetic representation. If two database representations are found to be indistinguishable, one of the representations is maintained and the other is discarded, e.g. David may be selected over Dave and Mavis over Mave. This choice is determined by the system designer and stored in memory 6. The phonetic representation of the most probable of "David" and "Mavis" is presented by the processor 5, via the synthesiser 4, to a user using the "is primarily spoken" / "is primarily pronounced" relationship. Note that in practice the stores 9, 10 may contain separate "tables" for each mapping.
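The distinguishability test described above, i.e. treating two database representations as indistinguishable when they share at least one phonetic representation, together with the designer's stored preference for which member of a merged group to keep, can be sketched as follows (the tables are illustrative fragments only):

```python
# Two database representations are indistinguishable when their sets of
# phonetic representations intersect. Tables are hypothetical fragments.
PRONUNCIATIONS = {                      # database form -> phonetic forms
    "Dave":  {"D AI V"},
    "David": {"D AI V", "D AI V I D", "D AA V I D"},
    "Mave":  {"M AI V"},
    "Mavis": {"M AI V", "M AI V I S"},
}
PREFERRED = {"Dave": "David", "Mave": "Mavis"}  # designer's choice, as held in memory 6

def distinguishable(candidates):
    """Merge candidates that share a pronunciation, keeping the
    designer-preferred member of each merged group."""
    kept = []
    for name in candidates:
        clash = next((k for k in kept
                      if PRONUNCIATIONS[k] & PRONUNCIATIONS[name]), None)
        if clash is None:
            kept.append(name)                 # distinguishable from all kept so far
        elif PREFERRED.get(clash) == name:
            kept[kept.index(clash)] = name    # e.g. prefer David over Dave
    return kept

print(distinguishable(["Dave", "David", "Mave", "Mavis"]))
# → ['David', 'Mavis']
```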
Figure 1 shows a situation in which the vocabulary of the announcement means, e.g. the speech synthesiser, is the same as the vocabulary of the input means, e.g. the speech recogniser. However, this is not always so. For instance, it should be noted that spellings may also be used as an alternative input and/or confirmation medium to spoken forms. The techniques required for spelling are directly analogous to those for spoken forms. Figure 5 corresponds to Figure 1 with the inclusion of spelling (illustrated by box M) for town names (spelling may also be provided for surnames and forenames although, for simplicity, these mappings have not been shown in Figure 5). Translations "may be spelt" and "is primarily spelt" must be provided in addition to the spoken recognition translations.
If spellings are to be used during recognition and/or confirmation then, for all the routes mentioned above with reference to Figure 1, "Spelt" is substituted for "Pronounced" and the algorithms all still apply.
It should also be mentioned that spoken or spelt input or output is not essential, since the considerations concerning the offering and confirming still arise. For example, keypad input could be used, which has ambiguity problems owing to the allocation of more than one letter to each button of a telephone keypad. In this case a further vocabulary, of keypad input codes, is required, with "May be keyed ...." translations analogous to the pronunciation and spelling translations described above.
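The keypad ambiguity mentioned above can be illustrated with the standard telephone letter groupings: a keying translation maps each word to a digit code, and the inverse translation recovers every vocabulary word sharing that code. This sketch assumes the conventional keypad layout; the function names are invented for the example:

```python
# Standard telephone keypad letter groups create a many-to-one mapping,
# so a keyed code must be expanded back to every word it could spell.
KEYPAD = {c: d for d, letters in {
    "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ"}.items()
    for c in letters}

def may_be_keyed(word):
    """Word -> keypad digit code (the keying translation)."""
    return "".join(KEYPAD[c] for c in word.upper())

def matches(code, vocabulary):
    """Inverse translation: all vocabulary words a code may represent."""
    return [w for w in vocabulary if may_be_keyed(w) == code]

print(may_be_keyed("Dave"))                      # → '3283'
print(matches("3283", ["Dave", "Fave", "Mave"]))
# → ['Dave', 'Fave']
```

Here "Dave" and "Fave" collide on the code 3283, so keypad input needs the same disambiguation machinery as homophones in speech.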
The machine representations of the input vocabulary and the database may be generated according to the same technique; for instance, the database entries may be stored in text form and the input may also be in text form, with the machine representations of the database and the output being generated according to a different technique, e.g. a spoken output.
Confusion may arise if a user is presented with an announcement which includes a synonym of the actual word said by the user. For instance, say a user asks for "Dave Smith" and the system generates an output as follows: "Did you say David Smith?" In order to avoid this confusion, a check may be carried out to ensure that the word corresponding to the identified distinguishable database entry corresponds also to the word recognised by the input means.
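The check described above can be sketched as follows; the synonym table and function name are hypothetical, introduced only for illustration:

```python
def safe_confirmation_word(recognised_words, distinguishable_entry, synonyms):
    """Prefer announcing the form the user actually said, so the system
    does not confirm 'David' when the user said 'Dave'."""
    for word in synonyms.get(distinguishable_entry, [distinguishable_entry]):
        if word in recognised_words:
            return word
    return distinguishable_entry  # fall back to the stored database form

SYNONYMS = {"David": ["David", "Dave"]}  # hypothetical synonym table
print(safe_confirmation_word({"Dave", "Smith"}, "David", SYNONYMS))
# → 'Dave'
```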

Claims

1. A database access apparatus comprising:
(a) a database containing entries each comprising a plurality of fields which contain machine representations of items of information pertaining to the entry, the said representations forming a first vocabulary;
(b) announcement means responsive to machine representations falling within a second vocabulary of such representations to generate audio signals representing spoken announcements; (c) input means operable to receive signals and to produce machine representations thereof falling within a third vocabulary of such representations;
(d) translation means defining a relationship between the first vocabulary and the second vocabulary and between the first vocabulary and the third vocabulary; and (e) control means operable
(i) to generate, in accordance with the defined relationship, for each representation produced by the input means, one or more representations according to the first vocabulary; (ii) to identify database entries containing the generated representations; (iii) to examine each representation or combination of representations which is contained in a selected field or combination of fields of the identified entries to identify distinguishable one(s) of those representations or combinations, a distinguishable representation or combination being one which, when translated in accordance with the defined relationship into representations of the second vocabulary, differs from every other such distinguishable representation or combination when similarly translated; and (iv) to control the announcement means to generate an announcement including at least one word or combination of words which correspond(s) to one of the distinguishable representations or combinations.
2. Apparatus according to claim 1 wherein the control means is operable to control the announcement means to generate successive announcements, each of which includes at least one word or combination of words which correspond(s) to one of the distinguishable representations or combinations, the control means being operable to control the announcement means to output the announcements in sequential confidence order, the first announcement including at least one word or combination of words which correspond(s) to the most likely distinguishable representation or combination.
3. An apparatus according to claim 1 or 2 in which the control means is operable, in step (iv), for the or each distinguishable representation or combination, to generate, using the translation means, from the distinguishable representation or combination, one representation or combination in the second vocabulary and to transmit this to the announcement means.
4. An apparatus according to claim 1 or 2 in which the control means is operable, in step (iv), for the or each distinguishable representation or combination, to transmit to the announcement means one representation or combination in the second vocabulary which corresponds, in accordance with a relationship defined by the translation means, to the distinguishable representation or combination and which has already been generated in step (iii).
5. An apparatus according to any of claims 1 to 4 in which the control means is operable in step (iv) to generate an announcement requesting confirmation of the included word or combination and is further arranged, in operation:
(v) upon receipt of a confirmatory response, to generate from a representation or combination in the second vocabulary, which corresponds to the included word(s), one or more representations or combinations according to the first vocabulary and to identify the database entries or entry which contains such a representation or combination in the selected field(s).
6. An apparatus according to any one of the preceding claims in which the input means is a speech recogniser operable to receive audio signals.
7. An apparatus according to claim 6 in which the second and third vocabularies are identical.
8. An apparatus according to any preceding claim wherein the first and third vocabularies are identical.
9. An apparatus according to any preceding claim in which at least one of the selected field(s) is a field in which, in step (ii), a generated representation was found, and in which a word included at step (iv) is a word which corresponds to a representation generated by the input means.
10. Apparatus according to any preceding claim, further comprising an intermediate vocabulary, wherein the translation means defines the relationships between the first and the intermediate vocabulary; the second and the intermediate vocabulary; and the third and the intermediate vocabulary.
11. A method of accessing a database system, said database system comprising a database containing entries each comprising a plurality of fields which contain machine representations of items of information pertaining to the entry, the said representations forming a first vocabulary; announcement means responsive to machine representations falling within a second vocabulary of such representations to generate audio signals representing spoken announcements; and input means operable to receive signals and to produce machine representations falling within a third vocabulary of such representations; the method of accessing the database system comprising:
(i) generating, in accordance with a defined relationship between the first vocabulary and the third vocabulary, for each representation produced by the input means, one or more representations according to the first vocabulary;
(ii) identifying database entries containing the generated representations; (iii) examining each representation or combination of representations which is contained in a selected field or combination of fields of the identified entries to identify distinguishable one(s) of those representations or combinations, a distinguishable representation or combination being one which, when translated in accordance with the defined relationship into representations of the second vocabulary, differs from every other such distinguishable representation or combination when similarly translated; and (iv) controlling the announcement means to generate an announcement including at least one word or combination of words which correspond(s) to one of the distinguishable representations or combinations.
12. A method according to claim 11 further comprising, in step (iv), controlling the output means to output one or more announcement(s) in sequential confidence order, the first announcement including at least one word or combination of words which corresponds to the most likely distinguishable representation.
13. A method according to claim 11 or 12, further comprising:
(a) generating at least one announcement requiring a response;
(b) recognising the response(s);
(c) identifying database entries containing fields matching the recognised responses; (d) in the event that the number of such entries exceeds a predetermined limit, generating an output containing at least one word corresponding to a selected field of an identified entry for a positive or negative response; (e) upon receipt of a positive response, identifying database entries which contain fields matching the recognised responses and whose selected fields match the said word; and
(f) repeating steps (d) and (e) at least once.
14. A method according to claim 11 or 12 comprising:
(a) generating at least one announcement requiring a response; (b) recognising the response(s);
(c) identifying database entries containing fields matching the recognised response(s);
(d) in the event that the number of such entries is below or equal to a predetermined limit, generating an output presenting one or more of the entries each containing one or more of the field(s) of the matching entries, and exiting;
(e) in the event that the number of such entries exceeds a predetermined limit, for a particular field or selection of fields, examining a distinguishable representation or selection of representations, which is contained in the particular field or combination of fields of the identified entries; (f) in the event that the number of such distinguishable representations or combinations of representations is above a predetermined limit, repeating step (e), selecting another field or selection of fields, according to a pre-determined order, that has not already been considered, until no such fields remain to be considered; and
(g) generating at least one output requiring a response that has not already been requested or confirmed;
(h) recognising the response(s);
(i) repeating step (c) at least once; (j) in the event that the number of such distinguishable representations or combinations of representations is below or equal to a predetermined limit, generating an announcement containing at least one word corresponding to the selected field(s) of an identified entry for a positive or negative response;
(k) upon receipt of a positive response, identifying database entries which contain fields matching the recognised responses and whose selected fields match the said words or combination of words; and
(l) repeating steps (d) and (e) at least once;
(m) upon receipt of a negative response for all such word or words, exiting the database accessing method.
PCT/GB1997/000233 1996-01-31 1997-01-27 Database access WO1997028634A1 (en)

Priority Applications (7)

Application Number Priority Date Filing Date Title
NZ326441A NZ326441A (en) 1996-01-31 1997-01-27 Database access using speech recognition
JP9527399A JP2000504510A (en) 1996-01-31 1997-01-27 Database access
EP97901199A EP0878085B1 (en) 1996-01-31 1997-01-27 Database access
DE69729277T DE69729277T2 (en) 1996-01-31 1997-01-27 DATABASE ACCESS
AU36068/97A AU707248C (en) 1996-01-31 1997-01-27 Database access
CA002244116A CA2244116C (en) 1996-01-31 1997-01-27 Database access
NO983501A NO983501L (en) 1996-01-31 1998-07-30 Database access

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
GB9601925.2 1996-01-31
GBGB9601925.2A GB9601925D0 (en) 1996-01-31 1996-01-31 Database access
US08/659,526 US5778344A (en) 1996-01-31 1996-06-05 Database access using data field translations to find unique database entries

Publications (1)

Publication Number Publication Date
WO1997028634A1 true WO1997028634A1 (en) 1997-08-07

Family

ID=10787862

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB1997/000233 WO1997028634A1 (en) 1996-01-31 1997-01-27 Database access

Country Status (12)

Country Link
US (1) US5778344A (en)
EP (1) EP0878085B1 (en)
JP (1) JP2000504510A (en)
KR (1) KR19990082252A (en)
CN (1) CN1121777C (en)
CA (1) CA2244116C (en)
DE (1) DE69729277T2 (en)
GB (1) GB9601925D0 (en)
MX (1) MX9806168A (en)
NO (1) NO983501L (en)
NZ (1) NZ326441A (en)
WO (1) WO1997028634A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2362746A (en) * 2000-05-23 2001-11-28 Vocalis Ltd Data recognition and retrieval
WO2004036887A1 (en) 2002-10-16 2004-04-29 Koninklijke Philips Electronics N.V. Directory assistant method and apparatus
EP1481328A1 (en) * 2002-02-07 2004-12-01 SAP Aktiengesellschaft User interface and dynamic grammar in a multi-modal synchronization architecture
US7603291B2 (en) 2003-03-14 2009-10-13 Sap Aktiengesellschaft Multi-modal sales applications
US8372418B2 (en) 2007-07-20 2013-02-12 Bayer Innovation Gmbh Polymer composite film with biocide functionality
US8383549B2 (en) 2007-07-20 2013-02-26 Bayer Cropscience Lp Methods of increasing crop yield and controlling the growth of weeds using a polymer composite film
CN107967916A (en) * 2016-10-20 2018-04-27 谷歌有限责任公司 Determine voice relation

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6301560B1 (en) * 1998-01-05 2001-10-09 Microsoft Corporation Discrete speech recognition system with ballooning active grammar
US6629069B1 (en) 1998-07-21 2003-09-30 British Telecommunications A Public Limited Company Speech recognizer using database linking
US6185530B1 (en) 1998-08-14 2001-02-06 International Business Machines Corporation Apparatus and methods for identifying potential acoustic confusibility among words in a speech recognition system
US6269335B1 (en) * 1998-08-14 2001-07-31 International Business Machines Corporation Apparatus and methods for identifying homophones among words in a speech recognition system
US6192337B1 (en) 1998-08-14 2001-02-20 International Business Machines Corporation Apparatus and methods for rejecting confusible words during training associated with a speech recognition system
JP2002532763A (en) * 1998-12-17 2002-10-02 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Automatic inquiry system operated by voice
GB2347307B (en) * 1999-02-26 2003-08-13 Mitel Inc Dial by name feature for messaging system
US6587818B2 (en) * 1999-10-28 2003-07-01 International Business Machines Corporation System and method for resolving decoding ambiguity via dialog
US7013276B2 (en) * 2001-10-05 2006-03-14 Comverse, Inc. Method of assessing degree of acoustic confusability, and system therefor
DE602004011753T2 (en) * 2003-03-01 2009-02-05 Coifman, Robert E. Method and device for improving transcription accuracy in speech recognition
JP3890326B2 (en) * 2003-11-07 2007-03-07 キヤノン株式会社 Information processing apparatus, information processing method, recording medium, and program
US20060167920A1 (en) * 2005-01-25 2006-07-27 Listdex Corporation System and Method for Managing Large-Scale Databases
DE102014114845A1 (en) * 2014-10-14 2016-04-14 Deutsche Telekom Ag Method for interpreting automatic speech recognition

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1994014270A1 (en) * 1992-12-17 1994-06-23 Bell Atlantic Network Services, Inc. Mechanized directory assistance

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4831654A (en) * 1985-09-09 1989-05-16 Wang Laboratories, Inc. Apparatus for making and editing dictionary entries in a text to speech conversion system
GB8610809D0 (en) * 1986-05-02 1986-06-11 Smiths Industries Plc Speech recognition apparatus
US5027406A (en) * 1988-12-06 1991-06-25 Dragon Systems, Inc. Method for interactive speech recognition and training
AU631276B2 (en) * 1989-12-22 1992-11-19 Bull Hn Information Systems Inc. Name resolution in a directory database
US5251129A (en) * 1990-08-21 1993-10-05 General Electric Company Method for automated morphological analysis of word structure
US5204894A (en) * 1990-11-09 1993-04-20 Bell Atlantic Network Services, Inc. Personal electronic directory
US5454062A (en) * 1991-03-27 1995-09-26 Audio Navigation Systems, Inc. Method for recognizing spoken words
CA2088080C (en) * 1992-04-02 1997-10-07 Enrico Luigi Bocchieri Automatic speech recognizer
US5623578A (en) * 1993-10-28 1997-04-22 Lucent Technologies Inc. Speech recognition system allows new vocabulary words to be added without requiring spoken samples of the words

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1994014270A1 (en) * 1992-12-17 1994-06-23 Bell Atlantic Network Services, Inc. Mechanized directory assistance

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
D.J.ATTWATER ET AL: "ISSUES IN LARGE-VOCABULARY INTERACTIVE SPEECH SYSTEMS", BT TECHNOLOGY JOURNAL, vol. 14, no. 1, January 1996 (1996-01-01), IPSWICH SUFFOLK (GB), pages 177 - 186, XP000579339 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2362746A (en) * 2000-05-23 2001-11-28 Vocalis Ltd Data recognition and retrieval
EP1481328A1 (en) * 2002-02-07 2004-12-01 SAP Aktiengesellschaft User interface and dynamic grammar in a multi-modal synchronization architecture
EP1481328A4 (en) * 2002-02-07 2005-07-20 Sap Ag User interface and dynamic grammar in a multi-modal synchronization architecture
US7177814B2 (en) 2002-02-07 2007-02-13 Sap Aktiengesellschaft Dynamic grammar for voice-enabled applications
WO2004036887A1 (en) 2002-10-16 2004-04-29 Koninklijke Philips Electronics N.V. Directory assistant method and apparatus
US7603291B2 (en) 2003-03-14 2009-10-13 Sap Aktiengesellschaft Multi-modal sales applications
US8372418B2 (en) 2007-07-20 2013-02-12 Bayer Innovation Gmbh Polymer composite film with biocide functionality
US8383549B2 (en) 2007-07-20 2013-02-26 Bayer Cropscience Lp Methods of increasing crop yield and controlling the growth of weeds using a polymer composite film
CN107967916A (en) * 2016-10-20 2018-04-27 谷歌有限责任公司 Determine voice relation
CN107967916B (en) * 2016-10-20 2022-03-11 谷歌有限责任公司 Determining phonetic relationships
US11450313B2 (en) 2016-10-20 2022-09-20 Google Llc Determining phonetic relationships

Also Published As

Publication number Publication date
KR19990082252A (en) 1999-11-25
NO983501L (en) 1998-09-30
DE69729277T2 (en) 2005-06-02
CA2244116C (en) 2001-11-27
CN1121777C (en) 2003-09-17
NZ326441A (en) 1999-09-29
EP0878085A1 (en) 1998-11-18
CA2244116A1 (en) 1997-08-07
MX9806168A (en) 1998-10-31
US5778344A (en) 1998-07-07
DE69729277D1 (en) 2004-07-01
JP2000504510A (en) 2000-04-11
CN1210643A (en) 1999-03-10
EP0878085B1 (en) 2004-05-26
AU3606897A (en) 1997-08-22
NO983501D0 (en) 1998-07-30
AU707248B2 (en) 1999-07-08
GB9601925D0 (en) 1996-04-03

Similar Documents

Publication Publication Date Title
EP0878085B1 (en) Database access
US6996531B2 (en) Automated database assistance using a telephone for a speech based or text based multimedia communication mode
KR100383352B1 (en) Voice-operated service
US5987414A (en) Method and apparatus for selecting a vocabulary sub-set from a speech recognition dictionary for use in real time automated directory assistance
US6324513B1 (en) Spoken dialog system capable of performing natural interactive access
US8185539B1 (en) Web site or directory search using speech recognition of letters
JPH10229449A (en) Method and device for automatically generating vocabulary recognized talk out of registered item of telephone directory, and computer readable recording medium recording program element ordering computer to generate vocabulary recognized talk used in talk recognition system
JP2000032140A (en) Robot hotel employee using speech recognition
WO2009149340A1 (en) A system and method utilizing voice search to locate a procuct in stores from a phone
US7020612B2 (en) Facility retrieval apparatus and method
US6629069B1 (en) Speech recognizer using database linking
AU707248C (en) Database access
JP3316826B2 (en) Information guidance method and device
GB2304957A (en) Voice-dialog system for automated output of information
CA2440463C (en) Speech recognition
Williams Dialogue Management in a mixed-initiative, cooperative, spoken language system
JPH09114493A (en) Interaction controller
JP3576511B2 (en) Voice interaction device
JPH10105190A (en) Method performing inquiry to data base
KR930000809B1 (en) Language translation system
JP2003029784A (en) Method for determining entry of database
JPH09288495A (en) Button specification and voice recognition jointly using type input method and device
KR0173914B1 (en) Name Search Method in Voice Dialing System
JP2001013987A (en) Method and apparatus for speech controller having improved phrase memory, use, conversion, transfer and recognition

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 97191984.4

Country of ref document: CN

AK Designated states

Kind code of ref document: A1

Designated state(s): AU CA CN JP KR MX NO NZ SG US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 1997901199

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 326441

Country of ref document: NZ

ENP Entry into the national phase

Ref document number: 2244116

Country of ref document: CA

Ref document number: 2244116

Country of ref document: CA

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 1019980705981

Country of ref document: KR

WWE Wipo information: entry into national phase

Ref document number: PA/a/1998/006168

Country of ref document: MX

WWP Wipo information: published in national office

Ref document number: 1997901199

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1019980705981

Country of ref document: KR

WWG Wipo information: grant in national office

Ref document number: 1997901199

Country of ref document: EP

WWR Wipo information: refused in national office

Ref document number: 1019980705981

Country of ref document: KR