|Publication number||US7712020 B2|
|Application number||US 10/104,430|
|Publication date||May 4, 2010|
|Filing date||Mar 22, 2002|
|Priority date||Mar 22, 2002|
|Also published as||US20030182124, WO2003083641A1|
|Inventors||Emdadur R. Khan|
|Original Assignee||Khan Emdadur R|
1. Field of the Invention
The present invention relates to a method for accessing the Internet, and more particularly to accessing and navigating the Internet through the use of an audio interface, e.g., via standard POTS (plain old telephone service), with vocal and aural navigation, selection and rendering of the Internet content.
2. Description of the Related Art
The number of Internet access methods has increased with the rapid growth of the Internet. World Wide Web (WWW) “surfing” has likewise increased in popularity. Surfing or “Internet surfing” is a term used by analogy to describe the ease with which a user can ride the waves of information flowing around the Internet to find desired or useful information. The term surfing as used in this specification is intended to encompass all of the possible activities a user can participate in using the Internet. Beyond looking up a particular Internet resource or executing a search, surfing as used herein is intended to include playing video games, chatting with other users, composing web pages, reading email, applying for an online mortgage, trading stocks, paying taxes to the Internal Revenue Service, transferring funds via online banking, purchasing concert or airline tickets, etc. Various kinds of web browsers have been developed to facilitate Internet access and allow users to more easily surf the Internet. In a conventional web interface, a web browser (e.g., Netscape Navigator® which is part of Netscape Communicator® produced by Netscape Communications Corporation of Mountain View, Calif.) visually displays the contents of web pages and the user interacts with the browser visually via mouse clicking and keyboard commands. Thus, web surfing using conventional web browsers requires a computer or some other Internet access appliance such as a WB-2001 WebTV® Plus Receiver produced by Mitsubishi Digital Electronics America, Inc., of Irvine, Calif.
Recently, some web browsers have added a voice-based web interface in a desktop environment. In such a system, a user can verbally control the visual web browser and thus surf the Internet. The web data is read to the user by the browser. However, this method of Internet access is not completely controllable by voice commands alone. Users typically must use a mouse or a keyboard to input commands and the browser only reads the parts of the web page selected using the mouse or the keyboard. In other words, existing browsers that do allow some degree of voice control still must rely on the user and visual displays to operate. In addition, these browsers require that the web data to be read aloud must be formatted in a specific way (e.g., the shareware Talker Plug-In written by Matt Pallakoff and produced by MVP Solutions Inc. of Mountain View, Calif., can be used with Netscape Commerce Server and uses files formatted in accordance with a file format identified by the extension “talk”).
Some commercially available products (e.g., Dragon Dictate® from Dragon Systems Inc. of Newton, Mass.) can read a web page as displayed on a conventional browser in the standard web data format, however, the particular portion of the page to be read must be selected by the user either via mouse or voice commands. A critical limitation of these systems is that they require the user to visually examine the web data and make a selection before any web data to speech conversion can be made. This limitation also exists when using these systems to surf the web. The user needs to look at the browser and visually identify the desired Uniform Resource Locator (URL), or use a predetermined stored list of URLs, and then select the desired URL by voice commands.
For reasons of increased mobility, it would be more desirable to be able to access and surf the Internet without being required to visually perceive the web data. Furthermore, it would be desirable to allow for “audio-only” access to the Internet such that authors of web pages need not provide web data in specialized formats for audio playback. However, the Internet is primarily a visual medium with information designed to be accessed visually, i.e., by looking at it. Accordingly, the information is displayed with visual access in mind, resulting in use of columns, tables, frames, color, bolded text, various font sizes, variable positioning of text and images, popup windows and so on. During observation, the human brain processes such information and selects the content that the user is interested in reading. When such information is accessed by voice, normally all of the associated text is extracted after filtering out graphics, banners, images, HTML and XML tags, and other unwanted nuances not useful to audio playback. Listening to such content may require much time and thereby losing the interest of the user. Also, selecting part of the text or navigating within a large amount of text designed with visual access in mind is very difficult.
What would be helpful is an appropriate way of rendering the Internet content such that a relatively small amount of text is produced, quite suitable for audio playback, for facilitating further navigation and selection of content while still accurately representing the source data, i.e., the visual web page.
Additionally, some further important issues relating to accessing the Internet by voice include inter- and intra-page navigation, finding the correct as well as relevant contents on a linked page, and assembling the right contents from a linked page.
In accordance with the presently claimed invention, selection and rendering of Internet content is facilitated when accessing the Internet using vocal and aural navigation techniques. Visual Internet content is selected and rendered to produce information in amounts appropriate for representation in concise aural form to facilitate vocal selection and navigation based upon such aural representation. Such rendering of the visual Internet content is done using the normal visual characteristics of such content, including text size and length, color, presence and density of Internet links, and overall density of the content. (It should be noted that the terms “Internet” and “web” are intended to be interchangeable in that information accessed via the Internet can include information other than that found on the World Wide Web per se.)
In accordance with one embodiment of the presently claimed invention, a method of facilitating access to the Internet involving vocal and aural navigation, selection and rendering of Internet content includes the steps of:
establishing a bi-directional voice communication link between an audio Internet access provider and a user;
receiving, via said bi-directional voice communication link, a voice command signal corresponding to an Internet surfing command;
locating an Internet page corresponding to said Internet surfing command;
identifying one or more highlights associated with said Internet page;
transmitting, via said bi-directional voice communication link, a voice response signal corresponding to an Internet data signal representing a recitation of said one or more highlights;
receiving, via said bi-directional voice communication link, a voice selection signal identifying a selected one of said recited one or more highlights;
locating Internet content related to said selected one of said recited one or more highlights; and
transmitting, via said bi-directional voice communication link, a voice content signal corresponding to a selected portion of said related Internet content.
In accordance with another embodiment of the presently claimed invention, a method of accessing the Internet involving vocal and aural navigation, selection and rendering of Internet content includes the steps of:
establishing a bi-directional voice communication link between an audio Internet access provider and a user;
initiating access to an Internet page corresponding to an Internet surfing command by transmitting, via said bi-directional voice communication link, a voice command signal corresponding to said Internet surfing command;
receiving, via said bi-directional voice communication link, a voice response signal corresponding to an Internet data signal representing a recitation of one or more highlights identified as being associated with said Internet page;
initiating access to Internet content related to a selected one of said recited one or more highlights by transmitting, via said bi-directional voice communication link, a voice selection signal identifying said selected one of said recited one or more highlights; and
receiving, via said bi-directional voice communication link, a voice content signal corresponding to a selected portion of said related Internet content.
The present invention is preferably embodied as a computer program developed using an object oriented language that allows the modeling of complex systems with modular objects to create abstractions that are representative of real world, physical objects and their interrelationships. However, it will be readily understood by one of ordinary skill in the art that the subject invention can be implemented in many different ways using a wide range of programming techniques as well as general purpose hardware systems or dedicated controllers.
The present invention is used in accessing the Internet using only voice and audio instead of conventional visual inputs and displays. A POTS (plain old telephone service) can be used to access the Internet by calling an “audio” ISP (Internet service provider). An audio ISP includes a conventional data ISP that is buffered by an apparatus capable of performing a selective translation function using artificial intelligence methods. This selective translation function can be performed by an apparatus called an Intelligent Agent (IA) as described in more detail below. The IA translates Internet data into spoken language and translates spoken data and commands into Internet web surfing commands.
An audio ISP uses a standard telephone (POTS, digital or analog cellular telephone, PCS telephone, satellite telephone, etc.) instead of a modem, telephone line and a direct connection to a conventional data ISP. An audio ISP uses TAPI (telephony application programming interface) or a similar protocol to connect a standard telephone to a computer or other Internet appliance. The IA takes information from the caller in the form of voice commands, accesses the Internet, retrieves the desired information, and reads it back to the caller using voice. Using voice input and output signals only, the caller can surf the net by interacting with the IA. The IA eliminates the need for a conventional visual web browser.
Turning now to
The IA 12 is configurable to provide a user-selectable level of detail in the audio-only version of a retrieved web page. Thus, for example, a web page containing a list of matching URLs generated by a search engine in response to a query could be read to the user in complete detail or in summary form.
Referring now to
The TPU 23 communicates with the user via the telephone 10 and the Internet 16 using signals 18 and 20. The users' telephone calls are answered by the answer phone unit (APU) 24 which is preferably embodied as a telephone card or modem and is part of the TPU 23. The TPU 23 communicates with the user via the telephone 10 using, for example, the TAPI standard, a protocol developed by Microsoft Corporation of Redmond, Wash., that is used in connecting a telephone with a computer over a standard telephone line (see Microsoft Corp., White Paper, “Microsoft Windows NT Server; The Microsoft Windows Telephony Platform: Using TAPI 2.0 and Windows to Create the Next Generation of Computer-Telephony Integration,” p. 34, 1996, incorporated herein by reference). In one embodiment, the TPU 23 communicates with the Internet 16 via the conventional data ISP 14 using: a modem and a telephone line; a cable modem and a cable line; or an Ethernet connection as is known in the art. Thus, the IA 12 integrates an audio ISP with a conventional data ISP using a modem or Ethernet connection. This form of Intelligent Agent operates as a true “voice browser” in that ordinary Internet content can be accessed and rendered into audio form for reading back to the user, as opposed to a conventional “voice browser” that can only read back content which has been originally written or rewritten in some form of voice-enabled language, such as Voice Extensible Markup Language (VXML).
The UU 21 is preferably implemented as a programmed computer processor including the normally associated memory and interface ports as is well known in the art. The UU 21 is operative to determine what part of a web page is graphics, what part is a dynamic advertisement, what part is an interactive program, which text is a link to a URL, etc., and makes decisions accordingly. The UU 21 is also equipped with means to understand a user's commands. The UU 21 uses a language processing engine (LPE) 29 to interpret multiple words received from the user. The UU 21 uses an artificial intelligence (AI) unit 28 that includes one or more expert systems, probabilistic reasoning systems, neural networks, fuzzy logic systems, genetic algorithm systems, and combinations of these systems and other systems based on other AI technologies (e.g., soft computing systems). In order to understand the users' commands, the UU 21 uses the speech recognition engine (SRE) 27 to convert users' commands to text. Before sending the web page text to the user via the telephone 10, the UU 21 selectively converts text to speech using the text-to-speech (TTS) unit 25. The UU 21 allows the user to interact with Internet web pages by creating a complete audio representation of the web pages. Thus, if a web page includes a dynamic program such as a Java program to calculate a mortgage payment for example, the UU 21 would execute the program within the IA 12 and describe the display that would have been generated by a conventional visual browser. The IA 12 can also use the UU 21 to identify and interpret audio formatted data, including audio hypertext markup language (HTML) tags.
The UU 21 also includes a client emulation unit (CEU) 30 that allows the UU 21 to execute web client type programs such as Java and Java script programs that would normally execute on a user's client computer. The CEU 30 can spawn a virtual machine (e.g., a Microsoft Windows NT window), execute the client program to generate the associated displays, and pass the display data to the UU 21 to be translated and relayed to the user as described above. In this way, users are able to execute and interact with web pages that include executable programs.
Turning now to
In one embodiment, the IA 12 is implemented in software and executed on a server computer. It is important to note that a user does not need a conventional visual browser because the IA 12 effectively provides an audio ISP. However, the audio ISP can be implemented using a conventional visual web browser in conjunction with the IA 12. Additionally, it should be understood that the IA 12 and ISP 14 can reside on the same computer. Alternatively, an audio ISP can use other means of accessing and retrieving web pages such as the Win32 Internet (WinInet) Application Programming Interface (API) as developed by Microsoft Corporation, described at http://pbs.mcp.com/ebooks/1575211173/ch17.htm, printed on Jun. 22, 1999, and hereby incorporated herein by reference. One of ordinary skill in the art would further understand that the IA 12 can also be used to access, manage, compose, and send email. In other words, a user can send or receive email, as well as perform other tasks such as searching on the Internet, using voice only working through the IA 12. Thus, a user can surf the web and can exploit all of the capabilities of the Internet, simply through human voice commands and computer-generated voice responses instead of using a visual browser running on a computer or other Internet appliance.
Rendering information that is visual in nature into an audio format is difficult. For information displayed visually, it is the brain of the user that quickly selects and correctly processes the information. Visual processing is inherently parallel, while audio processing is inherently serial. Thus, the content to be provided in audio form needs to be precise and short. It is not sufficient to simply parse the content from HTML or XML to text and then to audio. Determining and filtering unnecessary text is important for audio rendering. Different web sites use different styles in displaying visual information. To develop rules that will handle all possible cases of visual display style and still provide good audio content is very challenging.
Providing a voice portal that can convey a reasonable representation of Internet content presents many challenges. Navigation and selection by voice can be attempted in many ways. If a voice menu based approach is used, the number of menus and steps to follow will generally be so large as to be impractical. If the content is searched in response to questions by the user, the number of possible questions would also be so large as to be impractical. Plus, many problems would exist concerning voice recognition (due to the large vocabulary needed) and a need for an intelligent database that can be reliably accessed for retrieving the correct information.
For purposes of the present invention, various algorithms are used by the Intelligent Agent IA to do rendering, navigation and selection of the desired content. The subject algorithms use the information already available on the visual web pages, including elements such as columns, tables, frames, colors, bolded text, font sizes, positioning and popup windows. As discussed in more detail below, “page highlights” that provide important information or highlights of the accessed page corresponding to the URL are used. A small number of such highlights (e.g., three) are read at a time, with users given the opportunity to select any one of the highlights or topics at a time. Once a highlight has been selected, the content associated with that highlight is read to the user. An underlying assumption is that the related content exists somewhere on either the current page or a linked page, perhaps a level or a few levels down.
One example is where the related content is on the same page as the selected highlight. In that case, the Intelligent Agent IA reads the selected content from the current page (discussed in more detail below).
Another example is where the selected highlight is a link. In that case, the Intelligent Agent IA accesses the linked page to find the relevant content and read it to the user (discussed in more detail below).
Still another example is where multiple related content exists on the linked page. In that case, the Intelligent Agent IA provides for fine tuning the selection, after which the selected content is read to the user (discussed in more detail below).
Yet another example is where multiple related content exists on the linked page, but none of it can be easily identified and selected. In that case, the Intelligent Agent IA either provides such related content as next level highlights or reads them to the user in some logical order based on content density or semantic matching (discussed in more detail below).
Page highlights are determined using techniques similar to those that one would use to visually find a highlight on a page by looking at the page contents. Thus, it is based on page elements such as (without limitation) font sizes, links, colors, sizes of the content, language understanding, and so on. The Intelligent Agent IA examines the HTML and/or XML tags and such page elements and determines the highlights. Further techniques are used to determine which highlights are more important and hence should be read to the user first. One example of a basic algorithm to determine highlights is as follows:
If the content has the largest font size (the largest font on the current page, but
not part of a banner):
    this is highlight #1.
    If this content is a link, then related content on the linked page
    will be read when this highlight is selected.
    Otherwise, associated content on the current page will be read when this
    highlight is selected. In this case, association is determined by the next
    paragraph, table, frame, etc., that is directly related to this highlight.
    If there is more than one content with the largest size and none of
    them are links, then priority is assigned to the highlight with the
    largest content associated with it. If they are all links, the one
    with the highest number of words has the highest priority. If they are a
    mixture of links and non-links, then priority is assigned to the links.
If the content is flashing but not part of a banner:
    this is highlight #2.
    If there is more than one flashing content, the priority is decided
    based on the same algorithm outlined above for the largest font size.
Use the second largest font, followed by the third largest font, etc., to
determine the priority of subsequent highlights. When font sizes become the
same, priority is determined using the same algorithm as for the largest font
size, except that content in bold has the higher priority.
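The prioritization rules above can be sketched in code. The following Python sketch is illustrative only and is not part of the patent text; the `Block` type and all of its attribute names are hypothetical stand-ins for the output of an upstream HTML parser.

```python
# Illustrative sketch: rank the parsed content blocks of a page into
# "highlights" -- largest font first, flashing content second, then
# descending font sizes, with links, word counts, and bolding breaking
# ties as described above. All names here are assumptions.
from dataclasses import dataclass

@dataclass
class Block:
    text: str
    font_size: int
    word_count: int
    is_link: bool = False
    is_banner: bool = False
    is_flashing: bool = False
    is_bold: bool = False

def rank_highlights(blocks):
    """Return non-banner blocks ordered by highlight priority."""
    candidates = [b for b in blocks if not b.is_banner]
    if not candidates:
        return []
    max_font = max(b.font_size for b in candidates)
    def priority(b):
        return (
            b.font_size == max_font,  # highlight #1: largest font on the page
            b.is_flashing,            # highlight #2: flashing (non-banner) content
            b.font_size,              # then second-largest font, third, ...
            b.is_link,                # mixed groups: links take priority
            b.word_count,             # among links: most words wins
            b.is_bold,                # equal fonts: bold has higher priority
        )
    return sorted(candidates, key=priority, reverse=True)
```

For instance, a page whose largest non-banner text is a headline would yield that headline as highlight #1, with any flashing (non-banner) item read next.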
It will be understood that variations of these techniques are possible. For example, flashing content may be treated with the highest instead of second highest priority. The goal is to use a technique that closely represents how a human user would select highlights when examining a visual web site. Also, if desired, banner advertisements can be retained as options for selection by the user.
The next set of highlights can be selected using a technique similar to that as outlined above. Clearly, the highlight identification techniques discussed herein provide important information in a logical manner that a user would normally use when observing a web page visually. Also provided are good navigation tools to access the next page for obtaining only relevant contents from the linked page (discussed in more detail below). Thus, such techniques “render” information from a visual web page into an audio format that is concise and pleasant for listening by the user.
Apart from using highlights, rendering can also be done by providing key words of a page and then using queries. In general, queries should include one or more of the key words. Queries can be a simple sentence or just a word or a few words. Using the word matching and content and link density analyses discussed in more detail below with the key words, appropriate related content can be selected.
The user may already know a few key words associated with a particular web site and may simply try using such key words without listening to any of the key words read from the page. Alternatively, a simple key word may not be found in a page but a user still can say other word(s) as part of the query, and if there is a match the relevant content can then be read out. If some confusion arises (e.g., multiple matches), the Intelligent Agent IA will ask the user more questions to minimize ambiguity and improve selection of the relevant content. If there is no match for the word(s) asked, semantic analysis and language understanding can be used to find and select relevant contents. For example, the user may say “U.S. President” but the page may not contain this phrase or term. Instead, the page may have the name of one U.S. President (e.g., Clinton), and so the language understanding unit will match this with the query and will read back the content associated with it. If the page contains “Clinton” in multiple non-associated paragraphs, e.g., not next to each other or under separate topics, the Intelligent Agent IA will read the first sentence of each topic and ask the user which content he or she would like to hear.
Depending on the level of complexity, related contents are selected based on a variety of approaches that include: parsing and word matching; analysis of content density; and analysis of link density. For parsing and word matching, attempts are made to match words in the label of the highlight with words in the label of the highlights on the linked page. After a match is found, the content associated with the match is selected. Association can be based (without limitation) on frames, tables, paragraphs, etc. If multiple associations are found, then the most important association is selected first. Importance of association can be determined based upon semantic meaning, language understanding and processing, or simpler elements such as paragraph sizes. To save on the amount of computation needed, matching for all words in a sentence is not usually necessary. The relevant contents can often be found after matching a few words, since the page may have only one instance of the selected words in the desired sequence. If similar sequences of words are found more than once, contents can be read to the user based upon the priority as determined by the size of the paragraphs associated with such matches.
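The parsing and word-matching step described above can be sketched as follows. This is an illustrative reading only, not the patent's implementation: each candidate section of the linked page is represented as a hypothetical (label, body) pair, labels sharing words with the selected highlight's label are matched, and larger associated paragraphs break ties, as the text suggests.

```python
# Illustrative sketch: match words from the selected highlight's label
# against candidate sections of a linked page. The (label, body)
# representation of a section is an assumption.
def find_related_content(highlight_label, sections):
    query = set(highlight_label.lower().split())
    matches = []
    for label, body in sections:
        overlap = len(query & set(label.lower().split()))
        if overlap:
            # more shared words first; bigger paragraphs break ties
            matches.append((overlap, len(body.split()), body))
    if not matches:
        return None  # caller falls back to density analysis
    matches.sort(reverse=True)
    return matches[0][2]
```

Consistent with the text, matching only a few words usually suffices, since a page will often contain only one instance of the selected words.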
If no word matches are found, then the page is tested for content density and link density. If the content density is high, as compared to the link density, a key body portion of the content is identified and selected. Key body portions can be identified (without limitation) by length of contents, font sizes, colors, tables, frames, etc. Conversely, if the link density is high, as compared to the content density, then the highlight of the page is determined and presented so that user can link down to the next level to find the desired content.
Content density is determined by counting the total number of words (or letters or characters as appropriate, e.g., for Chinese or Japanese language pages), without considering links, divided by the total number of words (or letters or characters) while considering both links and non-links.
Link density is determined by either counting the total number of words (or letters or characters as appropriate, e.g., for Chinese or Japanese language pages) in the links, or counting the total number of links and dividing by the total number of words (or letters or characters) while considering both links and non-links.
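The two density measures can be written directly from the definitions above. In this sketch the (link_words, nonlink_words) counts are assumed to come from a prior parse of the page, and the first of the two link-density variants (word counts within links) is used; for Chinese or Japanese pages the same ratios would be computed over letters or characters instead of words.

```python
# Content density: non-link words over all words (links and non-links).
# Link density (first variant above): link words over all words.
def content_density(link_words, nonlink_words):
    total = link_words + nonlink_words
    return nonlink_words / total if total else 0.0

def link_density(link_words, nonlink_words):
    total = link_words + nonlink_words
    return link_words / total if total else 0.0
```

A page with 25 link words and 75 non-link words would thus have a content density of 0.75 and a link density of 0.25, marking it as content-rich rather than link-rich.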
If, after performing the foregoing, good content is still not found, more computation-intensive approaches based upon semantic analysis and language processing or understanding using context information can be used to find more relevant content. Learning algorithms can be used to improve the semantic analysis and language understanding. With much improved language understanding, it will also be possible to make a summary of long paragraphs or contents. In such cases, the key concepts or statements in the first (and sometimes the second) and last paragraphs are noted. Contents with similar meaning (either explicit or implicit) are gathered and duplications are removed, resulting in a summary. This is just one example; other language understanding techniques based upon “summary” computations can also be used.
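One crude, illustrative reading of this summary technique is sketched below; it is an assumption, not the patent's method. It keeps the lead sentence of the first and last paragraphs and drops duplicated meaning, here approximated simply by identical text rather than true semantic matching.

```python
# Illustrative sketch: summarize by noting the key statements of the
# first and last paragraphs and removing duplications. Real semantic
# matching would be far more sophisticated than the text comparison here.
def summarize(paragraphs):
    if not paragraphs:
        return ""
    picks = [paragraphs[0]]
    if len(paragraphs) > 1:
        picks.append(paragraphs[-1])
    seen, sentences = set(), []
    for p in picks:
        lead = p.split(".")[0].strip()  # first sentence of the paragraph
        if lead and lead.lower() not in seen:
            seen.add(lead.lower())
            sentences.append(lead)
    return ". ".join(sentences) + "." if sentences else ""
```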
In the event that related content is still not found, and the page is not a link rich page (i.e., the density of links within the page is low), the entire section or page can then simply be read to the user.
Following the initial accessing 81 of a web page, a determination 82 is made as to whether any highlights exist on that page. As discussed above, page elements such as text font sizes, links, colors, amount of text, and so on are examined to make this determination 82. If it is determined that no highlights exist, all of the contents of the page are recited 83.
On the other hand, if it is determined that highlights do exist, selected highlights are recited 84. For example, if a number of highlights exist, the first three or four highlights can be recited to solicit feedback or commands from the user as to which highlight is to be selected for further processing. If no highlight is selected 85, then additional highlights are recited 84 for further selection opportunities for the user. When a highlight is selected 85, then a determination 86 is made as to whether the related content associated with the selected highlight is on the current web page. If not, then the linked page identified by the selected highlight is accessed 87. Following this, and also if the related content is on the current web page, a determination 88 is then made as to whether there are one or more word matches between the selected highlight and any portion of the related content.
If there is such a word match, then a determination 92 (
However, if the word match determination 88 (
If, however, the determination 89 (
If, however, it is determined that the link density is not high, then a determination 100 (
However, if paragraphs do exist, then summaries of the paragraphs are generated 102, following which a semantic matching 103 is performed upon the summaries. Then, according to the semantic meaning of the contents, such contents are placed into an appropriate order 104, following which the selected contents are then recited in order 95.
If, however, the original determination 100 finds that the contents are divisible into groups, then a determination 105 is made as to the semantic meaning, with appropriate weighted scores assigned. Following that, a determination 106 is made as to whether such weighted scores are close in their respective values. If the values are not close, then the contents are ordered 107 according to their weighted scores, and the selected contents are recited in order 95.
If, however, the weighted scores are close in values, then a determination 108 is made as to the density of the text for each group. Following that, the contents are ordered 109 according to their respective text densities. Finally, those selected contents are then recited in order 95.
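The ordering logic of these last steps can be sketched as follows. This is an illustrative assumption, not the claimed implementation: each content group carries a hypothetical semantic weighted score and a text density, and the closeness tolerance is an arbitrary choice for the sketch.

```python
# Illustrative sketch: order content groups by semantic weighted score;
# when the scores are close in value, fall back to text density, as in
# the determinations described above.
def order_groups(groups, tol=0.05):
    """groups: list of (weighted_score, text_density, content) tuples."""
    scores = [score for score, _, _ in groups]
    if max(scores) - min(scores) <= tol:
        ordered = sorted(groups, key=lambda g: g[1], reverse=True)  # by density
    else:
        ordered = sorted(groups, key=lambda g: g[0], reverse=True)  # by score
    return [content for _, _, content in ordered]
```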
Based upon the foregoing discussion, it will be recognized that all of the approaches and techniques discussed above are also applicable for languages other than English. Further, the selected contents can be converted into other languages in real time. For example, a web site written in English can be accessed by saying the name of the web site in Japanese and then listening to the selected content in Japanese by converting the English content into Japanese in real time.
While the method and apparatus of the present invention have been described in terms of its presently preferred and alternate embodiments, those skilled in the art will recognize that the present invention may be practiced with modification and alteration within the spirit and scope of the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. Further, even though only certain embodiments have been described in detail, those having ordinary skill in the art will certainly understand that many modifications are possible without departing from the teachings thereof. All such modifications are intended to be encompassed within the following claims.
|US6823311||Feb 8, 2001||Nov 23, 2004||Fujitsu Limited||Data processing system for vocalizing web content|
|US6904408 *||Oct 19, 2000||Jun 7, 2005||Mccarthy John||Bionet method, system and personalized web content manager responsive to browser viewers' psychological preferences, behavioral responses and physiological stress indicators|
|US20010053987||Jun 14, 2001||Dec 20, 2001||Siemens Aktiengesellschaft||Tele-health information system|
|US20040006476 *||Jul 2, 2003||Jan 8, 2004||Leo Chiu||Behavioral adaptation engine for discerning behavioral characteristics of callers interacting with an VXML-compliant voice application|
|US20040205614 *||Aug 9, 2001||Oct 14, 2004||Voxera Corporation||System and method for dynamically translating HTML to VoiceXML intelligently|
|US20060010386||Sep 9, 2005||Jan 12, 2006||Khan Emdadur R||Microbrowser using voice internet rendering|
|US20070156761||Feb 15, 2007||Jul 5, 2007||Smith Julius O Iii||Method and apparatus for facilitating use of hypertext links on the World Wide Web|
|1||*||Kemble, K., Voice-Enabling Your Web Sites, IBM Developer Works Website, Jun. 30, 2001.|
|2||Microsoft Corp., White Paper, "Microsoft Windows NT Server; The Microsoft Windows Telephony Platform: Using TAPI 2.0 and Windows to Create the Next Generation of Computer-Telephony Integration." 1996.|
|3||USPTO; International Search Report PCT/US/2002/018695; Dec. 30, 2002; 2 pages.|
|4||*||VoiceXML Forum, Version 0.9, pp. 1-63, Aug. 17, 1999.|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US7873900||Sep 9, 2005||Jan 18, 2011||Inet Spch Property Hldg., Limited Liability Company||Ordering internet voice content according to content density and semantic matching|
|US8117536 *||Aug 8, 2008||Feb 14, 2012||Hong Fu Jin Precision Industry (Shenzhen) Co., Ltd.||System and method for controlling downloading web pages|
|US8504906 *||Sep 8, 2011||Aug 6, 2013||Amazon Technologies, Inc.||Sending selected text and corresponding media content|
|US9141712 *||May 13, 2011||Sep 22, 2015||Neov Co., Ltd.||Sequential website moving system using voice guide message|
|US20060010386 *||Sep 9, 2005||Jan 12, 2006||Khan Emdadur R||Microbrowser using voice internet rendering|
|US20090044102 *||Aug 8, 2008||Feb 12, 2009||Hong Fu Jin Precision Industry (Shenzhen) Co., Ltd.||System and method for controlling downloading web pages|
|US20130080877 *||May 13, 2011||Mar 28, 2013||Soo-Hyun Kim||Sequential website moving system using voice guide message|
|U.S. Classification||715/205, 379/88.17|
|International Classification||G10L15/22, G06F3/14, G10L21/00, G06F17/20|
|Feb 26, 2008||AS||Assignment|
Owner name: INTERNETSPEECH, INC., CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KHAN, EMDADUR R;REEL/FRAME:020564/0119
Effective date: 20070827
Owner name: INET SPCH PROPERTY HLDG, LIMITED LIABILITY COMPANY
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERNETSPEECH, INC.;REEL/FRAME:020564/0144
Effective date: 20070917
|Oct 5, 2010||CC||Certificate of correction|
|Oct 11, 2013||FPAY||Fee payment|
Year of fee payment: 4
|Oct 28, 2015||AS||Assignment|
Owner name: S. AQUA SEMICONDUCTOR, LLC, DELAWARE
Free format text: MERGER;ASSIGNOR:INET SPCH PROPERTY HLDG, LIMITED LIABILITY COMPANY;REEL/FRAME:036902/0953
Effective date: 20150812