|Publication number||US6732142 B1|
|Application number||US 09/490,747|
|Publication date||May 4, 2004|
|Filing date||Jan 25, 2000|
|Priority date||Jan 25, 2000|
|Inventors||Cary Lee Bates, Paul Reuben Day, John Matthew Santosuosso|
|Original Assignee||International Business Machines Corporation|
The present application is related to commonly assigned application Ser. No. 09/660,661, to Cary L. Bates, et al., entitled “Web Page Formatting for Audible Presentation” now abandoned, filed on the same date as the present application, which is herein incorporated by reference.
The present invention relates to the use of the Internet, and in particular, to browsers or similar devices which present web page content to a user.
One of the most remarkable applications of technology we have seen in recent years is the World Wide Web, often known simply as the “web”. Nonexistent only a few short years ago, it has suddenly burst upon us. People from schoolchildren to the elderly are learning to use the web, and finding an almost endless variety of information from the convenience of their homes or places of work. Businesses, government, organizations, and even ordinary individuals are making information available on the web, to the degree that it is now the expectation that anything worth knowing about is available somewhere on the web.
Although a great deal of information is available on the web, accessing this information can be difficult and time consuming, as any web user knows. Self-styled prophets of web technology have predicted no end of practical and beneficial uses of the web, if only problems of speed and ease of use can be solved. Accordingly, a great deal of research and development resources have been directed to these problems in recent years. While some progress has been made in the form of faster hardware, browsers which are more capable and easier to use, and so on, much improvement is still needed.
Nearly all web browsers follow the paradigm of a user visually examining web content presented on a display. I.e., typically a user sits in front of a computer display screen, and enters commands to view web pages presented by the user's browser. A great deal of effort is expended in the formatting of web pages for proper visual appeal and ease of understanding. The browser may run in a window, so that the user may switch back and forth from the browser to some other tasks running in other windows. But it is usually expected that when the user is viewing a web page in the browser, his entire attention will be directed thereto, and other tasks will be foreclosed.
Some of the information available on the web is of a form which is updated on a relatively frequent basis, and which may be followed in “real time”, i.e., as the information is being generated. Examples of such information include up-to-the-minute market reports, coverage of sporting events, certain news events, etc. In order to follow such information, some web browsers support periodic polling of a specified web server at a specified polling interval, to determine whether information at a given web site has changed. While this is an improvement over requiring the user to manually update a web page at intervals, the manner of presentation is still less than optimal in many cases. The user may be busy with some other task (either at the computer workstation, or at a desk or somewhere in proximity to the computer). In order to obtain the updated information, the user must interrupt his other task, and view his browser. An unrecognized need exists for an alternative method of presenting such information to the user, which is less disruptive of other tasks in which the user may be engaged.
In accordance with the present invention, a web user may elect to have certain frequently changing web content audibly presented in the background while performing other tasks. Content may be audibly presented when it changes, or at user-specified intervals. Audible presentation does not require that any other task in which the user is engaged be interrupted.
In the preferred embodiment, audible background presentation is an optional feature in a web browser. The user selects web content by highlighting a portion or portions of one or more web pages. The user specifies any of various options for audible presentation, such as at fixed intervals, every time any content changes, or every time selected content changes. At the specified intervals or events, the selected web content is converted from text to speech, and audibly played over the computer's speaker.
In an alternative embodiment, a web page has a viewable version and an audible version. The user selects the audible version, and the various parameters for audible presentation. The audible version is then played directly over the computer's speaker, without the need to convert from text to speech.
The audible presentation of web content in the background as described herein enables a user to perform other tasks while listening to web content, much as one might perform other tasks while listening to a radio broadcast in the background. The audio presentation may be thought of as a second “dimension” for receiving information, whereby a user can operate in both the video and audio dimensions independently, significantly improving user productivity, enjoyment or general enlightenment.
The details of the present invention, both as to its structure and operation, can best be understood in reference to the accompanying drawings, in which like reference numerals refer to like parts, and in which:
FIG. 1 is a high-level block diagram of a typical client computer system for accessing web content, according to the preferred embodiment of the present invention.
FIG. 2 is a conceptual illustration of the major software components of a client computer system for accessing web content, in accordance with the preferred embodiment.
FIG. 3 is a block diagram illustrative of a client/server architecture, according to the preferred embodiment.
FIG. 4 is a simplified representation of a computer network such as the Internet, according to the preferred embodiment.
FIG. 5 represents the structure of a script file for storing the parameters of audible web content presentation, according to the preferred embodiment.
FIG. 6 is a high-level flow diagram of the steps performed by the browser, in accordance with the preferred embodiment.
FIG. 7 is a flow diagram showing the operation of the audible presentation thread, according to the preferred embodiment.
FIG. 8 is an interactive screen for selecting script file entries to be edited or deleted, according to the preferred embodiment.
FIG. 9 is an interactive screen for editing a script file entry, according to the preferred embodiment.
Prior to discussing the operation of embodiments of the invention, a brief overview discussion of the Internet is provided herein.
The term “Internet” is a shortened version of “Internetwork”, and refers commonly to a collection of computer networks that utilize the TCP/IP suite of protocols, well-known in the art of computer networking. TCP/IP is an acronym for “Transmission Control Protocol/Internet Protocol”, a software protocol that facilitates communications between computers.
Networked systems typically follow a client-server architecture. A “client” is a member of a class or group that utilizes the services of another class or group to which it is not related. In the context of a computer network such as the Internet, a client is a process (i.e., roughly a program or task) that requests a service provided by another program. The client process utilizes the requested service without needing to know any working details about the other program or the server itself. In networked systems, a client is usually a computer that accesses shared network resources provided by another computer (i.e., a server).
A server is typically a remote computer system accessible over a communications medium such as the Internet. In response to requests from the client process, the server scans and searches for information sources and presents filtered, electronic information to the user as a server response. The client process may be active in a first computer system, and the server process may be active in a second computer system; the processes communicate with one another over a communications medium that allows multiple clients to take advantage of the information-gathering capabilities of the server. A server can thus be described as a network computer that runs administrative software that controls access to all or part of the network and its resources, such as data on a disk drive. A computer acting as a server makes resources available to computers acting as workstations on the network.
Client and server can communicate with one another utilizing the functionality provided by a hypertext transfer protocol (HTTP). The World Wide Web (WWW), or simply, the “web”, includes all servers adhering to this protocol, which are accessible to clients via a Uniform Resource Locator (URL) address. Internet services can be accessed by specifying Uniform Resource Locators that have two basic components: a protocol to be used and an object pathname. For example, the Uniform Resource Locator address, “http://www.uspto.gov/web/menu/intro.html” is an address to an introduction about the U.S. Patent and Trademark Office. The URL specifies a hypertext transfer protocol (“http”) and a name (“www.uspto.gov”) of the server. The server name is associated with a unique, numeric value (i.e., a TCP/IP address). The URL also specifies the name of the file that contains the text (“intro.html”) and the hierarchical directory (“web”) and subdirectory (“menu”) structure in which the file resides on the server.
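The decomposition of a URL into its protocol, server name, and directory/file path described above can be sketched with Python's standard urllib.parse module (a modern illustration only; no such implementation is part of the patent):

```python
from urllib.parse import urlparse

# Decompose the example URL into the components described above.
url = "http://www.uspto.gov/web/menu/intro.html"
parts = urlparse(url)

scheme = parts.scheme    # the protocol to be used ("http")
server = parts.netloc    # the server name, resolved to a TCP/IP address
path = parts.path        # the directory/subdirectory structure and file name
```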
Active within the client is a first process, known as a “browser,” that establishes the connection with the server, sends HTTP requests to the server, receives HTTP responses from the server, and presents information to the user. The server itself executes corresponding server software that presents information to the client in the form of HTTP responses. The HTTP responses correspond to “web pages” constructed from a Hypertext Markup Language (HTML), or other server-generated data.
The browser retrieves a web page from the server and displays it to the user at the client. A “web page” (also referred to as a “page” or a “document”) is a data file written in a hyper-text language, such as HTML, that may have text, graphic images, and even multimedia objects, such as sound recordings or moving video clips associated with that data file. The page contains control tags and data. The control tags identify the structure: for example, the headings, subheadings, paragraphs, lists, and embedding of images. The data consists of the contents, such as text or multimedia, that will be displayed or played to the user. A browser interprets the control tags and formats the data according to the structure specified by the control tags to create a viewable object that the browser displays, plays or otherwise performs to the user. A control tag may direct the browser to retrieve a page from another source and place it at the location specified by the control tag. In this way, the browser can build a viewable object that contains multiple components, such as spreadsheets, text, hotlinks, pictures, sound, chat-rooms, and video objects. A web page can be constructed by loading one or more separate files into an active directory or file structure that is then displayed as a viewable object within a graphical user interface.
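The distinction between control tags (structure) and data (displayable content) drawn above can be illustrated with Python's standard html.parser module (an illustration only; the patent does not prescribe any particular parsing implementation, and the sample page content is hypothetical):

```python
from html.parser import HTMLParser

class PageTextExtractor(HTMLParser):
    """Separates a page's control tags from its data, as described above:
    the tags identify structure, while the data is the content that will
    be displayed or played to the user."""
    def __init__(self):
        super().__init__()
        self.structure = []   # control tags encountered
        self.data = []        # displayable content

    def handle_starttag(self, tag, attrs):
        self.structure.append(tag)

    def handle_data(self, data):
        if data.strip():
            self.data.append(data.strip())

parser = PageTextExtractor()
parser.feed("<html><h1>Market Report</h1><p>IBM up 2 points.</p></html>")
```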
Referring to the Drawing, wherein like numbers denote like parts throughout the several views, FIG. 1 is a high-level block diagram of a typical client workstation computer system 100 attached to the Internet, from which a user accesses Internet servers and performs other useful work, according to the preferred embodiment. Computer system 100 includes CPU 101, main memory 102, various device adapters and interfaces 103-108, and communications bus 110. CPU 101 is a general-purpose programmable processor, executing instructions stored in memory 102; while a single CPU is shown in FIG. 1, it should be understood that computer systems having multiple CPUs could be used. Memory 102 is a random-access semiconductor memory for storing data and programs; memory is shown conceptually as a single monolithic entity, it being understood that memory is often arranged in a hierarchy of caches and other memory devices. Communications bus 110 supports transfer of data, commands and other information between different devices; while shown in simplified form as a single bus, it may be structured as multiple buses, and may be arranged in a hierarchical form. Display adapter 103 supports video display 111, which is typically a cathode-ray tube display, although other display technologies may be used. Keyboard/pointer adapter 104 supports keyboard 112 and pointing device 113, depicted as a mouse, it being understood that other forms of input devices could be used. Storage adapter 105 supports one or more data storage devices 114, which are typically rotating magnetic hard disk drives, although other data storage devices could be used. Printer adapter 106 supports printer 115. Adapter 107 may support any of a variety of additional devices, such as CD-ROM drives, audio devices, etc. Internet interface 108 provides a physical interface to the Internet. 
In a typical personal computer system, this interface often comprises a modem connected to a telephone line, through which an Internet access provider or on-line service provider is reached. However, many other types of interface are possible. For example, computer system 100 may be connected to a local mainframe computer system via a local area network using an Ethernet, Token Ring, or other protocol, the mainframe in turn being connected to the Internet. Alternatively, Internet access may be provided through cable TV, wireless, or other types of connection. Computer system 100 will typically be any of various models of single-user computer systems known as “personal computers”. The representation of FIG. 1 is intended as an exemplary simplified representation, it being understood that many variations in system configuration are possible in addition to those mentioned here. Furthermore, the system providing a browser function for accessing web pages in accordance with the present invention need not be a personal computer system, and may be a larger computer system, a notebook or laptop computer, or any of various hardware variations. In particular, such a web browser need not be a general-purpose computer system at all, but may be a special-purpose device for accessing the web, such as an Internet access box for a television set, or a portable wireless web accessing device.
FIG. 2 is a conceptual illustration of the major software components of client workstation system 100 in memory 102. Operating system 201 provides various low-level software functions, such as device interfaces, management of memory pages, management of windowing interfaces, management of multiple tasks, etc. as is well-known in the art. Browser 202 provides a user interface to the web. Browser 202 may be integrated into operating system 201, or may be a separate application program. In addition to various conventional browser functions, such as rendering web pages, navigation aids (forward, backward, favorites list, etc.), filing and printing, and so on, as are known in the art, browser 202 contains background audible presentation function 205. Audible presentation function 205 supports the audible rendition of web content in the background, i.e., while the user is performing other unrelated tasks, as more fully described herein. Audible presentation function 205 uses audible presentation script file 206 to define the parameters of audible background presentation, and text-to-speech conversion software 207 to render text from the web in audible form. Memory 102 additionally may contain any of various applications for performing useful work, which are shown generically in FIG. 2 as applications 211-213. These applications may include, for example, word processing, spreadsheet, electronic calendar, accounting, graphics, computer code development, or any of thousands of other possible applications.
While a certain number of applications, files or other entities are shown in FIG. 2, it will be understood that these are shown for purposes of illustration only, and that the actual number of such entities may vary. Additionally, while the software components of FIG. 2 are shown conceptually as residing in memory, it will be understood that in general the memory of a computer system will be too small to hold all programs and data simultaneously, and that information is typically stored in data storage 114, comprising one or more mass storage devices such as rotating magnetic disk drives, and that the information is paged into memory by operating system 201 as required.
FIG. 3 is a block diagram illustrative of a client/server architecture. Client system 100 and server system 301 communicate by utilizing the functionality provided by HTTP. Active within client system 100 is browser 202, which establishes connections with server 301 and presents information to the user. Server 301 executes the corresponding server software, which presents information to the client in the form of HTTP responses 303. The HTTP responses correspond to the web pages represented using HTML or other data generated by server 301. Server 301 generates HTML document 304, which is a file of control codes that server 301 sends to client 100 and which browser 202 then interprets to present information to the user. Server 301 also provides Common Gateway Interface (CGI) program 305, which allows client 100 to direct server 301 to commence execution of the specified program contained within server 301. CGI program 305 executes on the server's CPU 302. Using CGI program 305 and HTTP responses 303, server 301 may notify client 100 of the results of that execution upon completion. Although the protocols of HTML, CGI and HTTP are shown, any suitable protocols could be used.
FIG. 4 is a simplified representation of a computer network 400. Computer network 400 is representative of the Internet, which can be described as a known computer network based on the client-server model discussed herein. Conceptually, the Internet includes a large network of servers 401 (such as server 301) that are accessible by clients 402, typically computers such as computer system 100, through some private Internet access provider 403 or an on-line service provider 404. Each of the clients 402 may run a respective browser to access servers 401 via the access providers. Each server 401 operates a so-called “web site” that supports files in the form of documents or pages. A network path to servers 401 is identified by a Uniform Resource Locator (URL) having a known syntax for defining a network connection. While various relatively direct paths are shown, it will be understood that FIG. 4 is a conceptual representation only, and that a computer network such as the Internet may in fact have a far more complex structure.
In accordance with the preferred embodiment of the present invention, a web user specifies parameters for audible presentation of certain web content in the background, and may listen to the specified web content at a later time in the background, i.e., while the user is performing other tasks. In order to support background audible presentation, a script 206 is generated which specifies the parameters of the presentation. FIG. 5 illustrates the structure of script 206.
As shown in FIG. 5, script 206 is a file containing one or more entries 501, each entry specifying the parameters of an audible presentation, i.e., specifying some web content and the times and conditions under which the web content will be audibly presented. In particular, a typical entry 501 contains URL 502, HTML tag(s) 503, time interval 504, start time 505, stop time 506, last time played 507, persistence flag 508, condition flag 509, and condition field 510. URL 502 specifies the URL at which the web content to be audibly presented resides. HTML tag(s) 503 specifies one or more HTML tags to be audibly presented within the web page located at URL 502. It is anticipated that in many cases a user will wish to hear only a portion of a web page, that portion being specified by HTML tag(s) 503. Where a user wishes to hear an entire web page, a single special tag indicating full play of the web page can be inserted in HTML tag field 503. Time interval 504 specifies a time interval for repeating the audio presentation. As more fully explained herein, audible presentation function 205 checks whether certain specified conditions for audio presentation are satisfied at the interval specified by time interval field 504, although the audio will actually be presented only if the conditions are met. Start time 505 and stop time 506 specify the time at which audible presentation is to begin and stop, respectively. Either or both of start time field 505 and stop time field 506 may contain a suitable zero value, the former indicating that audio presentation is to begin immediately, and the latter indicating that it continue indefinitely (i.e., until browser 202 is shut down, or the user orders it to stop by editing script 206). Last time played 507 stores the time at which audio presentation was last made or conditions for presentation were last checked. Persistence flag 508 is a flag field indicating whether the entry is to exist across loads of browser 202.
I.e., if persistence flag is “Y”, the entry is persistent and is restarted every time browser 202 is reloaded for execution. If persistence flag is “N”, the entry is deleted upon loading the browser.
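The fields of a script entry 501 might be represented in software along the following lines (a hypothetical Python sketch; the patent does not prescribe any storage format, and the field types and defaults here are assumptions, the default interval matching the editing screen described later):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ScriptEntry:
    """One entry 501 of audible presentation script 206."""
    url: str                          # URL 502 of the web content
    html_tags: list = field(default_factory=list)  # HTML tag(s) 503; empty list = play full page
    interval_minutes: int = 15        # time interval 504
    start_time: float = 0.0           # start time 505; zero = begin immediately
    stop_time: float = 0.0            # stop time 506; zero = continue indefinitely
    last_played: float = 0.0          # last time played 507
    persistent: bool = False          # persistence flag 508
    conditional: bool = False         # condition flag 509
    condition: Optional[str] = None   # condition field 510 (boolean expression)
```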
Condition flag 509 indicates whether audible presentation is conditional upon the presence of some condition, the condition being specified by condition field 510. Condition field 510 is a boolean expression specifying a condition for playing the specified web content. There are several possible embodiments for conditional audible presentation. The most common condition would be that web content has changed, i.e., that the current content of the web page or portion thereof specified by URL 502 and HTML tags 503 is unequal to the previous content. In a simple embodiment, it would be possible to verify whether the current content is the same as the previous content by any of various means. For example, a cyclic redundancy checksum (CRC) can be taken of the previous content, which can be compared with a CRC of the new content. Alternatively, some web sites contain the date and timestamp of the most recent update, which could be compared. In an alternative, more complex embodiment, it would be possible to support other types of conditions. For example, if a user were following prices of selected securities, he may wish to hear an updated price only if it differs from the previous price by more than a specified amount. A numeric price quantity could be extracted from an HTML string, saved, and compared with a current quantity to determine whether the two quantities differed by more than a specified amount.
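The change-detection condition described above, comparing a checksum of the newly retrieved content against a saved checksum of the previous content, can be sketched as follows (an illustration only, using Python's zlib.crc32 as one plausible CRC; the sample content is hypothetical):

```python
import zlib

def content_changed(new_content: bytes, previous_crc: int):
    """Return (changed, new_crc): changed is True when the checksum of
    the newly retrieved web content differs from the saved checksum,
    in which case new_crc would be saved back into condition field 510."""
    new_crc = zlib.crc32(new_content)
    return new_crc != previous_crc, new_crc

# First retrieval: save the checksum for later comparison.
_, saved = content_changed(b"<p>IBM 112.50</p>", 0)
# Later retrieval of identical content: no change, so no audible presentation.
changed, saved = content_changed(b"<p>IBM 112.50</p>", saved)
```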
FIG. 6 is a high-level flow diagram of the steps performed by browser 202, in accordance with the preferred embodiment. The browser is initialized and a connection is established with the Internet through some internet provider (step 601). As part of the initialization process, browser 202 checks to see whether a script 206 exists (step 602). If a script exists, any non-persistent entries in the script are deleted, i.e., any entries for which persistence flag 508 is set to “N” are deleted (step 603). If, after deletion, there are any remaining entries in script 206 (step 604), the audible presentation thread is launched (step 605). The operation of the audible presentation thread is described more fully herein, and illustrated in FIG. 7. After all required initialization steps are performed, the browser continues to step 606.
The browser, being interactive, sits in a loop waiting for an event (step 606). An event may be a user input, such as a command to take a link to a web site, to save or print a web page, to edit a favorites list, etc. Alternatively, an event may be something coming from the Internet, such as incoming web content in response to a previously submitted request. When an event occurs, the “Y” branch from step 606 is taken to handle the event.
If the event is invoking the function to edit the script file 206 (step 607), browser 202 presents the user with interactive editing screens (described below), from which the user may edit the script file (step 608). As noted above, script 206 may contain more than one entry 501, so that audible background presentation from multiple web sites, or based on multiple different conditions, is concurrently supported. Preferably, audible presentation function 205 includes an editing function for creating and editing script file 206. In the preferred embodiment, the editing function is invoked by the user from a pull-down menu on the browser's menu bar, or similar structure. The audible presentation function 205 preferably presents one or more input screens to a user for specifying the different parameters of web content audible presentation. Preferably, the editing function is invokable while the browser is browsing a web page, so that the user may select the currently active URL and portions of the displayed web page (e.g., using pointing device 113), without having to type in URLs and HTML tags. Parameters such as time interval, start time, etc., are manually input.
FIGS. 8 and 9 show interactive editing screens used by function 205 to receive interactive input for editing file 206. Upon entering the edit function at step 608, audible presentation function 205 presents selection menu 801 as shown in FIG. 8, from which an entry 501 from script file 206 may be selected using cursor pointing device 113. As shown in FIG. 8, the first entry 802 in the selection list is designated “new entry”, which means that a new entry 501 will be created for editing using default values. The entries below entry 802 represent existing entries in script 206, the URL fields of these entries being displayed. The user may delete any existing entry by selecting it, and clicking on the “Delete” button. Alternatively, the user may edit any entry by selecting it, and clicking on the “Edit” button.
When the user selects an entry and clicks on the “Edit” button, editing screen 901, as shown in FIG. 9, is presented to the user. Various fields in editing screen 901 contain default values. If editing an existing entry 501 in script 206, these default values are the values in the existing entry. If “new entry” 802 was selected, URL field 902 contains the currently active URL being displayed by browser 202. If the user has selected a portion of the displayed web page, HTML field 903 contains the HTML tags for the selected portion. By default, start time 904 and stop time 905 are blank. The default interval 906 is 15 minutes, and persistence flag 907 is off. Input fields 902-907 correspond to fields 502, 503, 505, 506, 504 and 508, respectively, of script entry 501.
The user may specify that the web page will be audibly played only if changed in field 908. If the user makes this election, function 205 automatically sets condition flag 509 to “Y”, and sets the value of condition field 510 accordingly. Alternatively, the user may manually specify a more complex condition in field 909, which would require greater knowledge of the condition specification syntax. When finished editing, the user clicks on the “OK” or “Cancel” button to exit screen 901.
Upon exiting the interactive script file editing screens at step 608, the script file is saved if required. If there are no entries 501 in the edited script file (step 609), and an audible presentation thread is currently running in the background (step 610), the thread is killed (step 611), and the browser returns to the idle loop at step 606. In this case, the user evidently removed any entries 501 from script file 206 at step 608. If there are no entries, and no thread exists (the “N” branch from step 610), it is not necessary to perform any action, and the browser returns to the idle loop at step 606. If the edited script file contains at least one entry 501 (the “Y” branch from step 609), and no audible presentation thread exists (step 612), an audible presentation thread is launched (step 613), and the browser returns to the idle loop at step 606. If a thread exists (the “Y” branch from step 612), it is not necessary to perform any further action, and the browser returns to step 606.
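The launching and killing of the background thread at steps 610 through 613 might be organized around a shared stop flag, as in this hypothetical Python sketch (the patent does not specify a threading mechanism; the function and variable names here are assumptions):

```python
import threading

stop_event = threading.Event()

def audible_presentation_thread():
    # Background thread body: cycles over script entries until told to
    # stop (corresponding broadly to steps 702-720 of FIG. 7).
    while not stop_event.is_set():
        # ... check intervals and conditions, and play audio, here ...
        stop_event.wait(timeout=1.0)  # pause briefly between passes

def launch_thread():
    """Steps 605 and 613: start the audible presentation thread."""
    stop_event.clear()
    t = threading.Thread(target=audible_presentation_thread, daemon=True)
    t.start()
    return t

def kill_thread(t):
    """Steps 611 and 617: stop the thread cooperatively and wait for it."""
    stop_event.set()
    t.join()
```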
If the new event was not invoking the script file edit function (“N” branch from step 607), and is anything other than a shut down event (step 615), the event is handled in the conventional manner (step 616), and the browser returns to step 606. If the event is a user command to shut down the browser (“Y” branch from step 615), the browser is shut down (step 617). As part of the shut-down process, any audible presentation thread running in the background is killed. “Shut down” means that the application is stopped, any necessary dynamic variables are saved, and memory used by the application is released for use by other applications; “shut down” is to be distinguished from putting an application in the background, wherein the application remains resident in memory and may continue to execute, but is displayed to the user in a background manner (either as an icon, a partially obscured window, or other appropriate manner).
FIG. 7 is a flow diagram showing the operation of the audible presentation thread running within function 205. Once launched, the audible presentation thread remains resident on computer 100, executing in the background while other functions in browser 202, and/or other applications 211-213, may also be executing. As shown in FIG. 7, the audio thread is initialized (step 701), and then enters a waiting loop consisting of steps 702 and 703, wherein it waits for the expiration of the timer. I.e., at step 702, the thread retrieves the next entry 501 from script 206. At step 703, the thread determines whether a time interval has expired. Specifically, the time interval 504 is added to time last played 507. If the current time is greater than the sum, then it is time to check the conditions for playing the web content (the “Y” branch from step 703). Audible presentation function 205 checks whether the current time is after the start time 505 specified in the entry 501 of script 206 (step 704). If not, it proceeds to step 720. If the start time has already passed, function 205 checks whether the current time is before the stop time 506 specified in script 206 (step 705). If not, it proceeds to step 720.
If both start time has passed, and stop time has not been exceeded, function 205 retrieves a current version of the web page from the server at the URL specified in URL field 502 (step 706). Function 205 then checks condition flag 509 (step 707). If condition flag 509 is set “Y”, function 205 evaluates the condition specified in condition field 510 (step 708). If the condition evaluates to false, the audible presentation is not made, and the thread proceeds to step 720. If the condition evaluates to true, it may be necessary to update condition field 510 (step 709). For example, if condition field 510 specifies a change in content of the web page by saving a CRC, the new CRC will be saved in condition field 510 for comparing with subsequent web pages at subsequent time intervals.
If condition flag 509 is "N", or the condition in field 510 evaluates to true, the web content is audibly presented in the background. Audible presentation function 205 checks the nature of the web content. If the web content contains text (step 710), the text is converted to audible speech using text-to-speech converter 207 (step 711). A suitable text-to-speech converter is preferably software embedded in audible presentation function 205 of browser 202, but it may also be a separate application residing in memory 102, or a special-purpose device (not shown) attached to computer system 100. If the web content contains only an audio clip, step 711 is bypassed. Function 205 then plays the audio version of the web content (step 712).
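Steps 710-712 amount to a dispatch on the kind of content retrieved. In the sketch below, the `speak` and `play` callables are hypothetical stand-ins for text-to-speech converter 207 and the audio playback path; the content-type test is likewise an assumption about how text would be recognized:

```python
def present_audibly(content_type: str, body: bytes, speak, play) -> None:
    # Step 710: text content goes through the text-to-speech converter
    # (step 711) before playback; an audio clip bypasses conversion and
    # is played directly (step 712).
    if content_type.startswith("text/"):
        play(speak(body.decode("utf-8", errors="replace")))
    else:
        play(body)
```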
After audibly playing the web content, or after checking the pre-conditions as explained above, function 205 updates time last played 507 in the entry 501 from script 206 (step 720). As can be seen from the above description, time last played 507 actually represents the last time a "Y" branch was taken from step 703, whether or not anything was actually played at that time. Function 205 then returns to step 702 to get the next entry 501 from script 206. Function 205 cycles through the entries 501 in script 206 indefinitely at step 702, so that after reaching the last entry in script 206, it starts again at the first entry.
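The indefinite cycling of step 702 and the stamping of time last played 507 at step 720 can be sketched as a generator over the script entries. This is an illustrative reading, not the patent's code; `due_entries` and the `clock` parameter are invented names, and each entry here is any object with `interval` and `last_played` attributes:

```python
import itertools
from types import SimpleNamespace

def due_entries(entries, clock):
    """Cycle through script entries indefinitely (step 702), yielding each
    entry whose time interval has expired (the "Y" branch of step 703).
    last_played (field 507) is stamped whenever that branch is taken
    (step 720), whether or not anything is ultimately played."""
    for entry in itertools.cycle(entries):
        now = clock()
        if now > entry.last_played + entry.interval:
            entry.last_played = now
            yield entry
```

Passing the clock in as a callable keeps the loop testable; the real thread would simply read the current time each pass.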
In the preferred embodiment, audible presentation function 205 in browser 202 converts HTML text to audible speech using a text-to-speech converter, for presenting the web content in the background. This embodiment has the advantage that it requires no modification of existing web content, i.e., the implementation is supported entirely within the client's workstation. An alternative embodiment would utilize the related web formatting invention described in commonly assigned co-pending application Ser. No. 09/660,661, to Cary L. Bates, et al., entitled "Web Page Formatting for Audible Presentation", now abandoned, filed on the same date as the present application, which is herein incorporated by reference. In this alternative embodiment, web pages could have alternative audio formats provided by the server. If a web page selected for background audio presentation had such an alternative audio format, audible presentation function 205 would select the alternative audio format for play, rather than convert the HTML text to speech at the browser.
In general, the routines executed to implement the illustrated embodiments of the invention, whether implemented as part of an operating system or as a specific application, program, object, module or sequence of instructions, are referred to herein as "computer programs". The computer programs typically comprise instructions which, when read and executed by one or more processors in the devices or systems of a computer system consistent with the invention, cause those devices or systems to perform the steps necessary to execute steps or generate elements embodying the various aspects of the present invention. Moreover, while the invention has been and hereinafter will be described in the context of fully functioning computer systems, the various embodiments of the invention are capable of being distributed as a program product in a variety of forms, and the invention applies equally regardless of the particular type of signal-bearing media used to actually carry out the distribution. Examples of signal-bearing media include, but are not limited to, recordable-type media such as volatile and non-volatile memory devices, floppy disks, hard-disk drives, CD-ROMs, DVDs, and magnetic tape, and transmission-type media such as digital and analog communications links, including wireless communications links. An example of signal-bearing media is illustrated in FIG. 1 as data storage device 104.
Although a specific embodiment of the invention has been disclosed along with certain alternatives, it will be recognized by those skilled in the art that additional variations in form and detail may be made within the scope of the following claims.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US5195092 *||Aug 30, 1991||Mar 16, 1993||Telaction Corporation||Interactive multimedia presentation & communication system|
|US5444768||Dec 31, 1991||Aug 22, 1995||International Business Machines Corporation||Portable computer device for audible processing of remotely stored messages|
|US5594658||Jun 6, 1995||Jan 14, 1997||International Business Machines Corporation||Communications system for multiple individually addressed messages|
|US5613038||Dec 18, 1992||Mar 18, 1997||International Business Machines Corporation||Communications system for multiple individually addressed messages|
|US5864870 *||Dec 18, 1996||Jan 26, 1999||Unisys Corp.||Method for storing/retrieving files of various formats in an object database using a virtual multimedia file system|
|US5903727 *||Jun 18, 1996||May 11, 1999||Sun Microsystems, Inc.||Processing HTML to embed sound in a web page|
|US6199076 *||Oct 2, 1996||Mar 6, 2001||James Logan||Audio program player including a dynamic program selection controller|
|US6324182 *||Mar 11, 1999||Nov 27, 2001||Microsoft Corporation||Pull based, intelligent caching system and method|
|US6349132 *||Dec 16, 1999||Feb 19, 2002||Talk2 Technology, Inc.||Voice interface for electronic documents|
|US6354748 *||Mar 9, 1995||Mar 12, 2002||Intel Corporation||Playing audio files at high priority|
|US6400806 *||Apr 5, 1999||Jun 4, 2002||Vois Corporation||System and method for providing and using universally accessible voice and speech data files|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US7080315 *||Jun 28, 2000||Jul 18, 2006||International Business Machines Corporation||Method and apparatus for coupling a visual browser to a voice browser|
|US7120641||Apr 5, 2002||Oct 10, 2006||Saora Kabushiki Kaisha||Apparatus and method for extracting data|
|US7243346||May 21, 2001||Jul 10, 2007||Microsoft Corporation||Customized library management system|
|US7389515 *||May 21, 2001||Jun 17, 2008||Microsoft Corporation||Application deflation system and method|
|US7502834 *||Sep 30, 2003||Mar 10, 2009||International Business Machines Corporation||Autonomic content load balancing|
|US7516190 *||Feb 6, 2001||Apr 7, 2009||Parus Holdings, Inc.||Personal voice-based information retrieval system|
|US7519573 *||Aug 23, 2004||Apr 14, 2009||Fuji Xerox Co., Ltd.||System and method for clipping, repurposing, and augmenting document content|
|US7580841 *||Sep 17, 2004||Aug 25, 2009||At&T Intellectual Property I, L.P.||Methods, systems, and computer-readable media for associating dynamic sound content with a web page in a browser|
|US7593960 *||Jun 20, 2001||Sep 22, 2009||Fatwire Corporation||System and method for least work publishing|
|US7657828||Jun 5, 2006||Feb 2, 2010||Nuance Communications, Inc.||Method and apparatus for coupling a visual browser to a voice browser|
|US7761534||Nov 21, 2008||Jul 20, 2010||International Business Machines Corporation||Autonomic content load balancing|
|US7822735 *||May 25, 2001||Oct 26, 2010||Saora Kabushiki Kaisha||System and method for saving browsed data|
|US7881941||Feb 13, 2008||Feb 1, 2011||Parus Holdings, Inc.||Robust voice browser system and voice activated device controller|
|US7903570 *||May 3, 2004||Mar 8, 2011||Koninklijke Philips Electronics N.V.||System and method for specifying measurement request start time|
|US7945847||Jun 26, 2007||May 17, 2011||International Business Machines Corporation||Recasting search engine results as a motion picture with audio|
|US8054310||Jun 18, 2007||Nov 8, 2011||International Business Machines Corporation||Recasting a legacy web page as a motion picture with audio|
|US8098600||Feb 1, 2010||Jan 17, 2012||Parus Holdings, Inc.||Computer, internet and telecommunications based network|
|US8165885 *||Jul 17, 2009||Apr 24, 2012||At&T Intellectual Property I, Lp||Methods, systems, and computer-readable media for associating dynamic sound content with a web page in a browser|
|US8185402||Dec 20, 2010||May 22, 2012||Parus Holdings, Inc.||Robust voice browser system and voice activated device controller|
|US8195030 *||Aug 7, 2007||Jun 5, 2012||Panasonic Corporation||Reproduction apparatus, reproduction method, recording apparatus, recording method, AV data switching method, output apparatus, and input apparatus|
|US8352268||Sep 29, 2008||Jan 8, 2013||Apple Inc.||Systems and methods for selective rate of speech and speech preferences for text to speech synthesis|
|US8380507||Mar 9, 2009||Feb 19, 2013||Apple Inc.||Systems and methods for determining the language to use for speech generated by a text to speech engine|
|US8555151||Jan 27, 2010||Oct 8, 2013||Nuance Communications, Inc.||Method and apparatus for coupling a visual browser to a voice browser|
|US8666452 *||Sep 18, 2007||Mar 4, 2014||Lg Electronics Inc.||Method of setting ending time of application of mobile communication terminal, method of ending application of mobile communication terminal, and mobile communication terminal for performing the same|
|US8712776||Sep 29, 2008||Apr 29, 2014||Apple Inc.||Systems and methods for selective text to speech synthesis|
|US8751238||Feb 15, 2013||Jun 10, 2014||Apple Inc.||Systems and methods for determining the language to use for speech generated by a text to speech engine|
|US8838074||Mar 4, 2013||Sep 16, 2014||Parus Holdings, Inc.||Computer, internet and telecommunications based network|
|US8838673 *||Nov 22, 2004||Sep 16, 2014||Timothy B. Morford||Method and apparatus to generate audio versions of web pages|
|US8843120||Jan 13, 2012||Sep 23, 2014||Parus Holdings, Inc.||Computer, internet and telecommunications based network|
|US8843141||Jul 17, 2013||Sep 23, 2014||Parus Holdings, Inc.||Computer, internet and telecommunications based network|
|US8892446||Dec 21, 2012||Nov 18, 2014||Apple Inc.||Service orchestration for intelligent automated assistant|
|US8903716||Dec 21, 2012||Dec 2, 2014||Apple Inc.||Personalized vocabulary for digital assistant|
|US8930191||Mar 4, 2013||Jan 6, 2015||Apple Inc.||Paraphrasing of user requests and results by automated digital assistant|
|US8942986||Dec 21, 2012||Jan 27, 2015||Apple Inc.||Determining user intent based on ontologies of domains|
|US9117447||Dec 21, 2012||Aug 25, 2015||Apple Inc.||Using event alert text as input to an automated assistant|
|US9262612||Mar 21, 2011||Feb 16, 2016||Apple Inc.||Device access using voice authentication|
|US9300784||Jun 13, 2014||Mar 29, 2016||Apple Inc.||System and method for emergency calls initiated by voice command|
|US9318108||Jan 10, 2011||Apr 19, 2016||Apple Inc.||Intelligent automated assistant|
|US9330720||Apr 2, 2008||May 3, 2016||Apple Inc.||Methods and apparatus for altering audio output signals|
|US9338493||Sep 26, 2014||May 10, 2016||Apple Inc.||Intelligent automated assistant for TV user interactions|
|US9368114||Mar 6, 2014||Jun 14, 2016||Apple Inc.||Context-sensitive handling of interruptions|
|US9377992||May 26, 2010||Jun 28, 2016||Parus Holdings, Inc.||Personal voice-based information retrieval system|
|US9430463||Sep 30, 2014||Aug 30, 2016||Apple Inc.||Exemplar-based natural language processing|
|US9451084||May 3, 2012||Sep 20, 2016||Parus Holdings, Inc.||Robust voice browser system and voice activated device controller|
|US9483461||Mar 6, 2012||Nov 1, 2016||Apple Inc.||Handling speech synthesis of content for multiple languages|
|US9495129||Mar 12, 2013||Nov 15, 2016||Apple Inc.||Device, method, and user interface for voice-activated navigation and browsing of a document|
|US9502031||Sep 23, 2014||Nov 22, 2016||Apple Inc.||Method for supporting dynamic grammars in WFST-based ASR|
|US9535906||Jun 17, 2015||Jan 3, 2017||Apple Inc.||Mobile device having human language translation capability with positional feedback|
|US9548050||Jun 9, 2012||Jan 17, 2017||Apple Inc.||Intelligent automated assistant|
|US9571445||Jun 29, 2007||Feb 14, 2017||Parus Holdings, Inc.||Unified messaging system and method with integrated communication applications and interactive voice recognition|
|US9576574||Sep 9, 2013||Feb 21, 2017||Apple Inc.||Context-sensitive handling of interruptions by intelligent digital assistant|
|US9582608||Jun 6, 2014||Feb 28, 2017||Apple Inc.||Unified ranking with entropy-weighted information for phrase-based semantic auto-completion|
|US9594845 *||Sep 24, 2010||Mar 14, 2017||International Business Machines Corporation||Automating web tasks based on web browsing histories and user actions|
|US20010054085 *||Feb 6, 2001||Dec 20, 2001||Alexander Kurganov||Personal voice-based information retrieval system|
|US20020035563 *||May 25, 2001||Mar 21, 2002||Suda Aruna Rohra||System and method for saving browsed data|
|US20020065976 *||Jun 20, 2001||May 30, 2002||Roger Kahn||System and method for least work publishing|
|US20020078197 *||Aug 24, 2001||Jun 20, 2002||Suda Aruna Rohra||System and method for saving and managing browsed data|
|US20020147775 *||Apr 5, 2002||Oct 10, 2002||Suda Aruna Rohra||System and method for displaying information provided by a provider|
|US20030034999 *||May 30, 2002||Feb 20, 2003||Mindspeak, Llc||Enhancing interactive presentations|
|US20030135821 *||Jan 17, 2003||Jul 17, 2003||Alexander Kouznetsov||On line presentation software using website development tools|
|US20030177202 *||Mar 12, 2003||Sep 18, 2003||Suda Aruna Rohra||Method and apparatus for executing an instruction in a web page|
|US20050018654 *||Jul 25, 2003||Jan 27, 2005||Smith Sunny P.||System and method for delivery of audio content into telephony devices|
|US20050033715 *||Apr 5, 2002||Feb 10, 2005||Suda Aruna Rohra||Apparatus and method for extracting data|
|US20050071745 *||Sep 30, 2003||Mar 31, 2005||International Business Machines Corporation||Autonomic content load balancing|
|US20050071758 *||Sep 30, 2003||Mar 31, 2005||International Business Machines Corporation||Client-side processing of alternative component-level views|
|US20060036609 *||Aug 10, 2005||Feb 16, 2006||Saora Kabushiki Kaisha||Method and apparatus for processing data acquired via internet|
|US20060074683 *||Sep 17, 2004||Apr 6, 2006||Bellsouth Intellectual Property Corporation||Methods, systems, and computer-readable media for associating dynamic sound content with a web page in a browser|
|US20060111911 *||Nov 22, 2004||May 25, 2006||Morford Timothy B||Method and apparatus to generate audio versions of web pages|
|US20060206591 *||Jun 5, 2006||Sep 14, 2006||International Business Machines Corporation||Method and apparatus for coupling a visual browser to a voice browser|
|US20070002757 *||May 3, 2004||Jan 4, 2007||Koninklijke Philips Electronics N.V.||System and method for specifying measurement request start time|
|US20070016552 *||Sep 20, 2006||Jan 18, 2007||Suda Aruna R||Method and apparatus for managing imported or exported data|
|US20070022110 *||May 19, 2004||Jan 25, 2007||Saora Kabushiki Kaisha||Method for processing information, apparatus therefor and program therefor|
|US20070226640 *||May 25, 2007||Sep 27, 2007||Holbrook David M||Apparatus and methods for organizing and/or presenting data|
|US20070255806 *||Jun 29, 2007||Nov 1, 2007||Parus Interactive Holdings||Personal Voice-Based Information Retrieval System|
|US20070263601 *||Jun 29, 2007||Nov 15, 2007||Parus Interactive Holdings||Computer, internet and telecommunications based network|
|US20080081600 *||Sep 18, 2007||Apr 3, 2008||Lg Electronics Inc.||Method of setting ending time of application of mobile communication terminal, method of ending application of mobile communication terminal, and mobile communication terminal for performing the same|
|US20080285941 *||Aug 7, 2007||Nov 20, 2008||Matsushita Electric Industrial Co., Ltd.||Reproduction apparatus, reproduction method, recording apparatus, recording method, av data switching method, output apparatus, and input apparatus|
|US20080309670 *||Jun 18, 2007||Dec 18, 2008||Bodin William K||Recasting A Legacy Web Page As A Motion Picture With Audio|
|US20080313308 *||Jun 15, 2007||Dec 18, 2008||Bodin William K||Recasting a web page as a multimedia playlist|
|US20090003800 *||Jun 26, 2007||Jan 1, 2009||Bodin William K||Recasting Search Engine Results As A Motion Picture With Audio|
|US20090006965 *||Jun 26, 2007||Jan 1, 2009||Bodin William K||Assisting A User In Editing A Motion Picture With Audio Recast Of A Legacy Web Page|
|US20090070464 *||Nov 21, 2008||Mar 12, 2009||International Business Machines Corporation||Autonomic Content Load Balancing|
|US20090282053 *||Jul 17, 2009||Nov 12, 2009||At&T Intellectual Property I, L.P.|
|US20100218107 *||May 4, 2010||Aug 26, 2010||International Business Machines Corporation||Autonomic Content Load Balancing|
|US20100293446 *||Jan 27, 2010||Nov 18, 2010||Nuance Communications, Inc.||Method and apparatus for coupling a visual browser to a voice browser|
|US20120079395 *||Sep 24, 2010||Mar 29, 2012||International Business Machines Corporation||Automating web tasks based on web browsing histories and user actions|
|US20140013203 *||Jun 4, 2013||Jan 9, 2014||Convert Insights, Inc.||Systems and methods for modifying a website without a blink effect|
|U.S. Classification||709/203, 709/205, 379/88.17, 704/E13.008, 709/219, 379/88.13|
|Jan 25, 2000||AS||Assignment|
Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW Y
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BATES, CARY L.;DAY, PAUL R.;SANTOSUOSSO, JOHN M.;REEL/FRAME:010585/0586
Effective date: 20000124
|Sep 19, 2007||FPAY||Fee payment|
Year of fee payment: 4
|Sep 13, 2011||AS||Assignment|
Owner name: GOOGLE INC., CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERNATIONAL BUSINESS MACHINES CORPORATION;REEL/FRAME:026894/0001
Effective date: 20110817
|Sep 23, 2011||FPAY||Fee payment|
Year of fee payment: 8
|Dec 11, 2015||REMI||Maintenance fee reminder mailed|
|May 4, 2016||LAPS||Lapse for failure to pay maintenance fees|
|Jun 21, 2016||FP||Expired due to failure to pay maintenance fee|
Effective date: 20160504