Publication number: US 20020054130 A1
Publication type: Application
Application number: US 09/879,829
Publication date: May 9, 2002
Filing date: Jun 11, 2001
Priority date: Oct 16, 2000
Also published as: US7877686, US20070089067, WO2002033578A2, WO2002033578A8
Inventors: Kenneth Abbott, Dan Newell, James Robarts
Original Assignee: Abbott Kenneth H., Dan Newell, Robarts James O.
Dynamically displaying current status of tasks
US 20020054130 A1
Abstract
The current status of a list of tasks to be performed is dynamically displayed. The tasks may be performed by a user (e.g., data entered by the user, words spoken by the user, actions taken by the user, and so forth) or alternatively by a computer (e.g., the steps it follows in carrying out a programmed task). At least a portion of the list is displayed at any given time along with an indication of which task is the next task to be performed. As the tasks are completed, the current status of the progression through the items on the list is dynamically updated so as to readily inform the user (or someone else) as to what the current task is that needs to be performed, as well as what tasks have already been performed and/or what tasks remain to be performed.
Claims (59)
1. One or more computer-readable media storing a computer program that, when executed by one or more processors, causes the one or more processors to:
display a subset of a plurality of steps in an order to be performed by a user;
alter an appearance of a current step in the subset of steps that needs to be performed by the user to distinguish the current step from other steps in the subset;
allow the user to input data corresponding to the current step; and
scroll, in response to user input of data corresponding to the current step, the plurality of steps so that a new subset of the plurality of steps is presented to the user.
2. One or more computer-readable media as recited in claim 1, wherein the computer program further causes the one or more processors to:
alter, in response to user input of data corresponding to the current step, the appearance of another step as necessary to identify the new current step in the subset of steps that needs to be performed by the user.
3. One or more computer-readable media as recited in claim 1, wherein altering the appearance of the current step comprises marking the current location with a ball.
4. One or more computer-readable media as recited in claim 1, wherein altering the appearance of the current step comprises displaying the current step differently than other steps in the subset.
5. One or more computer-readable media as recited in claim 1, wherein altering the appearance of the current step comprises replacing the current step with a set of one or more input options for the current step.
6. One or more computer-readable media as recited in claim 1, wherein altering the appearance of the current step comprises superimposing, on the current step, a set of one or more input options for the current step.
7. One or more computer-readable media as recited in claim 1, wherein the computer program further causes the one or more processors to:
replace, in the subset, the display of the current step with a display of the input data.
8. One or more computer-readable media as recited in claim 1, wherein the computer program further causes the one or more processors to:
display a current processing marker that identifies which step in the subset of steps is currently being processed by the one or more processors.
9. One or more computer-readable media as recited in claim 1, wherein the one or more computer-readable media comprise a computer memory of a wearable computer.
10. A method comprising:
displaying a list of items to be handled by a user in a particular order;
identifying one item in the list of items that is the current item;
receiving a user input corresponding to the current item; and
updating, in response to receiving the user input, the identification of the one item that is the current item to indicate the next item in the list of items as the current item.
11. A method as recited in claim 10, wherein displaying the list of items comprises displaying at least one item corresponding to a task that has already been performed and at least one item corresponding to a task that still needs to be performed by the user.
12. A method as recited in claim 10, wherein displaying the list of items comprises displaying, after the user input is received, the user input in place of the corresponding item.
13. A method as recited in claim 10, wherein displaying the list of items comprises displaying only a subset of the list of items at any given time.
14. A method as recited in claim 13, further comprising scrolling through the list of items to display different subsets as items in the list are handled by the user.
15. A method as recited in claim 10, further comprising displaying a current processing marker identifying an item in the list of items corresponding to a current user input being processed.
16. A method as recited in claim 10, wherein the list of items comprises a list of tasks to be completed by the user, and wherein handling of an item by the user comprises the user completing the task.
17. A method as recited in claim 16, wherein the list of tasks comprises a list of prompts corresponding to data to be entered into the computer by the user.
18. A method as recited in claim 10, wherein the list of items comprises a list of prompts of words to be spoken by the user, and wherein handling of an item by the user comprises speaking one or more words corresponding to the prompt.
19. One or more computer-readable memories containing a computer program that is executable by a processor to perform the method recited in claim 10.
20. A method comprising:
displaying an identification of a plurality of users; and
for each of the plurality of users,
displaying a list of tasks to be performed by the user,
identifying one task in the list of tasks that is the current task that needs to be performed by the user, and
updating, in response to completion of the task by the user, the identification of the one task that is the current task that needs to be performed by the user to be the next task in the list of tasks.
21. A method as recited in claim 20, wherein displaying the list of tasks comprises displaying only a subset of the list of tasks to be performed by the user at any given time.
22. A method as recited in claim 21, further comprising scrolling through the list of tasks to display different subsets as tasks in the list are completed by the user.
23. A method as recited in claim 20, wherein the list of tasks comprises a list of actions to be taken by the user.
24. A method as recited in claim 20, wherein identifying one task that is the current task comprises displaying a geometric shape as a current location marker identifying the one task.
25. A method as recited in claim 20, wherein identifying one task that is the current task comprises displaying the one task differently than the other tasks in the list of tasks.
26. A method as recited in claim 20, further comprising:
receiving, for each of the plurality of users, an indication from each user's computer of the current task for that user.
27. One or more computer-readable memories containing a computer program that is executable by a processor to perform the method recited in claim 20.
28. A graphical user interface comprising:
a list portion identifying a list of a plurality of items to be handled by a user;
a user choices portion identifying information corresponding to a current item in the list; and
a current location marker that identifies one item of the list that is the current item to be handled by the user, wherein the current location marker is automatically updated to identify the next item in the list after the current item in the list has been handled by the user.
29. A graphical user interface as recited in claim 28, further comprising an applet window portion identifying information clarifying the information identified in the user choices portion.
30. A graphical user interface as recited in claim 29, wherein the user choices portion identifies information that is to be entered into a computer by the user, and wherein the applet window portion identifies information that has already been entered into the computer by the user.
31. A graphical user interface as recited in claim 28, wherein the list of a plurality of items comprises a list of words to be spoken by the user.
32. A graphical user interface as recited in claim 28, wherein the list of a plurality of items comprises a list of prompts of words to be spoken by the user, and wherein the user choices portion identifies, for each prompt, one or more words that can be spoken by the user to properly handle the prompt.
33. A graphical user interface as recited in claim 28, wherein the list portion further identifies information that has been entered by the user in handling previous items in the list.
34. A graphical user interface as recited in claim 28 implemented on a wearable computer.
35. A system comprising:
a display device;
a user interface component, coupled to the display device, causing a user interface to be displayed on the display device;
wherein the user interface includes a list portion in which a list of a plurality of items to be handled by a user are displayed;
wherein the user interface further includes a current location marker identifying one of the items in the list as the current item that needs to be handled by the user; and
wherein the user interface component further automatically updates the current location marker to identify a new item in the list in response to the user handling the current item in the list.
36. A system as recited in claim 35, wherein the user interface component further displays, after the user has handled the current item, a user input in place of the current item.
37. A system as recited in claim 35, wherein the user interface includes only a subset of the list of the plurality of items at any given time.
38. A system as recited in claim 37, wherein the user interface component further scrolls through the list of items to display different subsets as items in the list are handled by the user.
39. A system as recited in claim 35, wherein the user interface component further displays, as part of the user interface, a current processing marker identifying an item in the list that is currently being processed by the system.
40. A system as recited in claim 35, wherein the list of a plurality of items comprises a list of a plurality of tasks to be completed by the user, and wherein handling of an item by the user comprises the user completing the task.
41. A system as recited in claim 40, wherein the list of tasks comprises a list of prompts corresponding to data to be entered into the system by the user.
42. A system as recited in claim 40, wherein the user interface component is implemented in software.
43. A method comprising:
displaying a list of tasks to be performed;
identifying one task in the list of tasks that is the current task needing to be performed;
receiving an input corresponding to the current task; and
updating, in response to receiving the input, the identification of the one task that is the current task to indicate that the next task in the list of tasks is the current task needing to be performed.
44. A method as recited in claim 43, wherein the displaying comprises displaying a list of tasks to be performed by a user.
45. A method as recited in claim 43, wherein the identifying comprises superimposing, on the display of the current task in the list, a set of one or more input options corresponding to the task.
46. A method as recited in claim 45, wherein the receiving comprises receiving, as the input corresponding to the current task, one of the input options from the set of one or more input options.
47. A method as recited in claim 43, wherein the receiving comprises receiving a user input.
48. A method as recited in claim 43, wherein the receiving comprises receiving an input from a computer component, wherein the input from the computer component indicates that the current task is completed.
49. A method as recited in claim 48, wherein the computer component comprises a processor executing a software program.
50. A method as recited in claim 48, wherein the computer component comprises a hardware component configured to carry out the current task.
51. A method as recited in claim 48, wherein the computer component comprises a remote computer.
52. A method as recited in claim 43, wherein displaying the list of tasks comprises displaying only a subset of the list of tasks at any given time.
53. A method as recited in claim 52, further comprising scrolling through the list of tasks to display different subsets as tasks in the list are performed by the user.
54. A method as recited in claim 43, further comprising displaying a current processing marker identifying a task in the list of tasks corresponding to a current input being processed by a computer performing the method.
55. One or more computer-readable memories containing a computer program that is executable by a processor to perform the method recited in claim 43.
56. A graphical user interface comprising:
a task list portion identifying a list of a plurality of tasks to be performed by a user; and
an indication in the task list portion of a current task to be performed, wherein the indication is changed, in response to the current task being performed, to indicate a next task in the list as the current task to be performed.
57. A graphical user interface as recited in claim 56, further comprising a user choices portion identifying information corresponding to the current task on the list to be performed.
58. A graphical user interface as recited in claim 56, further comprising:
a second task list portion identifying a list of a plurality of tasks to be performed by another user; and
an indication in the second task list portion of a current task to be performed by the other user, wherein the indication is changed, in response to the current task being performed by the other user, to indicate a next task in the list of tasks to be performed by the other user as the current task to be performed.
59. A system comprising:
means for displaying a list of items to be handled by a user in a particular order; and
means for identifying one item in the list of items that is the current item, for receiving a user input corresponding to the current item, and for updating, in response to receiving the user input, the identification of the one item that is the current item to indicate the next item in the list of items as the current item.
Description
RELATED APPLICATIONS

[0001] A claim of priority is made to U.S. Provisional Application No. 60/240,685, filed Oct. 16, 2000, entitled “Method for Dynamically Displaying the Current Status of Tasks”.

TECHNICAL FIELD

[0002] The present invention is directed to graphical user interfaces and more particularly to dynamically displaying the current status of tasks.

BACKGROUND

[0003] As computers become increasingly powerful and commonplace, they are being used for an increasingly broad variety of tasks. For example, in addition to traditional activities such as running word processing and database applications, computers are increasingly becoming an integral part of users' daily lives. Programs to schedule activities, generate reminders, and provide rapid communication capabilities are becoming increasingly popular. Moreover, computers are increasingly present during virtually all of a person's daily activities. For example, hand-held computer organizers (e.g., PDAs) are increasingly common, and communication devices such as portable phones are increasingly incorporating computer capabilities. More recently, the field of wearable computers (e.g., with eyeglass displays) has begun to expand, creating a further presence of computers in people's daily lives.

[0004] Computers often progress through a particular series of steps when allowing a user to accomplish a particular task. For example, if a user desires to enter a new name and address to an electronic address book, the computer progresses through a series of steps prompting the user to enter the desired information (e.g., name, street address, city, state, zip code, phone number, etc.). On computers with large displays (e.g., typical desktop computers), sufficient area exists on the display to provide an informative and useable user interface (UI) that allows the user to enter the necessary data for the series of steps. However, problems exist when attempting to guide the user through the particular series of steps on smaller displays. Without the large display area, there is frequently insufficient room to provide the prompts in the same informative and useable manner.

[0005] Additionally, the nature of many new computing devices with small displays (e.g., PDAs and wearable computers) is that the computing devices are transported with the user. However, traditional computer programs are not typically designed to efficiently present information to users in a wide variety of environments. For example, most computer programs are designed with a prototypical user being seated at a stationary computer with a large display device, and with the user devoting full attention to the display. In that environment, the computer program can be designed with the assumption that the user's attention is predominantly on the display device. However, many new computing devices with small displays can be used when the user's attention is more likely to be diverted to some other task (e.g., driving, using machinery, walking, etc.). Many traditional computer programs, designed with large display devices in mind, frequently do not allow the user to quickly and easily reorient him- or herself to the task being carried out by the computer. For example, if the user is performing a task by following a series of steps on a wearable computer, looks away from the display to focus his or her attention on crossing a busy intersection, and then returns to the task, it would be desirable for the user to be able to quickly and easily reorient him- or herself to the task (in other words, readily know what steps he or she has accomplished so far and what the next step to be performed is).

[0006] Accordingly, there is a need for new techniques to display the current status of tasks to a user.

SUMMARY

[0007] Dynamically displaying current status of tasks is described herein.

[0008] According to one aspect, a list of items corresponding to tasks that are to be performed is displayed. The tasks may be performed by a user (e.g., data entered by the user, words spoken by the user, actions taken by the user, and so forth) or alternatively by a computer (e.g., the steps followed in carrying out a programmed task). At least a portion of the list is displayed at any given time along with an indication of which task is the next task to be performed. As the user progresses through the set of tasks, the current status of his or her progression through the corresponding items on the list is dynamically updated so as to readily inform the user (or someone else) as to what the current task is that needs to be performed, as well as what tasks have already been performed and/or what tasks remain to be performed.

[0009] According to another aspect, only a subset of the list of items is displayed at any given time. The list is scrolled through as the tasks are performed so that different items are displayed as part of the subset as tasks are performed.

[0010] According to another aspect, multiple lists of tasks to be performed by multiple individuals (or computing devices) are displayed on a display of the user. As the multiple individuals (or computing devices) finish the tasks in their respective lists, an indication of such completion is forwarded to the user's computer, which updates the display to indicate the next task in the list to be displayed. The user is thus able to monitor the progress of the multiple individuals (or computing devices) in carrying out their respective tasks.

BRIEF DESCRIPTION OF THE DRAWINGS

[0011] The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings. The same numbers are used throughout the figures to reference like components and/or features.

[0012] FIG. 1 illustrates an exemplary computing device such as may be used in accordance with certain embodiments of the invention.

[0013] FIG. 2 illustrates an exemplary user interface display in accordance with certain embodiments of the invention.

[0014] FIG. 3 illustrates an exemplary display of an item list and current location marker such as may be used in accordance with certain embodiments of the invention.

[0015] FIGS. 4A and 4B illustrate different ways in which the prompt in a sequence can be changed.

[0016] FIG. 5 is a flowchart illustrating an exemplary process for displaying the current status of tasks in accordance with certain embodiments of the invention.

[0017] FIGS. 6 and 7 illustrate alternative displays of the item list and current location identifiers with reference to a sequence of tasks to be completed in order to record a new inspection (e.g., a building inspection).

[0018] FIG. 8 illustrates an exemplary distributed environment in which the status of tasks being performed by multiple users can be monitored.

[0019] FIG. 9 illustrates an exemplary group of lists that may be displayed for the distributed environment of FIG. 8.

DETAILED DESCRIPTION

[0020] Dynamically displaying the current status of tasks is described herein. A list of items or prompts that is to be traversed by a user in a particular order is displayed to the user (e.g., a set of tasks the user is to perform in a particular sequence as part of his or her job, a set of words to be spoken, a list of questions or fields to be answered, and so forth). At least a portion of the list is displayed at any given time along with an indication of which item in the list is the next item that the user needs to handle (e.g., the next task to perform, the next word to speak, the next question to answer, and so forth). As the user progresses through the list of tasks, the current status of his or her progression through the prompts on the list is dynamically updated so as to readily inform the user as to what the current task is that needs to be performed, as well as what tasks have already been performed and/or what tasks remain to be performed.

[0021] FIG. 1 illustrates an exemplary computing device 100 such as may be used in accordance with certain embodiments of the invention. Computing device 100 represents a wide variety of computing devices, such as wearable computers, personal digital assistants (PDAs), handheld or pocket computers, telephones (e.g., cell phones), laptop computers, gaming consoles or portable gaming devices, desktop computers, Internet appliances, etc. Although the dynamic displaying of current status of tasks described herein is particularly useful if the computing device has a small display, any size display may be used with the invention.

[0022] Computing device 100 includes a central processing unit (CPU) 102, memory 104, a storage device 106, one or more input controllers 108, and one or more output controllers 110 (alternatively, a single controller may be used for both input and output) coupled together via a bus 112. Bus 112 represents one or more conventional computer buses, including a processor bus, system bus, accelerated graphics port (AGP), universal serial bus (USB), peripheral component interconnect bus (PCI), etc.

[0023] Memory 104 may be implemented using volatile and/or non-volatile memory, such as random access memory (RAM), read only memory (ROM), Flash memory, electronically erasable programmable read only memory (EEPROM), disk, and so forth. Storage device 106 is typically implemented using non-volatile “permanent” memory, such as ROM, EEPROM, magnetic or optical diskette, memory cards, and the like.

[0024] Input controller(s) 108 are coupled to receive inputs from one or more input devices 114. Input devices 114 include any of a variety of conventional input devices, such as a microphone, voice recognition devices, traditional qwerty keyboards, chording keyboards, half qwerty keyboards, dual forearm keyboards, chest mounted keyboards, handwriting recognition and digital ink devices, a mouse, a track pad, a digital stylus, a finger or glove device to capture user movement, pupil tracking devices, a gyropoint, a trackball, a voice grid device, digital cameras (still and motion), and so forth.

[0025] Output controller(s) 110 are coupled to output data to one or more output devices 116. Output devices 116 include any of a variety of conventional output devices, such as a display device (e.g., a hand-held flat panel display, an eyeglass-mounted display that allows the user to view the real world surroundings while simultaneously overlaying or otherwise presenting information to the user in an unobtrusive manner), a speaker, an olfactory output device, tactile output devices, and so forth.

[0026] One or more application programs 118 are stored in memory 104 and executed by CPU 102. When executed, application programs 118 generate data that may be output to the user via one or more of the output devices 116 and also receive data that may be input by the user via one or more of the input devices 114. For discussion purposes, one particular application program is illustrated with a user interface (UI) component 120 that is designed to present information to the user including dynamically displaying the current status of tasks as discussed in more detail below.

[0027] Although discussed herein primarily with reference to software components and modules, the invention may be implemented in hardware or a combination of hardware, software, and/or firmware. For example, one or more application-specific integrated circuits (ASICs) could be designed or programmed to carry out the invention.

[0028] FIG. 2 illustrates an exemplary user interface display in accordance with certain embodiments of the invention. User interface display 150 can be, for example, the display generated by user interface 120 of FIG. 1. UI display 150 includes an item or prompt list portion 152, a user choices portion 154, and an applet window portion 156. Additional labels or prompts 158 may also be included (e.g., a title for the task being handled, the current time, the amount of time left to finish the task, etc.). List portion 152 displays a list of prompts for tasks that are to be handled by the user in a particular order. An indication is also made to the user within list portion 152 of where the user currently is in that list (that is, what the next item or task is that needs to be handled by the user), as well as of items or tasks (if any) that have already been handled by the user and future items or tasks (if any) that need to be handled by the user. The manner in which an item or task is handled by the user is dependent on the nature of the list, as discussed in more detail below.
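
The three display portions can be modeled with a small data structure. The following Python sketch is illustrative only; the class and field names are invented for this example and are not taken from the patent:

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class TaskItem:
        prompt: str                      # e.g., "who?"
        response: Optional[str] = None   # filled in once the item is handled

    @dataclass
    class UIDisplay:
        items: List[TaskItem] = field(default_factory=list)  # list portion 152
        choices: List[str] = field(default_factory=list)     # user choices portion 154
        applet_text: str = ""                                # applet window portion 156
        current_index: int = 0   # item identified by the current location marker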

[0029] User choices portion 154 displays the options for the user to select from based on the next item or task in the list that needs to be handled by the user. For example, assume that the list in portion 152 is a list prompting the user regarding what information needs to be gathered in order for the user to set up a meeting with a potential customer. The list of prompts in list portion 152 could be a list of tasks the user must perform—that is, a list of information that needs to be collected (e.g., the customer's name, the location of the meeting, the time of the meeting, and so forth). If we further assume that the current task that needs to be handled by the user is entry of the location of the meeting, user choices portion 154 could display the various permissible inputs for the location of the meeting (e.g., at the user's main office, at a remote office, at the customer's facility, and so forth).

[0030] By way of another example, the item list may be a list of prompts for the information to be verbally input by the user in each step, with user choices portion 154 displaying a list of which words can be spoken in each step.

[0031] Applet window portion 156 displays additional information clarifying or amplifying the choices in user choices portion 154 (or the current item or task in item list portion 152). Following the previous example, if the current task that needs to be handled by the user is entry of the location of the meeting, applet window portion 156 could display additional descriptive information for one or more of the permissible inputs for the location of the meeting (e.g., a street address, a distance from the user's home, a map flagging the locations of the various meeting locations, and so forth).

[0032] The list displayed in list portion 152 is a list of items that is to be traversed by a user in a particular order. This can be a list of task prompts regarding tasks that the user is to perform, a list of task prompts regarding tasks to be performed by another user or computer, and so forth. Any of a wide variety of lists can be displayed, such as a set of tasks the user is to perform in a particular sequence as part of his or her job (this can be used, for example, to assist in training users to do their jobs), a set of tasks the user is to perform in a particular sequence in order to assemble or install a product he or she has purchased, a set of words to be spoken (e.g., cues as to what voice inputs the user is to make in order to carry out a task), a list of questions or fields to be answered, and so forth. Alternatively, the list of items may be a list of tasks or steps to be performed by a computer or computer program. Such a list can be used, for example, by a user to track the progress of the computer or program in carrying out the particular sequence of steps. Additionally, depending on the nature of the sequence of tasks being performed, multiple lists of items may be displayed (e.g., a multi-tiered item list).

[0033] Situations can arise in which the list of items or prompts is too large to be displayed in its entirety. In such situations, only a portion of the list is displayed (e.g., centered on the item or prompt for the next task to be performed). This subset of the steps to be performed is then scrolled as tasks are completed, resulting in a dynamic list display that changes when a task is completed.
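
One simple way to realize this windowing is to keep the current item near the center of a fixed-size window and clamp at the ends of the list. A minimal sketch, assuming a window of five items (the function name and the centering policy are assumptions, not taken from the patent):

    def visible_window(items, current_index, window_size=5):
        # Center the window on the current item where possible, clamping
        # at either end so no portion of the display is left empty.
        half = window_size // 2
        start = min(current_index - half, len(items) - window_size)
        start = max(0, start)
        return items[start:start + window_size]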

[0034] By displaying the list of prompts (or at least a portion thereof), the user is able to readily identify the status of the set of tasks being performed (in other words, the user is able to obtain a feel for where he or she is, or where the user or computer being monitored is, in progressing through the sequence of tasks). The user is able to quickly identify one or more previous tasks (if any) in the sequence, as well as one or more future tasks (if any) in the sequence. Such information is particularly helpful in reorienting the user to the sequence of tasks if his or her attention has been diverted away from the sequence. For example, the user's attention may be diverted away from the sequence to answer questions from another employee. After answering the questions, the user can look back at display 150 and quickly reorient him- or herself into the sequence of tasks being performed.

[0035] Item lists may be a set of predetermined items, such as a particular set of steps to be followed to assemble a machine or a set of words to be uttered to carry out a task for a speech-recognizing computer. Alternatively, item lists may be dynamic, changing based on the user's current location, current activity, past behavior, etc. For example, computer 100 of FIG. 1 may detect where the user is currently located (e.g., in his or her office, in the assembly plant, which assembly plant, etc.), and provide the appropriate instructions to perform a particular task based on that current location. Additional information regarding detecting the user's current context (e.g., current location, current activity, etc.) can be found in co-pending U.S. patent application Ser. No. 09/216,193, entitled “Method and System For Controlling Presentation of Information To a User Based On The User's Condition”, which was filed Dec. 18, 1998, and is commonly assigned to Tangis Corporation. This application is hereby incorporated by reference.
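
As a toy illustration of such a dynamic, context-dependent list, the sketch below picks an item list from a sensed location; the context keys and the task lists themselves are invented for this example:

    def select_item_list(context):
        # Choose the instruction sequence appropriate to the sensed context.
        lists = {
            "office": ["who?", "when?", "how long?", "where?", "bring?", "cc?"],
            "assembly plant": ["inventory parts", "assemble intake",
                               "lubricate core", "install intake",
                               "verify charge", "run diagnostic"],
        }
        return lists.get(context.get("location"), [])

    print(select_item_list({"location": "assembly plant"}))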

[0036] FIG. 3 illustrates an exemplary display of an item list and current location marker such as may be used in accordance with certain embodiments of the invention. Assume that the sequence of items on the list is a set of prompts regarding information that needs to be supplied by the user in order to schedule a meeting. In the illustrated example, this list includes the following information: who the meeting is with (who), the date and time for the meeting (when), the duration of the meeting (how long), the location of the meeting (where), an indication of any materials to bring to the meeting (bring), and an indication of anyone else that should be notified of the meeting (cc).

[0037] FIG. 3 illustrates an example item list displayed in list portion 152 of FIG. 2. Initially, the item list 170 is displayed, including the following prompts: “who?”, “when?”, “how long?”, “where?”, and “bring?”. The prompts in list 170 provide a quick identification to the user of what information he or she needs to input for each task in the sequence of tasks for scheduling a meeting. Due to the limited display area, list 170 does not include the prompts for each step in the sequence, but rather scrolls through the prompts as discussed in more detail below. A current location marker 172 is also illustrated in FIG. 3 to identify to the user what the current step is in the sequence. Assuming the meeting scheduling process has just begun, the first step in the sequence is to identify who the meeting is with (who), which is identified by current location marker 172 being situated above the prompt “who?”. In the illustrated example, location marker 172 is a circle or ball. Alternatively, other types of presentation changes may be made to alter the appearance of a prompt (or the area surrounding a prompt) in order to distinguish the current step from other steps in the sequence. For example, different shapes other than a circle or ball may be used for a location marker, the text for the prompt may be altered (e.g., a different color, a different font, a different size, a different position on screen (e.g., slightly higher or lower than other prompts in the list), and so forth), the display around the prompt may be altered (e.g., the prompt may be inverted so that it appears white on a black background rather than the more traditional black on a white background, the prompt may be highlighted, the prompt may be encircled by a border, and so forth), etc. Those skilled in the art can easily determine a variety of alternate methods for marking the current step.
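
As a rough text-mode approximation of the ball-style marker of FIG. 3 (the patent describes a graphical display; this rendering format is invented purely for illustration):

    def render_with_marker(prompts, current_index, marker="o"):
        # Draw a marker row above the prompt row, with the ball roughly
        # centered over the prompt for the current step.
        row = "  ".join(prompts)
        offset = sum(len(p) + 2 for p in prompts[:current_index])
        column = offset + len(prompts[current_index]) // 2
        return " " * column + marker + "\n" + row

    # Prints a ball above "when?", followed by the row of prompts:
    print(render_with_marker(["who?", "when?", "how long?", "where?", "bring?"], 1))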

[0038] One additional presentation change that can be made to distinguish the current step from other steps in the sequence is to change the prompt itself. The prompt could be replaced with another prompt, or another prompt could be superimposed on the prompt for the current step. For example, the user may have a set of individuals that he or she typically meets with, and these may be superimposed on the “who?” prompt when it is the current step. FIGS. 4A and 4B illustrate different ways in which the prompt in a sequence can be changed. FIG. 4A illustrates an example item list with the prompt for the current step in the sequence being superimposed with various input options. A list 190 is illustrated and the current step is to input who the meeting is to be with (the “who?” prompt). As illustrated, a set of common people that the user schedules meetings with (Jane, David, Lisa, and Richard) are superimposed on the “who?” prompt. The appearance of the underlying prompt “who?” may be changed (e.g., shadowed out, different color, etc.) in order for the overlying input options to be more easily viewed. It is to be appreciated that the exact location of the superimposed set of input options can vary (e.g., the characters of one or more input options may overlap the prompt, or be separated from the prompt).

[0039] FIG. 4B illustrates an example item list with the prompt for the current step in the sequence being replaced by the set of input options. A list 192 is illustrated and the current step is to input who the meeting is to be with (the “who?” prompt). However, as illustrated, the “who?” prompt is replaced with a set of common people that the user schedules meetings with (Jane, David, Lisa, and Richard).

[0040] The user is thus given an indication of both the current step in the sequence as well as common responses to that step. The type of information that is superimposed on or replaces the prompt can vary based on the current step. For example, when the “when?” prompt is the current step it may have superimposed thereon the times that the user is available for the current day (or current week, and so forth).
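
A sketch of the replace-versus-superimpose choice of FIGS. 4A and 4B; the function name and the parenthesized-text approximation of a de-emphasized prompt are assumptions for this example:

    def prompt_display(prompt, options, mode="replace"):
        # "replace": show the input options instead of the prompt (FIG. 4B).
        # "superimpose": show the options over a de-emphasized prompt (FIG. 4A);
        # a text sketch can only approximate this, here by parenthesizing.
        options_text = " ".join(options)
        if mode == "replace":
            return options_text
        return "(" + prompt + ") " + options_text

    print(prompt_display("who?", ["Jane", "David", "Lisa", "Richard"], "superimpose"))
    # (who?) Jane David Lisa Richard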

[0041] Returning to FIG. 3, once the user enters the information identifying who the meeting is with (assume for purposes of this example the meeting is with Bob Smith), list 170 is changed to list 174 in which the prompt “who?” is replaced with the name “Bob Smith” and the current location marker 172 is changed to indicate the next prompt (“when?”) is the current task that needs to be handled by the user. Assuming the user inputs that the meeting is to occur at 10 am on October 31, list 174 is changed to list 176 in which the prompt “when?” is replaced with the date and time of the meeting, and the current location marker 172 is changed to indicate the next prompt (“how long?”) is the current task that needs to be handled by the user. Thus, as can be seen from lists 170, 174, and 176, the current location marker 172 “bounces” along the list from item to item, making the user readily aware of what the current task is that he or she should be performing (that is, which data he or she should be inputting in the present example).

[0042] Once the user inputs the duration of the meeting, list 176 is changed to list 178. Given the limited display area, the user interface now scrolls the list so that the leftmost item is no longer shown but a new item is added at the right. Thus, the identification of “Bob Smith” is no longer shown, but a prompt for who else should be notified of the meeting (“cc?”) is now shown. Once the user enters the location for the meeting (“home office”), list 178 is changed to list 180 and current location marker 172 is changed to indicate the next prompt (“bring?”) is the current task that needs to be handled by the user. Thus, as can be seen with lists 176, 178, and 180, current location marker 172 may not be moved in response to an input but the list may be scrolled.

[0043] Thus, as can be seen in FIG. 3, the item list provides a series of prompts identifying what tasks (if any) in the sequence have already been performed and what tasks (if any) remain to be performed. For those tasks that have already been performed, an indication is made in the list as to what action was taken by the user for those tasks (e.g., what information was entered by the user in the illustrated example). Thus, the user can readily orient him- or herself to the sequence of steps, even if his or her attention is diverted from the display for a period of time. Alternatively, the prompts in the list need not be changed when the user enters the data (e.g., “who?” need not be replaced by “Bob Smith”). The data input by the user can alternatively be displayed elsewhere (e.g., in applet window portion 156).
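
The progression among lists 170, 174, 176, and 178 can be traced in a few lines. A hypothetical walk-through (the five-item window and the “1 hour” duration are assumptions; the patent does not state the duration entered):

    prompts = ["who?", "when?", "how long?", "where?", "bring?", "cc?"]
    responses = [None] * len(prompts)

    def handle_input(index, value):
        # Replace the prompt with the entered data and advance the marker.
        responses[index] = value
        return index + 1

    current = 0
    current = handle_input(current, "Bob Smith")     # list 174
    current = handle_input(current, "10 am Oct 31")  # list 176
    current = handle_input(current, "1 hour")        # list 178: window scrolls

    start = max(0, min(current - 2, len(prompts) - 5))
    window = [responses[i] or prompts[i] for i in range(start, start + 5)]
    print(window)  # ['10 am Oct 31', '1 hour', 'where?', 'bring?', 'cc?']

As in list 178 of FIG. 3, “Bob Smith” has scrolled off the left edge and the “cc?” prompt has appeared on the right.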

[0044] One advantage of the item lists described herein is that the lists present the multiple steps or items in a concise manner; these steps or items can also be referred to as idioms. When these idioms are presented together in a sequence, they provide more information to the user than when presented in independent form. For example, the idiom “bring?” by itself does not present as much information to the user as the entire sequence of idioms “who?”, “when?”, “how long?”, “where?”, and “bring?”.

[0045] The use of item lists as described herein also allows an individual to “zoom” in on (and thus gain more information about) a particular task. For example, with reference to FIG. 3, the user is able to select and zoom in on the “where?” prompt and have additional information about that task displayed (e.g., the possible locations for the meeting). The user is able to “backtrack” through the list (e.g., by moving a cursor to the desired item and selecting it, or using a back arrow key or icon, or changing the current location marker (e.g., dragging and dropping the location marker to the desired item), etc.) and see this additional information for tasks already completed. Alternatively, the “backtracking” may be for navigational rather than informational purposes. Moving back through the list (whether by manipulation of the location marker or in some other manner) may also be used to accomplish other types of operations, such as defining a macro or annotation.

[0046] Additionally, by displaying the prompts for future items, the speed of handling of the sequence of the items by the user can potentially be increased. For example, the user can see the prompt for the next one or more items in the list and begin thinking about how he or she is going to handle that particular item even before the computing device is finished processing the input for the item he or she just handled.

[0047] According to another embodiment, multiple location markers are displayed along with the item list—one marker identifying the current item to be handled by the user and another marker identifying the current item being processed by the computing device. Situations can arise where the user can input data quicker than it can be processed by the computing device. For example, the user may be able to talk at a faster rate than the computing device is able to analyze the speech.

[0048] The use of two such markers allows the user to identify whether the computing device is hung up on or having difficulty processing a particular input (e.g., identifying a particular word spoken by the user, misrecognition of the input, improper parsing, etc.), so that the user can go back to the task the computing device is having difficulty processing and re-enter the speech.
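
A minimal sketch of such a dual-marker state, assuming a queue of accepted-but-unprocessed inputs (the class and method names are invented for this example):

    from collections import deque

    class DualMarkerList:
        # Tracks two positions independently: the item the user is on
        # (input marker) and the item the device is still processing
        # (processing marker). A growing gap between the two suggests
        # the device is having difficulty with an earlier input.
        def __init__(self, prompts):
            self.prompts = prompts
            self.user_index = 0        # current item to be handled by the user
            self.processing_index = 0  # current item being processed
            self.pending = deque()     # inputs accepted but not yet processed

        def user_input(self, value):
            self.pending.append((self.user_index, value))
            self.user_index += 1

        def device_finished_one(self):
            if self.pending:
                self.pending.popleft()
                self.processing_index += 1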

[0049] FIG. 5 is a flowchart illustrating an exemplary process for displaying the current status of tasks in accordance with certain embodiments of the invention. The process of FIG. 5 is carried out by the user interface of a computing device (e.g., interface 120 of FIG. 1), and may be performed in software. Although FIG. 5 is discussed with reference to a location marker, it is to be appreciated that any of the presentation changes discussed above can be used to identify items in the list.

[0050] Initially, an item list is displayed (act 200), which is a sequence of items or prompts for the user to follow. A current location marker is also displayed to identify the first item in the list (act 202), and input corresponding to the first item in the list is received (act 204). The nature of this input can vary depending on the sequence of tasks itself (e.g., it may be data input by a user, an indication from another computer program that the task has been accomplished, etc.). A check is then made as to whether the end of the list has been reached (act 206). If the end of the list has been reached then the process stops (act 208), waiting for the next sequence of tasks to begin or for the user to backtrack to a previously completed task.

[0051] However, if the end of the list has not been reached, then a check is made as to whether scrolling of the list is needed (act 210). Whether scrolling of the list is needed can be based on a variety of different factors. For example, the user interface may attempt to make sure that there are always at least a threshold number of prompts before and/or after the current location marker, the user interface may attempt to make sure that the current task remains as close to the center of the item list as is possible but that no portions of the item list be left empty, etc. These factors can optionally be user-configurable preferences, allowing the user to adjust the display to his or her particular likes and/or dislikes (e.g., the user may prefer to see more future tasks than previous tasks).

[0052] If scrolling is needed, then the item list is scrolled by one item (or alternatively more items) in the appropriate direction (act 212). The amount that the item list is scrolled can vary (e.g., based on the sizes of the different items in the list). The appropriate direction for scrolling can vary based on the activity being performed by the user and the layout of the list (e.g., in the example of FIG. 3, the scrolling is from right to left when progressing forward through the list, and left to right when backtracking through the list). Regardless of whether the ordered item list is scrolled, after act 210 or 212 the current location marker is moved as necessary to identify the next item in the list that is to be handled by the user (act 214). In some situations, movement of the current location marker may not be necessary due to the scrolling performed (e.g., as illustrated with reference to lists 176 and 178 in FIG. 3). At some point after the current location marker is moved (if necessary), user input is received corresponding to the identified next item in the list (act 216). The process then returns to determine whether the end of the list has been reached (act 206).
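
The acts of FIG. 5 map naturally onto an event loop. A minimal sketch under the same assumptions as the windowing example above (a five-item window kept roughly centered; the get_input callback stands in for whatever input source is in use and is not from the patent, and brackets approximate the graphical marker):

    def run_task_list(items, get_input, window_size=5):
        responses = [None] * len(items)
        current = 0                        # acts 200/202: show list and marker
        while True:
            # acts 210/212: scroll so the current item stays in view
            half = window_size // 2
            start = max(0, min(current - half, len(items) - window_size))
            shown = [responses[i] or items[i]
                     for i in range(start, min(start + window_size, len(items)))]
            # act 214: mark the current item within the visible window
            marked = ["[" + s + "]" if start + j == current else s
                      for j, s in enumerate(shown)]
            print("  ".join(marked))
            # acts 204/216: receive input corresponding to the current item
            responses[current] = get_input(items[current])
            if current + 1 >= len(items):  # act 206: end of list reached?
                return responses           # act 208: stop
            current += 1

For example, run_task_list(["who?", "when?", "how long?"], input) would walk a user through the prompts at a console, echoing the scrolled, marked list before each entry.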

[0053] The item list and current location identifier or marker can be displayed in a wide variety of different manners. FIGS. 6 and 7 illustrate alternative displays of the item list and current location identifiers with reference to a sequence of tasks to be completed in order to record a new inspection (e.g., a building inspection). In the exemplary display 240 of FIG. 6, an item list portion 242 and an applet window portion 244 are illustrated. The item list portion 242 includes a list of tasks that are to be handled by the user, each of which is information to be entered by the user. Once entered, the information is displayed in applet window portion 244. A current location marker 246 advances down the list in portion 242 to identify the current information that the user needs to input (the customer's state in the illustrated display). Additional information is displayed at the top of display 240, including a prompt 248 identifying a type of information being entered by the user (inspection information).

[0054] In the exemplary display 260 of FIG. 7, a multi-tiered item list is displayed including list portion 262 and list portion 264. In list portion 262, prompts for the overall process of recording a new inspection are listed, including selecting a new inspection option and then entering inspection information. Two current location markers 266 and 268 are illustrated, each providing a visual indication of where in the overall process the current user is (inspection info in the illustrated display). A prompt 270 provides a further identification to the user of where he or she is in the overall process. List portion 264 includes prompts for the process of entering inspection information, with a current location marker 272 providing a visual indication of where in the inspection information entry process the user currently is (customer state in the illustrated display).

[0055] In addition to tracking the status of tasks being performed by a single user, the dynamic displaying of the current status of tasks of the present invention can further be used to track the status of tasks being performed by multiple users. In this situation, information indicating the status of tasks being performed by multiple users is communicated back to the computing devices of one or more other users, who in turn can view the status information of multiple users on a single display.

[0056] FIG. 8 illustrates an exemplary distributed environment in which the status of tasks being performed by multiple users can be monitored. In the illustrated example, multiple users Jamie, John, Max, and Carol each have a wearable computer with an eyeglass display 300, 302, 304, and 306, respectively. An item list is displayed on the eyeglass display for each of these users, with a current location marker to identify to the respective users where they are in the task sequences they are performing. Information regarding their current location is also communicated to another computing device of their supervisor Jane, who is also wearing an eyeglass display 308. The information communicated to Jane's computer can be simply an identification of the current location (e.g., Jane's computer may already be programmed with all of the tasks in the list), or alternatively the entire item list (or at least a portion of it). The information for one or more of the users Jamie, John, Max, and Carol can then be displayed on display 308, allowing Jane to keep track of the status of each of the users Jamie, John, Max, and Carol in performing their tasks. This allows Jane, as the supervisor, to see if people are proceeding through their tasks too quickly or too slowly (e.g., a user may be having difficulty and need assistance), to know when the individual users will be finished with their tasks, etc. If a multi-tiered item list is being used, then the supervisor can also zoom in on the particular step of a user and get additional information regarding where the user is stuck.
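
The patent does not specify a transport or encoding for these updates; purely as an illustration, a status message might carry just the current location, optionally with the item list itself:

    import json

    def status_update(user, task_index, item_list=None):
        # The update can be just the current location (if the supervisor's
        # computer already knows the task list) or carry the list as well.
        msg = {"user": user, "current": task_index}
        if item_list is not None:
            msg["items"] = item_list
        return json.dumps(msg)

    # e.g., sent from John's wearable to Jane's computer:
    print(status_update("John", 1))  # {"user": "John", "current": 1}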

[0057] FIG. 9 illustrates an exemplary group of lists that may be displayed on eyeglass display 308 of FIG. 8. Assume that the users John, Jamie, Max, and Carol are each performing a machine assembly process involving the following tasks: inventory the necessary parts, assemble an intake, lubricate a core part of the machine, install the assembled intake, verify that the batteries are fully charged, and then run a diagnostic program. The tasks in the machine assembly process are illustrated in a portion 310 of display 308 in an abbreviated form. Alternatively, the tasks illustrated in portion 310 may not be abbreviated, or may be represented in some other manner (e.g., as icons). A separate item list is displayed on display 308 for each of the users along with a corresponding current location marker in the shape of a ball or circle. Thus, as illustrated in FIG. 9, the viewer of display 308 can readily identify that John is at the “assemble intake” step, Jamie and Max are both at the “install intake” step, and Carol is at the “verify charge” step. Thus, the supervisor viewing display 308 can quickly and easily determine, based on the item list and current location markers, that each of Jamie, Max, and Carol is proceeding normally through the assembly process, but that John is hung up on the “assemble intake” step, so the supervisor can check with John to see if he is experiencing difficulties with this step.
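
A text-mode sketch of such a group view; the step abbreviations and the row layout are invented approximations of FIG. 9:

    def render_group(steps, positions):
        # One row per user; the ball marks the step each user is on.
        rows = ["          " + "  ".join(steps)]
        for user, idx in positions.items():
            cells = ["o".center(len(s)) if i == idx else " " * len(s)
                     for i, s in enumerate(steps)]
            rows.append(user.ljust(10) + "  ".join(cells))
        return "\n".join(rows)

    steps = ["inv", "asm intake", "lube", "install", "verify", "diag"]
    print(render_group(steps, {"John": 1, "Jamie": 3, "Max": 3, "Carol": 4}))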

[0058] Conclusion

[0059] Although the description above uses language that is specific to structural features and/or methodological acts, it is to be understood that the invention defined in the appended claims is not limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the invention.

US7451151May 9, 2005Nov 11, 2008Microsoft CorporationTraining, inference and user interface for guiding the caching of media content on local stores
US7454393Aug 6, 2003Nov 18, 2008Microsoft CorporationCost-benefit approach to automatically composing answers to questions by extracting information from large unstructured corpora
US7457879Apr 19, 2007Nov 25, 2008Microsoft CorporationNotification platform architecture
US7460884Jun 29, 2005Dec 2, 2008Microsoft CorporationData buddy
US7464093Jul 18, 2005Dec 9, 2008Microsoft CorporationMethods for routing items for communications based on a measure of criticality
US7467353Oct 28, 2005Dec 16, 2008Microsoft CorporationAggregation of multi-modal devices
US7490122Jan 31, 2005Feb 10, 2009Microsoft CorporationMethods for and applications of learning and inferring the periods of time until people are available or unavailable for different forms of communication, collaboration, and information access
US7493369Jun 30, 2004Feb 17, 2009Microsoft CorporationComposable presence and availability services
US7499896Aug 8, 2006Mar 3, 2009Microsoft CorporationSystems and methods for estimating and integrating measures of human cognitive load into the behavior of computational applications and services
US7512940Mar 29, 2001Mar 31, 2009Microsoft CorporationMethods and apparatus for downloading and/or distributing information and/or software resources based on expected utility
US7516113Aug 31, 2006Apr 7, 2009Microsoft CorporationCost-benefit approach to automatically composing answers to questions by extracting information from large unstructured corpora
US7519529Jun 28, 2002Apr 14, 2009Microsoft CorporationSystem and methods for inferring informational goals and preferred level of detail of results in response to questions posed to an automated information-retrieval or question-answering service
US7519564Jun 30, 2005Apr 14, 2009Microsoft CorporationBuilding and using predictive models of current and future surprises
US7519676Jan 31, 2005Apr 14, 2009Microsoft CorporationMethods for and applications of learning and inferring the periods of time until people are available or unavailable for different forms of communication, collaboration, and information access
US7529683Jun 29, 2005May 5, 2009Microsoft CorporationPrincipals and methods for balancing the timeliness of communications and information delivery with the expected cost of interruption via deferral policies
US7532113Jul 25, 2005May 12, 2009Microsoft CorporationSystem and methods for determining the location dynamics of a portable computing device
US7536650May 21, 2004May 19, 2009Robertson George GSystem and method that facilitates computer desktop use via scaling of displayed objects with shifts to the periphery
US7539659Jun 15, 2007May 26, 2009Microsoft CorporationMultidimensional timeline browsers for broadcast media
US7548904Nov 23, 2005Jun 16, 2009Microsoft CorporationUtility-based archiving
US7552862Jun 29, 2006Jun 30, 2009Microsoft CorporationUser-controlled profile sharing
US7565403Jun 30, 2003Jul 21, 2009Microsoft CorporationUse of a bulk-email filter within a system for classifying messages for urgency or importance
US7580908Apr 7, 2005Aug 25, 2009Microsoft CorporationSystem and method providing utility-based decision making about clarification dialog given communicative uncertainty
US7603427Dec 12, 2005Oct 13, 2009Microsoft CorporationSystem and method for defining, refining, and personalizing communications policies in a notification platform
US7610151Jun 27, 2006Oct 27, 2009Microsoft CorporationCollaborative route planning for generating personalized and context-sensitive routing recommendations
US7610560Jun 30, 2005Oct 27, 2009Microsoft CorporationMethods for automated and semiautomated composition of visual sequences, flows, and flyovers based on content and context
US7613670Jan 3, 2008Nov 3, 2009Microsoft CorporationPrecomputation of context-sensitive policies for automated inquiry and action under uncertainty
US7617042Jun 30, 2006Nov 10, 2009Microsoft CorporationComputing and harnessing inferences about the timing, duration, and nature of motion and cessation of motion with applications to mobile computing and communications
US7617164Mar 17, 2006Nov 10, 2009Microsoft CorporationEfficiency of training for ranking systems based on pairwise training with aggregated gradients
US7636890Jul 25, 2005Dec 22, 2009Microsoft CorporationUser interface for controlling access to computer objects
US7643985Jun 27, 2005Jan 5, 2010Microsoft CorporationContext-sensitive communication and translation methods for enhanced interactions and understanding among speakers of different languages
US7644427Jan 31, 2005Jan 5, 2010Microsoft CorporationTime-centric training, interference and user interface for personalized media program guides
US7646755Jun 30, 2005Jan 12, 2010Microsoft CorporationSeamless integration of portable computing devices and desktop computers
US7647171Jun 29, 2005Jan 12, 2010Microsoft CorporationLearning, storing, analyzing, and reasoning about the loss of location-identifying signals
US7653715Jan 30, 2006Jan 26, 2010Microsoft CorporationMethod and system for supporting the communication of presence information regarding one or more telephony devices
US7664249Jun 30, 2004Feb 16, 2010Microsoft CorporationMethods and interfaces for probing and understanding behaviors of alerting and filtering systems based on models and simulation from logs
US7673088Jun 29, 2007Mar 2, 2010Microsoft CorporationMulti-tasking interference model
US7685160Jul 27, 2005Mar 23, 2010Microsoft CorporationSystem and methods for constructing personalized context-sensitive portal pages or views by analyzing patterns of users' information access activities
US7689521Jun 30, 2004Mar 30, 2010Microsoft CorporationContinuous time bayesian network models for predicting users' presence, activities, and component usage
US7689615Dec 5, 2005Mar 30, 2010Microsoft CorporationRanking results using multiple nested ranking
US7693817Jun 29, 2005Apr 6, 2010Microsoft CorporationSensing, storing, indexing, and retrieving data leveraging measures of user activity, attention, and interest
US7694214Jun 29, 2005Apr 6, 2010Microsoft CorporationMultimodal note taking, annotation, and gaming
US7696866Jun 28, 2007Apr 13, 2010Microsoft CorporationLearning and reasoning about the context-sensitive reliability of sensors
US7698055Jun 30, 2005Apr 13, 2010Microsoft CorporationTraffic forecasting employing modeling and analysis of probabilistic interdependencies and contextual data
US7702635Jul 27, 2005Apr 20, 2010Microsoft CorporationSystem and methods for constructing personalized context-sensitive portal pages or views by analyzing patterns of users' information access activities
US7706964Jun 30, 2006Apr 27, 2010Microsoft CorporationInferring road speeds for context-sensitive routing
US7707131Jun 29, 2005Apr 27, 2010Microsoft CorporationThompson strategy based online reinforcement learning system for action selection
US7711716Mar 6, 2007May 4, 2010Microsoft CorporationOptimizations for a background database consistency check
US7714712Dec 12, 2007May 11, 2010Emigh Aaron TMobile surveillance
US7716057Jun 15, 2007May 11, 2010Microsoft CorporationControlling the listening horizon of an automatic speech recognition system for use in handsfree conversational dialogue
US7734471Jun 29, 2005Jun 8, 2010Microsoft CorporationOnline learning for dialog systems
US7738881Dec 19, 2003Jun 15, 2010Microsoft CorporationSystems for determining the approximate location of a device from ambient signals
US7739040Jun 30, 2006Jun 15, 2010Microsoft CorporationComputation of travel routes, durations, and plans over multiple contexts
US7739210Aug 31, 2006Jun 15, 2010Microsoft CorporationMethods and architecture for cross-device activity monitoring, reasoning, and visualization for providing status and forecasts of a users' presence and availability
US7739221Jun 28, 2006Jun 15, 2010Microsoft CorporationVisual and multi-dimensional search
US7742591Apr 20, 2004Jun 22, 2010Microsoft CorporationQueue-theoretic models for ideal integration of automated call routing systems with human operators
US7757250Apr 4, 2001Jul 13, 2010Microsoft CorporationTime-centric training, inference and user interface for personalized media program guides
US7761464Jun 19, 2006Jul 20, 2010Microsoft CorporationDiversifying search results for improved search and personalization
US7774349Jun 30, 2004Aug 10, 2010Microsoft CorporationStatistical models and methods to support the personalization of applications and services via consideration of preference encodings of a community of users
US7778632Oct 28, 2005Aug 17, 2010Microsoft CorporationMulti-modal device capable of automated actions
US7778820Aug 4, 2008Aug 17, 2010Microsoft CorporationInferring informational goals and preferred level of detail of answers based on application employed by the user based at least on informational content being displayed to the user at the query is received
US7797267Jun 30, 2006Sep 14, 2010Microsoft CorporationMethods and architecture for learning and reasoning in support of context-sensitive reminding, informing, and service facilitation
US7818317 *Sep 9, 2004Oct 19, 2010James RoskindLocation-based tasks
US7822762Jun 28, 2006Oct 26, 2010Microsoft CorporationEntity-specific search model
US7831532Jun 30, 2005Nov 9, 2010Microsoft CorporationPrecomputation and transmission of time-dependent information for varying or uncertain receipt times
US7831679Jun 29, 2005Nov 9, 2010Microsoft CorporationGuiding sensing and preferences for context-sensitive services
US7873620Jun 29, 2006Jan 18, 2011Microsoft CorporationDesktop search from mobile device
US7885817Jun 29, 2005Feb 8, 2011Microsoft CorporationEasy generation and automatic training of spoken dialog systems using text-to-speech
US7908663Apr 20, 2004Mar 15, 2011Microsoft CorporationAbstractions and automation for enhanced sharing and collaboration
US7912637Jun 25, 2007Mar 22, 2011Microsoft CorporationLandmark-based routing
US7917514Jun 28, 2006Mar 29, 2011Microsoft CorporationVisual and multi-dimensional search
US7925995Jun 30, 2005Apr 12, 2011Microsoft CorporationIntegration of location logs, GPS signals, and spatial resources for identifying user activities, goals, and context
US7948400Jun 29, 2007May 24, 2011Microsoft CorporationPredictive models of road reliability for traffic sensor configuration and routing
US7970721Jun 15, 2007Jun 28, 2011Microsoft CorporationLearning and reasoning from web projections
US7979252Jun 21, 2007Jul 12, 2011Microsoft CorporationSelective sampling of user state based on expected utility
US7984169Jun 28, 2006Jul 19, 2011Microsoft CorporationAnonymous and secure network-based interaction
US7991607Jun 27, 2005Aug 2, 2011Microsoft CorporationTranslation and capture architecture for output of conversational utterances
US7991718Jun 28, 2007Aug 2, 2011Microsoft CorporationMethod and apparatus for generating an inference about a destination of a trip using a combination of open-world modeling and closed world modeling
US7997485Jun 29, 2006Aug 16, 2011Microsoft CorporationContent presentation based on user preferences
US8024112Jun 26, 2006Sep 20, 2011Microsoft CorporationMethods for predicting destinations from partial trajectories employing open-and closed-world modeling methods
US8049615Mar 25, 2010Nov 1, 2011James. A. RoskindMobile surveillance
US8079079Jun 29, 2005Dec 13, 2011Microsoft CorporationMultimodal authentication
US8090530Jan 22, 2010Jan 3, 2012Microsoft CorporationComputation of travel routes, durations, and plans over multiple contexts
US8112755Jun 30, 2006Feb 7, 2012Microsoft CorporationReducing latencies in computing systems using probabilistic and/or decision-theoretic reasoning under scarce memory resources
US8126641Jun 30, 2006Feb 28, 2012Microsoft CorporationRoute planning with contingencies
US8180465Jan 15, 2008May 15, 2012Microsoft CorporationMulti-modal device power/mode management
US8225224May 21, 2004Jul 17, 2012Microsoft CorporationComputer desktop use via scaling of displayed objects with shifts to the periphery
US8230359Feb 25, 2003Jul 24, 2012Microsoft CorporationSystem and method that facilitates computer desktop use via scaling of displayed objects with shifts to the periphery
US8244240Jun 29, 2006Aug 14, 2012Microsoft CorporationQueries as data for revising and extending a sensor-based location service
US8244660Jul 29, 2011Aug 14, 2012Microsoft CorporationOpen-world modeling
US8254393Jun 29, 2007Aug 28, 2012Microsoft CorporationHarnessing predictive models of durations of channel availability for enhanced opportunistic allocation of radio spectrum
US8317097Jul 25, 2011Nov 27, 2012Microsoft CorporationContent presentation based on user preferences
US8346587Jun 30, 2003Jan 1, 2013Microsoft CorporationModels and methods for reducing visual complexity and search effort via ideal information abstraction, hiding, and sequencing
US8346800Apr 2, 2009Jan 1, 2013Microsoft CorporationContent-based information retrieval
US8375320 *Jun 22, 2010Feb 12, 2013Microsoft CorporationContext-based task generation
US8375434Dec 31, 2005Feb 12, 2013Ntrepid CorporationSystem for protecting identity in a network environment
US8381088Jun 22, 2010Feb 19, 2013Microsoft CorporationFlagging, capturing and generating task list items
US8386929Jun 22, 2010Feb 26, 2013Microsoft CorporationPersonal assistant for task utilization
US8386946Sep 15, 2009Feb 26, 2013Microsoft CorporationMethods for automated and semiautomated composition of visual sequences, flows, and flyovers based on content and context
US8458349Jun 8, 2011Jun 4, 2013Microsoft CorporationAnonymous and secure network-based interaction
US8473197Dec 15, 2011Jun 25, 2013Microsoft CorporationComputation of travel routes, durations, and plans over multiple contexts
US8539380Mar 3, 2011Sep 17, 2013Microsoft CorporationIntegration of location logs, GPS signals, and spatial resources for identifying user activities, goals, and context
US8565783Nov 24, 2010Oct 22, 2013Microsoft CorporationPath progression matching for indoor positioning systems
US8626136Jun 29, 2006Jan 7, 2014Microsoft CorporationArchitecture for user- and context-specific prefetching and caching of information on portable devices
US8661030Apr 9, 2009Feb 25, 2014Microsoft CorporationRe-ranking top search results
US8698622Oct 29, 2012Apr 15, 2014S. Moore Maschine Limited Liability CompanyAlerting based on location, region, and temporal specification
US8701027Jun 15, 2001Apr 15, 2014Microsoft CorporationScope user interface for displaying the priorities and properties of multiple informational items
US8706651Apr 3, 2009Apr 22, 2014Microsoft CorporationBuilding and using predictive models of current and future surprises
US8707204Oct 27, 2008Apr 22, 2014Microsoft CorporationExploded views for providing rich regularized geometric transformations and interaction models on content for viewing, previewing, and interacting with documents, projects, and tasks
US8707214Oct 27, 2008Apr 22, 2014Microsoft CorporationExploded views for providing rich regularized geometric transformations and interaction models on content for viewing, previewing, and interacting with documents, projects, and tasks
US8725567Jun 29, 2006May 13, 2014Microsoft CorporationTargeted advertising in brick-and-mortar establishments
US20100058243 *Aug 26, 2008Mar 4, 2010Schnettgoecke Jr William CMethods and systems for deploying a single continuous improvement approach across an enterprise
US20100083150 *Sep 30, 2008Apr 1, 2010Nokia CorporationUser interface, device and method for providing a use case based interface
US20100299669 *May 20, 2009Nov 25, 2010Microsoft CorporationGeneration of a Comparison Task List of Task Items
US20110314404 *Jun 22, 2010Dec 22, 2011Microsoft CorporationContext-Based Task Generation
Classifications
U.S. Classification: 715/783
International Classification: G06Q10/00
Cooperative Classification: G06Q10/10, G06Q10/109
European Classification: G06Q10/10, G06Q10/109
Legal Events
Date | Code | Event | Description
May 9, 2007 | AS | Assignment
Owner name: MICROSOFT CORPORATION, WASHINGTON
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TANGIS CORPORATION;REEL/FRAME:019265/0368
Effective date: 20070306
Aug 27, 2001 | AS | Assignment
Owner name: TANGIS CORPORATION, WASHINGTON
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ABBOTT, KENNETH H. III;NEWELL, DAN;ROBARTS, JAMES O.;REEL/FRAME:012112/0271
Effective date: 20010725