|Publication number||US7086032 B2|
|Application number||US 10/370,271|
|Publication date||Aug 1, 2006|
|Filing date||Feb 20, 2003|
|Priority date||Feb 20, 2003|
|Also published as||US9110688, US20040168149, US20070033574|
|Inventors||Magnus Nirell, Cecil Kift, Nathan Rich, Jorgen S. Lien|
|Original Assignee||Adobe Systems Incorporated|
The present invention relates generally to computer software and, more particularly, to an improved method and system for representing object animation within demonstrations and presentations based on displayed information such as that generated by software application programs.
Over the past several decades, computer-related hardware and software technologies have experienced abundant growth. As is known, both hardware and software technologies have become increasingly powerful and sophisticated, which has enabled projects to be completed significantly more quickly than ever before. However, the increasing complexity of such technologies, particularly software, has required extensive investments in connection with the installation, training and maintenance of such programs.
User training has emerged as a particularly significant expense in connection with deployment of software application programs. For example, many application programs now contain numerous complicated features, each of which is effected using a different sequence of steps. As a consequence, users may experience difficulty in recalling the sequence of steps necessary to cause a desired operation to be performed. Accordingly, it has been common to incorporate various “help” systems within such programs. Many such systems initially comprised textual descriptions of the steps related to a given operation, which were presented upon the request of the user.
Although text-based help systems may offer acceptable performance in character-based environments, such systems have proven less effective in contexts in which graphics may be employed. For example, it is often impractical to employ text-based help systems to identify areas of the user's screen or specific graphic images requiring manipulation in order to complete a particular function. Accordingly, many help systems now rely upon text as well as graphic images of particular icons and buttons in explaining various program operations. For example, the ROBOHELP™ product available from the assignee of the present invention enables quick and easy creation of professional text and graphic-based help systems for WINDOWS™-based and Web-based applications.
However, in certain cases it may be difficult to convey to end-users how certain software program features are accessed and used with static text/graphic-based help systems. As a result, demonstration systems have been developed in which an animated display of the performance of a set of procedural steps is generated for viewing by the applicable user. These animated demonstrations have often been produced by creating a sequence of bitmap images of the display screen (i.e., “screen shots”), with one or more screen shots being used to illustratively represent a single operational step. In certain approaches the author of the demonstration uses a screen capture program to record the physical actions of an expert user while stepping through a software procedure. These actions may include, for example, moving a mouse cursor to a specific screen area, selecting from among available options via dialog boxes and menus, and entering text through a keyboard. Upon recording of the procedure, the demonstration may be edited to add captions and sound. Unfortunately, editing of the resultant demonstration file tends to be tedious and time-consuming, and it may be necessary to record the procedure again in order to correct an error or modify the demonstration as a result of a corresponding change in functionality of the application program.
Related products facilitating creation of demonstrations of software application programs ("demos") and tutorials have also been developed. For example, the ROBODEMO™ product, which is commercially available from the assignee of the present invention, comprises a tool for creating animated, interactive demonstrations of software and related tutorials. The ROBODEMO™ product is configured to enable creation of Flash movies based on any application in use, or any onscreen activity, which may then be played back as a demo or tutorial. Authors may also easily enhance recorded movies with captions, images, click boxes, scoring, text boxes, audio, and special effects. In this way the ROBODEMO™ product may be utilized to create training tutorials, marketing demonstrations, and other presentations relating to the applicable software application program.
However, in certain commercially available products enabling automated development of demonstrations, tutorials and the like, the authoring process is complicated by the difficulty of appropriately placing annotation and caption objects upon the constituent screen shots during the “edit” mode of operation. That is, it has proven somewhat difficult to accurately specify the location of such objects upon individual screen shots while in edit mode in such a way that a desired position of such objects is achieved during the subsequent playback of the resultant presentation file.
The present invention provides a system and method for representing object animation, such as the movement of mouse pointers, within presentations based on software application programs or other information displayed by a computer system.
In one aspect the invention provides a method for representing animation of a mouse pointer object during the creation of a presentation based on information displayed by a computer system. This method includes storing a first mouse pointer position relative to a first screen shot derived from the displayed information. In addition, a second mouse pointer position is stored relative to a second screen shot derived from the information. The method further includes displaying, with respect to the first screen shot, a previous path portion and a next path portion near a displayed mouse pointer position. The previous path portion and the next path portion are representative of a mouse pointer path traversed during playback of the presentation. In a particular implementation the first and/or second mouse pointer positions may be modified during operation in an edit mode.
In another aspect, the invention relates to a computerized method for creating a presentation based on displayed information. The method involves automatically translating certain object information between frames when transitioning from an edit mode to a run-time mode in order to facilitate the positioning of such information during the editing process. The method includes capturing a first screen shot containing a first portion of the displayed information, and capturing a second screen shot containing a second portion of the displayed information. During operation in the edit mode, the method involves specifying the position of a first caption object relative to a particular screen shot. A desired sequence of display of the first caption object relative to the mouse pointer animation occurring during playback of the presentation may also be specified. The first caption object is then displayed with reference to a different screen shot in the desired sequence during the playback of the presentation.
In a further aspect, the present invention pertains to a computerized method for creating a presentation based on displayed information. The method comprises capturing a first screen shot containing a first portion of the displayed information and a second screen shot containing a second portion of the displayed information. The method also involves storing a first mouse pointer position relative to the first screen shot and a second mouse pointer position relative to the second screen shot. A previous path portion and a next path portion are then displayed near a displayed mouse pointer position. The previous path portion and the next path portion are representative of a mouse pointer path traversed during playback of the presentation. In a particular implementation the first and/or second mouse pointer positions may be modified during operation in an edit mode.
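The per-frame bookkeeping described in the foregoing aspects may be sketched as follows. This is a minimal illustration only; the class names and structure are assumptions for exposition and are not drawn from the patent itself:

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    screenshot: bytes             # captured screen-shot image data
    pointer_pos: tuple[int, int]  # (x, y) of the pointer relative to this frame

@dataclass
class Presentation:
    frames: list[Frame] = field(default_factory=list)

    def capture(self, screenshot: bytes, pointer_pos: tuple[int, int]) -> None:
        """Store a screen shot together with the pointer position relative to it."""
        self.frames.append(Frame(screenshot, pointer_pos))

    def move_pointer(self, index: int, new_pos: tuple[int, int]) -> None:
        """Edit-mode adjustment: relocate the stored pointer position for one frame."""
        self.frames[index].pointer_pos = new_pos

    def pointer_path(self) -> list[tuple[int, int]]:
        """The pointer path traversed during playback: one position per frame."""
        return [f.pointer_pos for f in self.frames]
```

Under this sketch, the "previous path portion" and "next path portion" shown for any frame can be derived from the adjacent entries of `pointer_path()`, and editing a position simply rewrites one stored tuple.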
For a better understanding of the nature of the features of the invention, reference should be made to the following detailed description taken in conjunction with the accompanying drawings, in which:
The present invention relates to various improvements to authoring programs of the type generally used to create a presentation based on an application program. In particular, the invention provides an improved technique of introducing annotation, mouse pointer movement and animation information to a sequence of screen shots captured from the underlying application program. As is described hereinafter, the inventive authoring technique advantageously permits accurate addition of such information to the underlying sequence of screen shots during program operation in an edit mode. The inventive technique also provides, during edit mode operation, a unique and improved representation of the mouse pointer animation occurring during subsequent playback of the applicable presentation in a run-time mode.
Turning now to
In the exemplary embodiment the inventive presentation authoring program 156 comprises the above-referenced RoboDemo product as enhanced to provide the functionality described below. However, it should be understood that the teachings of the invention are not limited to this exemplary embodiment, and may be applied to, for example, augment and improve the functionality of other commercially available authoring programs.
Within the system of
In addition to merely capturing screen shots from the application program 150, during edit-mode operation the presentation authoring program 156 enables a user to add captions, callouts, click boxes, images, and audio (.WAV and .MP3 files) to the movie. Textual descriptions in the form of captions and other objects may be added as desired. In addition, the authoring program also permits control of any mouse pointer movement which may occur during run-time mode (i.e., during playback of the file containing the movie or presentation). In this way it is possible to easily create, for example, full-featured product demonstrations, tutorials, and e-learning modules. In the exemplary embodiment various sequence controls (e.g., fast forward, play, stop) are made available for manipulation by the user during playback of the movie. The authoring program 156 will generally be configured to provide the opportunity for editing and arranging the frames comprising a given movie or presentation. These editing capabilities will generally enable, for example, the definition of characteristics such as mouse pointer shape, movement speed, and position.
Although in the exemplary embodiment the present invention is described with reference to screen shots of the type which may be created by a given application program, the teachings of the present invention may be equally applied to other varieties of “screen shots”. These may be derived from, for example, an imported movie (e.g., an AVI file), an imported or inserted image, a blank frame, a colored frame, imported presentations of various formats (e.g., PowerPoint presentations), and other files containing graphics information.
An initial step in using the presentation authoring program 156 to create a slide presentation or movie is to capture a sequence of screen shots generated by an application program or otherwise displayed by the computer 100. In the exemplary embodiment such screen shots are obtained using a desired combination of one or more capture modes depending upon the type of presentation it is desired to create. For example, in a Program capture mode a presentation may be created based on screen shots obtained from any application program 150 being executed by the CPU 110. In a Custom capture mode, the user of the computer is provided with the opportunity to specify the size and position of a capture window displayed by the presentation authoring program 156 upon a screen of the video monitor 128. In a Full Screen capture mode, all of the information within the “desktop” displayed by the monitor 128 is captured. Finally, in a Pocket PC capture mode, screens are captured in a resolution suitable for playback on a “Pocket PC” or similar handheld device.
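The four capture modes named above may be sketched as a simple mode-to-region mapping. The enum, function names, and the default desktop and Pocket PC dimensions below are illustrative assumptions, not values specified by the patent:

```python
from enum import Enum, auto

class CaptureMode(Enum):
    PROGRAM = auto()      # bounds of a running application program's window
    CUSTOM = auto()       # user-specified capture rectangle
    FULL_SCREEN = auto()  # the entire displayed desktop
    POCKET_PC = auto()    # fixed handheld-friendly resolution

def capture_region(mode, app_window=None, custom_rect=None, desktop=(1280, 1024)):
    """Return the (x, y, width, height) rectangle to grab for a given mode."""
    if mode is CaptureMode.PROGRAM:
        return app_window            # supplied by querying the application window
    if mode is CaptureMode.CUSTOM:
        return custom_rect           # sized and positioned by the user
    if mode is CaptureMode.FULL_SCREEN:
        return (0, 0, *desktop)      # everything shown on the monitor
    if mode is CaptureMode.POCKET_PC:
        return (0, 0, 240, 320)      # a handheld screen size typical of the era
    raise ValueError(mode)
```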
The presentation authoring program 156 may be configured to capture screen shots automatically or to permit a user to capture screen shots manually. In a default mode of the exemplary embodiment, screen shots are manually captured by the user of the computer 100. In this mode, a selectable button (e.g., the Print Screen button) on keyboard 120 is depressed each time it is desired to capture what is currently being displayed upon the screen of the video monitor 128. Desired changes may then be made to the screen via the keyboard 120 and/or mouse 124 (e.g., moving a mouse pointer position, "pressing" an icon, or selecting a drop-down menu). Once such changes have been made, Print Screen is again depressed in order to capture the next screen shot. Alternatively, in the automatic capture mode the current screen (or specified portion thereof) displayed by video monitor 128 is captured each time a user changes the focus of an active application program 156 or otherwise presses a key of the keyboard 120. As is described below, once the above process of recording a sequence of screen shots has been completed, editing operations such as the insertion of audio or captions and the selection of display options may be performed.
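The two trigger styles just described, manual capture on a designated key press versus automatic capture on focus changes or key presses, can be sketched as a simple event filter. The dictionary-based event representation is an assumption for illustration; the patent does not specify an event API:

```python
def should_capture(event, automatic: bool) -> bool:
    """Decide whether an input event should trigger a screen-shot capture."""
    kind = event.get("kind")
    if not automatic:
        # Default (manual) mode: only the designated capture key fires.
        return kind == "key" and event.get("key") == "PrintScreen"
    # Automatic mode: any focus change or key press captures the screen.
    return kind in ("focus_change", "key")

def record(events, automatic=False):
    """Return the indices of the events at which a screen shot would be taken."""
    return [i for i, e in enumerate(events) if should_capture(e, automatic)]
```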
Referring now to
In the exemplary embodiment the authoring program 156 is also configured to create a preliminary demonstration or “preview” of the movie or presentation being created. As the movie plays the user may start, stop, pause, and advance the current frame or return to the previous frame. While the movie is previewing, the user may click a displayed “Edit” button in order to enter the Frame Edit View with respect to the current frame.
Referring now to
As mentioned above, each frame 504 is derived from a screen shot which had been previously captured and stored. As the editing of each frame 504 is completed, the relevant data pertaining to mouse pointer shape, position and movement within such frame 504 is separately stored. With respect to mouse pointer position, in the exemplary embodiment only the location of the "active" pixel or "hotspot" within the mouse pointer is stored following "clicking" of the mouse pointer by the user via mouse 124. That is, the two-dimensional parameters of the mouse pointer will generally not be stored; only the single (x, y) grid location of the active pixel is retained.
Referring again to frame 504 b, a previous path portion arrow 540 b generally indicates the direction of the actual starting position 544 b (not shown in Frame Edit View) from which the mouse pointer 510 b moves during subsequent playback mode operation. Similarly, next path portion arrow 520 b generally indicates the direction in which the mouse pointer 510 b will be translated during playback mode until reaching actual ending position 530 b. Within the last frame 504 n, previous path portion arrow 540 n generally indicates the direction of the actual starting position 544 n (not shown in Frame Edit View) from which the mouse pointer 510 n moves during subsequent movie playback. The mouse pointer 510 n is positioned coincident with the actual ending position 530 n (not shown) within frame 504 n.
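The arrow geometry described above can be sketched directly: each edit-mode arrow points from the displayed pointer position toward the playback starting position (previous path portion) or ending position (next path portion). The function names and the unit-vector convention are illustrative assumptions:

```python
import math

def unit_direction(frm, to):
    """Unit vector from one (x, y) point toward another; (0, 0) if coincident."""
    dx, dy = to[0] - frm[0], to[1] - frm[1]
    length = math.hypot(dx, dy)
    if length == 0:
        # e.g., the last frame, where the pointer sits at the ending position
        return (0.0, 0.0)
    return (dx / length, dy / length)

def path_arrows(pointer, start, end):
    """Directions of the previous-path and next-path arrows for one frame."""
    return {
        "previous": unit_direction(pointer, start),  # toward playback start
        "next": unit_direction(pointer, end),        # toward playback end
    }
```

Note how the degenerate case models the last frame 504 n above, where the pointer coincides with the actual ending position and no next-path arrow direction exists.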
Although not indicated by
In many instances it will be desired to associate various captions or other objects with the mouse pointer appearing in one or more frames of a given movie or other sequenced presentation. However, in cases in which the transition between frames is triggered by a mouse “click” operation, it may be difficult to appropriately position the caption or object relative to the mouse pointer during edit-mode operation, since the caption/object may appear during a different (generally earlier) frame than does the mouse pointer during run-time mode. This could complicate the editing process, as a mouse pointer and its associated caption could inconveniently appear on different frames during edit mode.
In accordance with one aspect of the invention, a translation process occurs during operation in edit and run-time modes which facilitates desired positioning of captions or other objects relative to an associated mouse pointer. Specifically, during edit mode operation a user is permitted to position all captions and other objects within the frame in which the associated mouse pointer will actually appear during run-time mode. Upon completion of edit mode operation, the captions and other objects are translated to the frame in which they will actually appear during run-time mode (typically the frame preceding the frame in which the associated mouse pointer appears). In this way the captions or other objects are caused to appear within the correct frame during run-time mode, but the user is spared the inconvenience of positioning a caption/object on one frame relative to a mouse pointer on another frame during edit mode.
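The translation step described above may be sketched as a re-mapping of captions from the frame on which they were positioned during edit mode (the frame containing the associated mouse pointer) to the preceding frame, where they actually appear at run time. The data layout and edge-case handling here are assumptions for illustration:

```python
def translate_for_playback(captions_by_frame):
    """Map {frame_index: [captions]} from edit-mode placement to run-time placement.

    During edit mode each caption is positioned on the frame in which its
    associated mouse pointer appears; for playback it is shifted to the
    preceding frame so that it displays before the pointer's click transition.
    """
    translated = {}
    for frame_index, captions in captions_by_frame.items():
        # Captions on the first frame have no earlier frame to move to.
        target = max(frame_index - 1, 0)
        translated.setdefault(target, []).extend(captions)
    return translated
```

Applying the mapping once, at the transition from edit mode to run-time mode, preserves the editing convenience the passage describes: the author always works with caption and pointer on the same frame.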
In the discussion which follows, a “caption” comprises a text-based caption or an object linked thereto that is often in some way associated with a mouse pointer. Captions may be displayed during run-time mode either before or after movement or “clicking” of the associated mouse pointer. An “object” may comprise either a caption or another displayed item such as a highlight box or pop-up window. An object is generally displayed after a mouse pointer movement or clicking operation, unless it is associated with a caption that is specified to be displayed prior to mouse movement.
Once edit mode operation has been completed and the user opts to preview or run the movie or other presentation, the inventive translation process occurs. In particular, a copy of Frame1 is made and displayed to the user via monitor 128 as the “Alpha Frame” 812 in
Referring now to
Turning now to
Referring to the screen display of
Again referring to table 1210, a caption 1310 created during edit mode with reference to a Frame4 1304 screen display (
The foregoing description, for purposes of explanation, used specific nomenclature to provide a thorough understanding of the invention. However, it will be apparent to one skilled in the art that the specific details are not required in order to practice the invention. In other instances, well-known circuits and devices are shown in block diagram form in order to avoid unnecessary distraction from the underlying invention. Thus, the foregoing descriptions of specific embodiments of the present invention are presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed; obviously, many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the following claims and their equivalents define the scope of the invention.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US5745738||May 29, 1996||Apr 28, 1998||Microsoft Corporation||Method and engine for automating the creation of simulations for demonstrating use of software|
|US5826102||Sep 23, 1996||Oct 20, 1998||Bell Atlantic Network Services, Inc.||Network arrangement for development delivery and presentation of multimedia applications using timelines to integrate multimedia objects and program objects|
|US5850548 *||Nov 14, 1994||Dec 15, 1998||Borland International, Inc.||System and methods for visual programming based on a high-level hierarchical data flow model|
|US6020886||Sep 4, 1996||Feb 1, 2000||International Business Machines Corporation||Method and apparatus for generating animated help demonstrations|
|US6167562 *||May 8, 1997||Dec 26, 2000||Kaneko Co., Ltd.||Apparatus for creating an animation program and method for creating the same|
|US6404441||Jul 16, 1999||Jun 11, 2002||Jet Software, Inc.||System for creating media presentations of computer software application programs|
|US6467080||Jun 24, 1999||Oct 15, 2002||International Business Machines Corporation||Shared, dynamically customizable user documentation|
|1||*||Shannon, "RoboDemo 3.0-eHelp", New Jersey PC User Group, pp. 1-6, and 68 pages of Demo, Apr. 2002.|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US7594183 *||Aug 12, 2004||Sep 22, 2009||International Business Machines Corporation||Capturing a workflow|
|US7779355 *||Aug 17, 2010||Ricoh Company, Ltd.||Techniques for using paper documents as media templates|
|US8307283||Nov 6, 2012||Google Inc.||Live demonstration of computing device applications|
|US8332765||Dec 11, 2012||Microsoft Corporation||Problem reporting system based on user interface interactions|
|US8386928 *||Mar 18, 2004||Feb 26, 2013||Adobe Systems Incorporated||Method and system for automatically captioning actions in a recorded electronic demonstration|
|US8566718||Jul 29, 2011||Oct 22, 2013||Google Inc.||Live demonstration of computing device applications|
|US8572603||Sep 4, 2009||Oct 29, 2013||Adobe Systems Incorporated||Initializing an application on an electronic device|
|US8930575 *||Mar 30, 2011||Jan 6, 2015||Amazon Technologies, Inc.||Service for automatically converting content submissions to submission formats used by content marketplaces|
|US9110688 *||Jul 31, 2006||Aug 18, 2015||Adobe Systems Incorporated||System and method for representation of object animation within presentations of software application programs|
|US9372779||May 2, 2014||Jun 21, 2016||International Business Machines Corporation||System, method, apparatus and computer program for automatic evaluation of user interfaces in software programs|
|US20060036958 *||Aug 12, 2004||Feb 16, 2006||International Business Machines Corporation||Method, system and article of manufacture to capture a workflow|
|US20060064641 *||Sep 6, 2005||Mar 23, 2006||Montgomery Joseph P||Low bandwidth television|
|US20070033574 *||Jul 31, 2006||Feb 8, 2007||Adobe Systems Incorporated||System and method for representation of object animation within presentations of software application programs|
|US20090083710 *||Oct 9, 2007||Mar 26, 2009||Morse Best Innovation, Inc.||Systems and methods for creating, collaborating, and presenting software demonstrations, and methods of marketing of the same|
|US20100229112 *||Mar 6, 2009||Sep 9, 2010||Microsoft Corporation||Problem reporting system based on user interface interactions|
|US20100318916 *||Jun 11, 2010||Dec 16, 2010||David Wilkins||System and method for generating multimedia presentations|
|US20110200305 *||Aug 18, 2011||Dacreous Co. Limited Liability Company||Low bandwidth television|
|US20130205207 *||Feb 23, 2013||Aug 8, 2013||Adobe Systems Incorporated||Method and system for automatically captioning actions in a recorded electronic demonstration|
|U.S. Classification||717/113, 717/109|
|International Classification||G06F9/44, G06F3/048|
|Cooperative Classification||G06F3/04812, G06F9/4446|
|European Classification||G06F3/0481C, G06F9/44W2|
|Jul 10, 2003||AS||Assignment|
Owner name: EHELP CORPORATION, CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:NIRELL, MAGNUS;KIFT, CECIL;RICH, NATHAN;AND OTHERS;REEL/FRAME:014270/0404;SIGNING DATES FROM 20030708 TO 20030710
|Mar 14, 2006||AS||Assignment|
Owner name: ADOBE SYSTEMS INCORPORATED, CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MACROMEDIA, INC.;REEL/FRAME:017300/0770
Effective date: 20051207
|Mar 27, 2006||AS||Assignment|
Owner name: MACROMEDIA, INC., CALIFORNIA
Free format text: MERGER;ASSIGNORS:MACROMEDIA, INC.;AMPLITUDE ACQUISITION CORPORATION;EHELP CORPORATION;REEL/FRAME:017371/0001
Effective date: 20031022
|Jan 20, 2010||FPAY||Fee payment|
Year of fee payment: 4
|Jan 2, 2014||FPAY||Fee payment|
Year of fee payment: 8