WO2001016752A1 - Method and system for software object testing

Method and system for software object testing

Info

Publication number
WO2001016752A1
WO2001016752A1 (application PCT/US2000/023303)
Authority
WO
WIPO (PCT)
Prior art keywords
test
code
application under
data
Prior art date
Application number
PCT/US2000/023303
Other languages
French (fr)
Inventor
George Friedman
Michael V. Glik
Theodore M. Osborne, III
Caren H. Baker
Walter G. Vahey
Original Assignee
Empirix Inc.
Priority date
Filing date
Publication date
Application filed by Empirix Inc.
Priority to AU69348/00A
Publication of WO2001016752A1

Classifications

    • G — PHYSICS; G06 — COMPUTING; CALCULATING OR COUNTING; G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 — Error detection; Error correction; Monitoring
    • G06F 11/30 — Monitoring
    • G06F 11/34 — Recording or statistical evaluation of computer activity, e.g. of down time or of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3409 — Recording or statistical evaluation of computer activity for performance assessment
    • G06F 11/3414 — Workload generation, e.g. scripts, playback
    • G06F 11/3419 — Performance assessment by assessing time
    • G06F 11/3466 — Performance evaluation by tracing or monitoring
    • G06F 11/3495 — Performance evaluation by tracing or monitoring for systems
    • G06F 2201/00 — Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F 2201/875 — Monitoring of systems including the internet

Definitions

  • This invention relates generally to computer software applications and more specifically to testing computer software applications.
  • An enterprise-wide application is an application that allows a large group of people to work together on a common task.
  • An enterprise-wide application performs functions that are essential to a company's business. For example, in a bank, people at every bank branch must be able to access a database of accounts for every bank customer. Likewise, at an insurance company, people all over the company must be able to access a database containing information about every policyholder.
  • The software that performs these functions is generally known as an enterprise-wide application.
  • An architecture which is currently popular is called the N-Tier enterprise model.
  • the most prevalent N-tier enterprise model is a three tier model.
  • the three tiers are the front end, the middleware and the back end.
  • the back end is the database.
  • the front end is sometimes referred to as a "client” or a Graphical User Interface (GUI).
  • the middleware is the software that manages interactions with the database and captures the "business logic.” Business logic tells the system how to validate, process and report on the data in a fashion that is useful for the people in the enterprise.
  • the middleware resides on a computer called a server.
  • the database might be on the same computer or a different computer.
  • the "client” is usually on an individual's personal computer.
  • All of the computers are connected together through a network. Because many people use the enterprise-wide application, such systems are set up to allow simultaneous users, and there would be many clients connected to a single server. Often, many clients will be connected to the server simultaneously.
  • the N-tiered model also describes many internet web sites that sell goods or services. For example, a web site that auctions cars is likely to fit the N-tiered model.
  • databases are provided to track buyers, sellers and objects being auctioned. Also, a database must be provided to track the bids as they are entered.
  • the middleware provides the access to these databases and encapsulates the business logic around such transactions as when to accept a bid, when to declare an item sold, etc.
  • Enterprise JavaBeans (EJB) by Sun Microsystems; COM, DCOM and COM+ by Microsoft Corporation; and CORBA by IBM are examples of component specification standards that are commercially available.
  • EJB is used as an example of a component standard used to implement middleware in an N-tiered model, but it should be appreciated that the concepts described herein could be used with other component standards.
  • EJBs are written in the JAVA language, which is intended to be "platform independent.”
  • Platform independent means that an application is intended to perform the same regardless of the hardware and operating system on which it is operating.
  • Platform independence is achieved through the use of a "container.”
  • A container is software that is designed for a specific platform. It provides a standardized environment that ensures the application written in the platform independent language operates correctly. The container is usually commercially available software and the application developer will buy the container rather than create it.
  • Componentized software is software that is designed to allow different pieces of the application, or "objects", to be created separately but still to have the objects work together. For this to happen, the objects must have standard interfaces that can be understood and accessed by other objects. Some parts of these interfaces are enforced by the software language.
  • applications have been tested in one of two ways.
  • the objects are tested as they are written. Each is tested to ensure that it performs the intended function.
  • the entire application is then usually tested.
  • application testing has generally been done by applying test inputs at the client end and observing the response of the application.
  • One shortcoming of this approach is that it is relatively labor intensive, particularly to develop a load or scalability test.
  • Some tools, called "profilers," have been available. However, these tools track things such as disk usage, memory usage or thread usage of the application under test. They do not provide data about performance of the application based on load.
  • TestMaster works from a state model of the application under test. Such an application is very useful for generating functional tests during the development of an application. Once the model of the application is specified, TestMaster can be instructed to generate a suite of tests that can be tailored for a particular task - such as to fully exercise some portion of the application that has been changed. Model based testing is particularly useful for functional testing of large applications, but is not fully automatic because it requires the creation of a state model of the application being tested.
  • A second shortcoming of testing enterprise-wide applications is that the critical performance criteria to measure often relate to how the application behaves as the number of simultaneous users increases.
  • test system that simulates use of a particular software object within an application by a plurality of simultaneous users.
  • the number of simultaneous users simulated is varied.
  • load testing is done on individual components in the application.
  • the test system analyzes response time measurements from plural software objects within the application and predicts which software object within the application is likely to be a performance bottleneck.
  • performance settings within the application can also be varied to determine optimum settings to reduce performance bottlenecks.
  • the format of the output may be specified by the user to aid in understanding the performance of the application under test in response to load.
  • FIG. 1 is an illustration of an application under test by the test system of the invention
  • FIG. 2 is an illustration showing the test system of the invention in greater detail
  • FIG. 3 is an illustration showing the coordinator of FIG. 2 in greater detail
  • FIG. 4 is a flow chart illustrating the process of coordinating execution of load tests
  • FIG. 5 is an illustration showing the code generator of FIG. 2 in greater detail
  • FIG. 6 illustrates a user interface of test system 110 during a setup phase
  • FIG. 7 illustrates a user interface of test system 110 during the specification of a test case
  • FIG. 8 illustrates a user interface of test system 110 during a different action as part of the specification of a test case
  • FIG. 9 illustrates a user interface of test system 110 during a different action as part of the specification of a test case
  • FIG. 10 illustrates a user interface of test system 110 during a different action as part of the specification of a test case;
  • FIG. 11 illustrates a user interface of test system 110 during a phase for reviewing the results of a test case in tabular format
  • FIG. 12 illustrates a user interface of test system 110 during a phase for reviewing the results of a test case in graphical format
  • FIG. 13 illustrates a user interface of test system 110 during a phase for reviewing the results of a test case in an alternative graphical format
  • FIG. 14 illustrates a user interface of test system 110 during a phase for reviewing the results of a test case in a tabular format.
  • FIG. 1 illustrates a test system 110 according to the present invention.
  • the system is testing application under test 114.
  • application under test 114 is an application in the N-tiered model. More specifically, it is a three tiered database application.
  • Application under test 114 could represent a database for a bank or an insurance company or it could represent an Internet application. The specific functions of application under test 114 are not important to the invention.
  • Where test system 110 and the application under test 114 reside is not important to the invention. It is sufficient if there is some connection between the two. In the illustration, that connection is provided by network 122. In the illustrated embodiment, it is contemplated that network 122 is part of a LAN operated by the owner of application under test 114, such as an Ethernet network. In this scenario, test system 110 is installed on a server within the LAN. However, many other implementations are possible. For example, network 122 could be a WAN owned by the owner of application under test 114.
  • network 122 could be the Internet.
  • test system 110 could be located in a server owned by a testing company.
  • test system and application under test could be located in computers owned by a testing company. Many applications are written using platform independent technology such that the application will perform the same on many different platforms. Platform independent technology is intended to make it easy to run an application on any platform. Therefore, the application under test could be sent to a testing company, owning the hardware to implement test system 110, such as by uploading over the Internet. Thereafter, the application under test could be tested as described herein while running on a platform provided by the testing company with the results of the test being downloaded over the Internet.
  • Test system 110 and application under test 114 could physically be implemented in the same computer. However, that implementation is not presently preferred because a single computer would have to be very large or would be limited in the size of applications that could be tested. The presently preferred embodiment uses several computers to implement test system 110.
  • Application under test 114 is a software application as known in the art. It includes middleware 116 that encapsulates some business logic. A user accesses the application through a client device. Many types of client devices are possible, with the list growing as networks become more prevalent. Personal computers, telephone systems and even household appliances with micro-controllers could be the client device. For simplicity, the client device is illustrated herein as a personal computer (PC) 120, though the specific type of client device is not important to the invention. PC 120 is connected to network 122 and can therefore access application under test 114. In use, it is contemplated that there would be multiple users connected to application under test 114, but only one user is shown for simplicity. The number of users simultaneously accessing application under test 114 is one indication of the "load" on the application.
  • Software to manage interactions between multiple users and an application is known. Such software is sometimes called a web server. Web servers operate in conjunction with a browser, which is software commonly found on most PC's.
  • the web server and browser exchange information in a standard format known as HTML.
  • An HTML file contains tags, with specific information associated with each tag.
  • the tag signals to the browser a type associated with the information, which allows the browser to display the information in an appropriate format. For example, the tag might signal whether the information is a title for the page or whether the information is a link to another web page.
  • the browser creates a screen display in a particular window running on PC 120 based on one or more HTML pages sent by the web server.
  • GUI 124 passes the information and commands it receives on to middleware 116.
  • middleware 116 is depicted as a middleware application created with EJBs.
  • Containers 130 are, in a preferred embodiment, commercially available containers. Within a container are numerous enterprise Java beans 132. Each Java bean can more generally be thought of as a component.
  • GUI 124, based on the information from user PC 120, passes the information to the appropriate EJB 132. Outputs from application under test 114 are provided back through GUI 124 to PC 120 for display to a user.
  • EJB's 132 in the illustrated example, collectively implement a database application.
  • EJB's 132 manage interactions with and process data from databases 126 and 128. They will perform such database functions as setting values in a particular record or getting values from a particular record. Other functions are creating rows in the database and finding rows in the database.
  • EJB's that access the database are often referred to as "entity beans.”
  • Session beans perform such functions as validating data entries or reporting to a user that an entry is erroneous. Session beans generally call entity beans to perform database access.
  • Test system 110 is able to access the EJB's 132 of application under test 114 over network 122. In this way, each bean can be exercised for testing.
  • the tests are predominantly directed at determining the response time of the beans - or more generally determining the response time of components or objects used to create the application under test. Knowing the response time of a bean can allow conclusions about the performance of an application. The details of test system 110 are described below.
  • test system 110 is software installed on one or more servers. It is conceptually much like application under test 114.
  • test system 110 is a JAVA application.
  • test system 110 is controlled through a graphical user interface 150. GUI 150 might be a web server as known in the art.
  • One or more application developers or test engineers might access test system over the network 122.
  • PC's 152 and 154 are PC's used by testers who will control the test process.
  • Test system 110 might be testing different applications at the same time.
  • Test system 110 performs several functions. One function is the generation of test code. A second function is to execute the test code to exercise one or more EJB's in the application under test. Another function is to record and analyze the results of executing the test code. These functions are performed by software running on one or more computers connected to network 122. The software is written using a commercially available language to perform the functions described herein.
  • FIG. 2 shows that test system 110 has a distributed architecture.
  • Software components are installed on several different computers. Multiple computers are used to provide capability for multiple users, to allow a user to perform multiple tasks, and to run very large tests. The specific number of computers and the distribution of software components of the test system on those computers is not important to the invention.
  • Coordinator 210 is a software application that interfaces with GUI 150. The main purpose of coordinator 210 is to route user requests to an appropriate server in a fashion that is transparent to a user. Turning to FIG. 3, coordinator 210 is shown in greater detail. It should be appreciated, though, that FIG. 3 shows the conceptual structure of coordinator 210. Coordinator 210 might not be a single, separately identified piece of software. It might, for example, be implemented as coordination software within the various other components of test system 110. Also, it should be realized that a web server used to implement GUI 150 also provides coordination functions, such as queuing multiple requests from an individual or coordinating multiple users.
  • Coordinator 210 contains distribution unit 312.
  • Distribution unit 312 is preferably a software program running on a server. As user requests are received from
  • GUI 150 they are received by distribution unit 312.
  • distribution unit 312 determines the type of resource needed to process the request. For example, a request to generate code must be sent to a server that is running a code generator.
  • Coordinator 210 includes several queues to hold the pending requests. Each queue is implemented in the memory of the server implementing coordinator 210. In FIG. 3, queues 318A...318C are illustrated. Each queue 318A...318C corresponds to a particular type of resource. For example, queue 318A could contain code generator requests, queue 318B could contain test engine requests and queue 318C could contain data analysis requests.
  • Distribution unit 312 sends each request to one of the queues 318A...318C, based on the type of resources needed to process the request.
  • Associated with each queue 318A...318C is a queue manager 320A...320C.
  • Each queue manager is preferably implemented as software running on the server implementing coordinator 210 or the server implementing the relevant piece of coordinator 210.
  • Each queue manager maintains a list of servers within test system 110 that can respond to the requests in its associated queue.
  • a queue manager sends the request at the top of the queue to a server that is equipped to handle the request.
  • the connection between the queue manager and the servers equipped to handle the requests is over network 122. If there are other servers available and still more requests in the queue, the queue manager will send the next request in the queue to an available server. When there are no available servers, each queue manager waits for one of the servers to complete the processing of its assigned request.
  • the servers such as the code generators and the test engines report back to the queue managers.
  • the queue managers send another request from the queue and also provide the results back to the distribution unit 312.
  • Distribution unit 312 can then reply back to the user that issued the request, indicating that the request was completed and either giving the results or giving the location of the results.
  • the user might receive an indication of where the test code is stored.
  • the user might receive a report of the average execution time for the test or the location of a file storing each measurement made during the test.
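As a rough illustration of the routing just described, the following sketch shows how a distribution unit might place each request on a queue keyed by the kind of resource that can serve it, with a queue manager taking requests from its queue as servers become free. The type names (ResourceType, Request, DistributionUnit) and method names are illustrative assumptions and do not appear in the patent.

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical request categories corresponding to queues 318A...318C.
enum ResourceType { CODE_GENERATOR, TEST_ENGINE, DATA_ANALYZER }

class Request {
    final ResourceType type;
    final String payload;
    Request(ResourceType type, String payload) { this.type = type; this.payload = payload; }
}

class DistributionUnit {
    private final Map<ResourceType, BlockingQueue<Request>> queues = Map.of(
            ResourceType.CODE_GENERATOR, new LinkedBlockingQueue<>(),
            ResourceType.TEST_ENGINE, new LinkedBlockingQueue<>(),
            ResourceType.DATA_ANALYZER, new LinkedBlockingQueue<>());

    // Route each user request to the queue for the type of resource it needs.
    void submit(Request request) throws InterruptedException {
        queues.get(request.type).put(request);
    }

    // A queue manager for a given resource type takes the request at the top of its queue
    // and hands it to whichever server of that type is available.
    Request nextFor(ResourceType type) throws InterruptedException {
        return queues.get(type).take();
    }
}
```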
  • GUI 150 will allow a user to enter a command that indicates code should be generated to test a particular application. Once the code is generated, GUI 150 allows the user to specify that a test should be run using that test code. It is possible that some requests will require the coordination of multiple hardware elements. As will be described hereafter, one of the functions of the test engines is to apply a load that simulates multiple users.
  • FIG. 4 shows the process by which queue manager 320B coordinates the actions of test engines located on separate servers.
  • queue manager 320B waits for the required number of test engines to become available.
  • queue manager 320B sends commands to each test engine that will be involved in the test to download the test code from the appropriate one of the code generators 212A and 212B.
  • Queue manager 320B then begins the process of synchronizing the test engines located on different servers. Various methods could be used to synchronize the servers.
  • each server could be equipped with radio receivers that receive satellite transmissions that are intended to act as time reference signals, such as in a GPS system. Calibration processes could alternatively be used to determine the amount of time it takes for a command to reach and be processed by each server. Commands could then be sent to each server at times offset to make up for the time differences. In the preferred embodiment, a simple process is used.
  • the queue manager sends a message to each server that will be acting as a test engine. The message asks that server to report the time as kept by its own internal circuitry.
  • Upon receiving the internal time as kept by each of the servers, at step 416 queue manager 320B adds the same offset to each local time. At step 418, queue manager 320B sends the offset times back to the servers.
  • the offset local times become the local starting time of the test for each server. Each server is instructed to start the test when its local time matches the offset local time. In this way, all the servers start the test simultaneously.
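A minimal sketch of this offset-based synchronization follows. The TestEngine interface, its method names, and the fixed lead time are illustrative assumptions, not elements of the patent.

```java
import java.util.List;

// Illustrative view of a server acting as a test engine.
interface TestEngine {
    long reportLocalTime();                  // report the time kept by the server's own clock
    void scheduleStart(long localStartTime); // start the test when the local clock reaches this time
}

class StartTimeSynchronizer {
    private static final long OFFSET_MILLIS = 5_000; // assumed lead time before the test begins

    // Each engine reports its local clock, the same offset is added to every reading,
    // and the offset local time becomes that engine's start time, so all engines start together.
    static void synchronize(List<TestEngine> engines) {
        for (TestEngine engine : engines) {
            long localTime = engine.reportLocalTime();
            engine.scheduleStart(localTime + OFFSET_MILLIS);
        }
    }
}
```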
  • Queue manager 320B waits for the execution of the test code at block 420.
  • Some test cases will entail multiple tests.
  • a check is made of whether the particular test case being executed requires more tests.
  • one kind of test case that test system 110 will run is a load test.
  • multiple test engines, each executing multiple client threads, execute to simulate multiple users accessing application under test 114.
  • An operating parameter of application under test 114 is then measured. In the preferred embodiment, the number of simultaneous users being simulated can be varied and the operating parameter measured again. Taking data at multiple load conditions allows test system 110 to determine the effect of load on application under test 114. Measurements of these types require that a full test case include multiple repetitions of the same test with different load conditions.
  • At step 424, the number of client threads is increased as needed for the new test case. Execution then loops back to step 410.
  • queue manager 320B waits for the required number of test engines to be available. When the hardware is available, the test is performed through steps 412, 414, 416, 418 and 420 in the same manner as for the first test condition.
  • At step 422, a check is made as to whether the request entails multiple test conditions. If there are no further test conditions, the request from queue 318B is considered complete. If there are further test conditions that need to be run to complete the request, the process is again repeated.
  • test code generators 212A and 212B are used in the preferred embodiment to create the code. Turning to FIG. 5, greater details of a code generator 212 are shown.
  • Code generator 212 contains several scripts 512. Each script is a sequence of steps that code generator 212 must perform to create code that performs a certain type of test.
  • the scripts can be prepared in any convenient programming language. For each type of test that test system 110 will perform, a script is provided. User input on the type of test that is desired specifies which script 512 is to be used for generating code at any given time. Appendix 1 contains an example of a script.
  • the selected script 512 assembles test code 516. The information needed to assemble test code 516 comes from several sources. One source of information is the test templates 514. There are some steps that are needed in almost any kind of test. For example, the object being tested must be deployed and some initialization sequence is required.
  • there must be test code that starts the test at a specified start time, and the ending time of the test must be recorded. Also, there must be code that causes the required data to be logged during the test. After the test, there might also be some termination steps that are required. For example, where the initialization started with a request for a reference to a particular EJB, the test code will likely terminate with that reference being released. The test code to cause these steps to be performed is captured in the set of templates 514.
  • There might be different templates to ensure that the test code 516 appropriately reflects inputs provided by the user.
  • different containers might require different command formats to achieve the same result.
  • One way these different formats can be reflected in the test code 516 is by having different templates for each container.
  • a user might be able to specify the type of information that is to be recorded during a test.
  • a data logging preference might be implemented by having a set of templates that differ in the command lines that cause data to be recorded during a test.
  • An example template is shown in Appendix 2.
  • code generator 212 generates code to test a specific EJB in an application under test.
  • One piece of information that will need to be filled in for many templates is a description of the EJB being tested.
  • Another piece of information that might be included is user code to put the application under test in the appropriate state for a test. For example, in testing a component of an application that manages a database of account information for a bank, it might be necessary to have a specific account created in the database to use for test purposes or it might otherwise be necessary to initialize an application before testing it.
  • the code needed to cause these events might be unique to the application and will therefore be best inserted into the test code by the tester testing the application. In the illustrated embodiment, this code is inserted into the template and is then carried through to the final test code.
  • the template might also contain spaces for a human tester to fill in other information, such as specific data sets to use for a test.
  • data sets are provided by the human user in the form of a data table.
  • Code generator 212 could generate functional tests. Functional tests are those tests that are directed at determining whether the bean correctly performs its required functions. In a functional test, the software under test is exercised with many test cases to ensure that it operates correctly in every state. Data tables indicating expected outputs for various inputs are used to create functional test software. However, in the presently preferred embodiment, code generator 212 primarily generates test code that performs load tests. In a load test, it is not necessary to stimulate the software under test to exercise every possible function and combination of functions the software is intended to perform. Rather, it is usually sufficient to provide one test condition. The objective of the load test is to measure how operation of the software degrades as the number of simultaneous users of the application increases.
  • test system 110 contains scripts 512 to implement various types of load tests.
  • One type of load test determines response time of an EJB. This allows the test system to vary the load on the EJB and determine degradation of response time in response to increased load.
  • Another type of load test is a regression type load test. In a regression type test, the script runs operations to determine whether the EJB responds the same way as it did to some baseline stimulus. In general, the response to the baseline stimulus represents the correct operation of the EJB. Having a regression type test allows the test system 110 to increase the load on a bean and determine the error rate as a function of load.
  • To generate test code 516 for these types of load tests, the script 512 must create test code that is specific to the bean under test.
  • the user provides information on which bean to test through GUI 150. In the preferred embodiment, this information is provided by the human tester providing the name of the file within the application under test that contains the "deployment descriptor" for the specific bean under test. This information specifies where in the network to find the bean under test. Script 512 uses this information to ascertain what test code must be generated to test the bean.
  • Script 512 can generate code by using the attributes of the platform independent language in which the bean is written.
  • each bean has an application program interface called a "reflection.” More particularly, each bean has a "home” interface and a "remote” interface.
  • the "home” interface reveals information about the methods for creating or finding a remote interface in the bean.
  • the remote interface reveals how this code can be accessed from client software.
  • the home and remote interfaces provide the information needed to create a test program to access the bean.
  • any program can determine what are known as the "properties" and "methods" of a bean.
  • the properties of a bean describe the data types and attributes for a variable used in the bean. Every variable used in the bean must have a property associated with it.
  • script 512 can automatically determine what methods need to be exercised to test a bean and the variables that need to be generated in order to provide stimulus to the methods.
  • the variables that will be returned by the methods as they are tested can also be determined. In the preferred embodiment, this information is stored in symbol table 515.
  • Symbol table 515 is a file in any convenient file format. Many formats for storing tabular data are known, such as .xml format. Once the information on the methods and properties are captured in a table, script 512 can use this information to create test code that exercises the methods and properties of the particular component under test. In particular, script 512 can automatically create a variable of the correct data type and assign it a value consistent with that type for any variable used in the bean.
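The following is a minimal sketch, using only standard Java reflection, of how a script might enumerate the methods of a remote interface and emit rows suitable for a simple symbol table. It is illustrative only: a real EJB home or remote interface would typically be located through the container's naming service rather than a bare Class.forName, and the row layout is an assumption.

```java
import java.lang.reflect.Method;

public class InterfaceInspector {
    // Prints one comma-separated row per method: name, parameter types, return type.
    // Rows like these could be collected into a file such as symbol table 515.
    public static void describe(String remoteInterfaceName) throws ClassNotFoundException {
        Class<?> remote = Class.forName(remoteInterfaceName);
        for (Method method : remote.getMethods()) {
            StringBuilder row = new StringBuilder(method.getName());
            for (Class<?> parameter : method.getParameterTypes()) {
                row.append(',').append(parameter.getName());
            }
            row.append(',').append(method.getReturnType().getName());
            System.out.println(row);
        }
    }
}
```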
  • FIG. 5 shows a data generator 518.
  • Data generator 518 uses the information derived from the reflection interface to generate values for variables used during testing of a bean. There are many ways that appropriate values could be generated for each variable used in the test of a particular bean. However, in the commercial embodiment of the present invention, the user is given a choice of three different algorithms that data generator 518 will use to generate data values. The user can specify "maximum," "minimum" or "random." If the maximum choice is specified, data generator 518 analyzes the property description obtained through the reflection interface and determines the maximum permissible value. If the user specifies "minimum" then data generator 518 generates the smallest value possible. If the user specifies random, data generator 518 selects a value at random between the maximum and the minimum.
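A minimal sketch of the three data-generation choices is shown below for two property types. The type coverage and the assumed string length limit are illustrative; a real generator would handle every property type revealed through reflection.

```java
import java.util.Random;

class SimpleDataGenerator {
    private static final Random RANDOM = new Random();

    // Returns a value of the requested type according to the "maximum", "minimum"
    // or "random" choice described above.
    static Object generate(Class<?> type, String mode) {
        if (type == int.class || type == Integer.class) {
            if ("maximum".equals(mode)) return Integer.MAX_VALUE;
            if ("minimum".equals(mode)) return Integer.MIN_VALUE;
            return RANDOM.nextInt();
        }
        if (type == String.class) {
            if ("maximum".equals(mode)) return "X".repeat(255); // assumed maximum length
            if ("minimum".equals(mode)) return "";
            return Long.toHexString(RANDOM.nextLong());
        }
        throw new IllegalArgumentException("No generator for type " + type.getName());
    }
}
```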
  • In many instances where a load test is desired, the exact value of a particular variable is not important. For example, when testing whether a bean can properly store and retrieve a value from a database, it usually does not matter what value is stored and retrieved. It only matters that the value that is read from the database is the same one that was stored. Or, when timing the operation of a particular bean, it will often not matter what values are input to the method. In these scenarios, data generator 518 can automatically generate the values for variables used in the test code. In cases where the specific values of the variables used in a test are important, code generator 212 provides the user with another option. Rather than derive values of variables from data generator 518, script 512 can be instructed to derive data values from a user provided data table 520. A user might, for example, want to provide a data table even for a load test when the execution time of a particular function would depend on the value of the input data.
  • a data table is implemented simply as a file on one of the computers on network 122.
  • the entries in the table specifying values for particular variables to use as inputs and outputs to particular methods, are separated by delimiters in the file.
  • a standard format for such a table is "comma separated values" or CSV.
  • test system 110 includes a file editor - of the type using conventional technology - for creating and editing such a file.
  • test system 110 would likely include the ability to import a file - again using known techniques - that has the required format.
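For illustration only, a data table of this kind might look like the comma-separated file below. The column layout (method name, input value, expected output) is an assumption; the patent specifies only that entries are separated by delimiters.

```csv
method,input,expected
setSSN,123-45-6789,
getSSN,,123-45-6789
findBySSN,123-45-6789,TEST-ROW-1
```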
  • the methods of a bean describe the functions that bean can perform. Part of the description of the method is the properties of the variables that are inputs or outputs to the method.
  • Because script 512 can determine the code needed to invoke any method and, as described above, can generate data values suitable to provide as inputs to that method, script 512 can generate code to call any method in the bean.
  • script 512 can automatically generate useful test code by invoking each method of the bean.
  • the order in which the methods are invoked does not matter if the only parameter that is measured is the time it takes the methods to execute.
  • entity beans for controlling access to a database should have methods that have a prefix "set” or “get”. These prefixes signal that the method is either to write data into a database or to read data from the database.
  • the suffix of the method name indicates which value is to be written or read in the database. For example, a method named setSSN should perform the function of writing into a database a value for a parameter identified as SSN. A method named getSSN should read the value from the parameter named SSN.
  • script 512 can generate code to exercise and verify operation of both methods.
  • a piece of test code generated to test these methods would first exercise the method setSSN by providing it an argument created by data generator 518. Then, the method getSSN might be exercised. If the get method returns the same value as the argument that was supplied to the set method, then it can be ascertained that the database access executed as expected.
  • Some beans also contain methods that create or find rows in a database. By convention, methods that create or find rows in a database are named starting with "create" or "find." Thus, by reflecting the interface of the bean, script 512 can also determine how to test these methods. These methods can be exercised similarly to the set and get methods. The properties revealed through the application interface will describe the format of each row in the database. Thus, when a create method is used, data can be automatically generated to fill that row, thereby fully exercising the create method. In a preferred embodiment, find methods are exercised using data from a user supplied data table 520. Often, databases have test rows inserted in them specifically for testing. Such a test row would likely be written into data table 520. However, it would also be possible to create a row, fill it with data and then exercise a find method to locate that row.
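Appendix 3 contains the actual example of generated test code; as a hedged sketch only, convention-based client test code for a set/get pair might look like the following. The Customer and CustomerHome interfaces are hypothetical names, and the timing and logging calls discussed below are omitted here.

```java
// Hypothetical remote and home interfaces of the bean under test.
interface Customer {
    void setSSN(String ssn) throws Exception;
    String getSSN() throws Exception;
    void remove() throws Exception;
}

interface CustomerHome {
    Customer create(String primaryKey) throws Exception;
}

public class CustomerSetGetTest {
    // Exercises a create method, then the matching set/get pair, and checks that the
    // value read back from the database equals the value that was stored.
    public static boolean run(CustomerHome home) throws Exception {
        Customer customer = home.create("TEST-ROW-1");   // create a row to test against
        String value = "123-45-6789";                    // value from the data generator or data table
        customer.setSSN(value);                          // write the value into the database
        String readBack = customer.getSSN();             // read it back
        customer.remove();                               // clean up the test row
        return value.equals(readBack);                   // true if the database access worked
    }
}
```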
  • script 512 can also insert into the client test code 516 the commands that are necessary to record the outputs of the test. If a test is checking for numbers of errors, then test code 516 needs to contain instructions that record errors in log 216. Likewise, if a test is measuring response time, the test code 516 must contain instructions that write into log 216 the information from which response time can be determined.
  • all major database functions can be exercised with no user supplied test code. In some instances, it might be possible to exercise all the functions with all test data automatically generated. All the required information could be generated from just the object code of the application under test.
  • An important feature of the preferred embodiment is that it is "minimally invasive" - meaning that very little is required of the user in order to conduct a test and the test does not impact the customer's environment. There is no invasive test harness. The client code runs exactly like the code a user would write.
  • Test system 110 allows for the user to edit test code 516. For example, some beans will perform calculations or perform functions other than database access. In these instances, the user might choose to insert test code that exercises these functions. However, because bottlenecks in the operation of many N-tiered applications occur in the entity beans that manage database transactions, the simple techniques described herein are very useful.
  • Appendix 3 gives an example of test code that has been generated.
  • test system 110 is ready to execute a test.
  • User input to coordinator 210 triggers the test.
  • coordinator 210 queues up requests for tests and tests are executed according to the process pictured in FIG. 4.
  • the test code 516 is executed as a single thread on one of the test engines 214A...214C.
  • multiple threads are initiated on one or more of the test engines 214A...214C.
  • the results of any tests are stored in log 216.
  • FIG. 2 shows log 216 as a separate hardware element attached to network 122.
  • Log 216 could be any type of storage traditionally found in a network, such as a tape or disk drive attached to a computer server. For simplicity, it is shown as a separate unit, but could easily be part of any other server in the network.
  • response time could be measured.
  • the startup time is the time it takes from when the bean is first accessed until the first method is able to execute.
  • response time is to measure the time it takes for each method to execute.
  • response time could be measured based on how long it takes just the "get-" methods to execute or just the "set-” methods to execute.
  • test code simply records the time that the portion of the test code that exercises all the methods starts and stops executing. If the startup response time is required, then the client test code must record the time that it first accesses the bean under test and the time when the first method in the test sequence is ready to be called. On the other hand, if the response time is going to be computed for each method, the client test code must record the time before and after it calls each method and some indication of the method being called must also be logged.
  • There are multiple values for each data point. For example, if test system 110 is simulating 100 users, the time that it takes for the bean to respond to each simulated user could be different, leading to up to 100 different measurements of response time. The response time for 100 users could be presented as the maximum response time, i.e. the time it takes for all 100 simulated users to finish exercising the bean under test. Alternatively, the average time to complete could be reported as the response time. As another variation, the range of values could be reported.
  • the client test code 516 contains the instructions that record all of the information that would be needed for any possible measure of response time and every possible display format. The time is recorded before and after the execution of every method. Also, an indication that allows the method to be identified is recorded in log 216. To support analysis based on factors other than delay, the actual and expected results of the execution of each method are recorded so that errors can be detected. The occurrences of exceptions are also recorded in the log. Then, data analyzer 218 can review the log and display the response time according to any format and using any definition of response time desired. Or the data analyzer can count the number of exceptions or errors.
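A minimal sketch of that per-method instrumentation follows. The log record layout (method name, start time, stop time, status) is an assumption rather than the format actually used by log 216.

```java
import java.io.FileWriter;
import java.io.IOException;
import java.util.function.Supplier;

class MethodTimer {
    private final FileWriter log;

    MethodTimer(String logPath) throws IOException {
        this.log = new FileWriter(logPath, true); // append to the shared log file
    }

    // Records the time before and after each call, and whether the actual result
    // matched the expected result, so both response time and errors can be analyzed later.
    <T> T timeCall(String methodName, Supplier<T> call, T expected) throws IOException {
        long start = System.currentTimeMillis();
        T actual = call.get();
        long stop = System.currentTimeMillis();
        String status = (expected == null || expected.equals(actual)) ? "ok" : "error";
        log.write(methodName + "," + start + "," + stop + "," + status + "\n");
        log.flush();
        return actual;
    }
}
```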
  • test system 110 has the ability to present the results of tests graphically to aid the tester in understanding the operations - particularly performance bottlenecks - of application under test 114.
  • Data analyzer 218 generates output in several useful formats.
  • One important output is a response time versus load graph.
  • Log file 216 contains the starting and stopping times of execution tests for a particular test case.
  • the test case includes the same measurements at several different load conditions (i.e. with the test engines 214A...214C simulating different numbers of simultaneous users).
  • data analyzer can read through the data in log 216 and identify results obtained at different load conditions. This data can be graphed.
  • Another useful analysis is the number of errors per second that are generated as a function of the number of simultaneous users. To perform this analysis, test code 516 could contain instructions that write an error message into log 216 whenever a test statement produces an incorrect result.
  • data analyzer 218 can pass through the log file, reading the numbers of errors at different simulated load conditions. If desired, the errors can be expressed as an error count, or as an error rate by dividing the error count by the time it took for the test to run.
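A sketch of that pass over the log might look like the following, assuming one comma-separated record per method call in the form written by the instrumentation sketched earlier (method name, start, stop, status). That layout is an assumption, not the patent's actual log format.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

class ErrorRateAnalyzer {
    // Counts error records and divides by the elapsed time of the run to get errors per second.
    static double errorsPerSecond(String logPath) throws IOException {
        long errors = 0;
        long firstStart = Long.MAX_VALUE;
        long lastStop = Long.MIN_VALUE;
        try (BufferedReader reader = new BufferedReader(new FileReader(logPath))) {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] fields = line.split(",");
                long start = Long.parseLong(fields[1]);
                long stop = Long.parseLong(fields[2]);
                firstStart = Math.min(firstStart, start);
                lastStop = Math.max(lastStop, stop);
                if ("error".equals(fields[3])) {
                    errors++;
                }
            }
        }
        double elapsedSeconds = (lastStop - firstStart) / 1000.0;
        return elapsedSeconds > 0 ? errors / elapsedSeconds : errors;
    }
}
```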
  • Some examples of the types of outputs that might be provided are graphs showing: transactions per second versus number of users; response time versus number of users; exceptions versus numbers of users; errors versus numbers of users; response time by method; response time versus run time and transactions per second versus run time. Different ways to measure response time were discussed above.
  • a transaction is defined as the execution of one method, though other definitions are possible.
  • Run time is defined as the total elapsed time in running the test case, and would include the time to set up the execution of EJBs. Viewing response time as a function of elapsed time is useful, for example, in revealing problems such as "memory leaks".
  • a memory leak refers to a condition in which portions of the memory on the server running the application under test get used for unproductive things. As more memory is used unproductively, there is less memory available for running the application under test and execution slows over time. Alternatively, viewing results in this format might reveal that the application under test is effectively utilizing caching.
  • FIG. 6 illustrates the tester interface to be a traditional web browser interface as it would appear on the terminal of PC 152 or 154. This type of interface is the result of using a web server to implement GUI 150 with commercially available browser software on the tester PC's 152 and 154.
  • Element 612 is the browser toolbar.
  • the browser provides the ability for the tester to perform such functions as printing and connecting to pages presented by various servers in the system. The tester performs these functions through the use of the tool bar 612.
  • Field 614 indicates the server to which the PC 152 or 154 is currently connected. In the illustrated example, field 614 contains the address on network 122 for the server containing GUI 150.
  • Window 610 is the area of the screen of PC 152 or 154 where information specific to test system 110 is displayed. As with traditional web based applications, this information is displayed as the result of HTML files that are downloaded over network 122. Certain elements displayed in window 610 represent hyperlinks, which when accessed by a tester cause other pages to be downloaded so that different information is displayed for the tester. Other elements displayed in window 610 represent data entry fields. When a human tester fills in these fields, information is submitted to test system 110. Tabs 616 represent choices a tester can select for operating mode. FIG. 6 shows three tabs. There is one tab for "setup", one for "test case" and one for "results". Each of the tabs is hyperlinked.
  • These hyperlinks connect to pages on servers in the network that are programmed to manage the test system through a particular phase.
  • Setup is for configuring the test system, such as by identifying addresses for servers on network 122 that contain test engines.
  • Test case is for creating and running a particular test case, which in the preferred embodiment means test code for a particular bean and parameters specifying how that test code is to run.
  • Results is for viewing the results of a test case that has executed.
  • the "SETUP" tab is shown selected, meaning that the balance of window 610 displays hyperlinks and fields that are appropriate for entering data or commands for the SETUP phase.
  • Region 618 contains a set of hyperlinks. These hyperlinks present a list of choices to the user about what can be set up. Selecting one of these hyperlinks will change the information appearing in region 620. It is well known in the art of web based software to have hyperlinks on one part of a window change information appearing in another part of the window.
  • region 620 contains fields that can be used to identify a machine to be added to test system 110.
  • FIG. 6 a screen to add a client host computer is shown.
  • a client host computer acts as a test engine 214A...214C.
  • Region 618 also contains hyperlinks for "Projects” and "Data Tables".
  • test system 110 can be used by multiple testers or can be used to test multiple applications under test. To segregate the information relating to various tasks, a "Project" is defined. All information relating to a particular project is identified with the project. In this way, test system 110 can identify information that is logically related. For example, test code developed to test a particular bean in an application will be given the same project name as test code to test other beans in that application. In this way, general information about the application - such as the path to the server through which the application can be accessed - is stored once with the project rather than with each piece of test code. As another example, the test results can be associated with the test code that generated them through the use of projects.
  • the hyperlink for "Data Tables" allows the tester to identify to the system where data tables are stored to test specific beans.
  • the tester will create the data tables by hand or automatically apart from test system 110 based on the features that the tester desires to have tested.
  • the data tables will be stored in files on a computer on network 122. Before use by test system 110, the tester must provide the network address for that file.
  • Field 622 is a menu field. It is a drop down menu box. When accessed, it will display a menu of choices of the actions the tester can specify for test system 110. The contents of the menu are context sensitive, meaning that for any given page, only the menu choices that are appropriate are displayed. Actions that a user might want to choose from the menu are things such as edit, delete, add new, rename, etc.
  • In FIG. 7, a screen of tester PC 152 or 154 is shown when the TEST CASE tab is selected. Selecting the TEST CASE tab allows the tester to specify what is to be tested and how the test is to be conducted.
  • window 610 contains information that describes a test case. This particular page is displayed when "edit" has been selected in menu field 622.
  • Field 710 indicates the name ofthe project to which the test case is associated.
  • Field 710 is a screen display element known as a drop down box. When activated by a tester, field 710 will become a list of all projects that have been previously created by any tester using test system 110. As shown in FIG. 7, a project called "Demo Project" is being used.
  • Field 712 identifies the particular test case by name.
  • Field 712 is also a drop down box, allowing previously defined test cases to be selected from a list.
  • one of the options that appears when the drop down box is accessed is "<New Test Case>".
  • This type of menu is common for programs with graphical user interfaces and is used at several points in the interface for presenting choices to a human tester.
  • test case "Customer Test” has been created and information has already been entered.
  • region 714 there are a series of fields in which data can be entered or changed to define or modify the test case.
  • In field 716, a name can be provided for the test case. This name allows the test case to be referenced in other parts of the test system.
  • In field 718, a description of the test case can be entered. This description is not used by the automatic features of the test system, but can act as a note to a human tester to signify what the test case does or why it was created.
  • Field 720 is likewise not used by the automated system. However, this field holds the name of an individual who authored the test case to facilitate other administrative functions.
  • Field 722 is a "radio button" type field. It presents a tester with a list of choices, only one of which can be selected at a time. In this example, field 722 allows the tester to specify the type of test.
  • code generators 212A and 212B contain a plurality of scripts 512 that generate test code.
  • the script assembles templates and generates command lines for a particular type of test that is to be conducted.
  • the tester must specify the test type in order to allow the appropriate script to be selected.
  • Field 724 allows the tester to specify a "deployment descriptor.”
  • Every bean has associated with it a deployment descriptor.
  • the deployment descriptor is a file that identifies the bean and defines the parameters with which the EJB will be deployed on the applications server. Examples of the parameters are the number of instantiations of the EJB with which to start the applications server (sometimes called a "pooling number") and how much memory is allocated to the bean. These functions are performed by the container 130.
  • the tester provides the deployment descriptor by entering the path to a file on network 122.
  • the test system 110 reads the deployment descriptor to find the name of the bean under test and then to access the bean through reflection to determine its methods and properties.
  • Field 726 allows the tester to specify the type of data to be used in creating the test code 516.
  • the data can be automatically generated by test system 110 or can be supplied through a data table.
  • the data can be generated by using the maximum possible value of each variable, the minimum possible value or a value randomly selected between the maximum and the minimum.
  • the data can be specified by a data table.
  • field 726 indicates that the tester desires test system 110 to generate data using the data table named dd.csv. If the tester had wanted the test system to automatically generate data, the tester would specify in field 726 whether the data was to be generated randomly or whether the maximum or minimum values were to be used.
  • FIG. 1 shows that the beans 132 of the application under test are within a container 130.
  • server refers to the software that creates the container for the application.
  • Test system 110 contains script files that will generate the appropriate test code for any server. While most ofthe client test code will be server independent, it is possible that servers will implement certain functions in unique ways. Thus, there needs to be script files that account for the functions that are implemented differently on certain servers.
  • FIG. 8 shows a screen display when the tester has used menu field 622 and selected to have the deployment descriptor for a test case to be displayed. If desired, the tester can then edit the deployment descriptor to try alternative configurations.
  • FIG. 9 shows a screen display when a tester has selected from menu field 622 to have the generated test code displayed.
  • the test code 516 is identified as "client code" because it will simulate the operation of a client 120 of the application under test 114.
  • the displayed code corresponds to the code generated for the project and test case identified in fields 710 and 712.
  • the tester can also edit the test code.
  • One instance when a tester might desire to edit test code is when most of the commands in the test code can be automatically generated, but certain commands must have specific data values for the application under test to function properly.
  • the tester could have test system 110 automatically generate the test code, but then manually edit the few data values that matter.
  • a particular bean might include a method that processes positive and negative values differently. The tester might know that processing negative numbers takes longer and therefore change a randomly generated value to a negative number.
  • An alternative scenario in which a tester might wish to edit test code 516 is when the bean being tested contains methods to be tested other than those that follow a pattern, such as the "set", "get", "create" and "find" methods described above. The tester might create test code that tests methods that do not follow the pattern. This code could then be inserted by the human tester into the generated test code.
  • FIG. 10 shows a screen display for another function performed by a human tester while specifying the TEST CASE, also selected through menu field 622.
  • FIG. 10 is a screen display used when the test case is to be run.
  • the project and specific test case are specified by entries in fields 710 and 712. Information about how to run the test case is entered in region 1010.
  • Region 1010 contains several fields.
  • Field 1012 indicates the name of the file in log 216 where the results of executing the test case will be stored. In this way, data analyzer 218 will be able to locate the data for a particular test case to analyze and present to a tester in the desired form.
  • Field 1014 gives the URL - or network address - for the application under test. This information could be identified by using the name for a particular machine that was previously set-up by the human tester.
  • Field 1016 gives the URL - or network address - for a server to use as a test engine. Again, the server could be identified by using the name for a particular machine that was previously set-up by the human tester.
  • the screen displayed in FIG. 10 is used for an embodiment of the invention where all simultaneously executing copies of the client test code are run on a single machine. If test system 110 includes multiple test engines and automatically schedules execution of test code as described in conjunction with FIG. 4 above, then field 1016 is not required.
  • Field 1018 gives the maximum number of concurrent users to be simulated at one time during the execution ofthe test case.
  • Field 1020 allows the user to specify the rate at which the number of simultaneous users will be increased during a test.
  • the test case will be completed after 100 users have been simulated simultaneously and the number of simultaneous users will increase by 10 each time the test code is run.
  • 10 copies ofthe client test code shown in FIG. 9 will first be executed simultaneously. Then 20 copies of that test code will be executed simultaneously. The test code will be repeated for 30, 40, 50, 60, 70, 80, 90 and 100 simultaneous users before the test case is completed. After the test case has been run, the tester can view and analyze the results.
  • FIG. 11 shows a page displayed when the RESULTS tab is selected.
  • the page shown in FIG. 11 is for when the tester has requested to see summary data through use of menu field 622.
  • Fields 710 and 712 are also present on the page shown in FIG. 11. This information is used by the test system to locate the appropriate data to display.
  • the user specifies in field 1012 the name of a results file to hold the results of a particular run of a test case.
  • the name of the results file for the desired run of the test is entered in field 1110.
  • FIG. 11 shows that window 610 contains a region 1112 that lists the results of a run of a particular test case in summary fashion.
  • Part of the information in region 1112 is simply a display of information input by the tester when editing or running the test case.
  • the target container, or the container 130 of application under test 114, is listed.
  • the maximum number of concurrent users simulated during the test is also displayed.
  • the file containing the test data used for the run of the test case that generated the results is also displayed, as is the deployment descriptor. These values are displayed for ease of use by the human tester.
  • Region 1112 also includes information that was developed by data analyzer 218 from the data gathered for the specified run of the test case.
  • the pieces of information that are displayed are the average response time and the maximum response time.
  • the start and stop times of the execution of the test code are recorded in log 216.
  • the start and stop time is recorded for each number of users.
  • Data analyzer 218 can determine the response time by simply computing the difference between the start and stop times. Once these values are determined, they can be averaged or the maximum can be identified; a minimal sketch of this calculation appears after this list.
  • FIG. 12 shows a different page that can be selected by the user to see the results in graphical format.
  • fields 710, 712 and 1110 allow the user to specify which run of a particular test case is used to create the results.
  • the graphical display in FIG. 12 is a graph showing numbers of transactions per second as the dependent variable with the number of simultaneous users as the independent variable.
  • the information needed to compute the data values for this graph is stored in log 216 after the test case is run and data analyzer 218 can retrieve it.
  • transactions per second was defined as the average number of methods executed per second per user. This value is essentially the reciprocal of response time.
  • FIG. 13 shows a screen useful for displaying results to a human tester in a slightly different embodiment.
  • like the screen display of FIG. 12, the screen display of FIG. 13 is accessed when the "RESULTS" tab is selected.
  • the page shown in FIG. 13 includes fields 710, 712 and 1110 that allow the human tester to specify which results are to be displayed.
  • the page shown in FIG. 13 includes an alternative way for the user to specify the format of the display.
  • the screen in FIG. 13 includes menu fields 1310 and 1312.
  • Menu field 1310 allows the tester to specify the manner in which response time is to be calculated.
  • a value of "total" has been selected in field 1310.
  • the "total" response time is measured as the time from first access of a bean until all methods of the bean have been exercised.
  • Other choices in menu field 1310 allow a tester to specify that results be displayed for different measures of response time.
  • the presently preferred embodiment can measure response time based just on the start-up time, or on the response time for individual methods, or for get-functions and set-functions.
  • Field 1312 allows a user to specify the format of the display.
  • the display is in HiLo format.
  • results are displayed as bars, such as bar 1316. Each bar spans from the fastest response time to the slowest response time.
  • a tick mark showing the average is also included in the illustration of FIG. 13.
  • Other choices in menu field 1312 would, for example, allow the human tester to see results in a line chart format as in FIG. 12 or in tabular format.
  • Field 1312 indicates that the display format of "Log File" has been selected. This format corresponds to the list shown in region 1412. The list contains a column for the names of the methods in the bean being tested. In this example, the data shown for each method reveals the minimum, maximum and average execution time for that method.
  • test system 110 measures response time at various load conditions.
  • the displayed data represents response times at a particular load condition.
  • the tester must specify the load condition for which data is to be displayed.
  • the page displayed in FIG. 14 contains a field 1410. In this field, a user can enter the load condition for which data is to be displayed.
  • the human tester has entered a value of 500, indicating that 500 threads executing the test code were initiated in order to obtain the displayed data.
  • Having described the structure of test system 110 and given examples of its application, several important features of the test system 110 can be seen.
  • One feature is that information about the performance of an application under test can be easily obtained, with much of the data being derived in an automated fashion.
  • a software developer could use the test system to find particular beans that are likely to be performance bottlenecks in an application. The developer could then rewrite these beans or change their deployment descriptors. For example, one aspect of the deployment descriptor indicates the number of copies of the bean that are to be instantiated within application under test 114. The developer could increase the number of instantiations of a bean if that bean is the bottleneck.
  • the test system described herein provides an easy and accurate tool to test EJBs for scalability. It creates a user-specified number of virtual users that call the EJB while it is deployed on the application server. The tool does this by inspecting the EJB under test and automatically generating a client test program, using either rules-based data or data supplied by a human tester, and then multithreading the client test program to drive the EJB under test. The result is a series of graphs reporting on the performance versus the number of users, which provide useful information in an easy-to-use format.
  • test code 516 exercises the bean in the application under test using remote procedure calls.
  • test system 110 is scalable. To increase the number of tests that could simultaneously be run or the size of the tests that could be run, more test engines could be added. Likewise, more code generators could be added to support the simulation of a larger number of simultaneous users.
  • the specific number of copies of each component is not important to the invention. The actual number of each component in any given embodiment is likely to vary from installation to installation. The more users an application is intended to support, the more test engines are likely to be required.
  • Another feature of the described embodiment is that testing is done on the simplest construct in the application under test - the beans in the illustrated example. There are two benefits to this approach. First, it allows tests to be generated very simply, with minimal human intervention. Second, it allows a software developer to focus on the point of the software that needs to be changed or adjusted in order to improve performance.
  • One enhancement that might be made to test system 110 is that the data analyzer 218 could be programmed to perform further analysis. It has been recognized that, as the load increases, there is often some point at which the performance of the system drastically changes. In some instances, the time to complete a transaction drastically increases. A drastic increase in transaction processing time indicates that the system was not able to effectively handle the load.
  • a decrease in processing time can also indicate the load limit was reached.
  • a system under test will respond with an error message more quickly than it would take to generate a correct response.
  • thus, in addition to an increase in response time, a decrease in processing time as a function of load can also signal that the maximum load has been exceeded.
  • an increase in errors or error rate can also signal that the maximum load was exceeded.
  • Data analyzer 218 could be used to identify automatically a maximum load for a particular test case. By running multiple test cases, each test case focusing on a different bean, test system 110 could automatically determine the bean that is the performance bottleneck and could also assign a load rating to application under test 114. Having described one embodiment, numerous alternative embodiments or variations might be made.
  • test system 110 automatically generates test code to exercise beans that follow a pattern for database access. These beans are sometimes called "entity beans." In general, there will be other beans in an application that perform computations on the data or that control the timing of the execution of the entity beans. These beans are sometimes called "session beans." Session beans are less likely to follow prescribed programming patterns that make the generation of test code for entity beans simple. As a result, the automatically generated test code for session beans might not fully test those beans. In the described embodiment, it is expected that the human tester supply test code to test session beans where the automatically generated tests are inadequate. One possible modification to the described embodiment is that the completeness of tests for session beans might be increased.
  • test code is generated to test a particular bean, which is a simple construct or "component" of the application under test. The testing could focus on different constructs, such as specific methods in a bean. Test code could be generated to test specific methods within beans. Or, it was described that the system records the start and stop times of the execution of the test code. The times of other events could be recorded instead or in addition. For example, start and stop times of individual methods might be recorded, allowing performance of individual methods to be determined.
  • the complexity of the constructs being tested could be increased.
  • Multiple beans might be tested simultaneously to determine interactions between beans.
  • multiple test cases might be executed at the same time, with one test case exercising a specified number of instances of one bean and a different test case exercising a specified number of instances of a second bean.
  • a human tester can insert code into a template to do such things as put the application under test into a predictable state. Lines of code might be inserted directly, for example by the user simply typing the lines of code. Or, the tester might insert a "tag" into the template. The tag would identify a code segment stored elsewhere within the test system. In this way, the same code segment could be included at multiple places in the template or in multiple templates.
  • test code would be created by filling in a single template.
  • each template might contain only the steps needed to perform one function, such as initialization, testing or termination.
  • test code would be created by stringing together multiple templates.
  • test system 110 provides outputs indicating the performance of an application under test as a function of load. These outputs in graphical or tabular form can be used by an application developer to identify a number of concurrent users at which problems with the application are likely to be encountered. Potential problems are manifested in various ways, such as by a sudden change in response time or error rate as a function of load. Test system 110 could readily be programmed to automatically identify patterns in the output indicating these problem points.
  • test system 110 could aid in identifying settings for various parameters in the deployment descriptor.
  • the deployment descriptor for a bean identifies parameters such as memory usage and a "pooling number" indicating the number of instances of a bean that are created at the initialization of an application. These and other settings in the deployment descriptor might have an impact on the performance time and maximum load that an application could handle.
  • One use of the test system described above is that it allows a test case to be repeated for different settings in the deployment descriptor. A human tester can analyze changes in performance for different settings in the deployment descriptor.
  • test system 110 could be programmed to automatically edit the deployment descriptor of a bean by changing parameters affecting pooling or memory usage. Test system 110 could then automatically gather and present data showing the impact of a deployment descriptor on performance of an application.
  • test system 110 might test the beans in an application and analyze the results of testing each bean. Test system 110 might identify the bean or beans that reflect performance bottlenecks (i.e. that exhibited unacceptable response times for the lowest numbers of simultaneous users). Then, test system 110 could run tests on those beans to find settings in the deployment descriptors that would balance the performance of the beans in the application (i.e. to adaptively adjust the settings in the deployment descriptors so that the bottleneck beans performed no worse than other beans). It should also be appreciated that computer technology is rapidly evolving and improved or enhanced versions of the hardware and software components making up the application under test and the test system are likely to become available.
  • FIG. 11 shows summary data of a test after execution is completed. It will be appreciated, though, that data on a test case execution might be displayed to a human tester while a test is in process. For example, the summary screen might contain a field that shows the percentage of the test case that is completed. This value would update as the test runs. Likewise, the values for average and maximum response time could be updated as the data is gathered.
  • the described embodiment tests EJBs, which are written in the Java language.
  • the same techniques are equally applicable to applications having components implemented in other languages. For example, applications written according to the COM standard might be written in Visual Basic and applications written for the CORBA standard might be written in C++.
  • Regardless of the specific language used, these standards are intended to allow separately developed components to operate together. Thus, each must provide a mechanism for other applications, such as test system 110, to determine how to access the methods and properties of their components. However, there could be differences in the specific commands used to access components.
  • code generator 212 is implemented in a way that will make it easy to modify for generating test code for applications written in a different language.
  • code generator 212 stores intermediate results as a symbol table that is independent of the specific language used to program the application under test.
  • the symbol table lists methods and properties for the component being tested. When to access these methods, what data to use for a particular test, and what kinds of data to record can be determined from the information in the symbol table and from input provided by the user.
  • much of the functioning of code generator 212 is independent of the specific language used to implement the application under test. In this way, the language-specific aspects of code generator 212 are easily segregated and represent a relatively small part of the code generator 212.
  • language-specific information is needed to access the application under test to derive the information for the symbol table.
  • Language-specific information is also needed to format the generated client test code. However, it is intended that these parts of code generator 212 could be replaced to allow test system 110 to test applications written in other languages. Also, it is possible that test system 110 will contain multiple versions of the language-specific parts, and the user could specify as an input the language of the application under test.
  • Double tempDouble = new Double(0.0);
  • Double tempDouble = new Double(0.0);
  • sTime = new Date().getTime(); h.setName("qkgthfpw"); logError("setTime", "setName", "qkgthfpw", "");
  • sTime = new Date().getTime(); h.setProduct("dpnoolww"); logError("setTime", "setProduct", "dpnoolww", "");
  • sTime = new Date().getTime(); h.setCost(568); logError("setTime", "setCost", "568", "");
  • sTime = new Date().getTime(); h.setName("qkgthfpw"); logError("setTime", "setName", "qkgthfpw", "");
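As a rough illustration of the response-time and transactions-per-second calculations referred to in the list above, the following sketch shows how data analyzer 218 might derive these figures from logged start and stop times. It is a minimal sketch only; the class and field names (ResponseTimeSummary, TimedRecord and so on) are illustrative assumptions and are not part of the described embodiment.

    import java.util.List;

    // Illustrative helper: each TimedRecord holds one logged start/stop pair.
    // All names here are hypothetical; only the arithmetic follows the text above.
    class ResponseTimeSummary {
        static class TimedRecord {
            long startMillis;   // start time recorded in the log
            long stopMillis;    // stop time recorded in the log
        }

        // Average response time in milliseconds: mean of (stop - start).
        static double averageResponseTime(List<TimedRecord> records) {
            if (records.isEmpty()) return 0.0;
            long total = 0;
            for (TimedRecord r : records) {
                total += (r.stopMillis - r.startMillis);
            }
            return (double) total / records.size();
        }

        // Maximum (slowest) response time in milliseconds.
        static long maximumResponseTime(List<TimedRecord> records) {
            long max = 0;
            for (TimedRecord r : records) {
                max = Math.max(max, r.stopMillis - r.startMillis);
            }
            return max;
        }

        // Transactions per second, treated as the reciprocal of the average
        // response time expressed in seconds.
        static double transactionsPerSecond(List<TimedRecord> records) {
            double averageSeconds = averageResponseTime(records) / 1000.0;
            return averageSeconds > 0.0 ? 1.0 / averageSeconds : 0.0;
        }
    }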

Abstract

A system for testing middleware of applications in the N-tiered model. The test system contains test code generators, test engines to execute multiple copies of the test code and a data analyzer to analyze and present the results to a human user. The system is able to automatically generate test code to exercise components of the middleware using information about these components that would otherwise be available to the application under test. Multiple copies of the test code are executed in a synchronized fashion. Execution times of multiple events are recorded and then presented in one of several formats. With the system, an application developer can identify components that represent performance bottlenecks or can gather information on deployment properties of individual components that can be used to enhance the performance of the application under test.

Description

METHOD AND SYSTEM FOR SOFTWARE OBJECT TESTING Cross Reference to Related Applications
This application claims priority from provisional US application 60/151,418 filed August 30, 1999, for Method and System for Software Object Testing, which is hereby incorporated by reference.
Field of Invention
This invention relates generally to computer software applications and more specifically to testing computer software applications.
Background
Distributed computing has been used for many years. It is especially prevalent in "enterprise-wide" applications. An enterprise-wide application is an application that allows a large group of people to work together on a common task. Usually, an enterprise-wide application performs functions that are essential to a company's business. For example, in a bank, people at every bank branch must be able to access a database of accounts for every bank customer. Likewise, at an insurance company, people all over the company must be able to access a database containing information about every policyholder. The software that performs these functions is generally known as an enterprise-wide application.
As available hardware and software have evolved, the architecture of enterprise wide applications has changed. An architecture which is currently popular is called the N-Tier enterprise model. The most prevalent N-tier enterprise model is a three-tier model. The three tiers are the front end, the middleware and the back end. The back end is the database. The front end is sometimes referred to as a "client" or a Graphical User Interface (GUI). The middleware is the software that manages interactions with the database and captures the "business logic." Business logic tells the system how to validate, process and report on the data in a fashion that is useful for the people in the enterprise. The middleware resides on a computer called a server. The database might be on the same computer or a different computer. The "client" is usually on an individual's personal computer. All of the computers are connected together through a network. Because many people use the enterprise wide application, such systems are set up to allow simultaneous users, and often many clients will be connected to a single server at the same time. Those familiar with internet commerce will recognize that the N-tiered model also describes many internet web sites that sell goods or services. For example, a web site that auctions cars is likely to fit the N-tiered model. In such an application, databases are provided to track buyers, sellers and objects being auctioned. Also, a database must be provided to track the bids as they are entered. The middleware provides the access to these databases and encapsulates the business logic around such transactions as when to accept a bid, when to declare an item sold, etc. In the world of distributed computing, it makes no difference whether the "clients" using the application are employees of a single company or many Internet users throughout the world. Herein, examples of applications under test will be given, but they are not intended to imply limitations on the use of the invention. The inventions described herein could be used by developers of enterprise-wide applications or web-based applications.
One advancement in the N-tiered model is that the middleware is very likely to be componentized and written to a component standard so that it will easily integrate with software at other tiers. Enterprise JavaBeans (EJB) by Sun Microsystems, COM, DCOM and COM+ by Microsoft Corporation and CORBA by IBM are examples of component specification standards that are commercially available. Herein, EJB is used as an example of a component standard used to implement middleware in an N-tiered model, but it should be appreciated that the concepts described herein could be used with other component standards.
EJBs are written in the JAVA language, which is intended to be "platform independent." Platform independent means that an application is intended to perform the same regardless of the hardware and operating system on which it is operating. Platform independence is achieved through the use of a "container." A container is software that is designed for a specific platform. It provides a standardized environment that ensures the application written in the platform independent language operates correctly. The container is usually commercially available software and the application developer will buy the container rather than create it. Componentized software is software that is designed to allow different pieces of the application, or "objects", to be created separately but still to have the objects work together. For this to happen, the objects must have standard interfaces that can be understood and accessed by other objects. Some parts of these interfaces are enforced by the software language. If the interfaces are not used, the software objects will not be able to work with other objects. Other practices are imposed by convention. Usually, one company has "control" over the language and specifies programming practices that should be followed by anyone writing platform independent software in that language. Because these programming practices are known to everyone, the companies that create the containers can rely on them when creating the container. As a result, if these practices are not followed, the container might not operate properly. Thus, there is an indirect mechanism for enforcing these practices.
Typically, applications have been tested in one of two ways. The objects are tested as they are written. Each is tested to ensure that it performs the intended function. When the objects are assembled into a completed application, the entire application is then usually tested. Heretofore, application testing has generally been done by applying test inputs at the client end and observing the response of the application. There are several shortcomings with this process. One is that it is relatively labor intensive, particularly to develop a load or scalability test. There has been no easy way to create the test program, instantiate it with test data, execute the test and aggregate the results.
Some tools, called "profilers," have been available. However, these tools track things such as disk usage, memory usage or thread usage of the application under test. They do not provide data about performance of the application based on load.
Other tools are available to automate the execution of tests on applications. For example, RSW Software, Inc. of Waltham, MA, provides a product called e-Load. This tool simulates load on an application under test and provides information about the performance of the application. However, this tool does not provide information about the components in an application. We have recognized that a software developer would find such information very useful. Automatic test generation tools, such as TestMaster available from Teradyne
Software and System Test of Nashua, NH, are also available. Tools of this type provide a means to reduce the manual effort of generating a test. TestMaster works from a state model of the application under test. Such an application is very useful for generating functional tests during the development of an application. Once the model of the application is specified, TestMaster can be instructed to generate a suite of tests that can be tailored for a particular task - such as to fully exercise some portion of the application that has been changed. Model based testing is particularly useful for functional testing of large applications, but is not fully automatic because it requires the creation of a state model of the application being tested. We have recognized that a second shortcoming of testing enterprise wide applications is that the critical performance criteria to measure often relate to how the application behaves as the number of simultaneous users increases. There are examples of websites crashing or operating so slowly as to frustrate an ordinary user when too many users log on simultaneously. In the past, load has been simulated informally, such as by having several people try to use the application at the same time. Some tools exist to provide a load on an application for testing, such as e-Load available from RSW of Waltham, MA.
However, it has generally not been until the application is deployed into its intended operating environment that the performance of the application under load is known. Thus, the biggest problem facing an application developer might not be testing to see whether each object performs as designed or even whether the objects work together as a system. Heretofore there has been no available tool that will help an application developer ascertain how many simultaneous users a middleware application can accommodate given a specified transaction response time or identify which object in the application, given real world load conditions, is causing the bottleneck.
SUMMARY OF THE INVENTION With the foregoing background in mind, it is an object of the invention to provide testing tools to facilitate load-based testing of N-tiered applications. It is also an object to provide automatic testing.
The foregoing and other objects are achieved by a test system that simulates use of a particular software object within an application by a plurality of simultaneous users. The number of simultaneous users simulated is varied.
In the preferred embodiment, load testing is done on individual components in the application.
In a presently preferred embodiment, the test system analyzes response time measurements from plural software objects within the application and predicts which software object within the application is likely to be a performance bottleneck.
In yet other presently preferred embodiments, performance settings within the application can also be varied to determine optimum settings to reduce performance bottlenecks.
In still other preferred embodiments, the format of the output may be specified by the user to aid in understanding the performance of the application under test in response to load.
BRIEF DESCRIPTION OF THE DRAWINGS The invention will be better understood by reference to the following more detailed description and accompanying drawings in which
FIG. 1 is an illustration of an application under test by the test system of the invention;
FIG. 2 is an illustration showing the test system of the invention in greater detail; FIG. 3 is an illustration showing the coordinator of FIG. 2 in greater detail; FIG. 4 is a flow chart illustrating the process of coordinating execution of load tests;
FIG. 5 is an illustration showing the code generator of FIG. 2 in greater detail; FIG. 6 illustrates a user interface of test system 110 during a setup phase; FIG. 7 illustrates a user interface of test system 110 during the specification of a test case; FIG. 8 illustrates a user interface of test system 110 during a different action as part of the specification of a test case; FIG. 9 illustrates a user interface of test system 110 during a different action as part of the specification of a test case; FIG. 10 illustrates a user interface of test system 110 during a different action as part of the specification of a test case;
FIG. 11 illustrates a user interface of test system 110 during a phase for reviewing the results of a test case in tabular format; FIG. 12 illustrates a user interface of test system 110 during a phase for reviewing the results of a test case in graphical format; FIG. 13 illustrates a user interface of test system 110 during a phase for reviewing the results of a test case in an alternative graphical format; and FIG. 14 illustrates a user interface of test system 110 during a phase for reviewing the results of a test case in a tabular format.
DESCRIPTION OF THE PREFERRED EMBODIMENT FIG. 1 illustrates a test system 110 according to the present invention. The system is testing application under test 114. Here application under test 114 is an application in the N-tiered model. More specifically, it is a three-tiered database application. Application under test 114 could represent a database for a bank or an insurance company or it could represent an Internet application. The specific functions of application under test 114 are not important to the invention.
Also, the specific hardware on which test system 110 and the application under test 114 reside is not important to the invention. It is sufficient if there is some connection between the two. In the illustration, that connection is provided by network 122. In the illustrated embodiment, it is contemplated that network 122 is part of a LAN operated by the owner of application under test 114, such as an Ethernet network. In this scenario, test system 110 is installed on a server within the LAN. However, many other implementations are possible. For example, network 122 could be a WAN owned by the owner of application under test 114.
A further variation is that network 122 could be the Internet. In that scenario, test system 110 could be located in a server owned by a testing company.
A further possibility is that the test system and the application under test could be located in computers owned by a testing company. Many applications are written using platform independent technology such that the application will perform the same on many different platforms. Platform independent technology is intended to make it easy to run an application on any platform. Therefore, the application under test could be sent to a testing company that owns the hardware to implement test system 110, such as by uploading it over the Internet. Thereafter, the application under test could be tested as described herein while running on a platform provided by the testing company, with the results of the test being downloaded over the Internet.
Still other variations are possible. Test system 110 and application under test 114 could physically be implemented in the same computer. However, that implementation is not presently preferred because a single computer would have to be very large or would be limited in the size of applications that could be tested. The presently preferred embodiment uses several computers to implement test system 110.
Application under test 114 is a software application as known in the art. It includes middleware 116 that encapsulates some business logic. A user accesses the application through a client device. Many types of client devices are possible, with the list growing as networks become more prevalent. Personal computers, telephone systems and even household appliances with micro-controllers could be the client device. For simplicity, the client device is illustrated herein as a personal computer (PC) 120, though the specific type of client device is not important to the invention. PC 120 is connected to network 122 and can therefore access application under test 114. In use, it is contemplated that there would be multiple users connected to application under test 114, but only one user is shown for simplicity. The number of users simultaneously accessing application under test 114 is one indication of the "load" on the application.
Access to the application under test is, in the illustrated embodiment, through Graphical User Interface (GUI) 124 of the type known in the art. Software to manage interactions between multiple users and an application is known. Such software is sometimes called a web server. Web servers operate in conjunction with a browser, which is software commonly found on most PC's.
The web server and browser exchange information in a standard format known as HTML. An HTML file contains tags, with specific information associated with each tag. The tag signals to the browser a type associated with the information, which allows the browser to display the information in an appropriate format. For example, the tag might signal whether the information is a title for the page or whether the information is a link to another web page. The browser creates a screen display in a particular window running on PC 120 based on one or more HTML pages sent by the web server.
When a user inputs commands or data into the window of the browser, the browser uses the information on the HTML page to format this information and send it to the web server. In this way, the web server knows how to process the commands and data that comes from the user. GUI 124 passes the information and commands it receives on to middleware
116. In the example of FIG. 1, middleware 116 is depicted as a middleware application created with EJBs. Containers 130 are, in a preferred embodiment, commercially available containers. Within a container are numerous enterprise Java beans 132. Each Java bean can more generally be thought of as a component. GUI 124, based on the information from user PC 120, passes the information to the appropriate EJB 132. Outputs from application under test 114 are provided back through GUI 124 to PC 120 for display to a user.
EJB's 132, in the illustrated example, collectively implement a database application. EJB's 132 manage interactions with and process data from databases 126 and 128. They will perform such database functions as setting values in a particular record or getting values from a particular record. Other functions are creating rows in the database and finding rows in the database. EJB's that access the database are often referred to as "entity beans."
Other types of EJB's perform computation or control functions. These are called "session beans." Session beans perform such functions as validating data entries or reporting to a user that an entry is erroneous. Session beans generally call entity beans to perform database access.
It will be appreciated that, while it is generally preferable to segregate programming of the application in such a way that each type of database transaction is controlled by a single bean that performs only that function, some entity beans will perform functions not strictly tied to database access. Likewise, some session beans will perform database access functions without calling an entity bean. Thus, while different testing techniques will be described herein for testing session beans and entity beans, it is possible that some EJB's will have attributes of both entity and session beans. Consequently, a full test of any bean might employ techniques of testing entity beans and testing session beans.
Test system 110 is able to access the EJB's 132 of application under test 114 over network 122. In this way, each bean can be exercised for testing. In the preferred embodiment, the tests are predominately directed at determining the response time of the beans - or more generally determining the response time of components or objects used to create the application under test. Knowing the response time of a bean can allow conclusions about the performance of an application. The details of test system 110 are described below.
In the illustrated embodiment, test system 110 is software installed on one or more servers. It is conceptually much like application under test 114. In a preferred embodiment, test system 110 is a JAVA application. Like application under test 114, test system 110 is controlled through a graphical user interface 150. GUI 150 might be a web server as known in the art. One or more application developers or test engineers might access test system 110 over network 122. In FIG. 1, PC's 152 and 154 are PC's used by testers who will control the test process.
Like application under test 114, multiple individuals might use test system 110 simultaneously. For example, multiple testers might be testing a single application. Each tester might be focused on testing different aspects of the application. Alternatively, each tester might be testing a different application. Numerous applications might be installed on computers on network 122. Test system 110 might be testing different applications at the same time. Turning now to FIG. 2, details of test system 110 are shown. Test system 110 performs several functions. One function is the generation of test code. A second function is to execute the test code to exercise one or more EJB's in the application under test. Another function is to record and analyze the results of executing the test code. These functions are performed by software running on one or more computers connected to network 122. The software is written using a commercially available language to perform the functions described herein.
FIG. 2 shows that test system 110 has a distributed architecture. Software components are installed on several different computers. Multiple computers are used to provide capability for multiple users, to allow a user to perform multiple tasks and to run very large tests. The specific number of computers and the distribution of software components of the test system on those computers is not important to the invention.
Coordinator 210 is a software application that interfaces with GUI 150. The main purpose of coordinator 210 is to route user requests to an appropriate server in a fashion that is transparent to a user. Turning to FIG. 3, coordinator 210 is shown in greater detail. It should be appreciated, though, that FIG. 3 shows the conceptual structure of coordinator 210. Coordinator 210 might not be a single, separately identified piece of software. It might, for example, be implemented as coordination software within the various other components of test system 110. Also, it should be realized that a web server used to implement GUI 150 also provides coordination functions, such as queuing multiple requests from an individual or coordinating multiple users.
Coordinator 210 contains distribution unit 312. Distribution unit 312 is preferably a software program running on a server. As user requests are received from
GUI 150, they are received by distribution unit 312. As the requests are received, distribution unit 312 determines the type of resource needed to process the request. For example, a request to generate code must be sent to a server that is running a code generator. Coordinator 210 includes several queues to hold the pending requests. Each queue is implemented in the memory of the server implementing coordinator 210. In FIG. 3, queues 318A...318C are illustrated. Each queue 318A...318C corresponds to a particular type of resource. For example, queue 318A could contain code generator requests, queue 318B could contain test engine requests and queue 318C could contain data analysis requests. Distribution unit 312 sends each request to one of the queues 318A...318C, based on the type of resources needed to process the request. Associated with each queue 318A...318C is queue manager 320A...320C. Each queue manager is preferably implemented as software running on the server implementing coordinator 210 or the server implementing the relevant piece of coordinator 210. Each queue manager maintains a list of servers within test system 110 that can respond to the requests in its associated queue. A queue manager sends the request at the top of the queue to a server that is equipped to handle the request. The connection between the queue manager and the servers equipped to handle the requests is over network 122. If there are other servers available and still more requests in the queue, the queue manager will send the next request in the queue to an available server. When there are no available servers, each queue manager waits for one of the servers to complete the processing of its assigned request.
As the requests are processed, the servers, such as the code generators and the test engines, report back to the queue managers. In response, the queue managers send another request from the queue and also provide the results back to the distribution unit 312. Distribution unit 312 can then reply back to the user that issued the request, indicating that the request was completed and either giving the results or giving the location of the results. For example, after test code is generated, the user might receive an indication of where the test code is stored. After a test is executed, the user might receive a report of the average execution time for the test or the location of a file storing each measurement made during the test.
It will be appreciated by one of skill in the art that software systems that process user commands, including commands from multiple users, are well known. Such systems must have an interface for receiving commands from a user, processing those commands and presenting results to the user. Such interfaces also allow those results to be used by the user for implementing further commands. Such an interface is employed here as well and is depicted generally as GUI 150. For example, GUI 150 will allow a user to enter a command that indicates code should be generated to test a particular application. Once the code is generated, GUI 150 allows the user to specify that a test should be run using that test code. It is possible that some requests will require the coordination of multiple hardware elements. As will be described hereafter, one of the functions of the test engines is to apply a load that simulates multiple users. In some instances, one computer can simulate multiple users by running multiple client threads. However, there is a limit to the number of client threads that can run on a server. FIG. 4 shows the process by which queue manager 320B coordinates the actions of test engines located on separate servers. At step 410, queue manager 320B waits for the required number of test engines to become available. Once the test engines are available, at step 412 queue manager 320B sends commands to each test engine that will be involved in the test to download the test code from the appropriate one of the code generators 212A and 212B. Queue manager 320B then begins the process of synchronizing the test engines located on different servers. Various methods could be used to synchronize the servers. For example, if very great accuracy is required, each server could be equipped with radio receivers that receive satellite transmissions that are intended to act as time reference signals, such as in a GPS system. Calibration processes could alternatively be used to determine the amount of time it takes for a command to reach and be processed by each server. Commands could then be sent to each server at times offset to make up for the time differences. In the preferred embodiment, a simple process is used. At step 414, queue manager 320B sends a message to each server that will be acting as a test engine. The message asks that server to report the time as kept by its own internal circuitry.
Upon receiving the internal time as kept by each of the servers, at step 416 queue manager 320B adds the same offset to each local time. At step 418, queue manager 320B sends the offset times back to the servers. The offset local times become the local starting time of the test for each server. Each server is instructed to start the test when its local time matches the offset local time. In this way, all the servers start the test simultaneously.
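A minimal sketch of this simple synchronization scheme is shown below. The TestEngineProxy interface and its method names are assumptions made for illustration; the described embodiment does not disclose a specific API for the exchange between the queue manager and the test engines.

    import java.util.List;

    // Hypothetical view of a test engine from the queue manager's side.
    interface TestEngineProxy {
        long reportLocalTime();                  // engine reports its own clock (milliseconds)
        void scheduleStart(long localStartTime); // engine starts when its clock reaches this value
    }

    // Sketch of the scheme described above: each engine's reported local time
    // is offset by the same amount, and the offset time becomes that engine's
    // local start time, so all engines begin the test together.
    class StartTimeSynchronizer {
        static void synchronize(List<TestEngineProxy> engines, long offsetMillis) {
            for (TestEngineProxy engine : engines) {
                long localNow = engine.reportLocalTime();      // step 414: ask for the local time
                engine.scheduleStart(localNow + offsetMillis); // steps 416/418: add offset, send back
            }
        }
    }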
Queue manager 320B waits for the execution of the test code at block 420. Some test cases will entail multiple tests. At step 422, a check is made of whether the particular test case being executed requires more tests. For example, as will be described in greater detail below, one kind of test case that test system 110 will run is a load test. During a load test, multiple test engines, each executing multiple client threads, execute to simulate multiple users accessing application under test 114. An operating parameter of application under test 114 is then measured. In the preferred embodiment, the number of simultaneous users being simulated can be varied and the operating parameter measured again. Taking data at multiple load conditions allows test system 110 to determine the effect of load on application under test 114. Measurements of these types require that a full test case include multiple repetitions of the same test with different load conditions.
If there are more conditions under which the test should be run, execution proceeds to step 424. At step 424, the number of client threads is increased as needed for the new test case. Execution then loops back to step 410. At step 410, queue manager 320B waits for the required number of test engines to be available. When the hardware is available, the test is performed through steps 412, 414, 416, 418 and 420 in the same manner as for the first test condition. At step 422, a check is made of whether the request entails multiple test conditions. If there are no further test conditions, the request from queue 318B is considered complete. If there are further test conditions that need to be run to complete the request, the process is again repeated.
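The loop through steps 410-424 can be pictured roughly as follows. This is a sketch only; the helper methods stand in for the queue manager's dispatch to the test engines and for the results written to log 216, and their names are not part of the described system.

    // Sketch of a load test repeated at increasing load conditions.
    class LoadTestLoop {
        static void run(int firstLoad, int loadStep, int maximumLoad) {
            for (int users = firstLoad; users <= maximumLoad; users += loadStep) {
                waitForTestEngines(users);          // step 410: wait for enough test engines
                long elapsed = executeTest(users);  // steps 412-420: run the test code with 'users' client threads
                recordMeasurement(users, elapsed);  // measurement kept for data analyzer 218
            }                                       // step 422: more load conditions? loop again
        }

        static void waitForTestEngines(int users) { /* placeholder */ }
        static long executeTest(int users) { /* placeholder */ return 0L; }
        static void recordMeasurement(int users, long elapsedMillis) { /* placeholder */ }
    }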
For test system 110 to operate, it is necessary that there be test code. A user could provide test code. Or, test code could be provided by automatic code generation systems, such as TESTMASTER sold by Teradyne Software and System Test of Nashua, NH. However, FIG. 2 illustrates that code generators 212A and 212B are used in the preferred embodiment to create the code. Turning to FIG. 5, greater details of a code generator 212 are shown.
Code generator 212 contains several scripts 512. Each script is a sequence of steps that code generator 212 must perform to create code that performs a certain type of test. The scripts can be prepared in any convenient programming language. For each type of test that test system 110 will perform, a script is provided. User input on the type of test that is desired specifies which script 512 is to be used for generating code at any given time. Appendix 1 contains an example of a script. The selected script 512 assembles test code 516. The information needed to assemble test code 516 comes from several sources. One source of information is the test templates 514. There are some steps that are needed in almost any kind of test. For example, the object being tested must be deployed and some initialization sequence is required. If the tests are timed, there must be code that starts the test at a specified start time and an ending time of the test must be recorded. Also, there must be code that causes the required data to be logged during the test. After the test, there might also be some termination steps that are required. For example, where the initialization started with a request for a reference to a particular EJB, the test code will likely terminate with that reference being released. The test code to cause these steps to be performed is captured in the set of templates 514.
In addition, there might be different templates to ensure that the test code 516 appropriately reflects inputs provided by the user. For example, different containers might require different command formats to achieve the same result. One way these different formats can be reflected in the test code 516 is by having different templates for each container. Alternatively, a user might be able to specify the type of information that is to be recorded during a test. In that instance, a data logging preference might be implemented by having a set of templates that differ in the command lines that cause data to be recorded during a test. An example template is shown in Appendix 2.
The templates are written so that certain spaces can be filled in to customize the code for the specific object to be tested. In the preferred embodiment, code generator 212 generates code to test a specific EJB in an application under test. One piece of information that will need to be filled in for many templates is a description of the EJB being tested. Another piece of information that might be included is user code to put the application under test in the appropriate state for a test. For example, in testing a component of an application that manages a database of account information for a bank, it might be necessary to have a specific account created in the database to use for test purposes or it might otherwise be necessary to initialize an application before testing it. The code needed to cause these events might be unique to the application and will therefore be best inserted into the test code by the tester testing the application. In the illustrated embodiment, this code is inserted into the template and is then carried through to the final test code.
The template might also contain spaces for a human tester to fill in other information, such as specific data sets to use for a test. However, in the presently preferred embodiment, data sets are provided by the human user in the form of a data table.
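The template of Appendix 2 is not reproduced here, but a fragment of a test-code template of the general kind described above might look like the following. The %...% tokens mark the spaces the code generator fills in for the specific bean under test; the token names are purely illustrative assumptions and the fragment is not compilable as written.

    // Template fragment (illustrative only - not the actual Appendix 2 template).
    // %JNDI_NAME%, %HOME_TYPE%, %REMOTE_TYPE% and the other tokens are filled in
    // by code generator 212 from the symbol table and from user input.
    Object ref = context.lookup("%JNDI_NAME%");
    %HOME_TYPE% home = (%HOME_TYPE%) javax.rmi.PortableRemoteObject.narrow(ref, %HOME_TYPE%.class);
    %REMOTE_TYPE% bean = home.create();
    // %USER_SETUP_CODE% - tester-supplied code to put the application under test in a known state
    long startTime = new java.util.Date().getTime();   // record the test start time
    // %GENERATED_METHOD_CALLS% - calls produced for the bean's methods
    long stopTime = new java.util.Date().getTime();    // record the test ending time
    // %LOGGING_CODE% - write the recorded times to the log
    bean.remove();                                      // release the reference obtained above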
Code generator 212 could generate functional tests. Functional tests are those tests that are directed at determining whether the bean correctly performs its required functions. In a functional test, the software under test is exercised with many test cases to ensure that it operates correctly in every state. Data tables indicating expected outputs for various inputs are used to create functional test software. However, in the presently preferred embodiment, code generator 212 primarily generates test code that performs load tests. In a load test, it is not necessary to stimulate the software under test to exercise every possible function and combination of functions the software is intended to perform. Rather, it is usually sufficient to provide one test condition. The objective of the load test is to measure how operation of the software degrades as the number of simultaneous users of the application increases.
In the preferred embodiment, test system 110 contains scripts 512 to implement various types of load tests. One type of load test determines response time of an EJB. This allows the test system to vary the load on the EJB and determine degradation of response time in response to increased load. Another type of load test is a regression type load test. In a regression type test, the script runs operations to determine whether the EJB responds the same way as it did to some baseline stimulus. In general, the response to the baseline stimulus represents the correct operation of the EJB. Having a regression type test allows the test system 110 to increase the load on a bean and determine the error rate as a function of load.
To generate test code 516 for these types of load tests, the script 512 must create test code that is specific to the bean under test. The user provides information on which bean to test through GUI 150. In the preferred embodiment, this information is provided by the human tester providing the name of the file within the application under test that contains the "deployment descriptor" for the specific bean under test. This information specifies where in the network to find the bean under test. Script 512 uses this information to ascertain what test code must be generated to test the bean.
Script 512 can generate code by using the attributes of the platform independent language in which the bean is written. For the example of Sun JAVA language being used here, each bean has an application program interface called a "reflection." More particularly, each bean has a "home" interface and a "remote" interface. The "home" interface reveals information about the methods for creating or finding a remote interface in the bean. The remote interface reveals how this code can be accessed from client software. Of particular interest in the preferred embodiment, the home and remote interfaces provide the information needed to create a test program to access the bean.
Using the reflection, any program can determine what are known as the "properties" and "methods" of a bean. The properties of a bean describe the data types and attributes for a variable used in the bean. Every variable used in the bean must have a property associated with it. In this way, script 512 can automatically determine what methods need to be exercised to test a bean and the variables that need to be generated in order to provide stimulus to the methods. The variables that will be returned by the methods as they are tested can also be determined. In the preferred embodiment, this information is stored in symbol table 515.
Symbol table 515 is a file in any convenient file format. Many formats for storing tabular data are known, such as .xml format. Once the information on the methods and properties is captured in a table, script 512 can use this information to create test code that exercises the methods and properties of the particular component under test. In particular, script 512 can automatically create a variable of the correct data type and assign it a value consistent with that type for any variable used in the bean.
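As a sketch of how the reflection interface can be used to gather this information, the following code lists the methods of a bean's remote interface together with their parameter and return types, which is essentially the content that would go into symbol table 515. The class name and the entry format are assumptions made for illustration.

    import java.lang.reflect.Method;
    import java.util.ArrayList;
    import java.util.List;

    // Sketch: enumerate methods and their property types through Java reflection.
    class SymbolTableBuilder {
        static List<String> describeMethods(Class<?> remoteInterface) {
            List<String> entries = new ArrayList<String>();
            for (Method m : remoteInterface.getMethods()) {
                StringBuilder entry = new StringBuilder(m.getName()).append('(');
                Class<?>[] parameterTypes = m.getParameterTypes();
                for (int i = 0; i < parameterTypes.length; i++) {
                    if (i > 0) entry.append(',');
                    entry.append(parameterTypes[i].getName());           // input property types
                }
                entry.append("):").append(m.getReturnType().getName());  // output property type
                entries.add(entry.toString());
            }
            return entries;
        }
    }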
FIG. 5 shows a data generator 518. Data generator 518 uses the information derived from the reflection interface to generate values for variables used during testing of a bean. There are many ways that appropriate values could be generated for each variable used in the test of a particular bean. However, in the commercial embodiment of the present invention, the user is given a choice of three different algorithms that data generator 518 will use to generate data values. The user can specify "maximum," "minimum" or "random." If the maximum choice is specified, data generator 518 analyzes the property description obtained through the reflection interface and determines the maximum permissible value. If the user specifies "minimum" then data generator 518 generates the smallest value possible. If the user specifies random, data generator 518 selects a value at random between the maximum and the minimum. In many instances where a load test is desired, the exact value of a particular variable is not important. For example, when testing whether a bean can properly store and retrieve a value from a database, it usually does not matter what value is stored and retrieved. It only matters that the value that is read from the database is the same one that was stored. Or, when timing the operation of a particular bean, it will often not matter what values are input to the method. In these scenarios, data generator 518 can automatically generate the values for variables used in the test code. In cases where the specific values of the variables used in a test are important, code generator 212 provides the user with another option. Rather than derive values of variables from data generator 518, script 512 can be instructed to derive data values from a user-provided data table 520. A user might, for example, want to provide a data table even for a load test when the execution time of a particular function would depend on the value of the input data.
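A minimal sketch of the three data-generation choices, for a few representative property types, is shown below. The class name, the set of supported types and the string-generation rule are assumptions made for illustration; they are not the disclosed implementation of data generator 518.

    import java.util.Random;

    // Sketch: produce a "maximum", "minimum" or "random" value for a property type.
    class DataValueGenerator {
        private static final Random random = new Random();

        static Object generate(Class<?> type, String mode) {
            if (type == int.class || type == Integer.class) {
                if ("maximum".equals(mode)) return Integer.valueOf(Integer.MAX_VALUE);
                if ("minimum".equals(mode)) return Integer.valueOf(Integer.MIN_VALUE);
                return Integer.valueOf(random.nextInt());   // random value between min and max
            }
            if (type == double.class || type == Double.class) {
                if ("maximum".equals(mode)) return Double.valueOf(Double.MAX_VALUE);
                if ("minimum".equals(mode)) return Double.valueOf(-Double.MAX_VALUE);
                return Double.valueOf((random.nextDouble() * 2.0 - 1.0) * Double.MAX_VALUE);
            }
            if (type == String.class) {
                StringBuilder s = new StringBuilder();      // random lowercase string,
                for (int i = 0; i < 8; i++) {               // e.g. "qkgthfpw"
                    s.append((char) ('a' + random.nextInt(26)));
                }
                return s.toString();
            }
            throw new IllegalArgumentException("unsupported property type: " + type);
        }
    }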
A data table is implemented simply as a file on one ofthe computers on network 122. The entries in the table, specifying values for particular variables to use as inputs and outputs to particular methods, are separated by delimiters in the file. A standard format for such a table is "comma separated values" or CSV. In a preferred embodiment, test system 110 includes a file editor - ofthe type using conventional technology - for creating and editing such a file. In addition, test system 110 would likely include the ability to import a file - again using known techniques - that has the required format. The methods of a bean describe the functions that bean can perform. Part of the description ofthe method is the properties ofthe variables that are inputs or outputs to the method. A second part of the description of each method - which can also be determined through the reflection interface - is the command needed to invoke this method. Because script 512 can determine the code needed to invoke any method and, as described above, can generate data values suitable to provide as inputs to that method, script 512 can generate code to call any method in the bean.
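A data table in comma separated value form, as described above, can be read with a short routine such as the sketch below. The file layout, and the assignment of columns to particular method arguments, are assumptions for illustration only.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.util.ArrayList;
    import java.util.List;

    // Sketch: read a comma separated data table into rows of string values.
    // Mapping columns onto method arguments is left to the generating script.
    public class DataTableReader {
        public static List readTable(String fileName) throws Exception {
            List rows = new ArrayList();
            BufferedReader in = new BufferedReader(new FileReader(fileName));
            String line;
            while ((line = in.readLine()) != null) {
                if (line.trim().length() == 0) continue;   // skip blank lines
                rows.add(line.split(","));                  // one String[] per row
            }
            in.close();
            return rows;
        }
    }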
In the preferred embodiment, directed at load testing, the order in which the methods of a bean are called is not critical to an effective test. Thus, script 512 can automatically generate useful test code by invoking each method of the bean. The order in which the methods are invoked does not matter if the only parameter that is measured is the time it takes the methods to execute.
More sophisticated tests can be automatically built by relying on the prescribed pattern for the language. In Sun JAVA, entity beans for controlling access to a database should have methods that have a prefix "set" or "get". These prefixes signal that the method is either to write data into a database or to read data from the database. The suffix of the method name indicates which value is to be written or read in the database. For example, a method named setSSN should perform the function of writing into a database a value for a parameter identified as SSN. A method named getSSN should read the value from the parameter named SSN.
By taking advantage of these prescribed patterns, script 512 can generate code to exercise and verify operation of both methods. A piece of test code generated to test these methods would first exercise the method setSSN by providing it an argument created by data generator 518. Then, the method getSSN might be exercised. If the get method returns the same value as the argument that was supplied to the set method, then it can be ascertained that the database access executed as expected.
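The following sketch shows the set/get round trip described above, written against a hypothetical remote interface having setSSN and getSSN methods. The interface is declared here only so that the sketch is self contained; the generated code in the appendix follows the same pattern.

    // Hypothetical remote interface, declared for illustration only.
    interface Customer {
        void setSSN(String ssn) throws Exception;
        String getSSN() throws Exception;
    }

    // Sketch of the set/get check: the value written through the set method
    // should be the value read back through the get method.
    public class SetGetCheck {
        public static boolean verifySsnRoundTrip(Customer customer, String generatedValue) {
            try {
                customer.setSSN(generatedValue);          // write through the entity bean
                String readBack = customer.getSSN();      // read the same column back
                return generatedValue.equals(readBack);   // pass only if the values match
            } catch (Exception e) {
                return false;                             // any exception counts as a failure
            }
        }
    }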
For many types of enterprise wide applications, the beans most likely to be sensitive to load are those that access the database. Thus, testing only set and get methods provides very useful load test information.
However, the amount of testing done can be expanded where required. Some beans also contain methods that create or find rows in a database. By convention, methods that create or find rows in a database are named starting with "create" or "find." Thus, by reflecting the interface of the bean, script 512 can also determine how to test these methods. These methods can be exercised similarly to the set and get methods. The properties revealed through the application interface will describe the format of each row in the database. Thus, when a create method is used, data can be automatically generated to fill that row, thereby fully exercising the create method. In a preferred embodiment, find methods are exercised using data from a user supplied data table 520. Often, databases have test rows inserted in them specifically for testing. Such a test row would likely be written into data table 520. However, it would also be possible to create a row, fill it with data and then exercise a find method to locate that row.
Once the commands that exercise the methods of an EJB are created, script 512 can also insert into the client test code 516 the commands that are necessary to record the outputs of the test. If a test is checking for numbers of errors, then test code 516 needs to contain instructions that record errors in log 216. Likewise, if a test is measuring response time, the test code 516 must contain instructions that write into log 216 the information from which response time can be determined. In the described embodiment, all major database functions can be exercised with no user supplied test code. In some instances, it might be possible to exercise all the functions with all test data automatically generated. All the required information could be generated from just the object code ofthe application under test. An important feature ofthe preferred embodiment is that it is "minimally invasive" - meaning that very little is required ofthe user in order to conduct a test and the test does not impact the customer's environment. There is no invasive test harness. The client code runs exactly like the code a user would write.
In some scenarios, it will be necessary or desirable for a user to insert specific steps into the test code 516. Test system 110 allows for the user to edit test code 516. For example, some beans will perform calculations or perform functions other than database access. In these instances, the user might choose to insert test code that exercises these functions. However, because bottlenecks in the operation of many N-tiered applications occur in the entity beans that manage database transactions, the simple techniques described herein are very useful.
Appendix 3 gives an example of test code that has been generated. Once test code 516 is generated, with or without editing by a user, test system 110 is ready to execute a test. User input to coordinator 210 triggers the test. As described above, coordinator 210 queues up requests for tests and tests are executed according to the process pictured in FIG. 4. To simulate one user, the test code 516 is executed as a single thread on one ofthe test engines 214A...214C. To simulate multiple users, multiple threads are initiated on one or more ofthe test engines 214A...214C. The results of any tests are stored in log 216. FIG. 2 shows log 216 as a separate hardware element attached to network 122. Log 216 could be any type of storage traditionally found in a network, such as a tape or disk drive attached to a computer server. For simplicity, it is shown as a separate unit, but could easily be part of any other server in the network.
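The manner in which multiple simultaneous users might be simulated on a test engine is sketched below. The generated client code in the appendix extends Thread and takes an instance number and server URL, so one copy per simulated user can be started and joined; the class names used here are placeholders, not part of the described embodiment.

    // Sketch: simulate N concurrent users by starting N copies of the generated
    // client test code and waiting for all of them to finish.
    public class LoadRunner {
        public static void runConcurrentUsers(int users, String serverUrl) throws InterruptedException {
            Thread[] threads = new Thread[users];
            for (int i = 0; i < users; i++) {
                threads[i] = new ClientTestThread(i, serverUrl);  // one simulated user
                threads[i].start();
            }
            for (int i = 0; i < users; i++) {
                threads[i].join();                                // wait for every user to finish
            }
        }
    }

    // Placeholder standing in for the generated client test code.
    class ClientTestThread extends Thread {
        private final int instance;
        private final String url;
        ClientTestThread(int instance, String url) { this.instance = instance; this.url = url; }
        public void run() { /* generated test steps would execute here */ }
    }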
Many types of data could be stored in log 216. For example, there are several possible ways that "response time" could be measured. One way is that the total time to execute all the methods in a bean could be measured. Another way is that the start up time of a bean could be measured. The startup time is the time it takes from when the bean is first accessed until the first method is able to execute. Another way to measure response time is to measure the time it takes for each method to execute. As another variation, response time could be measured based on how long it takes just the "get-" methods to execute or just the "set-" methods to execute.
Different measurements must be recorded, depending on which measure of response time is used. For example, if only the total response time is required, it is sufficient if the test code simply records the time that the portion of the test code that exercises all the methods starts and stops executing. If the startup response time is required, then the client test code must record the time that it first accesses the bean under test and the time when the first method in the test sequence is ready to be called. On the other hand, if the response time is going to be computed for each method, the client test code must record the time before and after it calls each method, and some indication of the method being called must also be logged. Similar information must be recorded if responses of just "get-" or "set-" functions are to be measured, though the information needs to be recorded for only a subset of the methods in these cases. In addition, when there are multiple users being simulated, there are multiple values for each data point. For example, if test system 110 is simulating 100 users, the time that it takes for the bean to respond to each simulated user could be different, leading to up to 100 different measurements of response time. The response time for 100 users could be presented as the maximum response time, i.e. the time it takes for all 100 simulated users to finish exercising the bean under test. Alternatively, the average time to complete could be reported as the response time. As another variation, the range of values could be reported.
In the preferred embodiment, the client test code 516 contains the instructions that record all of the information that would be needed for any possible measure of response time and every possible display format. The time is recorded before and after the execution of every method. Also, an indication that allows the method to be identified is recorded in log 216. To support analysis based on factors other than delay, the actual and expected results of the execution of each method are recorded so that errors can be detected. The occurrences of exceptions are also recorded in the log. Then, data analyzer 218 can review the log and display the response time according to any format and using any definition of response time desired. Or the data analyzer can count the number of exceptions or errors.
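A per-method timing record of the kind described above might be produced as in the following sketch, which mirrors the logError pattern used in the appendix test code. The record format shown here is an assumption.

    import java.util.Date;

    // Sketch of a per-method timing record: a key, the method name, an outcome
    // and the elapsed time, taken before and after each call.
    public class MethodTimer {
        public static String timeAndLog(String methodName, Runnable call) {
            long start = new Date().getTime();            // timestamp before the call
            String outcome = "ok";
            try {
                call.run();
            } catch (RuntimeException e) {
                outcome = "except:" + e.toString();       // exceptions are logged, not rethrown
            }
            long elapsed = new Date().getTime() - start;  // timestamp after the call
            return methodName + "~" + outcome + "~" + elapsed;
        }
    }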
Once the data is stored, the user can specify the desired format in which the data is to be presented. Data analyzer 218 accepts commands from the tester concerning the format of the output and analyzes the data appropriately. In a preferred embodiment, test system 110 has the ability to present the results of tests graphically to aid the tester in understanding the operation - particularly performance bottlenecks - of application under test 114.
Data analyzer 218 generates output in several useful formats. One important output is a response time versus load graph. Log file 216 contains the starting and stopping times of execution tests for a particular test case. The test case includes the same measurements at several different load conditions (i.e. with the test engines 214A...214C simulating different numbers of simultaneous users). Thus, data analyzer can read through the data in log 216 and identify results obtained at different load conditions. This data can be graphed. Another useful analysis is the number of errors per second that are generated as a function ofthe number of simultaneous users. To perform this analysis, test code 516 could contain instructions that write an error message into log 216 whenever a test statement produces an incorrect result. In the database context, incorrect results could be identified when the "get" function does not return the same value as was passed as an argument to the "set" function. Or, errors might be identified when a bean, when accessed, does not respond or responds with an exception condition. As above, data analyzer 218 can pass through the log file, reading the numbers of errors at different simulated load conditions. If desired, the errors can be expressed as an error count, or as an error rate by dividing the error count by the time it took for the test to run.
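The error analysis described above can be sketched as follows. The log record fields assumed here (a simulated user count and an error flag per entry) are a simplification of the information actually written to log 216.

    import java.util.HashMap;
    import java.util.Map;

    // Sketch: count errors observed at each simulated load level.
    public class ErrorRateAnalyzer {
        public static class LogRecord {
            public final int simulatedUsers;   // load level for the run that wrote this record
            public final boolean isError;      // whether the logged statement produced an error
            public LogRecord(int simulatedUsers, boolean isError) {
                this.simulatedUsers = simulatedUsers;
                this.isError = isError;
            }
        }

        // Returns a map from user count to number of errors observed at that load.
        public static Map countErrorsByLoad(LogRecord[] records) {
            Map errorsByLoad = new HashMap();
            for (int i = 0; i < records.length; i++) {
                Integer key = new Integer(records[i].simulatedUsers);
                int current = errorsByLoad.containsKey(key)
                        ? ((Integer) errorsByLoad.get(key)).intValue() : 0;
                if (records[i].isError) current++;
                errorsByLoad.put(key, new Integer(current));
            }
            return errorsByLoad;
        }
    }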
Some examples of the types of outputs that might be provided are graphs showing: transactions per second versus number of users; response time versus number of users; exceptions versus number of users; errors versus number of users; response time by method; response time versus run time; and transactions per second versus run time. Different ways to measure response time were discussed above. In the preferred embodiment, a transaction is defined as the execution of one method, though other definitions are possible.
Run time is defined as the total elapsed time in running the test case, and would include the time to set up the execution of EJBs. Viewing response time as a function of elapsed time is useful, for example, in revealing problems such as "memory leaks". A memory leak refers to a condition in which portions of the memory on the server running the application under test get used for unproductive things. As more memory is used unproductively, there is less memory available for running the application under test and execution slows over time. Alternatively, viewing results in this format might reveal that the application under test is effectively utilizing caching. If caching is used effectively, the execution time might decrease as elapsed time increases. Turning now to FIG. 6, operation of test system 110 from the tester's perspective is illustrated. FIG. 6 illustrates the tester interface to be a traditional web browser interface as it would appear on the terminal of PC 152 or 154. This type of interface is the result of using a web server to implement GUI 150 with commercially available browser software on the tester PC's 152 and 154.
Element 612 is the browser toolbar. The browser provides the ability for the tester to perform such functions as printing and connecting to pages presented by various servers in the system. The tester performs these functions through the use of the tool bar 612. Field 614 indicates the server to which the PC 152 or 154 is currently connected. In the illustrated example, field 614 contains the address on network 122 for the server containing GUI 150.
Window 610 is the area of the screen of PC 152 or 154 where information specific to test system 110 is displayed. As with traditional web based applications, this information is displayed as the result of HTML files that are downloaded over network 122. Certain elements displayed in window 610 represent hyperlinks, which when accessed by a tester cause other pages to be downloaded so that different information is displayed for the tester. Other elements displayed in window 610 represent data entry fields. When a human tester fills in these fields, information is submitted to test system 110. Tabs 616 represent choices a tester can select for operating mode. FIG. 6 shows three tabs. There is one tab for "setup", one for "test case" and one for "results". Each of the tabs is hyperlinked. These hyperlinks connect to pages on servers in the network that are programmed to manage the test system through a particular phase. Setup is for configuring the test system, such as by identifying addresses for servers on network 122 that contain test engines. "Test case" is for creating and running a particular test case, which in the preferred embodiment means test code for a particular bean and parameters specifying how that test code is to run. "Results" is for viewing the results of a test case that has executed. In FIG. 6, the "SETUP" tab is shown selected, meaning that the balance of window 610 displays hyperlinks and fields that are appropriate for entering data or commands for the SETUP phase.
Region 618 contains a set of hyperlinks. These hyperlinks present a list of choices to the user about what can be set up. Selecting one of these hyperlinks will change the information appearing in region 620. It is well known in the art of web based software to have hyperlinks in one part of a window change information appearing in another part of the window.
In the Example of FIG. 6, the hyperlink "Machine" in region 618 has been selected. Therefore, region 620 contains fields that can be used to identify a machine to be added to test system 110. In FIG. 6, a screen to add a client host computer is shown. A client host computer acts as a test engine 214A...214C.
Region 618 also contains hyperlinks for "Projects" and "Data Tables". As described above, test system 110 can be used by multiple testers or can be used to test multiple applications under test. To segregate the information relating to various tasks, a "Project" is defined. All information relating to a particular project is identified with the project. In this way, test system 110 can identify information that is logically related. For example, test code developed to test a particular bean in an application will be given the same project name as test code to test other beans in that application. In this way, general information about the application - such as the path to the server through which the application can be accessed - is stored once with the project rather than with each piece of test code. As another example, the test results can be associated with the test code that generated them through the use of projects.
The hyperlink for "Data Tables" allows the tester to identify to the system where data tables are stored to test specific beans. In general, the tester will create the data tables by hand or automatically apart from test system 110 based on the features that the tester desires to have tested. The data tables will be stored in files on a computer on network 122. Before use by test system 1 10. the tester must provide the network address for that file.
Field 622 is a menu field. It is a drop down menu box. When accessed, it will display a menu of choices of the actions the tester can specify for test system 110. The contents of the menu are context sensitive, meaning that for any given page, only the menu choices that are appropriate are displayed. Actions that a user might want to choose from the menu are things such as edit, delete, add new, rename, etc.
Turning now to FIG. 7, a screen of tester PC 152 or 154 is shown when the TEST CASE tab is selected. Selecting the TEST CASE tab allows the tester to specify what is to be tested and how the test is to be conducted. In the example of FIG. 7, window 610 contains information that describes a test case. This particular page is displayed when "edit" has been selected in menu field 622. Field 710 indicates the name of the project with which the test case is associated. Field 710 is a screen display element known as a drop down box. When activated by a tester, field 710 will become a list of all projects that have been previously created by any tester using test system 110. As shown in FIG. 7, a project called "Demo Project" is being used.
Field 712 identifies the particular test case by name. Field 712 is also a drop down box, allowing previously defined test cases to be selected from a list. In addition, one of the options that appears when the drop down box is accessed is "<New Test Case>". When selected, a new test case is created and information about this test case can be entered. This type of menu is common for programs with graphical user interfaces and is used at several points in the interface for presenting choices to a human tester.
In the example of FIG. 7, a test case "Customer Test" has been created and information has already been entered. In region 714, there are a series of fields in which data can be entered or changed to define or modify the test case. In field 716, a name can be provided for the test case. This name allows the test case to be referenced in other parts of the test system.
In field 718, a description of the test case can be entered. This description is not used by the automatic features of the test system, but can act as a note to a human tester to signify what the test case does or why it was created. Field 720 is likewise not used by the automated system. However, this field holds the name of an individual who authored the test case to facilitate other administrative functions. Field 722 is a "radio button" type field. It presents a tester with a list of choices, only one of which can be selected at a time. In this example, field 722 allows the tester to specify the type of test. As previously described, code generators 212A and 212B contain a plurality of scripts 512 that generate test code. As described above, the script assembles templates and generates command lines for a particular type of test that is to be conducted. Thus, the tester must specify the test type in order to allow the appropriate script to be selected. In this example, only two types of tests are presented as options to a tester - a load test and a functional test. These types of tests were discussed above. Field 724 allows the tester to specify a "deployment descriptor." In the JAVA language, every bean has associated with it a deployment descriptor. The deployment descriptor is a file that identifies the bean and defines the parameters with which the EJB will be deployed on the applications server. Examples of the parameters are the number of instantiations of the EJB with which to start the applications server (sometimes called a "pooling number") and how much memory is allocated to the bean. These functions are performed by the container 130.
The tester provides the deployment descriptor by entering the path to a file on network 122. The test system 110 reads the deployment descriptor to find the name of the bean under test and then to access the bean through reflection to determine its methods and properties.
Field 726 allows the tester to specify the type of data to be used in creating the test code 516. As described above, in the preferred embodiment, the data can be automatically generated by test system 110 or can be supplied through a data table. For automatic data generation, the data can be generated by using the maximum possible value of each variable, the minimum possible value or a value randomly selected between the maximum and the minimum. Alternatively, the data can be specified by a data table. In FIG. 7, field 726 indicates that the tester desires test system 110 to generate data using the data table named dd.csv. If the tester had wanted the test system to automatically generate data, the tester would specify in field 726 whether the data was to be generated randomly or whether the maximum or minimum values were to be used.
Field 728 allows the user to specify the server on which the application under test runs. FIG. 1 shows that the beans 132 of the application under test are within a container 130. In this context, "server" refers to the software that creates the container for the application. For each platform independent language, there are a limited number of commercially available servers. Test system 110 contains script files that will generate the appropriate test code for any server. While most of the client test code will be server independent, it is possible that servers will implement certain functions in unique ways. Thus, there need to be script files that account for the functions that are implemented differently on certain servers. FIG. 8 shows a screen display when the tester has used menu field 622 and selected to have the deployment descriptor for a test case displayed. If desired, the tester can then edit the deployment descriptor to try alternative configurations.
FIG. 9 shows a screen display when a tester has selected from menu field 622 to have the generated test code displayed. In FIG. 9, the test code 516 is identified as "client code" because it will simulate the operation of a client 120 ofthe application under test 114. The displayed code corresponds to the code generated for the project and test case identified in fields 710 and 712. In this screen, the tester can also edit the test code. One instance when a tester might desire to edit test code is when most ofthe commands in the test code can be automatically generated, but certain commands must have specific data values for the application under test to function properly. The tester could have test system 110 automatically generate the test code, but then manually edit the few data values that matter. As an example, a particular bean might include a method that processes positive and negative values differently. The tester might know that processing negative numbers takes longer and therefore change a randomly generated value to a negative number.
An alternative scenario in which a tester might wish to edit test code 516 is when the bean being tested contains methods to be tested other than those that follow a pattern, such as the "set", "get", "create" and "find" methods described above. The tester might create test code that tests methods that do not follow the pattern. This code could then be inserted by the human tester into the generated test code.
FIG. 10 shows a screen display for another function performed by a human tester while specifying the TEST CASE, also selected through menu field 622. FIG. 10 is a screen display used when the test case is to be run. The project and specific test case is specified by entries in fields 710 and 712. Information about how to run the test case is entered in region 1010.
Region 1010 contains several fields. Field 1012 indicates the name of the file in log 216 where the results of executing the test case will be stored. In this way, data analyzer 218 will be able to locate the data for a particular test case to analyze and present to a tester in the desired form.
Field 1014 gives the URL - or network address - for the application under test. This information could be identified by using the name for a particular machine that was previously set up by the human tester. Field 1016 gives the URL - or network address - for a server to use as a test engine. Again, the server could be identified by using the name for a particular machine that was previously set up by the human tester. The screen displayed in FIG. 10 is used for an embodiment of the invention where all simultaneously executing copies of the client test code are run on a single machine. If test system 110 includes multiple test engines and automatically schedules execution of test code as described in conjunction with FIG. 4 above, then field 1016 is not required.
Field 1018 gives the maximum number of concurrent users to be simulated at one time during the execution of the test case. Field 1020 allows the user to specify the rate at which the number of simultaneous users will be increased during a test. In the example of FIG. 10, the test case will be completed after 100 users have been simulated simultaneously and the number of simultaneous users will increase by 10 each time the test code is run. For the examples given here, 10 copies of the client test code shown in FIG. 9 will first be executed simultaneously. Then 20 copies of that test code will be executed simultaneously. The test code will be repeated for 30, 40, 50, 60, 70, 80, 90 and 100 simultaneous users before the test case is completed. After the test case has been run, the tester can view and analyze the results.
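The ramp described above (10, 20, ... 100 simultaneous users) might be driven by a loop such as the following sketch, in which the thread started for each simulated user is a placeholder for the generated client test code. The parameter names correspond to fields 1018 and 1020 only by way of illustration.

    // Sketch: run the test code at increasing load levels until the maximum
    // number of concurrent users has been simulated.
    public class LoadRamp {
        public static void ramp(int maxUsers, int increment) throws InterruptedException {
            for (int users = increment; users <= maxUsers; users += increment) {
                System.out.println("Running test code with " + users + " simultaneous users");
                runConcurrentUsers(users);
            }
        }

        private static void runConcurrentUsers(int users) throws InterruptedException {
            Thread[] threads = new Thread[users];
            for (int i = 0; i < users; i++) {
                threads[i] = new Thread();   // placeholder for the generated client thread
                threads[i].start();
            }
            for (int i = 0; i < users; i++) threads[i].join();
        }
    }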
The human tester can access the functions of the test system 110 that display and analyze results by selecting the RESULTS tab from tabs 616. FIG. 11 shows a page displayed when the RESULTS tab is selected. The page shown in FIG. 11 is for when the tester has requested to see summary data through use of menu field 622. Fields 710 and 712 are also present on the page shown in FIG. 11. This information is used by the test system to locate the appropriate data to display. In addition, there is a field 1110 that allows the user to enter the name of the results file to display. As described in conjunction with FIG. 10, the user specifies in field 1012 the name of a results file to hold the results of a particular run of a test case. The name of the results file for the desired run of the test is entered in field 1110.
FIG. 11 shows that window 610 contains a region 1112 that lists the results of a run of a particular test case in summary fashion. Part of the information in region 1112 is simply a display of information input by the tester when editing or running the test case. For example, the target container, or the container 130 of application under test 114, is listed. The maximum number of concurrent users simulated during the test is also displayed. The file containing the test data used for the run of the test case that generated the results is also displayed, as is the deployment descriptor. These values are displayed for ease of use by the human tester.
Region 1112 also includes information that was developed by data analyzer 218 from the data gathered for the specified run of the test case. In FIG. 11, the pieces of information that are displayed are the average response time and the maximum response time. As described above, as a test case runs, the start and stop times of the execution of the test code are recorded in log 216. When the test code is run multiple times, each time simulating a different number of users, the start and stop times are recorded for each number of users. Data analyzer 218 can determine the response time by simply computing the difference between the start and stop times. Once values are determined, they can be averaged or the maximum can be identified.
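The summary values described above can be computed from the recorded start and stop times as in the sketch below; the millisecond timestamps correspond to the values written by the generated test code, but the class and method names here are illustrative only.

    // Sketch: compute the average and maximum response times from paired
    // start and stop timestamps (milliseconds), one pair per simulated user.
    public class ResponseTimeSummary {
        public static long average(long[] startTimes, long[] stopTimes) {
            long total = 0;
            for (int i = 0; i < startTimes.length; i++) {
                total += stopTimes[i] - startTimes[i];
            }
            return startTimes.length == 0 ? 0 : total / startTimes.length;
        }

        public static long maximum(long[] startTimes, long[] stopTimes) {
            long max = 0;
            for (int i = 0; i < startTimes.length; i++) {
                long elapsed = stopTimes[i] - startTimes[i];
                if (elapsed > max) max = elapsed;
            }
            return max;
        }
    }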
FIG. 12 shows a different page that can be selected by the user to see the results in graphical format. As above, fields 710, 712 and 1110 allow the user to specify which run of a particular test case is used to create the results. The graphical display in FIG. 12 is a graph showing the number of transactions per second as the dependent variable with the number of simultaneous users as the independent variable. As described above, the information needed to compute the data values for this graph is stored in log 216 after the test case is run and data analyzer 218 can retrieve it. For creation of the graph in FIG. 12, transactions per second was defined as the average number of methods executed per second per user. This value is essentially the reciprocal of response time.
FIG. 13 shows a screen useful for displaying results to a human tester in a slightly different embodiment. As with FIG. 12, the screen display in FIG. 13 is accessed when the "RESULTS" tab is selected. Also like in FIG. 12, the page shown in FIG. 13 includes fields 710, 712 and 1110 that allow the human tester to specify which results are to be displayed.
The page shown in FIG. 13 includes an alternative way for the user to specify the format of the display. The screen in FIG. 13 includes menu fields 1310 and 1312. Menu field 1310 allows the tester to specify the manner in which response time is to be calculated. In FIG. 13, a value of "total" has been selected in field 1310. As described above, the "total" response time is measured as the time from first access of a bean until all methods of the bean have been exercised. Other choices in menu field 1310 allow a tester to specify that results be displayed for different measures of response time. As described above, the presently preferred embodiment can measure response time based just on the start up time, or on the response time for individual methods, or for get-functions and set-functions.
Field 1312 allows a user to specify the format of the display. In FIG. 13, the display is in HiLo format. In this format, results are displayed as bars, such as bar 1316. Each bar spans from the fastest response time to the slowest response time. A tick mark showing the average is also included in the illustration of FIG. 13. Other choices in menu field 1312 would, for example, allow the human tester to see results in a line chart format as in FIG. 12 or in tabular format.
Turning to FIG. 14, results in a tabular format are shown. Field 1312 indicates that the display format of "Log File" has been selected. This format corresponds to the list shown in region 1412. The list contains a column for the names of the methods in the bean being tested. In this example, the data shown for each method reveals the minimum, maximum and average execution time for that method.
As described above, test system 110 measures response time at various load conditions. The displayed data represents response times at a particular load condition. Thus, to make the list, the tester must specify the load condition for which data is to be displayed. To allow this selection to be made, the page displayed in FIG. 14 contains a field 1410. In this field, a user can enter the load condition for which data is to be displayed. In this example, the human tester has entered a value of 500, indicating that 500 threads executing the test code were initiated in order to obtain the displayed data.
Having described the structure of test system 110 and given examples of its application, several important features of the test system 110 can be seen. One feature is that information about the performance of an application under test can be easily obtained, with much of the data being derived in an automated fashion. A software developer could use the test system to find particular beans that are likely to be performance bottlenecks in an application. The developer could then rewrite these beans or change their deployment descriptors. For example, one aspect of the deployment descriptor indicates the number of copies of the bean that are to be instantiated within application under test 114. The developer could increase the number of instantiations of a bean if that bean is the bottleneck.
The test system described herein provides an easy and accurate tool to test EJBs for scalability. It creates a user specified number of virtual users that call the EJB while it is deployed on the applications server. The tool does this by inspecting the EJB under test and automatically generating a client test program, using either rules based data or data supplied by a human tester, and then multithreading the client test program to drive the EJB under test. The result is a series of graphs reporting on the performance versus the number of users, which provide useful information in an easy to use format.
Another feature of the invention is that the tests are run without requiring changes in the application under test or even the installation of special test agents on the server containing the software under test. The generated test code 516 exercises the bean in the application under test using remote procedure calls.
Another feature of the described embodiment of test system 110 is that it is scalable. To increase the number of tests that could simultaneously be run or the size of the tests that could be run, more test engines could be added. Likewise, more code generators could be added to support the simulation of a larger number of simultaneous users. The specific number of copies of each component is not important to the invention. The actual number of each component in any given embodiment is likely to vary from installation to installation. The more users an application is intended to support, the more test engines are likely to be required. Another feature of the described embodiment is that testing is done on the simplest construct in the application under test - the beans in the illustrated example. There are two benefits to this approach. First, it allows tests to be generated very simply, with minimal human intervention. Second, it allows a software developer to focus in on the point of the software that needs to be changed or adjusted in order to improve performance.
It should be appreciated that displays of specific kinds of information have been described. Various other analyses might be performed. It was described that response times and error rates as a function of load could be graphed for display to a human tester for further analysis. One enhancement that might be made to test system 110 is that the data analyzer 218 could be programmed to perform further analysis. It has been recognized that, as the load increases, there is often some point at which the performance of the system drastically changes. In some instances, the time to complete a transaction drastically increases. A drastic increase in transaction processing time indicates that the system was not able to effectively handle the load.
However, a decrease in processing time can also indicate the load limit was reached. Sometimes, a system under test will respond with an error message more quickly than it would take to generate a correct response. Thus, if the only parameter being tracked is response time, a decrease in processing time as a function of load can also signal that the maximum load has been exceeded. Of course, an increase in errors or error rate can also signal that the maximum load was exceeded. Data analyzer 218 could be used to identify automatically a maximum load for a particular test case. By running multiple test cases, each test case focusing on a different bean, test system 110 could automatically determine the bean that is the performance bottleneck and could also assign a load rating to application under test 114. Having described one embodiment, numerous alternative embodiments or variations might be made. For example, it was described that test system 110 automatically generates test code to exercise beans that follow a pattern for database access. These beans are sometimes called "entity beans." In general, there will be other beans in an application that perform computations on the data or that control the timing ofthe execution ofthe entity beans. These beans are sometimes called "session beans." Session beans are less likely to follow prescribed programming patterns that make the generation of test code for entity beans simple. As a result, the automatically generated test code for session beans might not fully test those beans. In the described embodiment, it is expected that the human tester supply test code to test session beans where the automatically generated tests are inadequate. One possible modification to the described embodiment is that the completeness of tests for session beans might be increased. One way to increase the accuracy of tests for session beans would be to capture data about the execution of those beans during actual operation ofthe application under test 114. This data could allow an automated system to determine things like appropriate data values, which might then be used to build a data table. Or, the captured data could allow the automated system to determine the order in which a session bean accesses other session beans or entity beans to create a realistic test. Also, as described, test code is generated to test a particular bean, which is a simple construct or "component" ofthe application under test. The testing could focus on different constructs, such as specific methods in a bean. Test code could be generated to test specific methods within beans. Or, it was described that the system records start and stop time ofthe execution ofthe test code. The times of other events could be recorded instead or in addition. For example, start and stop times of individual methods might be recorded, allowing performance of individual methods to be determined.
Alternatively, the complexity of the constructs being tested could be increased. Multiple beans might be tested simultaneously to determine interactions between beans. For example, multiple test cases might be executed at the same time, with one test case exercising a specified number of instances of one bean and a different test case exercising a specified number of instances of a second bean.
As another example of a possible variation, it was described that a human tester can insert code into a template to do such things as put the application under test into a predictable state. Lines of code might be inserted directly, for example by the user simply typing the lines of code. Or, the tester might insert a "tag" into the template. The tag would identify a code segment stored elsewhere within the test system. In this way, the same code segment could be included at multiple places in the template or in multiple templates.
As another example of possible variations, the number of templates used to construct test code might be varied. One possibility is that each template contains all of the steps needed to initialize, run and terminate a test. Thus, test code would be created by filling in a single template. Alternatively, each template might contain only the steps needed to perform one function, such as initialization, testing or termination. In this implementation, test code would be created by stringing together multiple templates.
Also, it was described that, in running a test, a number of simultaneous users is "synchronized". Simultaneous users are simulated by synchronizing copies of the test code on different servers and on the same server. The term "synchronized" should not be interpreted in so limited a way as to imply that multiple copies are each performing exactly the same function at exactly the same time. Thus, when it is described herein that execution is synchronized, all that is required is that each copy of the code is making requests of the application under test during the window of time when the test is being executed. Some copies of the code will likely start execution sooner or end sooner than the others. However, as long as there is overlap in the timing of execution, the test programs can be said to be synchronized or running concurrently. As a further variation, it was described that the test system 110 provides outputs indicating the performance of an application under test as a function of load. These outputs in graphical or tabular form can be used by an application developer to identify a number of concurrent users at which problems with the application are likely to be encountered. Potential problems are manifested in various ways, such as by a sudden change in response time or error rate as a function of load. Test system 110 could readily be programmed to automatically identify patterns in the output indicating these problem points.
Another useful modification would allow test system 110 to aid in identifying settings for various parameters in the deployment descriptor. As described above, the deployment descriptor for a bean identifies parameters such as memory usage and a "pooling number" indicating the number of instances of a bean that are created at the initialization of an application. These and other settings in the deployment descriptor might have an impact on the performance time and maximum load that an application could handle. One use ofthe test system described above is that it allows a test case to be repeated for different settings in the deployment descriptor. A human tester can analyze changes in performance for different settings in the deployment descriptor. However, test system 110 could be programmed to automatically edit the deployment descriptor of a bean by changing parameters affecting pooling or memory usage. Test system 110 could then automatically gather and present data showing the impact of a deployment descriptor on performance of an application.
Even higher levels of automation could be achieved by test system 110. For example, test system 110 might test the beans in an application and analyze the results of testing each bean. Test system 110 might identify the bean or beans that reflect performance bottlenecks (i.e. that exhibited unacceptable response times for the lowest numbers of simultaneous users). Then, test system 110 could run tests on those beans to find settings in the deployment descriptors that would balance the performance ofthe beans in the application (i.e. to adaptively adjust the settings in the deployment descriptors so that the bottleneck beans performed no worse than other beans.) It should also be appreciated that computer technology is rapidly evolving and improved or enhanced versions ofthe hardware and software components making up the application under test and the test system are likely to become available. It should also be appreciated that the description of one device in a class is intended to be illustrative rather than limiting and that other devices within the same class might be substituted with ordinary skill in the art. For example, the application under test was described in the context of a conventional application accessed through a client on a PC using a web browser as a graphical user interface. It should be appreciated, though, that if the clients are intended to be household appliances with microcontrollers, a different interface might be readily substituted for the graphical user interface.
Also, it was described that FIG. 11 shows summary data of a test after execution is completed. It will be appreciated, though, that data on a test case execution might be displayed to a human tester while a test is in process. For example, the summary screen might contain a field that shows the percentage of the test case that is completed. This value would update as the test runs. Likewise, the values for average and maximum response time could be updated as the data is gathered.
Also, it was described that the objects being tested are EJBs, which are written in the Java language. The same techniques are equally applicable to applications having components implemented in other languages. For example, applications written according to the COM standard might be written in Visual Basic and applications written for the CORBA standard might be written in C++.
Regardless of the specific language used, these standards are intended to allow separately developed components to operate together. Thus, each must provide a mechanism for other applications, such as test system 110, to determine how to access the methods and properties of their components. However, there could be differences in the specific commands used to access components.
In one embodiment, code generator 212 is implemented in a way that will make it easy to modify for generating test code for applications written in a different language. Specifically, code generator 212 stores intermediate results as a symbol table that is independent ofthe specific language used to program the application under test. The symbol table lists methods and properties for the component being tested. When to access these methods and what data to use for a particular test and what kinds of data to record can be determined from the information in the symbol table and input from the user. Thus, much ofthe functioning of code generator 212 is independent ofthe specific language used to implement the application under test. In this way, the language specific aspects of code generator 212 are easily segregated and represent a relatively small part ofthe code generator 212. In particular, language specific information is needed to access the application under test to derive the information for the symbol table. Language specific information is also needed to format the generated client test code. But, it is intended that these parts of code generator 212 could be replaced to allow test system 110 to test applications written in other languages. Also, it is possible that test system 110 will contain multiple versions ofthe language specific parts and the user could specify as an input the language of the application under test.
Therefore, the invention should be limited only by the spirit and scope of the appended claims.
(A Script) Generate new script
0=generateStandardPreamble
1=generateText(Java EJB imports,Common.ini,random,vendor.csv)
2=generateText(com.testmybeans.vendor.Vendor imports,ctor.ini,random,vendor.csv)
3=generateText(TestThread header,Common.ini,random,vendor.csv)
4=generateText(TestThread.run implementation,Common.ini,random,vendor.csv)
5=generateText(com.testmybeans.vendor.Vendor create,com.testmybeans.vendor.Vendor.ctor.ini,random,vendor.csv)
6=generateMethod(timingGetSet)
7=generateText(Bean end create,Common.ini,random,vendor.csv)
8=generateText(WebLogic getInitialContext,AppServer.ini,random,vendor.csv)
9=generateText(Close log,Common.ini,random,vendor.csv)
10=generateText(Log to disk,Common.ini,random,vendor.csv)
11=generateStandardTrailer
(A Template)
[Java EJB imports]
//[Java EJB imports]
package com.testmybeans.client;
import javax.naming.InitialContext;
import javax.naming.Context;
import javax.naming.NamingException;
import java.rmi.RemoteException;
import java.util.*;
import java.io.*;

[Hello imports]
//[Hello imports]
import com.softbridge.hello.HelloHome;
import com.softbridge.hello.Hello;
import com.softbridge.hello.HelloPK;

[TestThread header]
//[TestThread header]
public class $system.programName$ extends Thread
{
    int tempInt=0;
    Double tempDouble=new Double(0.0);
    Float tempFloat=new Float(0.0);
    short tempShort=0;
    long tempLong=0;
    String tempString="";
    long sTime=0;
    long sStartTime=0;
    int m_instanceNumber = 0;
    static String m_urlName = null;
    String threadNumber="";

    public $system.programName$(int inst, String url)
    {
        m_instanceNumber = inst;
        m_urlName = url;
        threadNumber=Integer.toString(m_instanceNumber);
        //System.out.println("Starting Instance: " + m_instanceNumber);
}
[TestThread.run implementation]
//[TestThread.run implementation]
public void run()
{
    sTime = new Date().getTime();
    sStartTime=sTime;
    logError("elapsedStart", Long.toString(new Date().getTime()), "", "");

[Bean end create]
//[Bean end create]
    } catch (Exception e)
    {
        logError("except", "beanMethods", e.toString(), "");
    }
    finally
    {
        try
        {
            closeLog();
            h.remove();
            h = null;
        }
        catch (Exception e)
        {
            logError("except", "closeHome", e.toString(), "");
        }
    }
}
[Bean end findByPrimaryKey]
//[Bean end findByPrimaryKey]
    } catch (Exception e)
    {
        logError("except", "beanMethods", e.toString(), "");
    }
    finally
    {
        try
        {
            closeLog();
            h = null;
        }
        catch (Exception e)
        {
            logError("except", "closeHome", e.toString(), "");
        }
    }
}
[Log to disk]
//[Log to disk]
public void logError(String key, String functionName, String expected, String actual)
{
    try
    {
        String elapsedTime = Long.toString(new Date().getTime() - sTime);
        com.testmybeans.client.CThreadWriter.write(key + "~" + functionName + "~"
            + expected + "~" + actual + "~" + elapsedTime + "~" + threadNumber);
    }
    catch (Exception e)
    {
        System.out.println("IOException: " + e.getMessage());
        System.exit(1);
    }
}
}

[Close log]
//[Close log]
public void closeLog()
{
    sTime=sStartTime;
    logError("elapsedEnd", Long.toString(new Date().getTime()), "", "");
}
(Test Code)
//
//Bean under test: com.testmybeans.vendor.Vendor
//Author: TestMyBeans Code Generator v0.8
//Creation Date: Fri Dec 10 16:53:15 EST 1999
//Copyright (c) 1999 TestMyBeans, Inc. All rights reserved.
//
//[Java EJB imports]
package com.testmybeans.client;
import javax.naming.InitialContext;
import javax.naming.Context;
import javax.naming.NamingException;
import java.rmi.RemoteException;
import java.util.*;
import java.io.*;

//[TestThread header]
public class invoice extends Thread
{
    int tempInt=0;
    Double tempDouble=new Double(0.0);
    Float tempFloat=new Float(0.0);
    short tempShort=0;
    long tempLong=0;
    String tempString="";
    long sTime=0;
    long sStartTime=0;
    int m_instanceNumber = 0;
    static String m_urlName = null;
    String threadNumber="";

    public invoice(int inst, String url)
    {
        m_instanceNumber = inst;
        m_urlName = url;
        threadNumber=Integer.toString(m_instanceNumber);
        //System.out.println("Starting Instance: " + m_instanceNumber);
    }

    //[TestThread.run implementation]
    public void run()
    {
        sTime = new Date().getTime();
        sStartTime=sTime;
        logError("elapsedStart", Long.toString(new Date().getTime()), "", "");
        //[com.testmybeans.vendor.Vendor create]
        int msValue = (-100000) * ((int) new Date().getTime()) % (Integer.MAX_VALUE/100000);
        int arg0 = m_instanceNumber + msValue;
        com.testmybeans.vendor.Vendor h = null;
        try
        {
            Context jndi = getInitialContext();
            com.testmybeans.vendor.VendorHome home =
                (com.testmybeans.vendor.VendorHome) jndi.lookup("OEVendor");
            h = home.create(arg0);
        }
        catch (Exception e)
        {
            System.out.println("com.testmybeans.vendor.VendorHome" + e.toString());
        }
        try
        {
            // com.testmybeans.vendor.Vendor(setCost)
            try
            { sTime = new Date().getTime(); h.setCost(568); logError("SetTime", "setCost", "568", ""); }
            catch (Exception e)
            { logError("except", "setCost", e.toString(), ""); }
            // com.testmybeans.vendor.Vendor(setName)
            try
            { sTime = new Date().getTime(); h.setName("qkgthfpw"); logError("setTime", "setName", "qkgthfpw", ""); }
            catch (Exception e)
            { logError("except", "setName", e.toString(), ""); }
            // com.testmybeans.vendor.Vendor(setProduct)
            try
            { sTime = new Date().getTime(); h.setProduct("dpnoolww"); logError("setTime", "setProduct", "dpnoolww", ""); }
            catch (Exception e)
            { logError("except", "setProduct", e.toString(), ""); }
            // com.testmybeans.vendor.Vendor(getCost)
            try
            {
                sTime = new Date().getTime();
                if((tempInt = h.getCost()) != 555) logError("getFailed", "getCost", "555", Integer.toString(tempInt));
                else logError("getPassed", "getCost", "555", Integer.toString(tempInt));
            }
            catch (Exception e)
            { logError("except", "getCost", e.toString(), ""); }
            // com.testmybeans.vendor.Vendor(getName)
            try
            {
                sTime = new Date().getTime();
                if(!(tempString = h.getName()).equals("icgnhoje")) logError("getFailed", "getName", "icgnhoje", tempString);
                else logError("getPassed", "getName", "icgnhoje", tempString);
            }
            catch (Exception e)
            { logError("except", "getName", e.toString(), ""); }
            // com.testmybeans.vendor.Vendor(getProduct)
            try
            {
                sTime = new Date().getTime();
                if(!(tempString = h.getProduct()).equals("zwlhjoxk")) logError("getFailed", "getProduct", "zwlhjoxk", tempString);
                else logError("getPassed", "getProduct", "zwlhjoxk", tempString);
            }
            catch (Exception e)
            { logError("except", "getProduct", e.toString(), ""); }
        //[Bean end create]
        }
        catch (Exception e)
        { logError("except", "beanMethods", e.toString(), ""); }
        finally
        {
            try
            { closeLog(); h.remove(); h = null; }
            catch (Exception e)
            { logError("except", "closeHome", e.toString(), ""); }
        }
    }
    //[WebLogic getInitialContext]
    public static Context getInitialContext() throws javax.naming.NamingException
    {
        try
        {
            Properties p = new Properties();
            p.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.T3InitialContextFactory");
            p.put(Context.PROVIDER_URL, m_urlName);
            return new javax.naming.InitialContext(p);
        }
        catch (Exception e)
        {
            System.out.println("getInitialContext Exception: "+e);
            System.exit(1);
        }
        return null;
    }

    //[Close log]
    public void closeLog()
    {
        sTime=sStartTime;
        logError("elapsedEnd", Long.toString(new Date().getTime()), "", "");
    }

    //[Log to disk]
    public void logError(String key, String functionName, String expected, String actual)
    {
        try
        {
            String elapsedTime = Long.toString(new Date().getTime() - sTime);
            com.testmybeans.client.CThreadWriter.write(key + "~" + functionName + "~"
                + expected + "~" + actual + "~" + elapsedTime + "~" + threadNumber);
        }
        catch (Exception e)
        {
            System.out.println("IOException: " + e.getMessage());
            System.exit(1);
        }
    }
}
//Bean under test: com.testmybeans.vendor.Vendor
//Author: TestMyBeans Code Generator vθ.8
//Creation Date: Fri Dec 10 16:53:15 EST 1999
//Copyright (c) 1999 TestMyBeans, Inc. All rights reserved. //
//[Java EJB imports] package com.testmybeans. client; import javax.naming.InitialContext; import javax.naming. Context; import javax.naming.NamingException; import java.rmi.RemoteException; import java.util.*; import java.io.*;
//[TestThread header] public class invoice extends Thread
I int templnt=0;
Double tempDouble=new Double(O.O); Float tempFloat=new Float(O.O); short tempShort=0; long tempLong=0;
String tempString- "'; long sTime=0; long sStartTime=0; int m_instanceNumber = 0; static String m_urlName = null;
String threadNumber==""; public invoice(int inst, String url) { m_instanceNumber = inst; m_urlName = url; threadNumber=Integer.toString(m_instanceNumber);
//System.out.println("Starting Instance: " + m_instanceNumber); }
//[TestThread.run implementation] public void run()
{ sTime = new Date().getTime(); sStartTime=sTime; logErrorC'elapsedStart", Long.toStringriiew Date().getTime()), "", "");
//[com.testmybeans.vendor.Vendor create] int msValue = (- 100000) * ((int) new Date ()-getTime ()) %
(Integer.MAX_VALUE/l 00000); int argO = m_instanceNumber + msValue; com.testmybeans.vendor.Vendor h = null; try {
Context jndi = getInitialContext(); com.testmybeans.vendor. VendorHome home = (com.testmybeans. vendor. VendorHome) jndi.lookupC'OEVendor"); h = home.create(argθ);
} catch (Exception e)
{
System.out.println ("com.testmybeans. vendor. VendorHome" + e.toStringO);
} try {
// com.testmybeans. vendor. Vendor(setCost) — — try
{ sTime = new Date()-getTime(); h.setCost(568); logError("SetTime", "setCost", "568", "");
} catch (Exception e)
{ logErrorC'except", "setCost", e.toStringO, "");
} // com.testmybeans.vendor.Vendor(setName) try
{ sTime = new Date()-getTime(); h.setName("qkgthfpw"); logErrorC'setTime", "setName", "qkgthfpw", "");
} catch (Exception e) { logErrorC'except", "setName", e.toStringO, "");
} // com.testmybeans.vendor.Vendor(setProduct) try { sTime = new Date().getTime(); h.setProductC'dpnoolww"); logErrorC'setTime", "setProduct", "dpnoolww", "");
} catch (Exception e) { logErrorC'except", "setProduct", e.toStringO, "");
} // com.testmybeans.vendor.Vendor(getCos ) try { sTime = new Date().getTime(); if((templnt = h.getCost()) != 555) logErrorC'getFailed", "getCost", "555", Integer.toString(tempInt)); else logErrorC'getPassed", "getCost", "555", Integer.toString(tempInt));
} catch (Exception e)
{ logErrorC'except", "getCost", e.toStringO, ""); }
// com.testmybeans. vendor. Vendor(gefName) try
{ sTime = new Date().getTime(); if(! (tempString = h.getName()).equals("icgnhoje")) logErrorC'getFailed", "getName", "icgnhoje", tempString); else logErrorC'getPassed", "getName", "icgnhoje", tempString);
} catch (Exception e)
{ logErrorC'except", "getName", e.toStringO, "");
} // com.testmybeans. vendor.Vendor(getProduct) try
{ sTime = new Date().getTime(); if(! (tempString = h.getProduct())-equals("zwlhjoxk")) logErrorC'getFailed", "getProduct", "zwlhjoxk", tempString); else logErrorC'getPassed", "getProduct", "zwlhjoxk", tempString);
} catch (Exception e)
{ logErrorC'except", "getProduct", e.toStringO, ""); }
//[Bean end create]
} catch (Exception e)
{ logErrorC'except", "beanMethods", e.toStringO, "");
} finally
{ try { closeLog(); h.remove(); h = null;
} catch (Exception e)
{ logErrorC'except", "closeHome", e.toStringO, "");
} } }
//[WebLogic getInitialContext] public static Context getInitialContext() throws javax.naming.NamingException
{ try
{ Properties p = new Properties(); p.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.T3InitialContextFactory"); p.put(Context.PROVIDER_URL, m_urlName); return new javax.naming.InitialContext(p);
} catch (Exception e)
{ System.out.println("getInitialContext Exception: "+e); System.exit(1);
} return null;
} //[Close log] public void closeLog()
{ sTime=sStartTime; logError("elapsedEnd", Long.toString(new Date().getTime()), "", "");
}
//[Log to disk] public void logError(String key, String functionName, String expected, String actual)
{ try
{
String elapsedTime = Long.toString(new Date().getTime() - sTime); com.testmybeans.client.CThreadWriter.write(key + "~" + functionName + "~" + expected + "~" + actual + "~" + elapsedTime + "~" + threadNumber); } catch (Exception e)
{
System.out.println("IOException: " + e.getMessage());
System.exit(1); }
}
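The generated clients above report every result through com.testmybeans.client.CThreadWriter, whose source is not reproduced in the listing. For illustration only, a minimal sketch of such a writer is given below; it assumes the writer does nothing more than append each tilde-delimited record to one shared log file, and the open() and close() methods, the field names, and the file handling are assumptions rather than part of the original listing.

package com.testmybeans.client;

import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

public class CThreadWriter {
    private static PrintWriter out;

    // Open the shared log file once, before any test threads start (assumed method).
    public static synchronized void open(String fileName) throws IOException {
        out = new PrintWriter(new FileWriter(fileName, true), true); // append mode, auto-flush
    }

    // Append one tilde-delimited record; synchronized so records from concurrent test threads do not interleave.
    public static synchronized void write(String record) throws IOException {
        if (out == null) throw new IOException("log file not opened");
        out.println(record);
    }

    // Close the log when the run is finished (assumed method).
    public static synchronized void close() {
        if (out != null) out.close();
    }
}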

Claims

CLAIMS 1. A method of testing a computerized application under test that allows simultaneous users over a computer network, the method comprising the steps of: a) providing test code that exercises a component of the application under test; b) synchronizing and executing a plurality of instances of the test code and recording performance data on the component of the application under test; c) repeating step b) multiple times, with a different number of instances of the test code; d) analyzing the recorded performance data to indicate a performance characteristic of the component of the application under test in response to load.
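Claim 1 calls for synchronizing and executing a plurality of instances of the test code and repeating the run with different numbers of instances. The fragment below is an illustrative sketch only, not the claimed test system: it assumes the generated thread class invoice from the listing above, a hypothetical driver class placed in the same package, an assumed server URL, and an arbitrary set of instance counts.

package com.testmybeans.client;

import java.util.ArrayList;
import java.util.List;

public class LoadStepDriver {
    public static void main(String[] args) throws InterruptedException {
        String url = "t3://localhost:7001";            // assumed WebLogic URL
        int[] instanceCounts = { 1, 5, 10, 25, 50 };   // step c): different numbers of instances per run

        for (int count : instanceCounts) {
            List<Thread> threads = new ArrayList<Thread>();
            for (int i = 0; i < count; i++) {
                threads.add(new invoice(i, url));      // one instance of the generated test code per thread
            }
            for (Thread t : threads) {
                t.start();                             // step b): start all instances of this load level together
            }
            for (Thread t : threads) {
                t.join();                              // wait for the load level to complete before stepping up
            }
        }
        // step d): the recorded timing data is analyzed afterwards, e.g. as in the sketch following claim 16.
    }
}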
2. The method of claim 1 wherein the step of providing test code includes generating test code automatically.
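Claim 2 covers generating the test code automatically. A minimal sketch of one way such generation might be approached is shown below, using Java reflection to emit a timed set-call per single-argument setter of a bean interface; the class name, method name, and emitted fragment are hypothetical and much simplified relative to the generated clients shown above.

import java.lang.reflect.Method;

public class TestStubGenerator {
    // Emit one timed, exception-logged set-call per single-argument setter on the bean's remote interface.
    public static String generate(Class<?> beanInterface) {
        StringBuilder out = new StringBuilder();
        for (Method m : beanInterface.getMethods()) {
            if (m.getName().startsWith("set") && m.getParameterTypes().length == 1) {
                out.append("// ").append(beanInterface.getName())
                   .append("(").append(m.getName()).append(")\n");
                out.append("try { sTime = new Date().getTime(); h.")
                   .append(m.getName()).append("(/* generated value */); ")
                   .append("logError(\"setTime\", \"").append(m.getName())
                   .append("\", \"...\", \"\"); } catch (Exception e) { ")
                   .append("logError(\"except\", \"").append(m.getName())
                   .append("\", e.toString(), \"\"); }\n");
            }
        }
        return out.toString();
    }
}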
3. The method of claim 1 wherein the application under test is written in an object oriented language and the step of providing test code comprises providing test code to exercise one object in the application.
4. The method of claim 1 wherein the step of synchronizing comprises starting each instance of the test code at the same time.
5. The method of claim 1 wherein the step of synchronizing and executing comprises executing a portion of the plurality of instances of test code on a first computer and a portion of the plurality of instances of test code on a second computer connected to the network.
6. The method of claim 1 wherein the step of analyzing includes preparing a graphical display having as an independent variable the number of instances of the test code and as a dependent variable the performance data.
7. The method of claim 1 wherein the step of analyzing includes preparing a graphical display having as an independent variable the number of instances of the test code and as a dependent variable a quantity derived from the performance data.
8. The method of claim 1 wherein the application under test is resident on a first server within the network and the application has a remote interface and the test code is resident on at least a second computer within the network and exercises the application under test using the remote interface of the application under test.
9. The method of claim 1 wherein the step of analyzing includes displaying the analyzed data to a human user using a graphical user interface.
10. A method of testing a computerized application under test that allows simultaneous users over a computer network, the method comprising the steps of: a) specifying test conditions through a user interface to a test system; b) initiating through a user interface to the test system the gathering of test data on the performance of at least one component of the application under test at a plurality of load conditions; c) specifying through a user interface to the test system the output format of the test data; and d) displaying in the specified format the response of at least one component of the application under test to load.
11. The method of claim 10 wherein the specified format is a graphical format indicating response time as a function of load conditions.
12. The method of claim 11 wherein the specified graphical format is a Hi-Lo plot.
13. The method of claim 11 wherein the step of gathering data under a plurality of load conditions comprises initiating the execution of a plurality of copies of a test program, wherein the number of copies executing simultaneously relates to the load condition.
14. The method of claim 13 wherein the step of specifying an output format includes specifying a method by which response is measured.
15. The method of claim 13 wherein the step of gathering test data includes recording the execution time between selected points in the test program for each simultaneously executing copy of the test program and analyzing the recorded execution times for all copies of the test program.
16. The method of claim 15 wherein the step of analyzing comprises determining the average and maximum execution times for each of the load conditions.
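Claim 16 calls for determining the average and maximum execution times for each load condition. The sketch below is illustrative only; it assumes one log file per load condition and the tilde-delimited record format written by logError() in the listing above (key~function~expected~actual~elapsedMs~thread), and the class and field names are hypothetical.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ElapsedTimeSummary {
    public static void main(String[] args) throws IOException {
        long total = 0, max = 0, count = 0;
        BufferedReader in = new BufferedReader(new FileReader(args[0])); // one log file per load condition (assumed)
        String line;
        while ((line = in.readLine()) != null) {
            String[] f = line.split("~");
            if (f.length < 6) continue;
            String key = f[0];
            if (key.equals("except") || key.startsWith("elapsed")) continue; // keep only the timed set/get records
            long elapsed = Long.parseLong(f[4]);                            // fifth field is the elapsed time in ms
            total += elapsed;
            if (elapsed > max) max = elapsed;
            count++;
        }
        in.close();
        if (count > 0) {
            System.out.println("records: " + count + "  average ms: " + (total / count) + "  maximum ms: " + max);
        }
    }
}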
17. The method of claim 10 wherein: a) the computerized application under test comprises software resident on a server controlling access to a computerized database; b) the server is connected to a network and the application under test is simultaneously accessed by a plurality of clients over the network; and c) the test system is resident on at least a second server connected to the network.
18. A method of testing a computerized application under test that allows simultaneous users over a computer network, the application under test having a plurality of software components, the method comprising the steps of: a) providing test code to exercise a component; b) creating a first plurality of copies of the test code; c) simultaneously executing the first plurality of copies of test code while recording times between events in each of the first plurality of copies of test code; d) creating a second plurality of copies of test code; e) simultaneously executing the second plurality of copies of test code while recording times between events in each of the second plurality of copies of test code; f) repeating a predetermined number of times the steps of creating plural copies of the test code and simultaneously executing the plural copies while recording event times; and g) analyzing the recorded times to present information on the performance of the component of the application under test as a function of load.
19. The method of claim 18 wherein the components comprise enterprise Java beans.
20. The method of claim 19 wherein each component has a plurality of functions therein and the test code exercises functions of the components.
21. The method of claim 20 wherein the events at which times are recorded include times at which commands are issued to access functions of the components and times at which execution of the commands is completed.
22. A system for determining performance of an application under test in response to load, the system comprising: a) coordination software; b) at least one code generator, receiving as an input commands from the coordination software and having as an output client test code; c) at least one test engine, receiving as an input commands from the coordination software, the test engine comprising a computer server having a plurality of threads thereon, each thread executing an instance of the client test code; d) at least one data log having computerized memory, the memory holding timing data created by the instances of the client test code in the plurality of threads; and e) at least one data analyzer software, operatively connected to the data log, having an output that represents performance of the application under test in response to load.
PCT/US2000/023303 1999-08-30 2000-08-24 Method and system for software object testing WO2001016752A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU69348/00A AU6934800A (en) 1999-08-30 2000-08-24 Method and system for software object testing

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US15141899P 1999-08-30 1999-08-30
US60/151,418 1999-08-30
US09/482,178 2000-01-12
US09/482,178 US6934934B1 (en) 1999-08-30 2000-01-12 Method and system for software object testing

Publications (1)

Publication Number Publication Date
WO2001016752A1 true WO2001016752A1 (en) 2001-03-08

Family

ID=26848615

Family Applications (3)

Application Number Title Priority Date Filing Date
PCT/US2000/023239 WO2001016751A1 (en) 1999-08-30 2000-08-24 Method and system for web based software object testing
PCT/US2000/023303 WO2001016752A1 (en) 1999-08-30 2000-08-24 Method and system for software object testing
PCT/US2000/023237 WO2001016755A1 (en) 1999-08-30 2000-08-24 Method of providing software testing services

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/US2000/023239 WO2001016751A1 (en) 1999-08-30 2000-08-24 Method and system for web based software object testing

Family Applications After (1)

Application Number Title Priority Date Filing Date
PCT/US2000/023237 WO2001016755A1 (en) 1999-08-30 2000-08-24 Method of providing software testing services

Country Status (7)

Country Link
US (2) US6934934B1 (en)
EP (1) EP1214656B1 (en)
AT (1) ATE267419T1 (en)
AU (3) AU6934800A (en)
CA (1) CA2383919A1 (en)
DE (1) DE60010906T2 (en)
WO (3) WO2001016751A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7490319B2 (en) 2003-11-04 2009-02-10 Kimberly-Clark Worldwide, Inc. Testing tool comprising an automated multidimensional traceability matrix for implementing and validating complex software systems
WO2014048851A3 (en) * 2012-09-25 2014-06-19 Telefonica, S.A. A method and a system to surveil applications of online services
WO2014171979A1 (en) * 2013-04-20 2014-10-23 Concurix Corporation Marketplace for monitoring services

Families Citing this family (193)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7673323B1 (en) 1998-10-28 2010-03-02 Bea Systems, Inc. System and method for maintaining security in a distributed computer network
US6158010A (en) * 1998-10-28 2000-12-05 Crosslogix, Inc. System and method for maintaining security in a distributed computer network
US20090222508A1 (en) * 2000-03-30 2009-09-03 Hubbard Edward A Network Site Testing
US20020049864A1 (en) * 2000-03-31 2002-04-25 Jochen Kappel Corba jellybeans system and method
DE60137038D1 (en) * 2000-05-10 2009-01-29 Schlumberger Technology Corp APPLICATION SERVICE PROVIDER, METHOD AND DEVICE
US7171588B2 (en) * 2000-10-27 2007-01-30 Empirix, Inc. Enterprise test system having run time test object generation
US20040015861A1 (en) * 2001-04-18 2004-01-22 Nagalkar Dhananjay A. Managing content with multi-site and single point of control
GB2378530B (en) * 2001-05-15 2005-03-30 Accenture Properties Benchmark testing
US7392546B2 (en) * 2001-06-11 2008-06-24 Bea Systems, Inc. System and method for server security and entitlement processing
US8234156B2 (en) 2001-06-28 2012-07-31 Jpmorgan Chase Bank, N.A. System and method for characterizing and selecting technology transition options
IL159693A0 (en) * 2001-07-06 2004-06-20 Computer Ass Think Inc Method and system for providing a virtual user interface
JP2003050641A (en) * 2001-08-07 2003-02-21 Nec Corp Program management system, its program management method, and information management program
WO2003036609A1 (en) * 2001-10-24 2003-05-01 Bea Systems, Inc. Portal administration tool
US7828551B2 (en) 2001-11-13 2010-11-09 Prometric, Inc. Method and system for computer based testing using customizable templates
US7350226B2 (en) * 2001-12-13 2008-03-25 Bea Systems, Inc. System and method for analyzing security policies in a distributed computer network
KR100404908B1 (en) * 2001-12-27 2003-11-07 한국전자통신연구원 Apparatus and method for testing interfaces of enterprise javabeans components
US7299451B2 (en) * 2002-01-24 2007-11-20 International Business Machines Corporation Remotely driven system for multi-product and multi-platform testing
US7200738B2 (en) 2002-04-18 2007-04-03 Micron Technology, Inc. Reducing data hazards in pipelined processors to provide high processor utilization
US7100158B2 (en) * 2002-04-30 2006-08-29 Toshiba Tec Kabushiki Kaisha Program management apparatus, program management system, and program management method
US7725560B2 (en) 2002-05-01 2010-05-25 Bea Systems Inc. Web service-enabled portlet wizard
US7987246B2 (en) 2002-05-23 2011-07-26 Jpmorgan Chase Bank Method and system for client browser update
US8020114B2 (en) * 2002-06-07 2011-09-13 Sierra Wireless, Inc. Enter-then-act input handling
US7174541B2 (en) * 2002-06-28 2007-02-06 Sap Aktiengesellschaft Testing of applications
US8020148B2 (en) * 2002-09-23 2011-09-13 Telefonaktiebolaget L M Ericsson (Publ) Bi-directional probing and testing of software
US20040083158A1 (en) * 2002-10-09 2004-04-29 Mark Addison Systems and methods for distributing pricing data for complex derivative securities
US20040073890A1 (en) * 2002-10-09 2004-04-15 Raul Johnson Method and system for test management
US7340650B2 (en) * 2002-10-30 2008-03-04 Jp Morgan Chase & Co. Method to measure stored procedure execution statistics
US7203720B2 (en) * 2002-11-27 2007-04-10 Bea Systems, Inc. Web server hit multiplier and redirector
US7152123B2 (en) * 2002-12-23 2006-12-19 Micron Technology, Inc. Distributed configuration storage
US7210066B2 (en) * 2002-12-31 2007-04-24 Sun Microsystems, Inc. Method and system for determining computer software test coverage
US7653930B2 (en) 2003-02-14 2010-01-26 Bea Systems, Inc. Method for role and resource policy management optimization
US7591000B2 (en) 2003-02-14 2009-09-15 Oracle International Corporation System and method for hierarchical role-based entitlements
US8831966B2 (en) * 2003-02-14 2014-09-09 Oracle International Corporation Method for delegated administration
US20040167880A1 (en) * 2003-02-20 2004-08-26 Bea Systems, Inc. System and method for searching a virtual repository content
US7293286B2 (en) 2003-02-20 2007-11-06 Bea Systems, Inc. Federated management of content repositories
US7840614B2 (en) * 2003-02-20 2010-11-23 Bea Systems, Inc. Virtual content repository application program interface
US20040230917A1 (en) * 2003-02-28 2004-11-18 Bales Christopher E. Systems and methods for navigating a graphical hierarchy
US20040230557A1 (en) * 2003-02-28 2004-11-18 Bales Christopher E. Systems and methods for context-sensitive editing
US7810036B2 (en) 2003-02-28 2010-10-05 Bea Systems, Inc. Systems and methods for personalizing a portal
US20050066248A1 (en) * 2003-09-18 2005-03-24 Reid Hayhow Methods and systems for determining memory requirements for device testing
US7305654B2 (en) * 2003-09-19 2007-12-04 Lsi Corporation Test schedule estimator for legacy builds
US7523447B1 (en) * 2003-09-24 2009-04-21 Avaya Inc. Configurator using markup language
US20050071447A1 (en) * 2003-09-25 2005-03-31 International Business Machines Corporation Method, system and program product for testing a server application using a reentrant test application
US7594224B2 (en) * 2003-10-10 2009-09-22 Bea Systems, Inc. Distributed enterprise security system
US20050251852A1 (en) * 2003-10-10 2005-11-10 Bea Systems, Inc. Distributed enterprise security system
US7644432B2 (en) * 2003-10-10 2010-01-05 Bea Systems, Inc. Policy inheritance through nested groups
US20050257245A1 (en) * 2003-10-10 2005-11-17 Bea Systems, Inc. Distributed security system with dynamic roles
US7096012B2 (en) * 2003-10-23 2006-08-22 Microsoft Corporation System and method for emulating a telephony driver
US7337432B2 (en) * 2004-02-03 2008-02-26 Sharp Laboratories Of America, Inc. System and method for generating automatic test plans for graphical user interface applications
US20050188295A1 (en) * 2004-02-25 2005-08-25 Loren Konkus Systems and methods for an extensible administration tool
US7702767B2 (en) * 2004-03-09 2010-04-20 Jp Morgan Chase Bank User connectivity process management system
US7340725B1 (en) * 2004-03-31 2008-03-04 Microsoft Corporation Smart test attributes and test case scenario in object oriented programming environment
US7774601B2 (en) 2004-04-06 2010-08-10 Bea Systems, Inc. Method for delegated administration
US7236975B2 (en) * 2004-04-13 2007-06-26 Bea Systems, Inc. System and method for controlling access to anode in a virtual content repository that integrates a plurality of content repositories
US7162504B2 (en) * 2004-04-13 2007-01-09 Bea Systems, Inc. System and method for providing content services to a repository
US7236989B2 (en) 2004-04-13 2007-06-26 Bea Systems, Inc. System and method for providing lifecycles for custom content in a virtual content repository
US7236990B2 (en) 2004-04-13 2007-06-26 Bea Systems, Inc. System and method for information lifecycle workflow integration
US7240076B2 (en) 2004-04-13 2007-07-03 Bea Systems, Inc. System and method for providing a lifecycle for information in a virtual content repository
US20050256906A1 (en) * 2004-05-14 2005-11-17 Bea Systems, Inc. Interface for portal and webserver administration-efficient updates
US20050256899A1 (en) * 2004-05-14 2005-11-17 Bea Systems, Inc. System and method for representing hierarchical data structures
US7660888B2 (en) * 2004-05-21 2010-02-09 International Business Machines Corporation Indicating network resource availability methods, system and program product
US7665127B1 (en) 2004-06-30 2010-02-16 Jp Morgan Chase Bank System and method for providing access to protected services
JP4388427B2 (en) * 2004-07-02 2009-12-24 オークマ株式会社 Numerical control device that can call programs written in script language
US7346807B1 (en) * 2004-07-07 2008-03-18 Sprint Communications Company L.P. Optimizing transaction data for load testing
US7366974B2 (en) * 2004-09-03 2008-04-29 Jp Morgan Chase Bank System and method for managing template attributes
US7688723B1 (en) 2004-09-16 2010-03-30 Avaya Inc. Procedural XML-based telephony traffic flow analysis and configuration tool
US7830813B1 (en) 2004-09-30 2010-11-09 Avaya Inc. Traffic based availability analysis
US20090132466A1 (en) * 2004-10-13 2009-05-21 Jp Morgan Chase Bank System and method for archiving data
US20060085492A1 (en) * 2004-10-14 2006-04-20 Singh Arun K System and method for modifying process navigation
US7979849B2 (en) * 2004-10-15 2011-07-12 Cisco Technology, Inc. Automatic model-based testing
US7783670B2 (en) * 2004-11-18 2010-08-24 Bea Systems, Inc. Client server conversion for representing hierarchical data structures
US7444397B2 (en) * 2004-12-21 2008-10-28 International Business Machines Corporation Method of executing test scripts against multiple systems
US7296197B2 (en) * 2005-02-04 2007-11-13 Microsoft Corporation Metadata-facilitated software testing
CN100349133C (en) * 2005-03-18 2007-11-14 中国工商银行股份有限公司 Bank host operation pressure test system
US7703074B2 (en) * 2005-05-20 2010-04-20 Oracle America, Inc. Method and apparatus for tracking changes in a system
US7493520B2 (en) * 2005-06-07 2009-02-17 Microsoft Corporation System and method for validating the graphical output of an updated software module
US8065606B1 (en) 2005-09-16 2011-11-22 Jpmorgan Chase Bank, N.A. System and method for automating document generation
US7818344B2 (en) 2005-09-26 2010-10-19 Bea Systems, Inc. System and method for providing nested types for content management
US7953734B2 (en) 2005-09-26 2011-05-31 Oracle International Corporation System and method for providing SPI extensions for content management system
US7917537B2 (en) 2005-09-26 2011-03-29 Oracle International Corporation System and method for providing link property types for content management
US7752205B2 (en) 2005-09-26 2010-07-06 Bea Systems, Inc. Method and system for interacting with a virtual content repository
US7603669B2 (en) * 2005-09-27 2009-10-13 Microsoft Corporation Upgrade and downgrade of data resource components
US7596720B2 (en) * 2005-09-27 2009-09-29 Microsoft Corporation Application health checks
US7676806B2 (en) * 2005-09-27 2010-03-09 Microsoft Corporation Deployment, maintenance and configuration of complex hardware and software systems
US7730452B1 (en) * 2005-11-01 2010-06-01 Hewlett-Packard Development Company, L.P. Testing a component of a distributed system
US7721259B2 (en) * 2005-11-15 2010-05-18 Oracle International Corporation Configurable and customizable software application system and metadata
US20070168973A1 (en) * 2005-12-02 2007-07-19 Sun Microsystems, Inc. Method and apparatus for API testing
US7412349B2 (en) * 2005-12-09 2008-08-12 Sap Ag Interface for series of tests
US20070168981A1 (en) * 2006-01-06 2007-07-19 Microsoft Corporation Online creation of object states for testing
US7970691B1 (en) * 2006-02-13 2011-06-28 Magma Management, Inc. Method for securing licensing agreements on new products
US8914679B2 (en) * 2006-02-28 2014-12-16 International Business Machines Corporation Software testing automation framework
US7908590B1 (en) * 2006-03-02 2011-03-15 Parasoft Corporation System and method for automatically creating test cases through a remote client
US7913249B1 (en) 2006-03-07 2011-03-22 Jpmorgan Chase Bank, N.A. Software installation checker
CN101038566A (en) * 2006-03-17 2007-09-19 鸿富锦精密工业(深圳)有限公司 Computer diagnose testing system
US7856619B2 (en) * 2006-03-31 2010-12-21 Sap Ag Method and system for automated testing of a graphic-based programming tool
US20070256067A1 (en) * 2006-04-26 2007-11-01 Cisco Technology, Inc. (A California Corporation) Method and system for upgrading a software image
US8375013B2 (en) * 2006-05-17 2013-02-12 Oracle International Corporation Server-controlled testing of handheld devices
US7860918B1 (en) * 2006-06-01 2010-12-28 Avaya Inc. Hierarchical fair scheduling algorithm in a distributed measurement system
US7813911B2 (en) * 2006-07-29 2010-10-12 Microsoft Corporation Model based testing language and framework
US7793264B2 (en) * 2006-08-25 2010-09-07 International Business Machines Corporation Command-line warnings
US20080077982A1 (en) * 2006-09-22 2008-03-27 Bea Systems, Inc. Credential vault encryption
US8463852B2 (en) 2006-10-06 2013-06-11 Oracle International Corporation Groupware portlets for integrating a portal with groupware systems
US20080115114A1 (en) * 2006-11-10 2008-05-15 Sashank Palaparthi Automated software unit testing
US7877732B2 (en) * 2006-11-29 2011-01-25 International Business Machines Corporation Efficient stress testing of a service oriented architecture based application
US8898636B1 (en) * 2007-02-14 2014-11-25 Oracle America, Inc. Method and apparatus for testing an application running in a virtual machine
US8997048B1 (en) * 2007-02-14 2015-03-31 Oracle America, Inc. Method and apparatus for profiling a virtual machine
US8225287B2 (en) * 2007-03-13 2012-07-17 Microsoft Corporation Method for testing a system
US8296732B2 (en) * 2007-03-23 2012-10-23 Sas Institute Inc. Computer-implemented systems and methods for analyzing product configuration and data
US8935669B2 (en) * 2007-04-11 2015-01-13 Microsoft Corporation Strategies for performing testing in a multi-user environment
US7953674B2 (en) * 2007-05-17 2011-05-31 Microsoft Corporation Fuzzing system and method for exhaustive security fuzzing within an SQL server
US8087001B2 (en) * 2007-06-29 2011-12-27 Sas Institute Inc. Computer-implemented systems and methods for software application testing
US8020151B2 (en) * 2007-07-31 2011-09-13 International Business Machines Corporation Techniques for determining a web browser state during web page testing
US20090037883A1 (en) * 2007-08-01 2009-02-05 International Business Machines Corporation Testing framework to highlight functionality component changes
US7831865B1 (en) * 2007-09-26 2010-11-09 Sprint Communications Company L.P. Resource allocation for executing automation scripts
US7876690B1 (en) 2007-09-26 2011-01-25 Avaya Inc. Distributed measurement system configurator tool
US20090144699A1 (en) * 2007-11-30 2009-06-04 Anton Fendt Log file analysis and evaluation tool
US8037163B1 (en) 2008-01-08 2011-10-11 Avaya Inc. Alternative methodology in assessing network availability
FR2928669B1 (en) * 2008-03-12 2012-01-13 Cie Du Sol CURING MACHINE
US20090271214A1 (en) * 2008-04-29 2009-10-29 Affiliated Computer Services, Inc. Rules engine framework
US20100005341A1 (en) * 2008-07-02 2010-01-07 International Business Machines Corporation Automatic detection and notification of test regression with automatic on-demand capture of profiles for regression analysis
US8549475B1 (en) * 2008-07-08 2013-10-01 Adobe Systems Incorporated System and method for simplifying object-oriented programming
US8191048B2 (en) * 2008-11-21 2012-05-29 Oracle America, Inc. Automated testing and qualification of software-based, network service products
US9081881B2 (en) 2008-12-18 2015-07-14 Hartford Fire Insurance Company Computer system and computer-implemented method for use in load testing of software applications
US9575878B2 (en) * 2009-03-16 2017-02-21 International Business Machines Corporation Data-driven testing without data configuration
US8943423B2 (en) * 2009-07-07 2015-01-27 International Business Machines Corporation User interface indicators for changed user interface elements
US8386207B2 (en) * 2009-11-30 2013-02-26 International Business Machines Corporation Open-service based test execution frameworks
US8543932B2 (en) 2010-04-23 2013-09-24 Datacert, Inc. Generation and testing of graphical user interface for matter management workflow with collaboration
US8549479B2 (en) * 2010-11-09 2013-10-01 Verisign, Inc. Test automation tool for domain registration systems
US9038177B1 (en) 2010-11-30 2015-05-19 Jpmorgan Chase Bank, N.A. Method and system for implementing multi-level data fusion
ES2707230T3 (en) * 2011-01-31 2019-04-03 Tata Consultancy Services Ltd Life cycle test
US9292588B1 (en) 2011-07-20 2016-03-22 Jpmorgan Chase Bank, N.A. Safe storing data for disaster recovery
US9075914B2 (en) 2011-09-29 2015-07-07 Sauce Labs, Inc. Analytics driven development
JPWO2013145628A1 (en) * 2012-03-30 2015-12-10 日本電気株式会社 Information processing apparatus and load test execution method
EP2831741A4 (en) * 2012-07-31 2015-12-30 Hewlett Packard Development Co Constructing test-centric model of application
US9971676B2 (en) * 2012-08-30 2018-05-15 Toyota Motor Engineering & Manufacturing North America, Inc. Systems and methods for state based test case generation for software validation
US8789030B2 (en) 2012-09-18 2014-07-22 Concurix Corporation Memoization from offline analysis
US9489277B2 (en) * 2012-11-07 2016-11-08 Software Development Technologies Application component testing
US8826254B2 (en) * 2012-11-08 2014-09-02 Concurix Corporation Memoizing with read only side effects
US9262416B2 (en) 2012-11-08 2016-02-16 Microsoft Technology Licensing, Llc Purity analysis using white list/black list analysis
US8752034B2 (en) 2012-11-08 2014-06-10 Concurix Corporation Memoization configuration file consumed at runtime
US8839204B2 (en) 2012-11-08 2014-09-16 Concurix Corporation Determination of function purity for memoization
US8752021B2 (en) 2012-11-08 2014-06-10 Concurix Corporation Input vector analysis for memoization estimation
US9213613B2 (en) * 2012-11-16 2015-12-15 Nvidia Corporation Test program generator using key enumeration and string replacement
US9053070B1 (en) * 2012-12-10 2015-06-09 Amazon Technologies, Inc. Automated tuning of a service configuration using load tests on hosts
US9720655B1 (en) 2013-02-01 2017-08-01 Jpmorgan Chase Bank, N.A. User interface event orchestration
US10002041B1 (en) 2013-02-01 2018-06-19 Jpmorgan Chase Bank, N.A. System and method for maintaining the health of a machine
US9158848B2 (en) * 2013-02-11 2015-10-13 International Business Machines Corporation Web testing tools system and method
US9088459B1 (en) 2013-02-22 2015-07-21 Jpmorgan Chase Bank, N.A. Breadth-first resource allocation system and methods
US10540373B1 (en) 2013-03-04 2020-01-21 Jpmorgan Chase Bank, N.A. Clause library manager
US9342333B2 (en) * 2013-03-14 2016-05-17 Microsoft Technology Licensing, Llc Backend custom code extensibility
US10255063B2 (en) 2013-03-15 2019-04-09 Microsoft Technology Licensing, Llc Providing source control of custom code for a user without providing source control of host code for the user
KR101534153B1 (en) * 2013-08-23 2015-07-06 주식회사 엘지씨엔에스 Method of designing business logic, server performing the same and storage media storing the same
US9619410B1 (en) 2013-10-03 2017-04-11 Jpmorgan Chase Bank, N.A. Systems and methods for packet switching
US20150128103A1 (en) * 2013-11-07 2015-05-07 Runscope, Inc. System and method for automating application programming interface integration
US11310165B1 (en) * 2013-11-11 2022-04-19 Amazon Technologies, Inc. Scalable production test service
US9542259B1 (en) 2013-12-23 2017-01-10 Jpmorgan Chase Bank, N.A. Automated incident resolution system and method
US9868054B1 (en) 2014-02-10 2018-01-16 Jpmorgan Chase Bank, N.A. Dynamic game deployment
US10133996B2 (en) * 2014-04-22 2018-11-20 International Business Machines Corporation Object lifecycle analysis tool
US9442832B2 (en) * 2014-07-07 2016-09-13 Sap Se User workflow replication for execution error analysis
US9141515B1 (en) * 2014-07-15 2015-09-22 Sap Se Limiting display content in editor for large data volumes
US10200866B1 (en) 2014-12-12 2019-02-05 Aeris Communications, Inc. Method and system for detecting and minimizing harmful network device and application behavior on cellular networks
US10353809B2 (en) * 2015-12-01 2019-07-16 Tata Consultancy Services Limited System and method for executing integration tests in multiuser environment
US10440153B1 (en) 2016-02-08 2019-10-08 Microstrategy Incorporated Enterprise health score and data migration
US11283900B2 (en) * 2016-02-08 2022-03-22 Microstrategy Incorporated Enterprise performance and capacity testing
US9965377B1 (en) * 2016-03-29 2018-05-08 EMC IP Holding Company LLC Deploy pipeline for development packages
US10055200B1 (en) 2016-03-29 2018-08-21 EMC IP Holding Company LLC Creation and use of development packages
US10452774B2 (en) 2016-12-13 2019-10-22 Bank Of America Corporation System architecture framework
US10496739B2 (en) 2017-01-18 2019-12-03 Bank Of America Corporation Test case consolidator
US10229750B2 (en) 2017-01-18 2019-03-12 Bank Of America Corporation Memory management architecture for use with a diagnostic tool
US10606739B2 (en) * 2017-03-21 2020-03-31 Accenture Global Solutions Limited Automated program code analysis and reporting
US10692031B2 (en) 2017-11-02 2020-06-23 International Business Machines Corporation Estimating software as a service cloud computing resource capacity requirements for a customer based on customer workflows and workloads
US11765249B2 (en) * 2017-11-27 2023-09-19 Lacework, Inc. Facilitating developer efficiency and application quality
CN108920352B (en) * 2018-07-27 2022-06-28 百度在线网络技术(北京)有限公司 Method and device for acquiring information
US11468134B2 (en) 2018-09-26 2022-10-11 International Business Machines Corporation Provisioning a customized software stack for network-based question and answer services
US11010285B2 (en) 2019-01-24 2021-05-18 International Business Machines Corporation Fault detection and localization to generate failing test cases using combinatorial test design techniques
US11263116B2 (en) 2019-01-24 2022-03-01 International Business Machines Corporation Champion test case generation
US11106567B2 (en) 2019-01-24 2021-08-31 International Business Machines Corporation Combinatoric set completion through unique test case generation
US11010282B2 (en) 2019-01-24 2021-05-18 International Business Machines Corporation Fault detection and localization using combinatorial test design techniques while adhering to architectural restrictions
US11099975B2 (en) 2019-01-24 2021-08-24 International Business Machines Corporation Test space analysis across multiple combinatoric models
US11263111B2 (en) 2019-02-11 2022-03-01 Microstrategy Incorporated Validating software functionality
US10970197B2 (en) 2019-06-13 2021-04-06 International Business Machines Corporation Breakpoint value-based version control
US11036624B2 (en) * 2019-06-13 2021-06-15 International Business Machines Corporation Self healing software utilizing regression test fingerprints
US11422924B2 (en) 2019-06-13 2022-08-23 International Business Machines Corporation Customizable test set selection using code flow trees
US10990510B2 (en) 2019-06-13 2021-04-27 International Business Machines Corporation Associating attribute seeds of regression test cases with breakpoint value-based fingerprints
US11232020B2 (en) 2019-06-13 2022-01-25 International Business Machines Corporation Fault detection using breakpoint value-based fingerprints of failing regression test cases
US11205041B2 (en) 2019-08-15 2021-12-21 Anil Kumar Web element rediscovery system and method
US11637748B2 (en) 2019-08-28 2023-04-25 Microstrategy Incorporated Self-optimization of computing environments
US11210189B2 (en) 2019-08-30 2021-12-28 Microstrategy Incorporated Monitoring performance of computing systems
US11354216B2 (en) 2019-09-18 2022-06-07 Microstrategy Incorporated Monitoring performance deviations
US11360881B2 (en) 2019-09-23 2022-06-14 Microstrategy Incorporated Customizing computer performance tests
US11438231B2 (en) 2019-09-25 2022-09-06 Microstrategy Incorporated Centralized platform management for computing environments
US11169907B2 (en) 2020-01-15 2021-11-09 Salesforce.Com, Inc. Web service test and analysis platform
CN111309609B (en) * 2020-02-13 2023-10-03 抖音视界有限公司 software processing system

Family Cites Families (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4617663A (en) * 1983-04-13 1986-10-14 At&T Information Systems Inc. Interface testing of software systems
US5359546A (en) 1992-06-30 1994-10-25 Sun Microsystems, Inc. Automatic generation of test drivers
US5371883A (en) 1993-03-26 1994-12-06 International Business Machines Corporation Method of testing programs in a distributed environment
US5537560A (en) 1994-03-01 1996-07-16 Intel Corporation Method and apparatus for conditionally generating a microinstruction that selects one of two values based upon control states of a microprocessor
US5841670A (en) 1994-03-09 1998-11-24 Texas Instruments Incorporated Emulation devices, systems and methods with distributed control of clock domains
US5671351A (en) * 1995-04-13 1997-09-23 Texas Instruments Incorporated System and method for automated testing and monitoring of software applications
US5742754A (en) * 1996-03-05 1998-04-21 Sun Microsystems, Inc. Software testing apparatus and method
US5881269A (en) * 1996-09-30 1999-03-09 International Business Machines Corporation Simulation of multiple local area network clients on a single workstation
US5974572A (en) * 1996-10-15 1999-10-26 Mercury Interactive Corporation Software system and methods for generating a load test using a server access log
US6618854B1 (en) * 1997-02-18 2003-09-09 Advanced Micro Devices, Inc. Remotely accessible integrated debug environment
US6002869A (en) 1997-02-26 1999-12-14 Novell, Inc. System and method for automatically testing software programs
US6209125B1 (en) * 1997-06-03 2001-03-27 Sun Microsystems, Inc. Method and apparatus for software component analysis
US6002871A (en) * 1997-10-27 1999-12-14 Unisys Corporation Multi-user application program testing tool
US6446120B1 (en) * 1997-11-26 2002-09-03 International Business Machines Corporation Configurable stresser for a web server
US6286046B1 (en) * 1997-12-22 2001-09-04 International Business Machines Corporation Method of recording and measuring e-business sessions on the world wide web
US6237135B1 (en) 1998-06-18 2001-05-22 Borland Software Corporation Development system with visual design tools for creating and maintaining Java Beans components
US6226788B1 (en) 1998-07-22 2001-05-01 Cisco Technology, Inc. Extensible network management system
US6401220B1 (en) * 1998-08-21 2002-06-04 National Instruments Corporation Test executive system and method including step types for improved configurability
US6397378B1 (en) 1998-08-21 2002-05-28 National Instruments Corporation Test executive system and method including distributed type storage and conflict resolution
US6182245B1 (en) 1998-08-31 2001-01-30 Lsi Logic Corporation Software test case client/server system and method
WO2000019664A2 (en) * 1998-09-30 2000-04-06 Netscout Service Level Corporation Managing computer resources
US6298478B1 (en) * 1998-12-31 2001-10-02 International Business Machines Corporation Technique for managing enterprise JavaBeans (™) which are the target of multiple concurrent and/or nested transactions
US6510402B1 (en) * 1999-02-04 2003-01-21 International Business Machines Corporation Component testing with a client system in an integrated test environment network
US6574578B1 (en) * 1999-02-04 2003-06-03 International Business Machines Corporation Server system for coordinating utilization of an integrated test environment for component testing
US6473794B1 (en) 1999-05-27 2002-10-29 Accenture Llp System for establishing plan to test components of web based framework by displaying pictorial representation and conveying indicia coded components of existing network framework
US6523027B1 (en) * 1999-07-30 2003-02-18 Accenture Llp Interfacing servers in a Java based e-commerce architecture
US6256773B1 (en) 1999-08-31 2001-07-03 Accenture Llp System, method and article of manufacture for configuration management in a development architecture framework
US6708327B1 (en) * 1999-10-14 2004-03-16 Techonline, Inc. System for accessing and testing evaluation modules via a global computer network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19511252C1 (en) * 1995-03-27 1996-04-18 Siemens Nixdorf Inf Syst Processing load measuring system for computer network design
GB2307318A (en) * 1995-11-14 1997-05-21 Mitsubishi Electric Corp Testing a networked system of servers
US5784553A (en) * 1996-01-16 1998-07-21 Parasoft Corporation Method and system for generating a computer program test suite using dynamic symbolic execution of JAVA programs
US5812780A (en) * 1996-05-24 1998-09-22 Microsoft Corporation Method, system, and product for assessing a server application performance

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"IMPROVED PROCESS FOR VISUAL DEVELOPMENT OF CLIENT/SERVER PROGRAMS", IBM TECHNICAL DISCLOSURE BULLETIN,US,IBM CORP. NEW YORK, vol. 41, no. 1, 1998, pages 281 - 283, XP000772108, ISSN: 0018-8689 *
C. NOLAN: "A Look at e-Test Suite 3.1 by RSW", SOFTWARE TESTING & QUALITY ENGINEERING, July 1999 (1999-07-01), pages 60 - 61, XP002155308, Retrieved from the Internet <URL:http://www.rswsoftware.com/news/articles/ja99.pdf> [retrieved on 20001128] *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7490319B2 (en) 2003-11-04 2009-02-10 Kimberly-Clark Worldwide, Inc. Testing tool comprising an automated multidimensional traceability matrix for implementing and validating complex software systems
WO2014048851A3 (en) * 2012-09-25 2014-06-19 Telefonica, S.A. A method and a system to surveil applications of online services
WO2014171979A1 (en) * 2013-04-20 2014-10-23 Concurix Corporation Marketplace for monitoring services
CN105247557A (en) * 2013-04-20 2016-01-13 肯赛里克斯公司 Marketplace for monitoring services

Also Published As

Publication number Publication date
ATE267419T1 (en) 2004-06-15
US6859922B1 (en) 2005-02-22
DE60010906D1 (en) 2004-06-24
WO2001016751A1 (en) 2001-03-08
AU6933000A (en) 2001-03-26
US6934934B1 (en) 2005-08-23
CA2383919A1 (en) 2001-03-08
AU6934800A (en) 2001-03-26
EP1214656A1 (en) 2002-06-19
DE60010906T2 (en) 2005-06-09
WO2001016755A1 (en) 2001-03-08
EP1214656B1 (en) 2004-05-19
AU6933100A (en) 2001-03-26
WO2001016755A9 (en) 2002-09-19

Similar Documents

Publication Publication Date Title
US6775824B1 (en) Method and system for software object testing
US6934934B1 (en) Method and system for software object testing
US7000224B1 (en) Test code generator, engine and analyzer for testing middleware applications
US6993747B1 (en) Method and system for web based software object testing
US7171588B2 (en) Enterprise test system having run time test object generation
Bryce et al. Developing a single model and test prioritization strategies for event-driven software
US6865692B2 (en) Enterprise test system having program flow recording and playback
US7757175B2 (en) Method and system for testing websites
US6192511B1 (en) Technique for test coverage of visual programs
US8001532B1 (en) System and method for generating source code-based test cases
US7774757B1 (en) Dynamic verification of application portability
US20030074423A1 (en) Testing web services as components
US20030070120A1 (en) Method and system for managing software testing
US20150254168A1 (en) Method And System For Testing Interactions Between Web Clients And Networked Servers
US20140013308A1 (en) Application Development Environment with Services Marketplace
US20060156288A1 (en) Extensible execution language
US20110022911A1 (en) System performance test method, program and apparatus
WO2002035754A2 (en) Generation of correctly ordered test code for testing software components
JP2003271418A (en) Method for testing software object of web base
Ghosh et al. A test management and software visualization framework for heterogeneous distributed applications
Ezolt Optimizing Linux Performance: A Hands-on Guide to Linux Performance Tools
Vail Stress, load, volume, performance, benchmark and base line testing tool evaluation and comparison
Pavlov Project Work
CN114579470A (en) Unit testing method and device, electronic equipment and storage medium
Hasan et al. Stress Testing and Monitoring ASP. NET Applications

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU CA IN JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP