CA2446434A1 - Benchmark testing - Google Patents

Benchmark testing

Info

Publication number
CA2446434A1
Authority
CA
Canada
Prior art keywords
data
computer system
additional
test
benchmark
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CA002446434A
Other languages
French (fr)
Other versions
CA2446434C (en)
Inventor
Martin Croker
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Accenture Global Services Ltd
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of CA2446434A1 publication Critical patent/CA2446434A1/en
Application granted granted Critical
Publication of CA2446434C publication Critical patent/CA2446434C/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/34Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3409Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment
    • G06F11/3428Benchmarking
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10STECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00Data processing: database and file management or data structures
    • Y10S707/99951File or database maintenance
    • Y10S707/99952Coherency, e.g. same view to multiple users
    • Y10S707/99953Recoverability
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10STECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00Data processing: database and file management or data structures
    • Y10S707/99951File or database maintenance
    • Y10S707/99952Coherency, e.g. same view to multiple users
    • Y10S707/99955Archiving or backup

Abstract

A method of benchmark testing, using test data, a component of a first computer system, using an additional computer system to restore said data.

Description

BENCHMARK TESTING
Field of the Invention The present invention relates to a method or apparatus for benchmark testing a component of a computer system. More particularly, but not exclusively, the present invention relates to a method or apparatus for benchmark testing a component of a computer system using multiple computer systems.
Background to the Invention Large computer systems, for example database systems such as servers, have a large data storage facility. A server comprises a processor, a memory, and a disc drive. The disc drive stores data and a computer program to operate devices attached to the server. The servers are connected to a network via a suitable connection. The components of the computer system must be benchmark tested regularly. The computer system is made up of hardware and software components, such as the memory devices, central processing units and the application processing algorithm (computer program). The benchmark test ensures the computer system's components are functioning correctly by testing their performance on a regular basis.
For each benchmark test cycle, a value of a parameter of a component within the computer system can be changed so that a comparative result is given. All parameter values, apart from the value of the parameter being tested, remain constant for each subsequent test cycle.
Computer systems, such as large settlement banking systems, execute benchmark tests as follows. Test data in a database is changed to an unsettled state, and a test using the data is executed which settles the trades. This settlement procedure is cycled as necessary. For each cycle a value of the parameter being tested can be changed. Measurements are taken to provide a value of, for example, the length of time it takes to complete each settlement procedure or the amount of memory used during the settlement procedure.
The results of each run are compared with previous runs to ensure that the component of the computer system tested is operating correctly.
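The test cycle described above can be sketched as a loop that changes exactly one parameter value per run while holding the rest constant, and records a measurement (such as elapsed time) for comparison between runs. This is a minimal illustrative sketch, not the patented system; all names and the stand-in workload are assumptions:

```python
import time

def run_benchmark_cycles(run_test, base_params, param_name, values):
    """Run one benchmark cycle per value of the parameter under test,
    keeping all other parameters constant, and record elapsed time so
    the runs are directly comparable."""
    results = []
    for value in values:
        # Only the parameter under test changes between cycles.
        params = dict(base_params, **{param_name: value})
        start = time.perf_counter()
        run_test(params)
        elapsed = time.perf_counter() - start
        results.append((value, elapsed))
    return results

# Stand-in workload for illustration only.
timings = run_benchmark_cycles(
    run_test=lambda p: sum(range(p["workload"])),
    base_params={"workload": 100_000, "cpus": 1},
    param_name="workload",
    values=[100_000, 200_000],
)
```

Comparing the recorded measurements across cycles is what reveals whether the component under test is behaving as expected.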
However, once an individual settlement procedure has been completed, the data in the database along with the control files and tablespace data files would have changed. It therefore becomes necessary to restore, or 'rollback', these parts of the database to a known state before a new settlement cycle can be executed. This provides a requirement for systems that can run benchmark tests on components of a computer system and concurrently restore the data and data files used in the test to a known state.
Typical server systems in use at present include the Sun E450TM and the Sun E10000TM. Known data storage facilities include the Sun A5000TM product line of disc arrays.
Present benchmarking systems provide product-specific dedicated storage devices attached to a server. The storage devices provide large volumes of storage space for temporary data storage. One such system is the EMC TimeFinderTM product, which works in the following manner.
The server's application environment is copied to a dedicated storage device, such as SymmetrixTM. The application environment includes some of the database system files, such as the control files, the database tablespace files and the data. It is necessary to restore the application environment to a known state before each subsequent benchmark test. This enables benchmark testing of components of the computer system to be executed using the copied application environment. The benchmark test is executed under the control of the server, and therefore uses valuable server processing power during the test execution. Using this system, it is not possible to create a duplicate copy of the data whilst the benchmark test is being executed.
Such systems are very expensive due to the large array of product specific dedicated storage devices required.
Statement of the Invention According to a first aspect of the present invention, there is provided a method of benchmark testing, using test data, a component of a first computer system, using an additional computer system to restore said data.
According to a second aspect of the present invention there is provided an apparatus for benchmark testing, using test data, a component of a first computer system comprising a first computer system, and an additional computer system that restores said data.
Further objects and advantages will be apparent to the skilled reader from the following description and claims.
Brief Description of the Drawings Specific embodiments of the present invention will now be described by way of example only, with reference to the accompanying drawings, in which:
Figure 1 shows a prior art system utilising dedicated storage discs.
Figure 2 shows how servers and discs may be interconnected according to an embodiment of the present invention.
Figure 3 shows the concept of disc groups according to an embodiment of the present invention.
Figure 4 shows the location of an application environment according to a preferred embodiment of the present invention.
Figures 5a to 5e show how access to the disc groups is changed to execute a benchmark test as required by an embodiment of the present invention.
Figure 6 shows a flow diagram of the method according to an embodiment of the present invention.
Description of Preferred Embodiment Referring to Figure 2, a benchmarking system is shown consisting of two servers (102, 103) which are connected, via fibre hubs, to an array of data storage devices (104) such as physical discs. Each server has its own boot disc (108, 111), accessible only by that server. The physical discs (106) may be accessible by any number of servers connected to the storage network. A
standard parallel connection interface, such as SCSI (Small Computer Systems Interface), is used to connect the servers to the array of discs. The host sees all the discs within the array as a SCSI target.
As shown in Figure 3, one or more physical discs (106) are combined to form disc groups. A disc group (107) may contain any number of physical discs across any number of storage arrays (105). A logical volume of data may be spread over one or more sections of one or more physical discs, allowing large volumes of data to be grouped together exceeding the boundary of a single physical disc. To control access to the storage devices, a logical volume manager is loaded on to each server and is used to intercede between the applications and the physical discs. Such a volume manager could be the VeritasTM volume manager, which is available from Veritas Software Corporation, Veritas Park, Bittams Lane, Guildford Road, Chertsey, Surrey, KT16 9RG. The logical volume manager prevents two or more servers from utilising the same SCSI target, thus preventing the same disc groups being accessed by different servers at the same time.
The logical volume manager provides the ability for each server to import or export disc groups. A server is only able to access logical discs and file systems that reside in disc groups currently imported onto that server.
Any of these disc groups, not currently attached to a server, may be imported by any server connected to the array of discs.
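The import/export rule described above can be modelled as exclusive ownership: a disc group is attached to at most one server at a time, and must be deported before another server may import it. The following sketch is purely illustrative (the class and method names are not from the patent; the real mechanism is the logical volume manager):

```python
class DiscGroupManager:
    """Illustrative model of the volume manager's rule: a disc group
    may be imported onto at most one server at any given time."""

    def __init__(self, groups):
        # Map each disc group to the server that has imported it (or None).
        self.owner = {g: None for g in groups}

    def import_group(self, group, server):
        # Refuse the import if another server already holds the group.
        if self.owner[group] is not None:
            raise RuntimeError(f"{group} already imported on {self.owner[group]}")
        self.owner[group] = server

    def deport_group(self, group, server):
        # Only the importing server may release the group.
        if self.owner[group] != server:
            raise RuntimeError(f"{group} is not imported on {server}")
        self.owner[group] = None
```

In use, the copy host must deport an execution disc group before the execution host can import it, mirroring the hand-over shown in Figures 5a to 5e.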
Referring to Figure 4, two servers (102, 103) are shown that have access to a number of disc groups. It will be apparent to the skilled reader that more than two servers may have access to the disc groups. Access to these disc groups is controlled by the logical volume manager. A first computer system or server, the execution host (103), has access to any of the disc groups under control of the volume manager. An additional computer system or server, the copy host (102), has access to the master copy (109) of an application environment which includes database tablespace data files, control files and data, all of which are used as the test data during each benchmark test. The application environment in this embodiment is part of a database system, but could also be any system where restoration of data is required.
The execution host executes the benchmark test using the data and files in the application environment.
Figure 4 shows how the master copy (109) of the application environment can be created on local discs attached to the copy host. The data in the master copy (109) is not altered during the benchmark test procedure.
Both the copy host (102) and the execution host (103) have, loaded onto their respective systems, a benchmark tools suite consisting of a series of scripts. The scripts are a sequence of high level instructions used to control the volume manager. The benchmark tools suite provides the ability to import and deport logical volumes of data.
The benchmark tools suite uses the concept of a data set, which is comprised of logical volumes of data. A logical volume of data could, for example, consist of a single database. The benchmark tools suite controls a master disc group and any number of execution disc groups. The master disc group contains the master copy of the application environment, i.e. the database tablespace data files, control files and data. The execution disc groups contain duplicate copies of the data set master.
The logical volume layout of the duplicate copies is determined by a configuration file in the benchmark tools suite and does not have to conform to the logical volume layout of the master copy; the layout of the duplicate copy determines the format of benchmark testing to be executed. A data set may at any time have zero or more execution disc groups assigned to it, and each execution disc group contains an execution copy upon which a benchmark test may be executed.
The execution host (103) has loaded on to its system the rest of the necessary files and data required to run the database. For example, the execution host (103) would have access to the application binaries, any third-party applications, application and performance logs, and any test input data.
It is not necessary to restore any of these files and data to a known state for each subsequent benchmark test.
Referring to Figures 5a to 5e, an example is given of the disc group management required to execute a benchmark test according to the present invention.
Figure 5a shows the benchmark tools suite executing an instruction that will enable the copy host to create a duplicate copy of the application environment master copy (109) in a first execution disc group (110). This is the first execution copy.

Figure 5b shows how the benchmark tools suite controls the logical volume manager so that access to the first execution disc group (110) is switched from the copy host (102) to the execution host (103). The volume manager severs the link between the first execution disc group (110) and the copy host (102). A connection is then established between the execution host (103) and the first execution disc group (110). The execution host (103) now has access to the same execution disc group (110) as the copy host (102) previously did, i.e. the execution disc group containing the first execution copy.
Referring to Figure 5c, the benchmark tools suite enables the execution host (103) to execute a benchmark test using the application environment in the first execution copy.
The benchmark test can be used to test any of the components of the computer system running the test. All the parameter values of each component are kept constant during each benchmark test cycle, except the parameter value of the component being tested. The component being tested in this embodiment could also easily be, by way of example only, any of the following: the application processing algorithm (computer program), the amount of memory in the server, the CPU speed, the number of CPUs, or indeed any component of the computer system where a value of a parameter for that component can be changed for each subsequent benchmark test.
Once the benchmark test run has been completed, the result is sent to a further processing device, not discussed here. The result could be a timing calculation taken from the beginning of the test until the end of the test, or could easily be another parameter, such as memory usage, memory system leaks or indeed any other performance characteristic.
Once the execution host (103) has executed the benchmark test using the application environment in the first execution copy, the data and files of the application environment will have been changed by the benchmark test and therefore will be invalid for further tests. It is necessary to provide a new duplicate of the application environment if a further benchmark test is required. If no further benchmark testing is required, the logical volume manager allows any server to access the used disc group.
Figure 5d refers to the case where continuous runs of benchmark tests are being executed. The copy host (102) creates a second duplicate copy of the master data within a second execution disc group (112); this is known as the second execution copy. The second duplicate copy is created after access to the first duplicate copy has been detached from the copy host (102).
Whilst the execution host (103) executes a benchmark test using the first execution copy within the first execution disc group (110), the second duplicate copy within the second execution disc group (112) may be created.
Upon completion of the benchmark test using the first execution copy, access to the disc groups (110, 112) containing the first execution copy and the second execution copy is switched between the copy host (102) and the execution host (103). The volume manager switches access to the disc groups (110, 112) under control of the benchmark tools suite in the same manner stated above. The execution host (103) will then have access to a restored set of data upon which a new benchmark test may be run. The value of the parameter being tested may at this point be changed. However, it is also possible that a repeat run using the same value for the parameter being tested is required. The first execution disc group (110), in which the first execution copy was created, is now accessible by the copy host (102).
The copy host may, if required by the benchmark tools suite, create a further duplicate copy of the application environment from the master copy.
The disc groups (110, 112) can now be cycled as many times as the benchmark system requires. A benchmark test is performed using one disc group, whilst the other disc group has a restored set of data written into it, as shown in Figure 5e.
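The cycling of the two disc groups is a ping-pong pattern: on each iteration the execution host tests on one group while the copy host restores the other, and the roles swap on the next iteration. A minimal sketch of the schedule (the function and group names are illustrative, not from the patent):

```python
def cycle_disc_groups(num_tests):
    """Return the (test_group, restore_group) pairing for each of
    num_tests consecutive benchmark runs. While one group is being
    tested on the execution host, the other is being restored from
    the master copy by the copy host."""
    groups = ["group1", "group2"]
    schedule = []
    for i in range(num_tests):
        test_group = groups[i % 2]          # group under test this cycle
        restore_group = groups[(i + 1) % 2]  # group being refreshed concurrently
        schedule.append((test_group, restore_group))
    return schedule
```

Because restoration overlaps with test execution, the execution host never waits for a copy to complete once the pipeline is primed.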
If a further benchmark test is to be executed and the desired logical layout of the execution copy changes, the previously used execution copy may be deleted and a new one created in the correct logical layout. The logical volume layout is arranged according to the configuration file in the benchmark tools suite.
Figure 6 shows a sequence of events according to one aspect of the present invention.
In step S101, a master copy of the application environment is created.
In step S103, execution disc group 1 is allocated to the copy host. A duplicate copy of the master is then created in execution disc group 1, in step S105.

The link between the copy host and execution disc group 1 is then severed in step S107.
In step S109, the system sits in a loop until the execution host has finished its previous test. Upon the completion of the test, execution disc group 1 is allocated to the execution host, as shown in step S111. The benchmark test can then be executed by the execution host using the data in execution disc group 1, as shown in step S113. If no further benchmark test is to be carried out in step S115, the task is completed (S117).
If a further benchmark test is required, a further execution disc group, execution disc group 2, is allocated to the copy host in step S119. In step S121, a duplicate copy of the master copy is created in execution disc group 2. The link between the copy host and execution disc group 2 is then severed in step S123.
In step S125, the system remains in a loop until the execution host has completed its current test. Upon completion of the test, execution disc group 2 is allocated to the execution host in step S127. The benchmark test can then be executed using the data in execution disc group 2 by the execution host, as shown in step S129. This further benchmark test may run using a new value for the original parameter being tested, or may use the same value for the parameter being tested, or indeed may start testing a different parameter of a component in the computer system. If no further benchmark tests are to be executed in step S131, the task is completed (S117). If further tests are required, the process is repeated from step S103.

An execution disc group (110, 112) can be in one of the four following states:
Ready: the data in the disc group is a valid and correct copy of the master, and the disc group is ready to be used for execution. This disc group is not attached to any host.
Exec: the disc group is attached to the execution host, and databases and file systems may be started. Application execution may be in progress.
Dead: this disc group has previously been attached to the execution host, and the state of the data on the discs is unknown and is assumed to be dirty. This disc group is not attached to any host.
Prep: this disc group is attached to the copy host, and the data is currently being restored to a known state from the master disc group.
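The four states form a simple cycle: a Dead group is attached to the copy host and restored (Prep), becomes Ready when the copy completes, is attached to the execution host (Exec), and returns to Dead when the test finishes. A minimal sketch of that cycle (the table and function are illustrative, not part of the benchmark tools suite):

```python
# Allowed transitions, as described in the text:
#   Dead  -> Prep  (copy host attaches and begins restoring)
#   Prep  -> Ready (restore from the master disc group completes)
#   Ready -> Exec  (execution host attaches for a test run)
#   Exec  -> Dead  (test finished; data assumed dirty)
TRANSITIONS = {
    "Dead": "Prep",
    "Prep": "Ready",
    "Ready": "Exec",
    "Exec": "Dead",
}

def next_state(state):
    """Advance an execution disc group one step around its life cycle."""
    return TRANSITIONS[state]
```

Following the life cycle of a test execution set starting from Dead, four transitions return the group to Dead, which is exactly the loop the copy-host and execution-host tools drive.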
The benchmark tools suite comprises execution-host based tools, and copy-host based tools. The execution-host based tools are primarily concerned with readying the execution host prior to a test execution; and freeing the execution set, ready for copying back on to the copy-host. The copy-host based tools are concerned with attaching and detaching the execution set, and enable the transfer of data from the master volume to the execution set.
The copy-host based tools and execution-host based tools are best described by following the life cycle of a test execution set starting with a Dead test set.

A script bmautocopy is responsible for checking the disc storage to find execution sets that are ready for data to be copied into them. This task is executed periodically by the standard UNIX utility "cron", as is well known within the art. When an appropriate disc set is found, bmautocopy invokes another script bmcopy against the discovered disc set. The script bmcopy will execute the utility script bmimportdg, which will import the disc group. The disc group will then be renamed to reflect the change of status from Dead to Prep.
Once the disc group has been successfully imported, bmcopy uses a configuration file to determine the stripe layout of the disc. This is then used to execute multiple streams of the script bmcopystripe, allowing each stream to target distinct discs. For each volume that is passed to bmcopystripe, a check is made to see if the volume exists. If the volume does not exist, it is created as specified by the stripe set.
The volume data is copied from the master disc group to the execution set. It is not necessary for the volume on the master to be physically laid out in the same structure as the execution set as long as both logical volumes exist. The disc group is exported from the copy host and renamed Ready.
The two primary user interface scripts are bmprerun and bmpostrun.
These scripts handle all operations required before and after test executions.
The script bmprerun imports Ready disc groups and renames them Exec. It also provides the capability to start any databases residing on the execution set for the purposes of testing. Performance logging of the database is executed using a separate tool, which is not described herein. Performance logging of the UNIX host is executed by using the standard utility SAR, as is well known in the art. The bmprerun script also handles queue management, allowing the possibility of scheduling multiple runs. It is also possible for bmprerun to put the job on hold until a suitable data set has been imported and made Ready.
After execution of the benchmark test, the script bmpostrun is used to stop performance logging tools and shutdown any databases. The Exec group is detached and renamed Dead.
Each execution disc group will be provided with a name. The disc group name is created by concatenating the data set name and the disc group's state. For example, if the data set name is Project and the disc group contains a clean copy of the master, the execution disc group name would be ProjectReady.
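The naming convention above also explains how the renames performed by bmprerun and bmpostrun work: only the state suffix of the group name changes. A short sketch (the helper function names are hypothetical; the patent describes the renames but not their implementation):

```python
def disc_group_name(data_set, state):
    """Disc group names concatenate the data set name and its state."""
    return data_set + state

def bmprerun_rename(group_name):
    """Hypothetical helper mimicking bmprerun: a Ready group is
    imported and renamed Exec."""
    assert group_name.endswith("Ready")
    return group_name[: -len("Ready")] + "Exec"

def bmpostrun_rename(group_name):
    """Hypothetical helper mimicking bmpostrun: the Exec group is
    detached and renamed Dead."""
    assert group_name.endswith("Exec")
    return group_name[: -len("Exec")] + "Dead"

print(disc_group_name("Project", "Ready"))  # ProjectReady
```

So a group named ProjectReady becomes ProjectExec when a test starts and ProjectDead when it finishes, after which bmautocopy will pick it up for restoration.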
It will be apparent to the skilled reader that various modifications and variations may be employed in relation to the above-described embodiment without departing from the scope of the present invention.
It would of course be possible to have more than one execution host connected to the array of discs. Each execution host can run its own individual benchmark test. It would also be possible to have more than one copy host, although each copy host would need access to its own master copy.
Multiple copy hosts would therefore be limited by the amount of disc space available.

Although the invention has been described specifically for use in benchmark testing a component of a database system, it will be apparent to the skilled reader that any component of a computer system where there is a requirement for data to be restored could be used to implement the invention.
It will also be apparent to the skilled reader that where a single disc is concurrently accessible by two servers any interface scheme, other than SCSI, may be used.
It will further be apparent to the skilled reader that the master copy of the software programs may alternatively be created in a mirrored logical disc group within the shared disc array connected to both hosts.

Claims (27)

1. A method of benchmark testing, using test data, a component of a first computer system, using an additional computer system to restore said data.
2. A method according to claim 1 wherein during a test using the said data, a copy of the original data is written to a data storage means for use during a subsequent test.
3. A method according to claim 1 wherein the first computer system is a database system.
4. A method according to claim 1 wherein the first and additional computer systems are servers.
5. A method according to any previous claim comprising the steps of a) creating a master copy of the data used in the test in a data storage means accessible by the additional computer system;
b) creating a link between the additional computer system and an additional data storage means, by way of a storage access control means;

c) creating a first duplicate copy of the master copy in the additional data storage means;
d) severing the link between the additional computer system and the additional data storage means, by way of the storage access control means;
e) creating a link between the first computer system and the additional data storage means, by way of the storage access control means;
f) executing a first benchmark test, using the data in the first duplicate copy stored in the additional data storage means, using the first computer system.
6. A method according to claim 5, further comprising the steps of a) creating a link between the additional computer system and a further data storage means, by way of the storage access control means;
b) creating a further duplicate copy of the data used in the test in the further data storage means;
c) severing the link between the additional computer system and the further data storage means, by way of the storage access control means;
d) severing the link between the first computer system and any data storage means storing data previously used in a benchmark test, by way of the storage access control means;

e) creating a link between the first computer system and the data storage means storing the further duplicate copy, by way of the storage access control means;
f) executing a further benchmark test, using the data in a further duplicate copy, using the first computer system.
7. A method according to claim 6 wherein the further benchmark tests are repeatedly testing the same value of a parameter.
8. A method according to claim 6 wherein the further benchmark tests are testing different values of a parameter.
9. A method according to claim 6 comprising further steps of executing further benchmark tests using one or more copies of data created by one or more additional computer systems.
10. A method according to any previous claim wherein data is stored in an array of storage devices.
11. A method according to claim 10 wherein the storage devices are physical discs.
12. A method according to claims 5 or 6 wherein the storage access control means is a logical volume manager.
13. An apparatus for benchmark testing, using test data, a component of a first computer system comprising a first computer system, and an additional computer system that restores said data.
14. An apparatus according to claim 13 further comprising a data storage means, wherein during a test using said data, the data storage means allows a copy of the original test data to be written to it for use during a subsequent test.
15. An apparatus according to claim 13 wherein a component of the first computer system is a database.
16. An apparatus according to claim 13 wherein the first and additional computer systems are servers.
17. An apparatus according to claim 14 further comprising a storage access control means, wherein the additional computer system is able to create a master copy of the data used in the test in the data storage means, a storage access control means is able to create a link between the additional computer system and an additional data storage means, the additional computer system is able to create, in the additional data storage means, a first duplicate copy of the master copy, the storage access control means is further able to sever a link between the additional computer system and the additional data storage means and create a link between the first computer system and the additional data storage means enabling the first computer system to execute at least a first benchmark test using the first duplicate copy stored in the additional data storage means.
18. An apparatus according to claim 17 wherein two or more duplicate copies of the master copy are created by the additional computer system within additional data storage means.
19. An apparatus according to claim 17 wherein one or more additional computer systems create one or more copies of data for use in executing further benchmark tests.
20. An apparatus according to claim 17 wherein the first benchmark test and any further benchmark tests are repeatedly testing the same value of a parameter.
21. An apparatus according to claim 17 wherein the first benchmark test and any further benchmark tests are testing different values of a parameter.
22. An apparatus according to claim 17 wherein the storage access control means is a logical volume manager.
23. An apparatus according to any of claims 13 to 22 wherein an array of storage devices stores data.
24. An apparatus according to claim 23 wherein the storage devices are physical discs.
25. A computer program for controlling at least two computer systems to perform the method of any of claims 1 to 12.
26. A method as hereinbefore described with reference to the accompanying drawings.
27. An apparatus as hereinbefore described with reference to the accompanying drawings.
CA2446434A 2001-05-15 2002-05-14 Benchmark testing Expired - Lifetime CA2446434C (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
GB0111844.7 2001-05-15
GB0111844A GB2378530B (en) 2001-05-15 2001-05-15 Benchmark testing
PCT/GB2002/002241 WO2002093379A2 (en) 2001-05-15 2002-05-14 Benchmark testing of a computer component

Publications (2)

Publication Number Publication Date
CA2446434A1 true CA2446434A1 (en) 2002-11-21
CA2446434C CA2446434C (en) 2015-01-20

Family

ID=9914668

Family Applications (1)

Application Number Title Priority Date Filing Date
CA2446434A Expired - Lifetime CA2446434C (en) 2001-05-15 2002-05-14 Benchmark testing

Country Status (6)

Country Link
US (1) US7464123B2 (en)
EP (1) EP1390853B1 (en)
AU (1) AU2002310692A1 (en)
CA (1) CA2446434C (en)
GB (1) GB2378530B (en)
WO (1) WO2002093379A2 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8037109B2 (en) * 2003-06-30 2011-10-11 Microsoft Corporation Generation of repeatable synthetic data
US7945657B1 (en) * 2005-03-30 2011-05-17 Oracle America, Inc. System and method for emulating input/output performance of an application
US8560604B2 (en) 2009-10-08 2013-10-15 Hola Networks Ltd. System and method for providing faster and more efficient data communication
US8762108B2 (en) * 2011-07-22 2014-06-24 International Business Machines Corporation Real time device evaluation
US9953279B1 (en) 2011-10-21 2018-04-24 Motio, Inc. System and method for computer-assisted improvement of business intelligence ecosystem
US11537963B2 (en) 2011-10-21 2022-12-27 Motio, Inc. Systems and methods for decommissioning business intelligence artifacts
US9241044B2 (en) 2013-08-28 2016-01-19 Hola Networks, Ltd. System and method for improving internet communication by using intermediate nodes
EP3767495B1 (en) 2017-08-28 2023-04-19 Bright Data Ltd. Method for improving content fetching by selecting tunnel devices
EP4030318A1 (en) 2019-04-02 2022-07-20 Bright Data Ltd. System and method for managing non-direct url fetching service
US10873647B1 (en) * 2020-06-25 2020-12-22 Teso Lt, Ltd Exit node benchmark feature

Family Cites Families (18)

Publication number Priority date Publication date Assignee Title
US5021997A (en) * 1986-09-29 1991-06-04 At&T Bell Laboratories Test automation system
US5421004A (en) * 1992-09-24 1995-05-30 International Business Machines Corporation Hierarchical testing environment
JPH06187275A (en) 1992-12-18 1994-07-08 Fujitsu Ltd Testing method for server application
EP0646802B1 (en) * 1993-09-20 1999-08-11 Hewlett-Packard GmbH High-throughput testing apparatus
US5408408A (en) * 1994-02-22 1995-04-18 Marsico, Jr.; Michael Apparatus and method for electronically tracking and duplicating user input to an interactive electronic device
US5701471A (en) * 1995-07-05 1997-12-23 Sun Microsystems, Inc. System and method for testing multiple database management systems
US5819066A (en) * 1996-02-28 1998-10-06 Electronic Data Systems Corporation Application and method for benchmarking a database server
US6446120B1 (en) * 1997-11-26 2002-09-03 International Business Machines Corporation Configurable stresser for a web server
US5953689A (en) * 1998-03-12 1999-09-14 Emc Corporation Benchmark tool for a mass storage system
US5964886A (en) 1998-05-12 1999-10-12 Sun Microsystems, Inc. Highly available cluster virtual disk system
US6378013B1 (en) * 1998-09-17 2002-04-23 Micron Technology, Inc. System for assessing performance of computer systems
US6424999B1 (en) * 1999-03-11 2002-07-23 Emc Corporation System and method for restoring previously backed-up data in a mass storage subsystem
US6934934B1 (en) * 1999-08-30 2005-08-23 Empirix Inc. Method and system for software object testing
US7139999B2 (en) * 1999-08-31 2006-11-21 Accenture Llp Development architecture framework
US6581169B1 (en) * 1999-12-08 2003-06-17 Inventec Corporation Method and device for automatic computer testing on a plurality of computers through a local area network
US6718446B1 (en) * 2000-02-11 2004-04-06 Iomega Corporation Storage media with benchmark representative of data originally stored thereon
US6453269B1 (en) * 2000-02-29 2002-09-17 Unisys Corporation Method of comparison for computer systems and apparatus therefor
JP2002244892A (en) * 2001-02-19 2002-08-30 Mitsubishi Electric Corp Benchmark execution method

Also Published As

Publication number Publication date
US7464123B2 (en) 2008-12-09
EP1390853B1 (en) 2015-08-12
WO2002093379A3 (en) 2003-04-03
GB2378530A (en) 2003-02-12
CA2446434C (en) 2015-01-20
GB2378530B (en) 2005-03-30
AU2002310692A1 (en) 2002-11-25
WO2002093379A2 (en) 2002-11-21
EP1390853A2 (en) 2004-02-25
GB0111844D0 (en) 2001-07-04
US20040199328A1 (en) 2004-10-07

Similar Documents

Publication Publication Date Title
Snodgrass A relational approach to monitoring complex systems
US5701471A (en) System and method for testing multiple database management systems
US9904603B2 (en) Successive data fingerprinting for copy accuracy assurance
US7254595B2 (en) Method and apparatus for storage and retrieval of very large databases using a direct pipe
US8473913B2 (en) Method of and system for dynamic automated test case generation and execution
US20160342488A1 (en) Mechanism for providing virtual machines for use by multiple users
CN112534419A (en) Automatic query offload to backup databases
US20070083563A1 (en) Online tablespace recovery for export
US20040010787A1 (en) Method for forking or migrating a virtual machine
US8732419B2 (en) Maintaining multiple target copies
WO2005069163A1 (en) Method and system for a self-healing query access plan
EP1390853B1 (en) Benchmark testing of a computer component
WO2014001942A1 (en) Source cleaning cascaded volumes
Sciore Database design and implementation
EP3182278A1 (en) System for automatic preparation of integrated development environments
Weissman Fault tolerant wide-area parallel computing
Correia et al. Group-based replication of on-line transaction processing servers
US11797555B2 (en) Method for copying spanner databases from production to test environments
Borisov et al. Rapid experimentation for testing and tuning a production database deployment
Melski Burt: The Backup and Recovery Tool.
Stallone Monkey: A Distributed Orchestrator for a Virtual Pseudo-Homogenous Computational Cluster Consisting of Heterogeneous Sources
WO2014144182A2 (en) A data transfer method and apparatus
Alapati Oracle Database 11 g Architecture
Troisi NonStop SQL/MP availability and database configuration operations
Correia Júnior et al. Group-based replication of on-line transaction processing servers

Legal Events

Date Code Title Description
EEER Examination request
MKEX Expiry

Effective date: 20220516