US20020065793A1 - Sorting system and method executed by plural computers for sorting and distributing data to selected output nodes


Info

Publication number
US20020065793A1
Authority
US
United States
Prior art keywords
sorting, sorted, nodes, buffer, storing
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US09/366,320
Other versions
US6424970B1 (en)
Inventor
Hiroshi Arakawa
Akira Yamamoto
Shigeo Honma
Hideo Ohata
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Hitachi Ltd
Application filed by Hitachi Ltd filed Critical Hitachi Ltd
Assigned to HITACHI, LTD. reassignment HITACHI, LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ARAKAWA, HIROSHI, HONMA, SHIGEO, OHATA, HIDEO, YAMAMOTO, AKIRA
Publication of US20020065793A1 publication Critical patent/US20020065793A1/en
Application granted granted Critical
Publication of US6424970B1 publication Critical patent/US6424970B1/en
Assigned to GOOGLE INC. reassignment GOOGLE INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HITACHI, LTD.
Assigned to GOOGLE LLC reassignment GOOGLE LLC CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: GOOGLE INC.
Current status: Expired - Lifetime

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F7/00Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/22Arrangements for sorting or merging computer data on continuous record carriers, e.g. tape, drum, disc
    • G06F7/36Combined merging and sorting
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10STECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00Data processing: database and file management or data structures
    • Y10S707/99931Database or file accessing
    • Y10S707/99937Sorting

Definitions

  • the present invention relates to a sorting process to be executed by a computer system, and more particularly to a sorting process to be executed by a plurality of computers.
  • a sorting process is one of fundamental types of information processing to be executed by a computer system.
  • the most common sorting process rearranges a plurality of records, each including one or a plurality of keys, in a certain order (e.g., ascending or descending order) in accordance with the keys.
  • a computer writes given input data in a buffer of a computer memory, refers to and compares keys of respective records to rearrange the records and obtain a sorted result.
  • Such a sorting process which uses only the buffer of a computer memory as data storage region is called internal sorting.
  • if the input data is larger than the buffer, the computer divides the data, internally sorts each division, and stores the sorted result of each division in an external storage as a sorted string. The computer then reads a portion of each of the sorted strings from the external storage and writes the portions in the buffer.
  • the computer compares the keys of the records stored in the buffer to obtain a sorted result of each portion of all the sorted strings.
  • this process is repeated for successive portions, from the start to the end of each sorted string, to thereby obtain a final sorted result of all the input data.
  • Such a process of obtaining the final sorted result from each sorted result is called merging.
  • Such a sorting process using the computer buffer and external storage as data storage region is called external sorting.
  • the external storage may be a magnetic disk, a magnetic tape or the like.
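  • as an illustration only, the internal sorting and merging described above can be sketched as follows (a minimal, hypothetical Python sketch, not the patented implementation; the buffer capacity is counted in records, and temporary files stand in for the external storage):

      import heapq
      import os
      import tempfile

      def external_sort(records, buffer_size, key=lambda r: r):
          """Sort input larger than the buffer: internally sort buffer-sized
          divisions, spill each sorted result (a sorted string) to external
          storage, then merge portions of all sorted strings."""
          run_files, buf = [], []
          for rec in records:
              buf.append(rec)
              if len(buf) == buffer_size:          # buffer full: internal sorting
                  run_files.append(_spill(sorted(buf, key=key)))
                  buf = []
          if buf:                                  # last, partially filled buffer
              run_files.append(_spill(sorted(buf, key=key)))
          # merging: consume every sorted string in key order
          yield from heapq.merge(*(_read(p) for p in run_files), key=key)

      def _spill(sorted_records):
          """Write one sorted string to a temporary file and return its path."""
          fd, path = tempfile.mkstemp(text=True)
          with os.fdopen(fd, "w") as f:
              f.writelines(f"{r}\n" for r in sorted_records)
          return path

      def _read(path):
          """Stream a sorted string back, record by record."""
          with open(path) as f:
              for line in f:
                  yield line.rstrip("\n")

      print(list(external_sort(["d", "a", "c", "b", "f", "e"], buffer_size=2)))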
  • Parallel processing is one type of high speed information processing to be executed by a computer system.
  • parallel processing shortens the processing time by preparing a plurality of resources necessary for information processing and performing a plurality of tasks at the same time.
  • JP-A-8-272545 discloses techniques of improving parallel processing by using a disk array constituted of a plurality of magnetic disks as an external storage and devising the storage locations of data in the disk array.
  • a plurality of computers may be interconnected by a network to constitute one computer system and a large amount of input data may be sorted in parallel by a plurality of computers. More specifically, input data is distributed and assigned to some or all computers (hereinafter called input nodes) among a plurality of computers, and each input node externally sorts the assigned input data.
  • the externally sorted result at each input node is transferred via the network to one computer (hereinafter called an output node) among a plurality of computers.
  • the output node merges the sorted results transferred from the input nodes in a manner similar to dealing with externally sorted strings, to thereby obtain the final sorted result of all the input data.
  • the merging process is required to be executed both at the input nodes and output node. Therefore, a time required for the merging process at an input node is added to a time required for the sorting process at the input node.
  • the merging process at an input node may delay or hinder another task at the input node.
  • a time to transfer, via such a network, the externally sorted result at an input node to the output node may pose some problem.
  • if the network is a LAN having a transfer speed of 100 M bits/sec, such as 100 Base-T, the data transfer speed is about 12.5 M bytes/sec or lower. The transfer therefore takes roughly a threefold time (40 ÷ 12.5 = 3.2) as compared to the about 40 M bytes/sec or lower of an ultra wide SCSI, which is a commonly used I/O specification.
  • a ratio of the data transfer time to the sorting process time may become large.
  • a first sorting system of this invention comprises: a plurality of input nodes; one output node; and a shared external storage unit connected between each of the input nodes and the output node.
  • Each of the input nodes comprises: a first buffer; means for storing sorting target data in the first buffer; means for internally sorting data in the first buffer in accordance with a predetermined sorting rule; and means for storing as a sorted string an internally sorted result in the shared external storage unit.
  • the output node comprises: a second buffer; means for storing, in the second buffer, the sorted string stored in the shared external storage unit; means for merging a plurality of sorted strings stored in the second buffer in accordance with the predetermined sorting rule; and means for outputting a merged result as a sorted result.
  • a second sorting system of the invention comprises: a plurality of input nodes; a plurality of output nodes; and a shared external storage unit connected between each of the input nodes and each of the output nodes.
  • Each of the input nodes comprises: a first buffer; means for storing sorting target data in the first buffer; means for internally sorting data in the first buffer in accordance with a predetermined sorting rule; means for classifying data in the first buffer in accordance with a predetermined classification rule; and means for storing as a sorted string an internally sorted result in the shared external storage unit in accordance with the predetermined classification rule.
  • Each of the output nodes comprises: a second buffer; means for storing, in the second buffer, the sorted string stored in the shared external storage unit; means for merging a plurality of sorted strings stored in the second buffer in accordance with the predetermined sorting rule; and means for outputting a merged result as a sorted result.
  • the input node may further comprise means for notifying a merge instruction to the output node, and the output node may further comprise means for receiving the merge instruction.
  • a third sorting system of the invention comprises: a plurality of nodes; a dedicated external storage unit provided for each of the plurality of nodes; and a shared external storage unit provided between each pair of nodes of the plurality of nodes.
  • At least one node among the plurality of nodes comprises: a first buffer; means for storing sorting target data in the first buffer; means for internally sorting data in the first buffer in accordance with a predetermined sorting rule; means for storing as a sorted string an internally sorted result in the dedicated external storage unit; a second buffer; means for storing, in the second buffer, the sorted string stored in the shared external storage unit and the dedicated external storage unit; means for merging a plurality of sorted strings stored in the second buffer in accordance with the predetermined sorting rule; and means for outputting a merged result as a sorted result.
  • a fourth sorting system of the invention comprises: a plurality of nodes; a dedicated external storage unit provided for each of the plurality of nodes; and a shared external storage unit provided between each pair of nodes of the plurality of nodes.
  • At least one node among the plurality of nodes comprises: a first buffer; means for storing sorting target data in the first buffer; means for internally sorting data in the first buffer in accordance with a predetermined sorting rule; means for classifying data in the first buffer in accordance with a predetermined classification rule; means for storing as a sorted string an internally sorted result in the dedicated external storage unit in accordance with the predetermined classification rule; a second buffer; means for storing, in the second buffer, the sorted string stored in the shared external storage unit and the dedicated external storage unit; means for merging a plurality of sorted strings stored in the second buffer in accordance with the predetermined sorting rule; and means for outputting a merged result as a sorted result.
  • the first and second buffers may be the same buffer.
  • At least one of the plurality of nodes may further comprise means for notifying another node of a merge instruction and means for receiving the merge instruction from another node.
  • a first sorting method of the invention for a sorting system having a plurality of input nodes, one output node, and a shared external storage unit connected between each of the input nodes and the output node comprises the steps of: storing sorting target data in each input node and internally sorting the sorting target data in accordance with a predetermined sorting rule; storing as a sorted string an internally sorted result in the shared external storage unit; and reading the sorted string from the shared external storage unit, storing the sorted string in the output node, and merging the sorted string in accordance with the predetermined sorting rule.
  • a second sorting method of the invention for a sorting system having a plurality of input nodes, a plurality of output nodes, and a shared external storage unit connected between each of the input nodes and each of the output nodes comprises the steps of: storing sorting target data in each of the input nodes; classifying and internally sorting the read sorting target data in accordance with a predetermined classification rule and a predetermined sorting rule; distributing and storing a classified and sorted result in the shared external storage units shared with the output nodes in accordance with the predetermined classification rule; performing a process from the storing step to the distributing and storing step for all sorting target data; and thereafter reading the sorted string from the shared external storage units shared with the input nodes and storing the sorted string in each output node, and merging the sorted string in accordance with the predetermined sorting rule.
  • a third sorting method of the invention for a sorting system having a plurality of nodes, a dedicated external storage unit provided for each of the plurality of nodes, and a shared external storage unit provided between each pair of nodes of the plurality of nodes comprises the steps of: storing sorting target data in each of the input nodes and internally sorting the sorting target data in accordance with a predetermined sorting rule; storing, as a sorted string, an internally sorted result in the dedicated or shared external storage unit; performing the internal sorting step and the sorted string storing step for all sorting target data; and thereafter reading each sorted string from the dedicated and shared external storage units at one of the plurality of nodes and merging each sorted string in accordance with the predetermined sorting rule.
  • a fourth sorting method of the invention for a sorting system having a plurality of nodes, a dedicated external storage unit provided for each of the plurality of nodes, and a shared external storage unit provided between each pair of nodes of the plurality of nodes comprises the steps of: storing sorting target data in at least one of the plurality of nodes; classifying and internally sorting the read sorting target data in accordance with a predetermined classification rule and a predetermined sorting rule; distributing and storing as a sorted string a classified and internally sorted result in the dedicated or shared external storage unit in accordance with the classification rule; performing a process from the storing step to the distributing and storing step for all sorting target data; and thereafter reading each sorted string from the dedicated and shared external storage units at one of the plurality of nodes and merging each sorted string in accordance with the predetermined sorting rule.
  • the input node, output node, and other nodes correspond, for example, to general computers.
  • the dedicated and shared external storage units correspond, for example, to magnetic disk drives.
  • the first and second buffers and other buffers may use, for example, a portion of a main storage of each computer.
  • FIG. 1 is a block diagram of a first sorting system of the invention.
  • FIG. 2 is a flow chart illustrating the main sorting processes to be executed by the first sorting system.
  • FIG. 3 shows an example of a system configuration information table of the first sorting system.
  • FIG. 4 shows an example of a record information table.
  • FIG. 5 is a flow chart illustrating a process of reading sorting target data.
  • FIG. 6 is a flow chart illustrating a process of storing a sorted string in a shared disk 500 to be executed by the first sorting system.
  • FIG. 7 shows an example of a sorted string storing header.
  • FIG. 8 shows an example of sorted string storing information.
  • FIG. 9 is a flow chart illustrating a process of reading a sorted result to be executed by the first sorting system.
  • FIG. 10 is a flow chart illustrating a process of merging and outputting a sorted result to be executed by the first sorting system.
  • FIG. 11 is a block diagram of a second sorting system of the invention.
  • FIG. 12 shows an example of a system configuration information table of the second sorting system.
  • FIG. 13 is a flow chart illustrating an internal sorting process to be executed by the second sorting system.
  • FIG. 14 is a flow chart illustrating a process of storing a sorted string to be executed by the second sorting system.
  • FIG. 15 is a block diagram of a third sorting system of the invention.
  • FIG. 16 is a flow chart illustrating the main sorting processes to be executed by the third sorting system.
  • FIG. 17 shows an example of a system configuration information table of the third sorting system.
  • FIG. 1 is a block diagram of a first sorting system of the invention. As shown in FIG. 1, this system includes input nodes 100 , input local disks 200 , an output node 300 , an output local disk 400 , and shared disks 500 .
  • the input nodes 100 and output node 300 are general computers such as PCs and workstations.
  • the input local disks 200 , output local disk 400 and shared disks 500 are general external storage devices such as a magnetic disk device.
  • This system has a plurality of input nodes 100 and a single output node 300 .
  • the shared disk 500 is provided between the output node 300 and each input node 100 . Namely, there are as many shared disks 500 as input nodes 100 .
  • the input node 100 has an input node CPU 101 , an input information negotiation unit 102 , a data reading unit 103 , an internal sorting unit 104 , a shared disk writing unit 105 , a merge instructing unit 106 , an internal sorting buffer 110 , an IO bus interface (I/F) 600 and a network interface (I/F) 700 .
  • the input local disk 200 is connected via an IO bus 610 to the IO bus I/F 600 of the input node 100 .
  • the input information negotiation unit 102 , data reading unit 103 , internal sorting unit 104 , shared disk writing unit 105 and merge instructing unit 106 are realized, for example, by execution of software stored in an unrepresented memory by the input node CPU 101 .
  • the internal sorting buffer 110 is realized, for example, by using part of a memory which the input CPU 101 uses for the execution of software, or by preparing a dedicated memory.
  • the output node 300 has an output node CPU 301 , an output information negotiation unit 302 , a merge instruction receiving unit 303 , a shared disk reading unit 304 , a merge unit 305 , a sorted result output unit 306 , a merge buffer 310 , an IO bus I/F 600 and a network I/F 700 .
  • the output local disk 400 is connected via an IO bus 610 to the IO bus I/F 600 of the output node 300 .
  • the output information negotiation unit 302 , merge instruction receiving unit 303 , shared disk reading unit 304 , merge unit 305 and sorted result output unit 306 are realized, for example, by execution of software stored in an unrepresented memory by the output node CPU 301 .
  • the merge buffer 310 is realized, for example, by using part of a memory which the output CPU 301 uses for the execution of software, or by preparing a dedicated memory.
  • Each of the plurality of input nodes 100 divides the sorting target data stored in its input local disk 200 and stores the divided data in the internal sorting buffer 110 (S 1020 ).
  • the stored data is internally sorted to generate a sorted result (sorted string) (S 1030 ), and the sorted string is stored in the shared disk 500 (S 1040 ).
  • the input node 100 checks whether all the sorting target data assigned to the input node 100 has been processed and stored (S 1050 ). If there is still sorting target data not processed, the input node 100 repeats the above-described Steps S 1020 to S 1040 . If all the sorting target data assigned to the input node 100 has been completely processed, the input node 100 instructs the output node 300 to execute a merging process (S 1060 ).
  • when the output node 300 receives the merge instruction from all the input nodes 100 (S 1070 ), the output node 300 reads a portion of each of a plurality of sorted strings from the shared disks 500 and writes the read portions in the merge buffer 310 (S 1080 ). The output node 300 compares the keys of records stored in the merge buffer 310 to obtain a whole sorted result of the records (S 1090 ), and stores the sorted result in the output local disk 400 (S 1100 ). The output node 300 checks whether all the sorted strings stored in the shared disks 500 have been completely processed (S 1110 ). If there are still sorted strings not processed, the output node 300 repeats the above-described Steps S 1080 to S 1100 . With the above-described processes executed by the input and output nodes 100 and 300 , the final sorted result of all the sorting target data divisionally stored in the plurality of input local disks 200 can be obtained.
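  • the flow of FIG. 2 can be illustrated by the following toy sketch (hypothetical Python; in-memory lists stand in for the shared disks 500 , and the merge instruction of S 1060 /S 1070 is modeled as a plain return value):

      import heapq

      BUFFER_CAPACITY = 3        # records per internal sorting buffer (toy value)

      def input_node(sorting_target_data, shared_disk):
          """S 1020 -S 1060 : divide the input, internally sort each division,
          and store each sorted string on this node's shared disk."""
          for i in range(0, len(sorting_target_data), BUFFER_CAPACITY):
              buffer = sorting_target_data[i:i + BUFFER_CAPACITY]   # S 1020
              shared_disk.append(sorted(buffer))                    # S 1030 / S 1040
          return "merge instruction"                                # S 1060

      def output_node(shared_disks):
          """S 1070 -S 1110 : merge every sorted string from every shared disk."""
          sorted_strings = [s for disk in shared_disks for s in disk]
          return list(heapq.merge(*sorted_strings))

      disks = [[], []]                    # one shared disk 500 per input node 100
      input_node([5, 1, 4, 9, 2, 7], disks[0])
      input_node([8, 3, 6], disks[1])
      print(output_node(disks))           # [1, 2, 3, 4, 5, 6, 7, 8, 9]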
  • the management information negotiation process generates and negotiates management information necessary for the sorting process, prior to executing the actual sorting process.
  • the input information negotiation unit 102 of each input node 100 and the output information negotiation unit 302 of the output node 300 generate, negotiate and store management information necessary for the sorting process, by using information preset by a user or the like and information exchanged between the input and output information negotiation units 102 and 302 via the network 710 .
  • the management information necessary for the sorting process includes information stored in a system configuration information table which stores data of the system configuration of the sorting system, information stored in a record information table which stores data of the sorting target records, and other information.
  • FIG. 3 shows an example of the system configuration table.
  • a “node name” is a name assigned to each node.
  • An “item” and “contents” at each node indicate the configuration information at each node to be used for the sorting process.
  • the single output node 300 is connected via the network I/F 700 to the network 710 .
  • Each of the shared disks 500 is connected via the IO bus 610 to the IO bus I/F 600 of each of the input nodes 100 and output node 300 .
  • the sorting target data (the data to be sorted) is divisionally stored in the input local disks 200 .
  • the amount of sorting target data stored in each local disk 200 is herein assumed to be larger than the capacity of the internal sorting buffer 110 of each input node 100 .
  • the sorting target data is a collection of records each including one or a plurality of keys to be used for a sorting process.
  • the final sorted result is stored in the output local disk 400 of the output node 300 .
  • FIG. 2 is a flow chart illustrating the main sorting processes to be executed by the sorting system.
  • the input nodes 100 and output node 300 generate and negotiate management information necessary for the sorting process, prior to executing the actual sorting process (S 1010 ).
  • a “network number” in an “input node # 1 ” is an identification number of the input node # 1 in the network 710 (in the example shown in FIG. 3, N 1 ).
  • a “shared disk number for connection to output node” is an identification number of a shared disk connected to the output node, among the disks of the input node # 1 (in the example shown in FIG. 3, D 1 ).
  • an “input local disk number” is an identification number of the disk for storing the sorting target data, among the disks of the input node # 1 (in the example shown in FIG. 3, D 4 ).
  • a “sorting target data file name” is the name of a file of the sorting target data stored in the input local disk (in the example shown in FIG. 3, AAA).
  • a “record name” is the name assigned to the record type of the sorting target data (in the example shown in FIG. 3, XXX).
  • a “shared disk number for connection to input node # 1 ” is an identification number of the disk for the connection to the input node # 1 , among the disks connected to the input nodes (in the example shown in FIG. 3, D 1 ).
  • An “output local disk number” of the output node is an identification number of the disk for storing the sorted result, among the disks of the output node (in the example shown in FIG. 3, D 7 ).
  • FIG. 4 shows an example of the record information table.
  • a “record name” is a name assigned to a record type. This record name (type) is stored in the item “record name” of the system configuration table shown in FIG. 3.
  • a “record size” is a size (number of bytes) of a unit record.
  • a “record item number” is the number of items constituting a record unit.
  • An “item size” in each of “item # 1 ” to “item # 5 ” is a size (number of bytes) of each item.
  • An “item type” is a type of data in the item.
  • a “use as key” indicates whether the item is used as the key for rearranging record units in the sorting process, i.e., whether rearrangement is performed in accordance with the contents in the item.
  • a “key priority order” is a priority order of items among a plurality of items to be used for rearrangement which is performed in the higher priority order.
  • the record used in the record information table shown in FIG. 4 is a fixed length record.
  • One record unit has 60 bytes including 16-byte “item # 1 ”, 32-byte “item # 2 ”, and 4-byte “item # 3 ”, “item # 4 ” and “item # 5 ”.
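  • for illustration, such a fixed-length record and its key priority order might be handled as follows (hypothetical Python sketch; the field sizes follow the FIG. 4 example, but the choice of items # 1 and # 3 as keys with priorities 1 and 2 is an assumption, since the example key settings are not reproduced above):

      import struct

      # 60-byte fixed-length record "XXX" of the FIG. 4 example:
      # item #1: 16 bytes, item #2: 32 bytes, items #3-#5: 4 bytes each.
      RECORD_FORMAT = struct.Struct("16s32s4s4s4s")
      assert RECORD_FORMAT.size == 60

      def sort_key(record_bytes):
          """Assumed key assignment: item #1 has key priority 1 and item #3
          has key priority 2; higher-priority keys are compared first."""
          item1, item2, item3, item4, item5 = RECORD_FORMAT.unpack(record_bytes)
          return (item1, item3)

      def internal_sort(records):
          """Rearrange records in ascending key order (the predetermined sorting rule)."""
          return sorted(records, key=sort_key)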
  • FIG. 5 is a flow chart illustrating the sorting target data reading process.
  • the data reading unit 103 of each input node 100 checks whether the file name of the sorting target data is already acquired (S 1130 ). If not, the file name of the sorting target data is acquired from the system configuration information table (S 1140 ).
  • the amount of unprocessed sorting target data stored in the input local disk 200 is checked (S 1150 ) and it is checked whether the amount of unprocessed data is larger than the capacity of the internal sorting buffer 110 (S 1160 ). If larger, the unprocessed sorting target data corresponding in amount to the capacity of the internal sorting buffer 110 is read from the input local disk 200 and written in the internal sorting buffer 110 (S 1170 ). If the amount of unprocessed data is equal to or smaller than the capacity of the internal sorting buffer 110 , all the unprocessed sorting target data is read from the input local disk 200 and written in the internal sorting buffer 110 (S 1180 ).
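  • the reading loop of FIG. 5 can be sketched as follows (hypothetical Python; a short final read corresponds to the case where the unprocessed amount no longer fills the internal sorting buffer 110 , i.e., Step S 1180 ):

      def buffer_loads(path, buffer_capacity):
          """Yield successive internal-sorting-buffer loads from the input
          local disk; the last load may be smaller than the buffer capacity."""
          with open(path, "rb") as disk:
              while True:
                  chunk = disk.read(buffer_capacity)   # S 1170 / S 1180
                  if not chunk:                        # no unprocessed data left
                      break
                  yield chunk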
  • the internal sorting process is performed for the data stored in the internal sorting buffer 110 .
  • the internal sorting means at each input node 100 internally sorts the sorting target data stored in the internal sorting buffer 110 to rearrange records in accordance with one or a plurality of keys contained in the records.
  • the sorted result of the data stored in the internal sorting buffer 110 is therefore obtained.
  • This rearrangement is performed in accordance with a predetermined sorting rule to rearrange the records in a key order such as an ascending or descending order.
  • a sorted string storing process is performed to store the internally sorted result of the data in the internal sorting buffer 110 , in the shared disk 500 as a sorted string.
  • FIG. 6 is a flow chart illustrating a process of storing a sorted string in the shared disk.
  • the shared disk writing unit 105 of each input node 100 generates a sorted string storing header (S 1200 ), and writes as a sorted string the header and the internally sorted result of the data in the internal sorting buffer 110 into the shared disk 500 (S 1210 ) to then update the sorted string storing information in the shared disk 500 (S 1220 ).
  • FIG. 7 is a diagram showing an example of the sorted string storing header.
  • the sorted string storing header has fields of “output destination node name”, “sorted string storing number”, “generation date/time” and “record number”.
  • the “output destination node name” is the name of the output node 300 connected to the shared disk 500 .
  • the “sorted string storing number” is a serial number sequentially assigned to each sorted string when the sorted string is generated and written.
  • the “generation date/time” is the date and time when each sorted string is generated.
  • the “record number” is the number of records contained in each sorted string.
  • FIG. 8 is a diagram showing an example of sorted string storing information.
  • This sorted string storing information is information on the sorted strings written in the shared disk 500 , and is provided for each shared disk 500 .
  • the sorted string storing information includes an “output destination node name”, an “update date/time” and a “sorted string storing number”.
  • the “output destination node name” is the name of the output node 300 connected to the shared disk 500 which stores the sorted string storing information.
  • the “update date/time” is the date and time when the sorted string storing information is updated when a sorted string is written.
  • the “sorted string storing number” is the number of sorted strings stored in the shared disk 500 for the output destination node.
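  • the sorted string storing header of FIG. 7 and the sorted string storing information of FIG. 8 might be represented as follows (hypothetical Python sketch; the field names follow the text above):

      from dataclasses import dataclass, field
      from datetime import datetime

      @dataclass
      class SortedStringHeader:                  # FIG. 7
          output_destination_node_name: str      # output node 300 of this shared disk
          sorted_string_storing_number: int      # serial number assigned when written
          generation_date_time: datetime         # when the sorted string was generated
          record_number: int                     # records contained in the sorted string

      @dataclass
      class SortedStringStoringInfo:             # FIG. 8, one per shared disk 500
          output_destination_node_name: str
          update_date_time: datetime = field(default_factory=datetime.now)
          sorted_string_storing_number: int = 0  # sorted strings stored on this disk

          def record_new_string(self):
              """Update the storing information when a sorted string is
              written (Step S 1220 )."""
              self.sorted_string_storing_number += 1
              self.update_date_time = datetime.now()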
  • FIG. 9 is a flow chart illustrating a process of reading a sorted string from the shared disk 500 .
  • when the merge instruction receiving unit 303 of the output node 300 receives a merge instruction from an input node 100 via the network 710 (S 1240 ), it checks whether the merge instruction has been received from all the input nodes (S 1250 ). If there is an input node from which the merge instruction has not yet been received, the merge instruction receiving unit 303 stands by until the merge instruction is received from all the input nodes.
  • the shared disk reading unit 304 reads the sorted string storing information in each shared disk 500 connected to the output node 300 (S 1260 ) to check the total number of sorted strings stored in all the shared disks 500 .
  • a quotient is calculated by dividing the capacity of the merge buffer 310 by the total number of sorted strings. This quotient is used as the capacity of the merge buffer 310 to be assigned to each sorted string (S 1270 ).
  • the shared disk reading unit 304 selects one of sorted strings to be processed, and refers to the sorted string storing header of the selected sorted string to check the size of the sorted string. This size is set as an initial value of the unprocessed amount of the sorted string. It is then checked whether the unprocessed amount of the sorted string is larger than the assigned buffer capacity (S 1290 ). If the unprocessed amount of the sorted string is larger than the assigned buffer capacity, the unprocessed sorted string corresponding in amount to the assigned buffer capacity is read from the shared disk 500 and written in the merge buffer 310 (S 1300 ).
  • if the unprocessed amount of the sorted string is equal to or smaller than the assigned buffer capacity, the whole unprocessed sorted string is read from the shared disk 500 and written in the merge buffer 310 (S 1310 ). Reading of the unprocessed sorted string proceeds from the top of the sorted string to its end in the sorted order. The read portion of the sorted string is called a sorted segment.
  • After the sorted segment of one sorted string is stored in the merge buffer 310 , it is checked whether the sorted segments of all the sorted strings to be processed have been read (S 1320 ). If there is a sorted string whose segment has not yet been read, the above-described Steps S 1280 to S 1310 are repeated. In this manner, the sorted segments of the sorted strings stored in all the shared disks 500 are read and written in the merge buffer 310 .
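  • the buffer apportioning and segment reading of Steps S 1270 to S 1310 can be sketched as follows (hypothetical Python; record counts stand in for the byte capacity of the merge buffer 310 ):

      def assigned_capacity(merge_buffer_capacity, total_sorted_strings):
          """S 1270 : each sorted string gets an equal share of the merge buffer."""
          return merge_buffer_capacity // total_sorted_strings

      def read_segment(sorted_string, offset, share):
          """S 1290 -S 1310 : read a share-sized sorted segment, or the whole
          remaining portion when the unprocessed amount is not larger."""
          remaining = len(sorted_string) - offset
          take = share if remaining > share else remaining
          return sorted_string[offset:offset + take], offset + take

      strings = [[1, 4, 7, 9], [2, 3, 8], [5, 6]]
      share = assigned_capacity(merge_buffer_capacity=6, total_sorted_strings=3)
      segment, offset = read_segment(strings[0], 0, share)   # share == 2 -> [1, 4]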
  • FIG. 10 is a flow chart illustrating the merging/sorted result outputting process.
  • the merging unit 305 of the output node 300 first sets a pointer to a record in the sorted segment of each sorted string stored in the merge buffer 310 (S 1330 ). The initial value of the pointer immediately after each sorted segment is read is assumed to be the start record of the sorted segment.
  • the merging unit 305 compares the keys of records designated by pointers of all the sorted segments in the merge buffer 310 in accordance with a predetermined sorting rule, and passes the pointer to the record at the earliest position in accordance with the predetermined sorting rule, to the sorted result outputting unit 306 (S 1340 ).
  • the sorted result outputting unit 306 writes the record corresponding to the received pointer in the output local disk 400 (S 1350 ).
  • the merging unit 305 then increments the pointer of the output record to indicate the next record, and checks whether all the records of the sorted segments stored in the merge buffer 310 have been output (S 1370 ).
  • the merging unit 305 and sorted result outputting unit 306 output the sorted results of the sorted segments stored in the merge buffer 310 , to the output local disk 400 .
  • when the merging unit 305 and sorted result outputting unit 306 have output all the records contained in one sorted segment stored in the merge buffer 310 (S 1370 : Y), the merging unit 305 notifies the shared disk reading unit 304 of the sorted string to which the sorted segment belongs (S 1380 ).
  • the shared disk reading unit 304 checks the unprocessed amount of the sorted string (S 1390 ) to determine whether there is any unprocessed sorted segment in the sorted string (S 1410 ). If there is an unprocessed sorted segment, this segment is read from the shared disk 500 and overwritten on the already output sorted segment in the merge buffer 310 (S 1420 ).
  • the merging unit 305 initializes the pointer for the newly stored sorted segment to indicate the start record in the stored sorted segment (S 1430 ).
  • the merging means and sorted result outputting unit 306 repeat the above-described Steps S 1340 to S 1390 .
  • the unprocessed amount of the sorted string is properly updated when the sorted segment is read from the shared disk 500 and stored in the merge buffer 310 .
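  • the pointer-based merging of FIG. 10, including the refilling of exhausted sorted segments, might look like this hypothetical Python sketch (in-memory lists stand in for the shared disks 500 and the output local disk 400 ):

      def merge_with_refill(sorted_strings, share):
          """Keep one sorted segment per sorted string in the merge buffer,
          advance a pointer past each output record (S 1340 -S 1370 ), and
          refill a segment from its sorted string when exhausted (S 1380 on)."""
          segments = [s[:share] for s in sorted_strings]   # initial sorted segments
          offsets = [len(seg) for seg in segments]         # processed amounts
          pointers = [0] * len(sorted_strings)             # S 1330 : start records
          result = []
          while True:
              best = None                                  # S 1340 : compare keys
              for i, seg in enumerate(segments):
                  if pointers[i] < len(seg):
                      if best is None or seg[pointers[i]] < segments[best][pointers[best]]:
                          best = i
              if best is None:
                  break                                    # every segment fully output
              result.append(segments[best][pointers[best]])    # S 1350 : output record
              pointers[best] += 1                              # point to the next record
              if pointers[best] == len(segments[best]):        # segment exhausted
                  string = sorted_strings[best]
                  if offsets[best] < len(string):              # S 1410 -S 1430 : refill
                      segments[best] = string[offsets[best]:offsets[best] + share]
                      offsets[best] += len(segments[best])
                      pointers[best] = 0
          return result

      print(merge_with_refill([[1, 4, 7, 9], [2, 3, 8], [5, 6]], share=2))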
  • FIG. 11 is a block diagram of a second sorting system of the invention.
  • Different points from the first sorting system reside in that there are a plurality of output nodes 300 (and output local disks 400 ) and that each input node 100 has an output node determining unit 107 .
  • a shared disk 500 is connected via an IO bus 610 to an IO bus I/F 600 of each of the input node 100 and output node 300 .
  • One shared disk 500 is provided between each input node 100 and each output node 300 . Namely, in this system there are (i × j) shared disks 500 , where i is the number of input nodes and j is the number of output nodes.
  • Other structures are similar to the first sorting system.
  • each output node obtains the sorted result of all input data, the sorted result having been distributed in accordance with a predetermined distribution rule.
  • the output node determining unit 107 determines the output node 300 to which records to be internally sorted are output.
  • the output node determining unit 107 can be realized, for example, by execution of software stored in an unrepresented memory by an input node CPU 101 .
  • the management information negotiation process is executed to generate and negotiate management information necessary for the sorting process, prior to executing the actual sorting process.
  • This process is generally similar to that described with the first sorting system, and the input information negotiation unit 102 of each input node 100 and the output information negotiation unit 302 of each output node 300 generate, negotiate and store management information necessary for the sorting process.
  • FIG. 12 shows an example of a system configuration table to be used by the second sorting system.
  • the system configuration table has the structure similar to that shown in FIG. 3, except that each input node has a plurality of “shared disk numbers for connection to output nodes” and a plurality of output node information pieces are provided.
  • FIG. 13 is a flow chart illustrating the internal sorting process to be executed by the second sorting system.
  • the output node determining unit 107 determines the output node to which each record stored in the internal sorting buffer 110 is output, in accordance with a predetermined node decision rule using a key contained in each record or other information (S 2010 ).
  • the predetermined node decision rule is a distribution rule of distributing records (sorting target data) to a plurality of output nodes 300 .
  • for example, records having consecutive key values may be cyclically distributed to the output nodes 300 , or a record having a specific key value may be assigned to the output node 300 determined by substituting the specific key value into a calculation formula, by using a table with the specific value as a search key, or the like.
  • the internal sorting unit 104 classifies the records stored in the internal sorting buffer into record groups each corresponding to each of the output nodes determined by the output node determining unit 107 (S 2020 ), and rearranges the order of records in each classified record group in accordance with a predetermined sorting rule (S 2030 ).
  • the data stored in the internal sorting buffer 110 is classified into record groups for respective output nodes 300 determined in accordance with the predetermined decision rule, and classified record groups rearranged in accordance with the predetermined sorting rule are obtained.
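  • the node decision rules mentioned above (cyclic distribution, a calculation formula, or a lookup table) could be sketched as follows (hypothetical Python; the modulo formula is an illustrative assumption, not the patent's formula):

      def cyclic_rule(record_index, num_output_nodes):
          """Distribute records with consecutive key values cyclically."""
          return record_index % num_output_nodes

      def formula_rule(key_value, num_output_nodes):
          """Assign a record by substituting its key value into a formula."""
          return hash(key_value) % num_output_nodes

      def classify_and_sort(records, key, num_output_nodes):
          """S 2010 -S 2030 : classify the buffered records into one group per
          output node, then rearrange each group by the sorting rule."""
          groups = [[] for _ in range(num_output_nodes)]
          for rec in records:
              groups[formula_rule(key(rec), num_output_nodes)].append(rec)
          return [sorted(g, key=key) for g in groups]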
  • FIG. 14 is a flow chart illustrating this process of storing a sorted string in the shared disk 500 .
  • the shared disk writing unit 105 of each input node 100 selects the shared disk 500 connected to the output node 300 to which the internally sorted result stored in the internal sorting buffer 110 is output as the sorted string (S 2040 ).
  • similarly to the first sorting system, the shared disk writing unit 105 generates a sorted string storing header for each selected shared disk 500 (S 2050 ), writes the header and the internally sorted string into each selected shared disk 500 (S 2060 ), and then updates the sorted string storing information in each selected shared disk 500 (S 2070 ).
  • the process of reading a sorted string from the shared disk 500 and the merge/sort result output process are executed.
  • the sorting process is completed.
  • for the sorting target data stored in the plurality of input local disks 200 , the sorted result, distributed to the plurality of output nodes 300 in accordance with the predetermined decision rule, can therefore be obtained at the output local disk 400 of each output node 300 .
  • although the sorting target data is stored in the input local disk in the above description, it may be stored in other locations.
  • the input node 100 may read via the network 710 the sorting target data stored in a disk of an optional computer connected to the network, i.e., in a remote disk.
  • the output node 300 may output the sorted result to a remote disk via the network 710 .
  • the sorting target data is stored in the input local disk 200 as a file discriminated by the file name.
  • the sorting target data may be read from the input local disk 200 by the input node 100 by directly designating a logical address or physical address of the input local disk.
  • One input node 100 may store the sorting target data as two or more files.
  • the shared disk 500 is connected to and accessed by both the input node 100 and output node 300 .
  • the input nodes 100 write sorted strings to the shared disk 500 before issuing the merge instruction, and the output node 300 reads the sorted segments from the shared disk 500 only thereafter. Read/write operations on the shared disk 500 are therefore not performed at the same time, and an exclusive control between the nodes connected to the shared disk is not necessarily required. Even if the exclusive control is not performed, there is no practical problem in executing the above processes.
  • FIG. 15 is a block diagram of a third sorting system of the invention. This system includes nodes 800 , local disks 900 and shared disks 500 .
  • Each node 800 has a CPU 801 , a buffer 810 , an input module 820 , an output module 830 , an IO bus I/F 600 and a network interface I/F 700 .
  • the input module 820 has an input information negotiation unit 102 , a data reading unit 103 , an internal sorting unit 104 , a shared disk writing unit 105 , a merge instructing unit 106 and an output node determining unit 107 .
  • the output module 830 has an output information negotiation unit 302 , a merge instruction receiving unit 303 , a shared disk reading unit 304 , a merging unit 305 and a sorted result outputting unit 306 .
  • the local disk 900 is connected via an IO bus 610 to the IO bus I/F 600 .
  • the input information negotiation unit 102 , data reading unit 103 , internal sorting unit 104 , shared disk writing unit 105 , merge instructing unit 106 and output node determining unit 107 as well as the output information negotiation unit 302 , merge instruction receiving unit 303 , shared disk reading unit 304 , merging unit 305 and sorted result outputting unit 306 are realized, for example, by execution of software stored in an unrepresented memory by the CPU 801 .
  • the buffer 810 is realized, for example, by using part of a memory which CPU 801 uses for the execution of software, or by preparing a dedicated memory.
  • there are a plurality of nodes 800 (hereinafter n nodes 800 ).
  • the plurality of nodes 800 are connected via respective network I/F 700 to the network 710 .
  • the plurality of nodes 800 are interconnected by the network 710 .
  • Each shared disk 500 is provided between nodes 800 and connected to the IO bus I/F 600 via the IO bus 610 .
  • Each node 800 is connected to (n-1) shared disks 500 .
  • the shared disk 500 has at least two storing regions A 510 and B 520 .
  • some or all of the nodes 800 (hereinafter i nodes 800 , and called input nodes) store the sorting target data in their local disks 900
  • some or all of the nodes 800 (hereinafter j nodes 800 , and called output nodes) store some or all of the sorted result in their local disks 900
  • i is an integer of 2 or larger and n or smaller
  • j is an integer of 1 or larger and n or smaller
  • (i+j) is n or larger.
  • the same node 800 may be included in both an aggregation of input nodes and an aggregation of output nodes.
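  • for illustration, this topology can be enumerated as follows (hypothetical Python): one shared disk 500 per unordered pair of nodes yields n(n-1)/2 shared disks in total, and each node is connected to (n-1) of them:

      from itertools import combinations

      def shared_disk_topology(n):
          """One shared disk, with its two regions A 510 and B 520, per pair."""
          return {pair: {"region_A": [], "region_B": []}
                  for pair in combinations(range(n), 2)}

      disks = shared_disk_topology(4)
      assert len(disks) == 4 * 3 // 2                     # 6 shared disks for n = 4
      assert sum(1 for pair in disks if 0 in pair) == 3   # node 0 sees n-1 disks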
  • Each local disk 900 of the input node has a self-node storage region 910 and a sorting target data storage region 920 for storing sorting target data.
  • Each local disk 900 of the output node has a self-node storage region 910 and a sorting result storage region 930 for storing sorted result.
  • Each local disk 900 of a node serving as both the input and output nodes has a self-node storage region 910 , a sorting target data storage region 920 and a sorted result storage region 930 .
  • the sorting target data is distributed and assigned to a plurality of input nodes and stored in the sorting target data storage regions 920 of the local disks 900 of the input nodes.
  • the amount of sorting target data stored in the local disk is larger than the capacity of the buffer 810 of the input node.
  • the sorted result is stored in the sorted result storage region 930 of the local disk 900 of the output node.
  • FIG. 16 is a flow chart illustrating the main sorting processes to be executed by the sorting system.
  • the input module 820 and output module 830 of each node 800 generate and negotiate management information necessary for the sorting process, prior to executing the actual sorting process (S 3010 ).
  • the input nodes for storing sorting target data in the local disks and the output nodes for storing some or all of the sorted results are discriminated.
  • the input module 820 of each input node divides the sorting target data stored in the local disk 900 and stores the division data in the buffer 810 (S 3020 ).
  • the data read in the buffer 810 is internally sorted to generate a sorted result (sorted string) and the node 800 to which the sorted result is output is determined in accordance with a predetermined decision rule (if there are a plurality of output nodes) (S 3030 ), and the sorted string is stored in the shared disk 500 connected to the determined output node 800 or stored in its local disk 900 (S 3040 ).
  • the input module 820 checks whether all the sorting target data stored in the local disk 900 has been processed (S 3050 ).
  • Steps S 3020 to S 3040 are repeated. If all the sorting target data has been completely processed, the input module 820 of the input node instructs the output module 830 of the output node to execute a merging process (S 3060 ).
  • the output module 830 of the output node reads a portion (sorted segment) of each of a plurality of sorted strings from i shared disks 500 and local disks 900 and writes it in the buffer 810 (S 3080 ).
  • the output module 830 compares the keys of records stored in the buffer 810 to obtain a whole sorted result of the records (S 3090 ), and stores the sorted result in the local disk 900 (S 3100 ).
  • the output module 830 checks whether all the sorted strings have been completely processed (S 3110 ). If there are still sorted strings not processed, the output module 830 repeats the above-described Steps S 3080 to S 3100 . With the above-described processes executed by each node 800 , the final sorted result of all the sorting target data divisionally stored in the i local disks 900 can be obtained, distributed over the local disks of the j nodes 800 .
  • the management information negotiation process is executed prior to executing the actual sorting process.
  • the input information negotiation unit 102 and output information negotiation unit 302 of each node 800 generate, negotiate and store management information necessary for the sorting process, by using information preset by a user and information exchanged between the input and output information negotiation units 102 and 302 via the network 710 .
  • the management information necessary for the sorting process includes information stored in a system configuration information table which stores data of the system configuration of the sorting system, information stored in a record information table which stores data of the sorting target records, and other information.
  • the input information negotiation unit 102 and output information negotiation unit 302 assign the regions A 510 and B 520 of each shared disk 500 provided between the nodes 800 with one of the two data transfer directions between nodes. For example, of the two nodes 800 connected to one shared disk 500 , a first node 800 writes data in the region A 510 and reads data from the region B 520 , whereas a second node 800 writes data in the region B 520 and reads data from the region A 510 .
  • the region A 510 of the shared disk 500 is used for the data transfer region from the first to second nodes 800 and the region B 520 is used for the data transfer region from the second to first nodes 800 , so that each of the two regions is stationarily assigned as a unidirectional data path.
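  • this stationary region assignment might be sketched as follows (hypothetical Python; the convention that the lower-numbered node of a pair writes region A 510 is an assumption):

      def write_region(own_node, other_node):
          """Region used when own_node writes a sorted string for other_node.
          Assumed convention: the pair's lower-numbered node writes region A
          and reads region B; the higher-numbered node does the opposite."""
          return "region_A" if own_node < other_node else "region_B"

      def read_region(own_node, other_node):
          """Region own_node reads: the one the other node writes, so reads
          and writes of a region never occur at the same time."""
          return write_region(other_node, own_node)

      assert write_region(1, 2) == "region_A"
      assert read_region(2, 1) == "region_A"   # node 2 reads what node 1 writes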
  • the input and output information negotiation units 102 and 302 discriminate between input nodes for storing sorting target data in the local disks 900 and output nodes for storing sorted results in the local disks, among the plurality of nodes 800 .
  • FIG. 17 shows an example of a system configuration table of the third sorting system.
  • a “node name” is a name assigned to each node.
  • An “item” and “contents” at each node indicate the configuration information at each node to be used for the sorting process.
  • a “network number” in an “input node # 1 ” is an identification number of the input node # 1 in the network 710 .
  • Each item in the “input node # 1 ” indicating the shared disk shared with another node stores: an identification number of the shared disk shared with the other node; information (e.g., region name, region size and the like, this being applied also to the following description) of the region in which the sorted string to the other node is stored; and information of the region in which the sorted string from the other node is stored (in the example shown in FIG. 17, region A and region B).
  • a local disk item stores: an identification number of the local disk for its own node among disks of the node # 1 ; information of the region in which the sorted string from another node is stored; and information of the region in which the sorted string to another node is stored. Both the regions may be the same region of the local disk.
  • a “sorting target data” item stores: information of whether the sorting target data is stored in the local disk of the node (presence/absence of sorting target data); information of the storage region if the sorting target data is stored in the local disk; and the file name and record name of the sorting target data.
  • a “possibility of storing sorted result” item stores: information of a presence/absence of a possibility of acquiring the sorted result and storing it in the local disk of the node; and information of the storage region if the sorted result is stored in the local disk.
  • a sorting target data reading process is executed. Specifically, at the input node which stores the sorting target data in the local disk 900 , the data reading unit 103 reads the sorting target data from the local disk 900 and stores it in the buffer 810 , by the method similar to that shown in FIG. 5.
  • the internal sorting process will be described.
  • the output node for each record stored in the buffer 810 is determined in accordance with the predetermined node decision rule.
  • records in the internal sorting buffer are classified into record groups corresponding to determined output nodes to rearrange the records in each classified record group in accordance with the predetermined sorting rule. If only one output node is determined, records are not classified and only the internal sorting process is performed.
  • the data read into the buffer 810 is classified into record groups corresponding to the output nodes determined in accordance with the predetermined node decision rule, and records in each classified record group are rearranged in accordance with the predetermined sorting rule to obtain internally sorted results.
  • the shared disk writing unit 105 of the input node uses the internally sorted result in the buffer 810 of the records in each classified record group corresponding to the output node determined by the predetermined node decision rule, as the sorted result for the determined output node, and selects the region for storing the sorted string from the regions A 510 and B 520 of the shared disk 500 and the local disk 900 .
  • the sorted result as well as the sorted result storing header is written in the selected region to update the sorted string storing information of the regions A 510 and B 520 of the shared disk 500 and the self-node storage region of the local disk 900 . More specifically, if the output destination is another node, the sorted string is stored in the shared disk 500 , whereas if the output destination is its own node, the sorted string is stored in the self-node storage region 910 of the local disk 900 .
  • the shared disk writing unit 105 checks which one of the regions A 510 and B 520 of the shared disk 500 is assigned as the data transfer path from its own node to another node, for example, by referring to the system configuration information table. Thereafter, the sorted string is stored in one of the two regions allocated as the data transfer path from its own node to another node.
  • the merge instruction receiving unit 303 of the output module 830 receives a merge instruction from the input module 820 of each input node via the network 710 .
  • upon reception of the merge instruction from all the input nodes, the shared disk reading unit 304 reads the sorted string storing information from the i shared disks 500 and local disks 900 connected to its own node, to thereby check the total number of sorted strings stored in the shared disks 500 and local disks 900 .
  • a quotient is calculated by dividing the capacity of the buffer 810 by the total number of sorted strings. This quotient is used as the capacity of the buffer 810 to be assigned to each sorted string.
  • the shared disk reading unit 304 refers to the sorted string storing header of each sorted string to check the size thereof. This size is set as an initial value of the unprocessed amount of the sorted string. It is then checked whether the unprocessed amount of the sorted string is larger than the assigned buffer capacity. In accordance with this check result, a sorted segment having a proper data amount is read from the shared disk 500 and local disk 900 and written in the buffer 810 . Reading the unprocessed sorted string is performed starting from the top of the sorted string to the last in the sorted order.
  • the shared disk reading unit 304 thus stores, in the buffer 810 , the sorted segments of all the sorted strings held in the shared disks 500 and local disks 900 .
  • the shared disk reading unit 304 checks which one of the regions A 510 and B 520 of the shared disk 500 is assigned as the data transfer path from another node to its own node, for example, by referring to the system configuration information table. Thereafter, the sorted string storing information and sorted segments are read from the region allocated as the data transfer path from the other node to its own node.
  • the merging and sorted result outputting process to be executed by each output node is performed by the method similar to that shown in FIG. 10.
  • the merging unit 305 of the output node first sets a pointer to a record in the sorted segment of each sorted string stored in the buffer 810 (S 1330 ), compares the records designated by the pointers of all the sorted segments in the buffer 810 in accordance with a predetermined sorting rule, and passes the pointer of the record at the earliest position according to the predetermined sorting rule to the sorted result outputting unit 306 .
  • the sorted result outputting unit 306 writes the record corresponding to the received pointer in the sorted result storing region of the local disk 900 .
  • the merging unit 305 increments the pointer for the output record by “1” to indicate the next record, to thereafter repeat the record comparison and output record decision. In this manner, the merging unit 305 and sorted result outputting unit 306 output the sorted results of the sorted segments stored in the buffer 810 , to the sorted result storage region 930 of the local disk 900 .
  • the merging unit 305 notifies the shared disk reading unit 304 of the sorted string to which the sorted segment belongs. Upon reception of this notice, the shared disk reading unit 304 checks whether there is any unprocessed sorted segment in the sorted string. If there is an unprocessed sorted segment, this segment is read from the shared disk 500 or local disk 900 and overwritten on the already output sorted segment in the buffer 810 . The merging unit 305 initializes the pointer for the newly stored sorted segment to indicate the start record in the stored sorted segment. The merging unit 305 and sorted result outputting unit 306 repeat the above-described merging process.
  • when a sorted string has no unprocessed sorted segment left, the shared disk reading unit 304 notifies the merging unit 305 of a sorted string process completion, so that the merging unit 305 discards the sorted segments of that sorted string, excluding them from the record comparison, and thereafter repeats the merging process for the remaining sorted strings.
  • the output node repeats the sorted string reading process and the merging and sorted result outputting process until all the sorted strings stored in the shared disk 500 and local disk 900 connected to the output node are processed. With the above processes, all the sorted strings stored in the shared disk 500 and local disk 900 connected to each output node are merged, and the sorting process for all the sorting target data stored in the sorting target data storage region 920 of the local disk 900 of the input node is completed. The sorted result distributed to each output node is therefore obtained in the sorted result storage region 930 of each local disk 900 connected to the output node.
  • although the sorting target data is stored in the sorting target data storage region 920 of the local disk 900 in the above description, it may be stored in other locations.
  • the input node may read via the network 710 the sorting target data stored in a remote disk.
  • the output node may output the sorted result to a remote disk via the network 710 .
  • in the above description, each local disk 900 has the self-node storage region 910 , sorting target data storage region 920 , sorted result storage region 930 and the like.
  • alternatively, each node 800 may have a plurality of local disks 900 , each separately providing the self-node storage region 910 , sorting target data storage region 920 or sorted result storage region 930 .
  • Although each shared disk 500 has the two regions A 510 and B 520 in this embodiment, two shared disks 500 may instead be provided between a pair of nodes 800, with each of the two shared disks 500 stationarily assigned one unidirectional data path of the two-way data path.
  • In this embodiment, the sorting target data is stored in the sorting target data storage region 920 of each local disk 900 as one file identified by its file name.
  • Alternatively, the sorting target data may be read by the node 800 from the local disk 900 by directly designating a logical or physical address on the local disk 900.
  • A single node 800 may also store two or more files of sorting target data.
  • In this embodiment, each shared disk 500 is connected to and accessed by two nodes 800.
  • The regions A 510 and B 520 are each stationarily assigned one unidirectional data transfer path of the two-way data path between the two nodes 800 connected to the shared disk 500. Therefore, the input module 820 of the input node stores (writes) a sorted string in one of the two regions of the shared disk 500.
  • The output module 830 of the output node reads sorted segments from that region only after the writes to it are complete, so that read and write operations on the same region are not performed at the same time. It is therefore not necessarily required to perform exclusive control between the nodes connected to the shared disk; even if exclusive control is not performed, there is no practical problem in executing the above processes.
  • In this embodiment, both the input module 820 and the output module 830 of each node 800 use the buffer 810.
  • The reading, internal sorting and sorted string storing process and the sorted string reading, merging and sorted result outputting process are separated in time at each node 800 by synchronizing the input and output modules 820 and 830 upon reception of the merge instruction notice. Therefore, the buffer 810 at each node 800 is never used by the input and output modules 820 and 830 at the same time.
  • The buffer 810 can therefore be shared by both the input and output modules 820 and 830 without preparing separate buffer regions for them.
  • In this embodiment, the amounts of sorting target data, sorted strings, sorted segments and the like to be processed are managed in units of records. This record unit is used for storing the sorting target data in the internal sorting buffer 110 and buffer 810, reading the sorted strings from the shared disk 500 and local disk 900, and storing the sorted segments in the merge buffer 310 and buffer 810.
  • Although individual disks are used in the embodiments, a plurality of physical or logical disks constituting a disk array with a plurality of interfaces may also be used as the disks of the embodiments.
  • A disk array manages a plurality of disks operating in parallel and constitutes a disk subsystem. By using a disk array, management of the disks becomes easy and a higher speed system can be realized.
  • The sorting process, sorting method and system of each embodiment described above are applicable to the generation and update (loading) of the databases of a database system.
  • In such loading, the original data of the databases is sorted before being loaded. If the amount of original data is large, it takes a long time to sort and load it, so that a long time passes until the database system becomes usable and user convenience is degraded considerably.
  • By using the above sorting process, sorting method and system, it becomes possible to shorten the sorting and loading times and considerably improve the time efficiency and user availability of the database system.
  • The sorting process, sorting method and system are particularly suitable for use with a large scale database dealing with a large amount of data, such as a data warehouse, OLAP (Online Analytical Processing) or decision support system, or with a parallel database system processing data in parallel by using a plurality of nodes.
  • According to the present invention, it is possible to shorten the time required for the merging process when a plurality of computers of a computer system execute the merging process.
  • The sorting process can therefore be executed at high speed. Furthermore, it is possible to suppress the influence of delay, hindrance and the like upon other tasks executable by the computer system.
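  • As a sketch of the stationary region assignment described above (a minimal illustration; the table layout and helper names are hypothetical, since the embodiment only requires that each of the regions A 510 and B 520 be assigned one transfer direction), a node could select its write and read regions as follows:

    # Stationary assignment of the two regions of one shared disk 500:
    # region A carries data from node1 to node2, region B the reverse.
    REGION_ASSIGNMENT = {            # (from_node, to_node) -> region
        ("node1", "node2"): "A510",
        ("node2", "node1"): "B520",
    }

    def write_region(own_node, other_node):
        # The input module 820 writes sorted strings to the region
        # assigned as the path from its own node to the other node.
        return REGION_ASSIGNMENT[(own_node, other_node)]

    def read_region(own_node, other_node):
        # The output module 830 reads sorted segments from the region
        # assigned as the path from the other node to its own node;
        # each direction has its own region, so transfers in the two
        # directions never target the same region.
        return REGION_ASSIGNMENT[(other_node, own_node)]

    print(write_region("node1", "node2"), read_region("node1", "node2"))
    # -> A510 B520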

Abstract

A sorting system includes a plurality of input nodes, each of which internally sorts sorting target data distributed to and stored in its input local disk. Each internally sorted result is stored as one of a plurality of sorted strings in a shared disk connected between the input node and an output node. Upon reception of a merge instruction from all the input nodes, the output node reads the sorted strings from the shared disks, merges them, and outputs a whole sorted result of all the input data to an output local disk. In a process of obtaining a whole sorted result of all input data through parallel processing by a computer system constituted of a plurality of computers (nodes), the time to sort the input data can thus be shortened.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention [0001]
  • The present invention relates to a sorting process to be executed by a computer system, and more particularly to a sorting process to be executed by a plurality of computers. [0002]
  • 2. Description of the Related Art [0003]
  • A sorting process is one of fundamental types of information processing to be executed by a computer system. A most common sorting process is to rearrange a plurality of records each including one or a plurality of keys, in a certain order (e.g., ascending or descending order) in accordance with the keys. [0004]
  • In a sorting process for a relatively small amount of input data (collection of records), a computer writes given input data in a buffer of a computer memory, refers to and compares keys of respective records to rearrange the records and obtain a sorted result. Such a sorting process which uses only the buffer of a computer memory as data storage region is called internal sorting. [0005]
  • In another sorting process for a relatively large amount of input data as compared to the capacity of a buffer, since all the input data cannot be written in the buffer, the computer divides the input data and writes each division data into the buffer to perform internal sorting and obtain the sorted result (hereinafter called a sorted string) which is stored in an external storage. Such input data write, internal sorting and sorted string storage are repeated as many times as the number of input data divisions. As a result, sorted strings corresponding in number to the number of data divisions are stored in the external storage. [0006]
  • Next, the computer reads each portion of all the sorted strings from the external storage and writes the portions in the buffer. The computer compares the keys of the records stored in the buffer to obtain a sorted result of each portion of all the sorted strings. Such a process for each portion of all the sorted strings is executed from the start to end of each sorted string to thereby obtain a final sorted result of all the input data. Such a process of obtaining the final sorted result from each sorted result is called merging. Such a sorting process using the computer buffer and external storage as data storage region is called external sorting. The external storage may be a magnetic disk, a magnetic tape or the like. [0007]
  • Parallel processing is one type of high speed information processing to be executed by a computer system: it shortens the processing time by preparing a plurality of resources necessary for the information processing and performing a plurality of tasks at the same time. As an example of a parallelized external sorting process, JP-A-8-272545 discloses techniques of improving parallel processing by using a disk array constituted of a plurality of magnetic disks as an external storage and devising the storage locations of data in the disk array. [0008]
  • In another type of a parallelized external sorting process, a plurality of computers may be interconnected by a network to constitute one computer system and a large amount of input data may be sorted in parallel by a plurality of computers. More specifically, input data is distributed and assigned to some or all computers (hereinafter called input nodes) among a plurality of computers, and each input node externally sorts the assigned input data. The externally sorted result at each input node is transferred via the network to one computer (hereinafter called an output node) among a plurality of computers. The output node merges the sorted results transferred from the input nodes in a manner similar to dealing with externally sorted strings, to thereby obtain the final sorted result of all the input data. [0009]
  • However, while the sorting process is executed in parallel by a computer system constituted of a plurality of computers, the merging process is required to be executed both at the input nodes and output node. Therefore, a time required for the merging process at an input node is added to a time required for the sorting process at the input node. The merging process at an input node may delay or hinder another task at the input node. [0010]
  • If a network interconnecting computers cannot operate at high speed, the time to transfer, via such a network, the externally sorted result at an input node to the output node may pose a problem. For example, if the network is a LAN having a transfer speed of 100 M bits/sec such as 100 Base-T, the data transfer speed is about 12.5 M bytes/sec (100 M bits/sec ÷ 8) or lower. A transfer over such a network therefore takes roughly three times as long (40 ÷ 12.5 ≈ 3.2) as one over an ultra wide SCSI bus, a commonly used I/O specification with a transfer speed of about 40 M bytes/sec. If a plurality of computers are interconnected by a network operating at not so high a speed, the ratio of the data transfer time to the sorting process time may become large. [0011]
  • SUMMARY OF THE INVENTION
  • It is an object of the present invention to provide a sorting system and method capable of executing parallel sorting processes at a plurality of computers in high speed by shortening the sorting process time. [0012]
  • A first sorting system of this invention comprises: a plurality of input nodes; one output node; and a shared external storage unit connected between each of the input nodes and the output node. Each of the input nodes comprises: a first buffer; means for storing sorting target data in the first buffer; means for internally sorting data in the first buffer in accordance with a predetermined sorting rule; and means for storing as a sorted string an internally sorted result in the shared external storage unit. The output node comprises: a second buffer; means for storing, in the second buffer, the sorted string stored in the shared external storage unit; means for merging a plurality of sorted strings stored in the second buffer in accordance with the predetermined sorting rule; and means for outputting a merged result as a sorted result. [0013]
  • A second sorting system of the invention comprises: a plurality of input nodes; a plurality of output nodes; and a shared external storage unit connected between each of the input nodes and each of the output nodes. Each of the input nodes comprises: a first buffer; means for storing sorting target data in the first buffer; means for internally sorting data in the first buffer in accordance with a predetermined sorting rule; means for classifying data in the first buffer in accordance with a predetermined classification rule; and means for storing as a sorted string an internally sorted result in the shared external storage unit in accordance with the predetermined classification rule. Each of the output nodes comprises: a second buffer; means for storing, in the second buffer, the sorted string stored in the shared external storage unit; means for merging a plurality of sorted strings stored in the second buffer in accordance with the predetermined sorting rule; and means for outputting a merged result as a sorted result. [0014]
  • In the first and second sorting systems, the input node may further comprise means for notifying a merge instruction to the output node, and the output node may further comprise means for receiving the merge instruction. [0015]
  • A third sorting system of the invention comprises: a plurality of nodes; a dedicated external storage unit provided for each of the plurality of nodes; and a shared external storage unit provided between each pair of nodes of the plurality of nodes. At least one node among the plurality of nodes comprises: a first buffer; means for storing sorting target data in the first buffer; means for internally sorting data in the first buffer in accordance with a predetermined sorting rule; means for storing as a sorted string an internally sorted result in the dedicated external storage unit; a second buffer; means for storing, in the second buffer, the sorted string stored in the shared external storage unit and the dedicated external storage unit; means for merging a plurality of sorted strings stored in the second buffer in accordance with the predetermined sorting rule; and means for outputting a merged result as a sorted result. [0016]
  • A fourth sorting system of the invention comprises: a plurality of nodes; a dedicated external storage unit provided for each of the plurality of nodes; and a shared external storage unit provided between each pair of nodes of the plurality of nodes. At least one node among the plurality of nodes comprises: a first buffer; means for storing sorting target data in the first buffer; means for internally sorting data in the first buffer in accordance with a predetermined sorting rule; means for classifying data in the first buffer in accordance with a predetermined classification rule; means for storing as a sorted string an internally sorted result in the dedicated external storage unit in accordance with the predetermined classification rule; a second buffer; means for storing, in the second buffer, the sorted string stored in the shared external storage unit and the dedicated external storage unit; means for merging a plurality of sorted strings stored in the second buffer in accordance with the predetermined sorting rule; and means for outputting a merged result as a sorted result. [0017]
  • In the third and fourth sorting systems, the first and second buffers may be the same buffer. At least one of the plurality of nodes may further comprise means for notifying another node of a merge instruction and means for receiving the merge instruction from the other node. [0018]
  • A first sorting method of the invention for a sorting system having a plurality of input nodes, one output node, and a shared external storage unit connected between each of the input nodes and the output node, comprises the steps of: storing sorting target data in the input node and internally sorting the sorting target data in accordance with a predetermined sorting rule; storing as a sorted string an internally sorted result in the shared external storage unit; and reading the sorted string from the shared external storage unit, storing the sorted string in the output node, and merging the sorted string in accordance with the predetermined sorting rule. [0019]
  • A second sorting method of the invention for a sorting system having a plurality of input nodes, a plurality of output nodes, and a shared external storage unit connected between each of the input nodes and each of the output nodes, comprises the steps of: storing sorting target data in each of the input nodes; classifying and internally sorting the read sorting target data in accordance with a predetermined classification rule and a predetermined sorting rule; distributing and storing a classified and sorted result in the shared external storage units shared with the output nodes in accordance with the predetermined classification rule; performing a process from the storing step to the distributing and storing step for all sorting target data; and thereafter reading the sorted string from the shared external storage units shared with the input nodes and storing the sorted string in each output node, and merging the sorted string in accordance with the predetermined sorting rule. [0020]
  • A third sorting method of the invention for a sorting system having a plurality of nodes, a dedicated external storage unit provided for each of the plurality of nodes, and a shared external storage unit provided between each pair of nodes of the plurality of nodes, comprises the steps of: storing sorting target data in each of the input nodes and internally sorting the sorting target data in accordance with a predetermined sorting rule; storing, as a sorted string, an internally sorted result in the dedicated or shared external storage unit; performing the internal sorting step and the sorted string storing step for all sorting target data; and thereafter reading each sorted string from the dedicated and shared external storage units at one of the plurality of nodes and merging each sorted string in accordance with the predetermined sorting rule. [0021]
  • A fourth sorting method of the invention for a sorting system having a plurality of nodes, a dedicated external storage unit provided for each of the plurality of nodes, and a shared external storage unit provided between each pair of nodes of the plurality of nodes, comprises the steps of: storing sorting target data in at least one of the plurality of nodes; classifying and internally sorting the read sorting target data in accordance with a predetermined classification rule and a predetermined sorting rule; distributing and storing as a sorted string a classified and internally sorted result in the dedicated or shared external storage unit in accordance with the classification rule; performing a process from the storing step to the distributing and storing step for all sorting target data; and thereafter reading each sorted string from the dedicated and shared external storage units at one of the plurality of nodes and merging each sorted string in accordance with the predetermined sorting rule. [0022]
  • In the third and fourth sorting methods, at least one node may have the same buffer to be used for storing the sorting target data and storing the sorted string. [0023]
  • The input node, output node, and other nodes correspond, for example, to general computers. The dedicated and shared external storage units correspond, for example, to a magnetic disk drive. The first and second buffers and other buffers may use, for example, a portion of a main storage of each computer.[0024]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram of a first sorting system of the invention. [0025]
  • FIG. 2 is a flow chart illustrating the main sorting processes to be executed by the first sorting system. [0026]
  • FIG. 3 shows an example of a system configuration information table of the first sorting system. [0027]
  • FIG. 4 shows an example of a record information table. [0028]
  • FIG. 5 is a flow chart illustrating a process of reading sorting target data. [0029]
  • FIG. 6 is a flow chart illustrating a process of storing a sorted string in a shared [0030] disk 500 to be executed by the first sorting system.
  • FIG. 7 shows an example of a sorted string storing header. [0031]
  • FIG. 8 shows an example of sorted string storing information. [0032]
  • FIG. 9 is a flow chart illustrating a process of reading a sorted result to be executed by the first sorting system. [0033]
  • FIG. 10 is a flow chart illustrating a process of merging and outputting a sorted result to be executed by the first sorting system. [0034]
  • FIG. 11 is a block diagram of a second sorting system of the invention. [0035]
  • FIG. 12 shows an example of a system configuration information table of the second sorting system. [0036]
  • FIG. 13 is a flow chart illustrating an internal sorting process to be executed by the second sorting system. [0037]
  • FIG. 14 is a flow chart illustrating a process of storing a sorted string to be executed by the second sorting system. [0038]
  • FIG. 15 is a block diagram of a third sorting system of the invention. [0039]
  • FIG. 16 is a flow chart illustrating the main sorting processes to be executed by the third sorting system. [0040]
  • FIG. 17 shows an example of a system configuration information table of the third sorting system.[0041]
  • DETAILED DESCRIPTION OF THE EMBODIMENTS
  • Embodiments of the invention will be described in detail with reference to the accompanying drawings. [0042]
  • (1st Embodiment) [0043]
  • FIG. 1 is a block diagram of a first sorting system of the invention. As shown in FIG. 1, this system includes [0044] input nodes 100, input local disks 200, an output node 300, an output local disk 400, and shared disks 500. The input nodes 100 and output node 300 are general computers such as PCs and workstations. The input local disks 200, output local disk 400 and shared disks 500 are general external storage devices such as magnetic disk devices.
  • This system has a plurality of [0045] input nodes 100 and a single output node 300. A shared disk 500 is provided between the output node 300 and each input node 100. Namely, there are as many shared disks 500 as there are input nodes 100.
  • The [0046] input node 100 has an input node CPU 101, an input information negotiation unit 102, a data reading unit 103, an internal sorting unit 104, a shared disk writing unit 105, a merge instructing unit 106, an internal sorting buffer 110, an IO bus interface (I/F) 600 and a network interface (I/F) 700. The input local disk 200 is connected via an IO bus 610 to the IO bus I/F 600 of the input node 100.
  • The input [0047] information negotiation unit 102, data reading unit 103, internal sorting unit 104, shared disk writing unit 105 and merge instructing unit 106 are realized, for example, by execution of software stored in an unrepresented memory by the input node CPU 101. The internal sorting buffer 110 is realized, for example, by using part of a memory which the input CPU 101 uses for the execution of software, or by preparing a dedicated memory.
  • The [0048] output node 300 has an output node CPU 301, an output information negotiation unit 302, a merge instruction receiving unit 303, a shared disk reading unit 304, a merge unit 305, a sorted result output unit 306, a merge buffer 310, an IO bus I/F 600 and a network I/F 700. The output local disk 400 is connected via an IO bus 610 to the IO bus I/F 600 of the output node 300.
  • The output [0049] information negotiation unit 302, merge instruction receiving unit 303, shared disk reading unit 304, merge unit 305 and sorted result output unit 306 are realized, for example, by execution of software stored in an unrepresented memory by the output node CPU 301. The merge buffer 310 is realized, for example, by using part of a memory which the output CPU 301 uses for the execution of software, or by preparing a dedicated memory.
  • Each of the plurality of [0050] input nodes 100 and the single output node 300 is connected via the network I/F 700 to the network 710. Each of the shared disks 500 is connected via the IO bus 610 to the IO bus I/F 600 of each of the input nodes 100 and output node 300.
  • In this system, data to be sorted (sorting target data) is divisionally assigned to the plurality of [0056] input nodes 100 and stored in the input local disk 200 of each input node 100. The amount of sorting target data stored in each local disk 200 is herein assumed to be larger than the capacity of the internal sorting buffer 110 of each input node 100. The sorting target data is a collection of records each including one or a plurality of keys to be used for a sorting process. In this system, the final sorted result is stored in the output local disk 400 of the output node 300.
  • Next, the sorting process to be executed by the sorting system will be described. [0057]
  • First, the main sorting processes to be executed by the sorting system will be described, and the details of each process will be later given. FIG. 2 is a flow chart illustrating the main sorting processes to be executed by the sorting system. First, when a user or the like instructs the sorting system to start the sorting process, the [0058] input nodes 100 and output node 300 generate and negotiate management information necessary for the sorting process, prior to executing the actual sorting process (S1010).
  • Next, each [0059] input node 100 divides the sorting target data stored in the input local disk 200 and stores the division data in the internal sorting buffer 110 (S1020). The stored data is internally sorted to generate a sorted result (sorted string) (S1030), and the sorted string is stored in the shared disk 500 (S1040). The input node 100 checks whether all the sorting target data assigned to the input node 100 has been processed and stored (S1050). If there is still sorting target data not processed, the input node 100 repeats the above-described Steps S1020 to S1040. If all the sorting target data assigned to the input node 100 has been completely processed, the input node 100 instructs the output node 300 to execute a merging process (S1060).
  • If the [0051] output node 300 receives the merge instruction from all input nodes 100 (S1070), the output node 300 reads a portion of each of a plurality of sorted strings from the shared disk 500 and writes the read portions in the merge buffer 310 (S1080). The output node 300 compares the keys of records stored in the merge buffer 310 to obtain a whole sorted result of the records (S1090), and stores the sorted result in the output local disk 400 (S1100). The output node 300 checks whether all the sorted strings stored in the shared disk 500 have been completely processed (S1110). If there are still sorted strings not processed, the output node 300 repeats the above-described Steps S1080 to S1100. With the above-described processes executed by the input and output nodes 100 and 300, the final sorted result of all the sorting target data divisionally stored in the plurality of input local disks 200 can be obtained.
  • The details of each process will be given in the following. [0052]
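  • Before turning to those details, the overall flow of FIG. 2 can be illustrated by the following minimal sketch (the names are hypothetical, and Python lists stand in for the internal sorting buffer 110, the shared disks 500 and the merge buffer 310); each input node internally sorts buffer-sized divisions into sorted strings, and the output node then merges all the strings:

    import heapq

    BUFFER_CAPACITY = 4  # records per internal sorting buffer (illustrative)

    def input_node(sorting_target_data, shared_disk, key=lambda r: r):
        # S1020-S1060: split the assigned data into buffer-sized divisions,
        # internally sort each division and store it on the shared disk as
        # one sorted string.
        for i in range(0, len(sorting_target_data), BUFFER_CAPACITY):
            division = sorting_target_data[i:i + BUFFER_CAPACITY]   # S1020
            shared_disk.append(sorted(division, key=key))           # S1030-S1040

    def output_node(shared_disks, key=lambda r: r):
        # S1070-S1110: after every input node has issued its merge
        # instruction, read the sorted strings from all shared disks and
        # merge them into the whole sorted result.
        sorted_strings = [s for disk in shared_disks for s in disk]
        return list(heapq.merge(*sorted_strings, key=key))

    disks = [[], []]                  # one in-memory stand-in per shared disk 500
    input_node([5, 1, 9, 3, 7, 2], disks[0])
    input_node([8, 4, 6, 0], disks[1])
    print(output_node(disks))         # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]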
  • First, the management information negotiation process will be described. This process generates and negotiates management information necessary for the sorting process, prior to executing the actual sorting process. [0053]
  • The input [0054] information negotiation unit 102 of each input node 100 and the output information negotiation unit 302 of the output node 300 generate, negotiate and store management information necessary for the sorting process, by using information preset by a user or the like and information exchanged between the input and output information negotiation units 102 and 302 via the network 710. The management information necessary for the sorting process includes information stored in a system configuration information table which stores data of the system configuration of the sorting system, information stored in a record information table which stores data of the sorting target records, and other information.
  • FIG. 3 shows an example of the system configuration table. Referring to FIG. 3, a “node name” is a name assigned to each node. An “item” and “contents” at each node indicate the configuration information at each node to be used for the sorting process. For example, a [0055] “network number” in an “input node # 1” is an identification number of the input node # 1 in the network 710 (in the example shown in FIG. 3, N1). A “shared disk number for connection to output node” is an identification number of the shared disk connected to the output node, among the disks of the input node # 1 (in the example shown in FIG. 3, D1). An “input local disk number” is an identification number of the disk for storing the sorting target data, among the disks of the input node # 1 (in the example shown in FIG. 3, D4). A “sorting target data file name” is the name of the file of the sorting target data stored in the input local disk (in the example shown in FIG. 3, AAA). A “record name” is the name assigned to the record type of the sorting target data (in the example shown in FIG. 3, XXX). A “shared disk number for connection to input node # 1” is an identification number of the disk for the connection to the input node # 1, among the disks connected to the input nodes (in the example shown in FIG. 3, D1). An “output local disk number” of the output node is an identification number of the disk for storing the sorted result, among the disks of the output node (in the example shown in FIG. 3, D7).
  • FIG. 4 shows an example of the record information table. In FIG. 4, a “record name” is a name assigned to a record type. This record name (type) is stored in the item “record name” of the system configuration table shown in FIG. 3. [0060]
  • In this record information table, a “record size” is a size (number of bytes) of a unit record. A “record item number” is the number of items constituting a record unit. An “item size” in each of “[0061] item # 1” to “item # 5” is a size (number of bytes) of each item. An “item type” is a type of data in the item. A “use as key” indicates whether the item is used as the key for rearranging record units in the sorting process, i.e., whether rearrangement is performed in accordance with the contents in the item. A “key priority order” is a priority order of items among a plurality of items to be used for rearrangement which is performed in the higher priority order. The record used in the record information table shown in FIG. 4 is a fixed length record. One record unit has 60 bytes including 16-byte “item # 1”, 32-byte “item # 2”, and 4-byte “item # 3”, “item # 4” and “item # 5”.
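  • For illustration only, a record of the type in FIG. 4 could be encoded and its sorting key extracted as sketched below; treating items # 3 to # 5 as 32-bit integers and using item # 1 and item # 3 as the keys are assumptions of this sketch, not values fixed by the text:

    import struct

    # Hypothetical encoding of the 60-byte fixed-length record "XXX" of
    # FIG. 4: a 16-byte item #1, a 32-byte item #2 and three 4-byte items.
    # Little-endian 32-bit integers for items #3-#5 are an assumption.
    RECORD = struct.Struct("<16s32siii")
    assert RECORD.size == 60

    def sort_key(raw):
        # "Use as key" / "key priority order": compare the key items in
        # their priority order. Which items serve as keys is configurable;
        # item #1 first and item #3 second is purely illustrative.
        item1, item2, item3, item4, item5 = RECORD.unpack(raw)
        return (item1, item3)

    raw = RECORD.pack(b"key-A", b"payload", 3, 2, 1)
    print(sort_key(raw))   # (b'key-A\x00...', 3)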
  • After the management information necessary for the sorting process is prepared by the above-described management information negotiation process, a sorting target data reading process is executed. FIG. 5 is a flow chart illustrating the sorting target data reading process. Before the sorting target data reading process starts, the [0062] data reading unit 103 of each input node 100 checks whether the file name of the sorting target data is already acquired (S1130). If not, the file name of the sorting target data is acquired from the system configuration information table (S1140).
  • Next, the amount of unprocessed sorting target data stored in the input [0063] local disk 200 is checked (S1150) and it is checked whether the amount of unprocessed data is larger than the capacity of the internal sorting buffer 110 (S1160). If larger, the unprocessed sorting target data corresponding in amount to the capacity of the internal sorting buffer 110 is read from the input local disk 200 and written in the internal sorting buffer 110 (S1170). If the amount of unprocessed data is equal to or smaller than the capacity of the internal sorting buffer 110, all the unprocessed sorting target data is read from the input local disk 200 and written in the internal sorting buffer 110 (S1180).
  • After the unprocessed sorting target data is written in the [0064] internal sorting buffer 110 in the above manner, the internal sorting process is performed for the data stored in the internal sorting buffer 110. The internal sorting unit 104 at each input node 100 internally sorts the sorting target data stored in the internal sorting buffer 110 to rearrange records in accordance with one or a plurality of keys contained in the records. The sorted result of the data stored in the internal sorting buffer 110 is therefore obtained. This rearrangement is performed in accordance with a predetermined sorting rule to rearrange the records in a key order such as an ascending or descending order.
  • After the internal sorting process for the data written in the [0065] internal sorting buffer 110 is completed in the above manner, a sorted string storing process is performed to store the internally sorted result of the data in the internal sorting buffer 110, in the shared disk 500 as a sorted string.
  • FIG. 6 is a flow chart illustrating a process of storing a sorted string in the shared disk. The shared [0066] disk writing unit 105 of each input node 100 generates a sorted string storing header (S1200), and writes as a sorted string the header and the internally sorted result of the data in the internal sorting buffer 110 into the shared disk 500 (S1210) to then update the sorted string storing information in the shared disk 500 (S1220).
  • FIG. 7 is a diagram showing an example of the sorted string storing header. As shown in FIG. 7, the sorted string storing header has fields of “output destination node name”, “sorted string storing number”, “generation date/time” and “record number”. The “output destination node name” is the name of the [0067] output node 300 connected to the shared disk 500. The “sorted string storing number” is a serial number sequentially assigned to each sorted string when the sorted string is generated and written. The “generation date/time” is the date and time when each sorted string is generated. The “record number” is the number of records contained in each sorted string.
  • FIG. 8 is a diagram showing an example of sorted string storing information. This sorted string storing information is information on the sorted strings written in the shared [0068] disk 500, and is provided for each shared disk 500. As shown in FIG. 8, the sorted string storing information includes an “output destination node name”, an “update date/time” and a “sorted string storing number”. The “output destination node name” is the name of the output node 300 connected to the shared disk 500 which stores the sorted string storing information. The “update date/time” is the date and time when the sorted string storing information is updated when a sorted string is written. The “sorted string storing number” is the number of sorted strings stored in the shared disk 500 for the output destination node.
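  • A minimal sketch of how the metadata of FIGS. 7 and 8 might be represented (the field and helper names are hypothetical; the embodiment specifies only the logical fields, not their encoding):

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class SortedStringHeader:            # fields of FIG. 7
        output_destination_node: str     # name of the output node 300
        storing_number: int              # serial number of the sorted string
        generation_datetime: datetime    # when the string was generated
        record_count: int                # records contained in the string

    @dataclass
    class SortedStringStoringInfo:       # fields of FIG. 8, one per shared disk
        output_destination_node: str
        update_datetime: datetime
        stored_string_count: int         # sorted strings stored for that node

    def store_sorted_string(shared_disk, storing_info, node_name, records):
        # S1200-S1220: generate the header, write header plus internally
        # sorted data as one sorted string, then update the storing info.
        header = SortedStringHeader(node_name,
                                    storing_info.stored_string_count + 1,
                                    datetime.now(), len(records))
        shared_disk.append((header, records))
        storing_info.stored_string_count += 1
        storing_info.update_datetime = header.generation_datetime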
  • The above-described processes from reading sorting target data to storing a sorted string in the shared disk are repeated by each [0069] input node 100 until all the sorting target data stored in the input local disk is completely processed. After each input node 100 completely processes all the sorting target data stored in the input local disk and stores the sorted strings in the shared disk 500, the merge instructing unit 106 of the input node 100 instructs the output node 300 to execute the merging process.
  • Next, a sorted string reading process will be described, which is executed when the [0070] output node 300 receives the merge instruction from all the input nodes 100. FIG. 9 is a flow chart illustrating a process of reading a sorted string from the shared disk 500. When the merge instruction receiving unit 303 of the output node 300 receives a merge instruction from the input node 100 via the network 710 (S1240), it checks whether the merge instruction has been received from all the input nodes (S1250). If there is an input node from which the merge instruction is still not received, the merge instruction receiving unit 303 stands by until the merge instruction is received from all the input nodes. When the merge instruction is received from all the input nodes 100, the shared disk reading unit 304 reads the sorted string storing information in each shared disk 500 connected to the output node 300 (S1260) to check the total number of sorted strings stored in all the shared disks 500. A quotient is calculated by dividing the capacity of the merge buffer 310 by the total number of sorted strings. This quotient is used as the capacity of the merge buffer 310 to be assigned to each sorted string (S1270).
  • After the capacity of the merge buffer to be assigned to each sorted string is determined, the shared [0071] disk reading unit 304 selects one of the sorted strings to be processed, and refers to the sorted string storing header of the selected sorted string to check the size of the sorted string. This size is set as the initial value of the unprocessed amount of the sorted string. It is then checked whether the unprocessed amount of the sorted string is larger than the assigned buffer capacity (S1290). If it is larger, a portion of the unprocessed sorted string corresponding in amount to the assigned buffer capacity is read from the shared disk 500 and written in the merge buffer 310 (S1300). If the unprocessed amount is equal to or smaller than the assigned buffer capacity, all of the unprocessed sorted string is read from the shared disk 500 and written in the merge buffer 310 (S1310). Reading of the unprocessed sorted string proceeds from the top of the sorted string toward its end in the sorted order. Each read portion of the sorted string is called a sorted segment.
  • After the sorted segment of one sorted string is stored in the [0072] merge buffer 310, it is checked whether the sorted segments of all the sorted strings to be processed have been read (S1320). If there is a sorted string whose segment is still not read, the above-described Steps S1280 to S1310 are repeated. In this manner, the sorted segments of the sorted strings stored in all the shared disks 500 are read and written in the merge buffer 310.
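  • The buffer assignment of S1270 and the segment reading of S1280 to S1310 amount to the following simple arithmetic (a sketch with hypothetical helper names):

    def assign_segment_capacity(merge_buffer_capacity, total_sorted_strings):
        # S1270: the merge buffer is divided equally among the sorted
        # strings; e.g. a 48 MB buffer and 12 strings give 4 MB per string.
        return merge_buffer_capacity // total_sorted_strings

    def read_sorted_segment(sorted_string, position, capacity):
        # S1280-S1310: read the next unprocessed portion (sorted segment)
        # of the string, never more than the capacity assigned to it.
        unprocessed = len(sorted_string) - position
        count = min(unprocessed, capacity)
        return sorted_string[position:position + count], position + count

    capacity = assign_segment_capacity(48, 12)                # -> 4
    segment, position = read_sorted_segment(list(range(10)), 0, capacity)
    print(segment, position)                                  # [0, 1, 2, 3] 4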
  • After the sorted segments are completely written in the [0073] merge buffer 310 in the above manner, a merging process and a final sorted result outputting process are performed for the data stored in the merge buffer 310. FIG. 10 is a flow chart illustrating the merging/sorted result outputting process. The merging unit 305 of the output node 300 first sets a pointer to a record in the sorted segment of each sorted string stored in the merge buffer 310 (S1330). The initial value of the pointer immediately after each sorted segment is read is assumed to be the start record of the sorted segment.
  • Next, the merging [0074] unit 305 compares the keys of the records designated by the pointers of all the sorted segments in the merge buffer 310 in accordance with a predetermined sorting rule, and passes the pointer to the record at the earliest position in accordance with the predetermined sorting rule, to the sorted result outputting unit 306 (S1340). The sorted result outputting unit 306 writes the record corresponding to the received pointer in the output local disk 400 (S1350). The merging unit 305 checks whether all the records of the sorted segment stored in the merge buffer 310 have been output (S1370). If a record of the sorted segment to be compared is still stored in the merge buffer 310, the pointer for the sorted string is incremented by “1” to indicate the next record (S1400), and the above-described Steps S1340 and S1350 are thereafter repeated. In this manner, the merging unit 305 and sorted result outputting unit 306 output the sorted results of the sorted segments stored in the merge buffer 310, to the output local disk 400.
  • If the merging [0075] unit 305 and sorted result outputting unit 306 output all the records contained in one sorted segment stored in the merge buffer 310 (S1370: Y), then the merging unit 305 notifies the shared disk reading unit 304 of the sorted string to which the sorted segment belongs (S1380).
  • Upon reception of this notice, the shared [0076] disk reading unit 304 checks the unprocessed amount of the sorted string (S1390) to check whether there is any unprocessed sorted segment in the sorted string (S1410). If there is an unprocessed sorted segment, this segment is read from the shared disk 500 (S1420) and overwritten on the already output sorted segment in the merge buffer 310 (S1420). The merging unit 305 initializes the pointer for the newly stored sorted segment to indicate the start record in the stored sorted segment (S1430). The merging unit 305 and sorted result outputting unit 306 repeat the above-described Steps S1340 to S1390. The unprocessed amount of the sorted string is properly updated when the sorted segment is read from the shared disk 500 and stored in the merge buffer 310.
  • If there is no unprocessed sorted segment in the sorted string (S[0077]1410: Y), then it is checked whether all the sorted strings stored in the plurality of shared disks 500 have been processed (S1440). If there is a sorted string still not processed, the shared disk reading unit 304 notifies the merging unit 305 of the process completion of the sorted string (S1450) so that the merging unit 305 discards the sorted segments of the sorted string so as not to be subjected to the record comparison (S1460), and thereafter repeats the above-described Steps S1340 to S1430.
  • With the above processes, all the sorted strings stored in the shared [0078] disks 500 are merged and the sorting process is completed. Namely, the sorted result of all the sorting target data stored in a plurality of input local disks 200 is obtained in the output local disk 400 connected to the output node 300.
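  • The record comparison loop of FIG. 10 amounts to a k-way merge over the buffered segments with refill from disk. The sketch below illustrates it under the simplifying assumption that each sorted string is an in-memory list and a sorted segment is a fixed-size slice (the helper names are not from the patent):

    def merge_sorted_strings(sorted_strings, segment_capacity, key=lambda r: r):
        # FIG. 10 as a k-way merge: keep one sorted segment per string in
        # the merge buffer, repeatedly output the record at the earliest
        # position among the segment pointers (S1330-S1400), refill an
        # exhausted segment from its string (S1410-S1430) and discard a
        # fully processed string (S1440-S1460).
        segments = [s[:segment_capacity] for s in sorted_strings]
        positions = [len(seg) for seg in segments]  # next unread index per string
        pointers = [0] * len(sorted_strings)        # S1330: one pointer per segment
        active = {i for i, seg in enumerate(segments) if seg}
        result = []
        while active:
            # S1340: compare the records designated by the pointers.
            i = min(active, key=lambda j: key(segments[j][pointers[j]]))
            result.append(segments[i][pointers[i]])  # S1350: output the record
            pointers[i] += 1                         # S1400: advance the pointer
            if pointers[i] == len(segments[i]):      # segment fully output (S1370)
                refill = sorted_strings[i][positions[i]:positions[i] + segment_capacity]
                if refill:                           # S1420: overwrite the segment
                    segments[i] = refill
                    positions[i] += len(refill)
                    pointers[i] = 0                  # S1430: reset the pointer
                else:                                # S1450-S1460: discard the string
                    active.discard(i)
        return result

    print(merge_sorted_strings([[1, 4, 7], [2, 5, 8], [0, 3, 6, 9]], 2))
    # -> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]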
  • As described above, in this sorting system, since the sorted strings are stored in the shared disks, it is not necessary that the [0079] input node 100 executes the merging process. Therefore a high speed sorting process can be realized by shortening the process time required for the input node. Furthermore, since the input node 100 is not required to execute the merging process, it is possible to suppress the influence of delay, hindrance and the like upon another task executable by the input node 100.
  • Still further, since the sorted strings are stored in the shared [0080] disks 500 and processed, it is not necessary to transfer the sorting target data and sorted strings from the input nodes to output node via the network. Therefore, even if the network cannot operate at high speed, a high speed sorting process can be realized.
  • (2nd Embodiment) [0081]
  • FIG. 11 is a block diagram of a second sorting system of the invention. Different points from the first sorting system reside in that there are a plurality of output nodes [0082] 300 (and output local disks 400) and that each input node 100 has an output node determining unit 107. In this system, a shared disk 500 is connected via an IO bus 610 to an IO bus I/F 600 of each of the input node 100 and output node 300. One shared disk 500 is provided between each input node 100 and each output node 300. Namely, in this system, there are (i×j) shared disks 500, where i is the number of input nodes and j is the number of output nodes. Other structures are similar to the first sorting system.
  • In this system, a plurality of output nodes are prepared and each output node obtains the sorted result of all input data, the sorted result having been distributed in accordance with a predetermined distribution rule. [0083]
  • The output [0084] node determining unit 107 determines the output node 300 to which records to be internally sorted are output. The output node determining unit 107 can be realized, for example, by execution of software stored in an unrepresented memory by an input node CPU 101.
  • The sorting process to be executed by this sorting system will be described. [0085]
  • First, the management information negotiation process is executed to generate and negotiate management information necessary for the sorting process, prior to executing the actual sorting process. This process is generally similar to that described with the first sorting system, and the input [0086] information negotiation unit 102 of each input node 100 and the output information negotiation unit 302 of each output node 300 generate, negotiate and store management information necessary for the sorting process.
  • FIG. 12 shows an example of a system configuration table to be used by the second sorting system. As shown in FIG. 12, the system configuration table has the structure similar to that shown in FIG. 3, except that each input node has a plurality of “shared disk numbers for connection to output nodes” and a plurality of output node information pieces are provided. [0087]
  • After the management information necessary for the sorting process is prepared by the management information negotiation process, a sorting target data reading process is executed at each input node, in the manner similar to that of the first sorting system. [0088]
  • Next, the internal sorting process to be executed by the second sorting system will be described. FIG. 13 is a flow chart illustrating the internal sorting process to be executed by the second sorting system. After the sorting target data stored in the input [0089] local disk 200 at each input node is written in the internal sorting buffer 110, the output node determining unit 107 determines the output node to which each record stored in the internal sorting buffer 110 is output, in accordance with a predetermined node decision rule using a key contained in each record or other information (S2010). The predetermined node decision rule is a distribution rule for distributing records (sorting target data) to the plurality of output nodes 300. For example, records having consecutive key values may be cyclically distributed to the output nodes 300, or a record having a specific key value may be assigned to the output node 300 determined by substituting the key value into a calculation formula or by looking up a table with the key value as a search key.
  • After the output node is determined for each record in the above manner, the [0090] internal sorting unit 104 classifies the records stored in the internal sorting buffer into record groups each corresponding to each of the output nodes determined by the output node determining unit 107 (S2020), and rearranges the order of records in each classified record group in accordance with a predetermined sorting rule (S2030).
  • With the above processes, the data stored in the [0091] internal sorting buffer 110 is classified into record groups for respective output nodes 300 determined in accordance with the predetermined decision rule, and classified record groups rearranged in accordance with the predetermined sorting rule are obtained.
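  • As an illustration of Steps S2010 to S2030, the following sketch uses a hash-based decision rule, one of the possibilities mentioned above, to classify the buffered records per output node and sort each group (the helper names are hypothetical, and the rule itself is configurable):

    from collections import defaultdict

    def decide_output_node(record, num_output_nodes, key=lambda r: r):
        # S2010: a predetermined node decision rule. A hash of the key is
        # used here; the cyclic or table-driven distributions mentioned in
        # the text would simply replace this one line.
        return hash(key(record)) % num_output_nodes

    def classify_and_sort(buffer, num_output_nodes, key=lambda r: r):
        groups = defaultdict(list)
        for record in buffer:        # S2020: classify per destination node
            groups[decide_output_node(record, num_output_nodes, key)].append(record)
        return {node: sorted(records, key=key)   # S2030: sort each group
                for node, records in groups.items()}

    print(classify_and_sort([5, 1, 9, 3, 7, 2], num_output_nodes=2))
    # e.g. {1: [1, 3, 5, 7, 9], 0: [2]}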
  • After the internal sorting process is completed in the above manner, a process of storing a sorted string in the shared [0092] disk 500 is executed. FIG. 14 is a flow chart illustrating this process of storing a sorted string in the shared disk 500. The shared disk writing unit 105 of each input node 100 selects the shared disk 500 connected to the output node 300 to which the internally sorted result stored in the internal sorting buffer 110 is output as the sorted string (S2040). Similar to that described for the first sorting system, the shared disk writing unit 105 generates a sorted string storing header for each selected shared disk 500 (S2050), and writes the header and the internally sorted string into each selected shared disk 500 (S2060) to then update the sorted string storing information in each selected shared disk 500 (S2070).
  • The above-described processes from reading sorting target data to storing a sorted string in the shared [0093] disk 500 are repeated by each input node 100 until all the sorting target data stored in the input local disk 200 is completely processed. After each input node 100 completely processes all the sorting target data stored in the input local disk 200 and stores the sorted strings in the shared disk 500, the merge instructing unit 106 of the input node 100 instructs via the network 710 all the output nodes 300 to execute the merging process.
  • Similar to that described with the first sorting system, the process of reading a sorted string from the shared [0094] disk 500 and the merge/sort result output process are executed. When each of the output nodes 300 merges all the sorted strings in a plurality of shared disks 500 connected to the output node 300 and outputs the merged result to the output local disk 400, the sorting process is completed. Of all the sorting target data stored in a plurality of input local disks 200, the sorted result distributed to a plurality of output nodes 300 in accordance with the predetermined decision rule can therefore be obtained at each output local disk 400 of the output node 300.
  • Also in this second sorting system, since the sorted strings are stored in the shared [0095] disks 500, it is not necessary that the input node 100 executes the merging process. Therefore a high speed sorting process can be realized by shortening the process time required for the input node. Furthermore, since the input node 100 is not required to execute the merging process, it is possible to suppress the influence of delay, hindrance and the like upon another task executable by the input node 100. Still further, it is not necessary to transfer the sorting target data and sorted strings from the input nodes to output nodes via the network. Therefore, even if the network cannot operate at high speed, a high speed sorting process can be realized.
  • In the above-described two embodiments, although the sorting target data is stored in the input local disk, it may be stored in other locations. For example, the [0096] input node 100 may read via the network 710 the sorting target data stored in a disk of any computer connected to the network, i.e., in a remote disk. Similarly, in the two embodiments, although the sorted result of sorting target data is stored in the output local disk 400, the output node 300 may output the sorted result to a remote disk via the network 710.
  • In the above two embodiments, the sorting target data is stored in the input [0097] local disk 200 as a file identified by its file name. Alternatively, the sorting target data may be read from the input local disk 200 by the input node 100 by directly designating a logical address or physical address of the input local disk. One input node 100 may also store the sorting target data as two or more files.
  • Also in the two embodiments, the shared [0098] disk 500 is connected to and accessed by both the input node 100 and output node 300. In this case, however, after the input node 100 completes the process of storing sorted strings in the shared disk 500 and after the input node 100 and output node 300 are synchronized upon a notice of a merge instruction, the output node 300 reads the sorted segments from the shared disk 500. Therefore, read/write operations of the shared disk 500 are not performed at the same time. It is therefore not necessarily required to perform an exclusive control between nodes connected to the shared disk. Even if the exclusive control is not performed, there is no practical problem in executing the above processes.
  • (3rd Embodiment) [0099]
  • FIG. 15 is a block diagram of a third sorting system of the invention. This system includes [0100] nodes 800, local disks 900 and shared disks 500.
  • Each [0101] node 800 has a CPU 801, a buffer 810, an input module 820, an output module 830, an IO bus I/F 600 and a network interface I/F 700. The input module 820 has an input information negotiation unit 102, a data reading unit 103, an internal sorting unit 104, a shared disk writing unit 105, a merge instructing unit 106 and an output node determining unit 107.
  • The [0102] output module 830 has an output information negotiation unit 302, a merge instruction receiving unit 303, a shared disk reading unit 304, a merging unit 305 and a sorted result outputting unit 306. The local disk 900 is connected via an IO bus 610 to the IO bus I/F 600.
  • The input [0103] information negotiation unit 102, data reading unit 103, internal sorting unit 104, shared disk writing unit 105, merge instructing unit 106 and output node determining unit 107 as well as the output information negotiation unit 302, merge instruction receiving unit 303, shared disk reading unit 304, merging unit 305 and sorted result outputting unit 306 are realized, for example, by execution of software stored in an unrepresented memory by CPU 801. The buffer 810 is realized, for example, by using part of a memory which CPU 801 uses for the execution of software, or by preparing a dedicated memory.
  • In this system, there are a plurality of nodes [0104] 800 (hereinafter n nodes 800). The plurality of nodes 800 are connected via respective network I/F 700 to the network 710. Namely, the plurality of nodes 800 are interconnected by the network 710. Each shared disk 500 is provided between nodes 800 and connected to the IO bus I/F 600 via the IO bus 610. Each node 800 is connected to (n-1) shared disks 500. In this system, there are n×(n-1)/2 shared disks 500. The shared disk 500 has at least two storing regions A510 and B520.
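  • For example, with n = 4 nodes 800, 4×3/2 = 6 shared disks 500 are provided, and each node 800 is connected to the three shared disks 500 that it shares with the other three nodes.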
  • In this system, of the plurality of [0105] nodes 800, some or all of the nodes 800 (hereinafter i nodes 800, and called input nodes) store the sorting target data in their local disks 900, and some or all of the nodes 800 (hereinafter j nodes 800, and called output nodes) store some or all of the sorted result in their local disks 900, wherein i is an integer of 2 or larger and n or smaller, j is an integer of 1 or larger and n or smaller, and (i+j) is n or larger. Namely, in this system, the same node 800 may be included in both an aggregation of input nodes and an aggregation of output nodes.
  • Each [0106] local disk 900 of an input node has a self-node storage region 910 and a sorting target data storage region 920 for storing sorting target data. Each local disk 900 of an output node has a self-node storage region 910 and a sorted result storage region 930 for storing the sorted result. Each local disk 900 of a node serving as both an input and output node has a self-node storage region 910, a sorting target data storage region 920 and a sorted result storage region 930.
  • In this system, the sorting target data is distributed and assigned to a plurality of input nodes and stored in the sorting target [0107] data storage regions 920 of the local disks 900 of the input nodes. In each input node, it is assumed that the amount of sorting target data stored in the local disk is larger than the capacity of the buffer 810 of the input node. Also in this system, the sorted result is stored in the sorted result storage region 930 of the local disk 900 of the output node.
  • The sorting process to be executed by this system will be described. [0108]
  • First, the main sorting processes to be executed by the sorting system will be described. FIG. 16 is a flow chart illustrating the main sorting processes to be executed by the sorting system. First, when the sorting system is instructed to execute a sorting process, the [0109] input module 820 and output module 830 of each node 800 generate and negotiate management information necessary for the sorting process, prior to executing the actual sorting process (S3010). For example, of the plurality of nodes 800, the input nodes for storing sorting target data in the local disks and the output nodes for storing some or all of the sorted results are discriminated.
  • Next, the [0110] input module 820 of each input node divides the sorting target data stored in the local disk 900 and stores the division data in the buffer 810 (S3020). The data read in the buffer 810 is internally sorted to generate a sorted result (sorted string) and the node 800 to which the sorted result is output is determined in accordance with a predetermined decision rule (if there are a plurality of output nodes) (S3030), and the sorted string is stored in the shared disk 500 connected to the determined output node 800 or stored in its local disk 900 (S3040). The input module 820 checks whether all the sorting target data stored in the local disk 900 has been processed (S3050). If there is still sorting target data not processed, the above-described Steps S3020 to S3040 are repeated. If all the sorting target data has been completely processed, the input module 820 of the input node instructs the output module 830 of the output node to execute a merging process (S3060).
  • When the merge instruction has been received from the input modules 820 of all the input nodes (S3070), the output module 830 of each output node reads a portion (sorted segment) of each of the plurality of sorted strings from the i shared disks 500 and local disks 900 and writes it in the buffer 810 (S3080). The output module 830 compares the keys of the records stored in the buffer 810 to obtain a whole sorted result of the records (S3090), and stores the sorted result in the local disk 900 (S3100). The output module 830 checks whether all the sorted strings have been completely processed (S3110). If there are still unprocessed sorted strings, the output module 830 repeats the above-described Steps S3080 to S3100. With the above-described processes executed by each node 800, the final sorted result of all the sorting target data divisionally stored in the i local disks 900 is obtained, distributed over the local disks of the j output nodes. [0111]
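For concreteness, the two phases of FIG. 16 can be sketched as follows in Python, as a simplified model in which in-memory lists stand in for the local disks, shared disks and buffer 810; the function names are illustrative, not the patent's.

    import heapq

    def sort_phase(sorting_target_data, buffer_size, decide_output_node):
        # Input-node side (S3020 to S3050): read buffer-sized chunks, classify
        # records by output node, sort each group, and emit sorted strings.
        sorted_strings = {}                      # output node -> sorted strings
        for start in range(0, len(sorting_target_data), buffer_size):
            chunk = sorting_target_data[start:start + buffer_size]
            groups = {}
            for record in chunk:
                groups.setdefault(decide_output_node(record), []).append(record)
            for node, records in groups.items():
                sorted_strings.setdefault(node, []).append(sorted(records))
        return sorted_strings

    def merge_phase(strings_for_node):
        # Output-node side (S3080 to S3110): k-way merge of the sorted strings.
        return list(heapq.merge(*strings_for_node))

    strings = sort_phase(list(range(10, 0, -1)), buffer_size=4,
                         decide_output_node=lambda record: 0)
    assert merge_phase(strings[0]) == list(range(1, 11))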
  • The details of each process will be given in the following. [0112]
  • First, the management information negotiation process is executed prior to executing the actual sorting process. The input information negotiation unit 102 and output information negotiation unit 302 of each node 800 generate, negotiate and store management information necessary for the sorting process, by using information preset by a user and information exchanged between the input and output information negotiation units 102 and 302 via the network 710. The management information necessary for the sorting process includes information stored in a system configuration information table which stores data of the system configuration of the sorting system, information stored in a record information table which stores data of the sorting target records, and other information. [0113]
  • With this information negotiation process, the input information negotiation unit 102 and output information negotiation unit 302 assign to each of the regions A510 and B520 of each shared disk 500 provided between nodes 800 one of the two data transfer directions between those nodes. For example, of the two nodes 800 connected to one shared disk 500, the first node 800 writes data in the region A510 and reads data from the region B520, whereas the second node 800 writes data in the region B520 and reads data from the region A510. Namely, the region A510 of the shared disk 500 is used as the data transfer region from the first to the second node 800 and the region B520 is used as the data transfer region from the second to the first node 800, so that each of the two regions is stationarily assigned as a unidirectional data path. [0114]
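One possible coding of this stationary assignment (an assumption consistent with the example above, not mandated by it) is to let region A carry traffic from the lower-numbered to the higher-numbered node and region B the reverse:

    def transfer_region(src, dst):
        # Region of the shared disk between src and dst that carries data
        # flowing from src to dst; each region is a fixed one-way path.
        if src == dst:
            raise ValueError("a node shares no disk with itself")
        return "A" if src < dst else "B"

    assert transfer_region(1, 2) == "A"   # node 1 writes A, node 2 reads A
    assert transfer_region(2, 1) == "B"   # node 2 writes B, node 1 reads B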
  • The input and output information negotiation units 102 and 302 discriminate, among the plurality of nodes 800, between the input nodes which store sorting target data in their local disks 900 and the output nodes which store sorted results in their local disks. [0115]
  • FIG. 17 shows an example of a system configuration table of the third sorting system. Referring to FIG. 17, a "node name" is a name assigned to each node. An "item" and "contents" at each node indicate the configuration information of that node to be used for the sorting process. For example, a "network number" in the "input node #1" column is an identification number of the input node #1 in the network 710. Each item in the "input node #1" column indicating a shared disk with another node stores: an identification number of the disk shared with the other node; information (e.g., region name, region size and the like; this applies also to the following description) of the region in which the sorted string destined to the other node is stored; and information of the region in which the sorted string from the other node is stored (in the example shown in FIG. 17, region A and region B). A "local disk" item stores: an identification number of the local disk of the node among the disks of the node #1; information of the region in which the sorted string from its own node is stored; and information of the region in which the sorted string to its own node is stored. Both regions may be the same region of the local disk. A "sorting target data" item stores: information of whether the sorting target data is stored in the local disk of the node (presence/absence of sorting target data); information of the storage region if the sorting target data is stored in the local disk; and the file name and record name of the sorting target data. A "possibility of storing sorted result" item stores: information of the presence/absence of a possibility of acquiring the sorted result and storing it in the local disk of the node; and information of the storage region if the sorted result is stored in the local disk. [0116]
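Although FIG. 17 is not reproduced here, one entry of such a table might be modelled as follows; the field names and values are hypothetical, chosen only to mirror the items just listed.

    input_node_1 = {
        "node name": "input node #1",
        "network number": 1,
        "shared disks": {
            # peer node -> disk id plus the two fixed-direction regions
            "output node #2": {"disk id": 12,
                               "to peer": {"region": "A", "size": "256MB"},
                               "from peer": {"region": "B", "size": "256MB"}},
        },
        "local disk": {"disk id": 1, "self-node region": "910"},
        "sorting target data": {"present": True, "region": "920",
                                "file name": "targets.dat",
                                "record name": "fixed-length"},
        "possibility of storing sorted result": {"present": False},
    }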
  • After the management information necessary for the sorting process has been prepared by the above-described management information negotiation process, a sorting target data reading process is executed. Specifically, at each input node which stores sorting target data in its local disk 900, the data reading unit 103 reads the sorting target data from the local disk 900 and stores it in the buffer 810, by a method similar to that shown in FIG. 5. [0117]
  • Next, the internal sorting process will be described. When the input node stores the sorting target data in the buffer 810, the output node for each record stored in the buffer 810 is determined in accordance with the predetermined node decision rule, similar to the method shown in FIG. 13. The records in the internal sorting buffer are then classified into record groups corresponding to the determined output nodes, and the records in each classified record group are rearranged in accordance with the predetermined sorting rule. If only one output node is determined, the records are not classified and only the internal sorting process is performed. [0118]
  • With the above processes, the data read into the buffer 810 is classified into record groups corresponding to the output nodes determined in accordance with the predetermined node decision rule, and the records in each classified record group are rearranged in accordance with the predetermined sorting rule to obtain internally sorted results. [0119]
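A predetermined node decision rule of this kind is often key-range partitioning, which keeps the distributed result globally ordered across the output nodes; a sketch follows, with illustrative boundary values.

    import bisect

    def make_range_decision_rule(boundaries):
        # Ascending key boundaries split the key space among output nodes:
        # keys below boundaries[0] go to node 0, and so on upward.
        def decide(key):
            return bisect.bisect_right(boundaries, key)
        return decide

    decide = make_range_decision_rule([100, 200])      # three output nodes
    assert [decide(k) for k in (5, 150, 999)] == [0, 1, 2]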
  • Next, the process of storing a sorted string in the shared disk 500 or local disk 900 will be described. This process is executed in a manner similar to that shown in FIG. 14. The shared disk writing unit 105 of the input node takes the internally sorted result in the buffer 810 of each classified record group as the sorted string for the output node determined by the predetermined node decision rule, and selects the region for storing the sorted string from the regions A510 and B520 of the shared disk 500 and the local disk 900. The sorted string, together with its sorted string storing header, is written in the selected region, and the sorted string storing information of the regions A510 and B520 of the shared disk 500 and of the self-node storage region of the local disk 900 is updated. More specifically, if the output destination is another node, the sorted string is stored in the shared disk 500, whereas if the output destination is the input node's own node, the sorted string is stored in the self-node storage region 910 of the local disk 900. In storing the sorted string in the shared disk 500, the shared disk writing unit 105 checks which one of the regions A510 and B520 of the shared disk 500 is assigned as the data transfer path from its own node to the other node, for example by referring to the system configuration information table, and then stores the sorted string in the region allocated as that data transfer path. [0120]
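The patent does not spell out the layout of the sorted string storing header; a plausible minimal form, assumed here purely for illustration, records just enough for the reader side to size its segments:

    import struct

    # Hypothetical header layout: record count, record length, payload size.
    HEADER = struct.Struct("<III")

    def pack_sorted_string(records, record_len):
        # Prefix the concatenated fixed-length records with the header.
        payload = b"".join(records)
        return HEADER.pack(len(records), record_len, len(payload)) + payload

    blob = pack_sorted_string([b"aa", b"bb", b"cc"], record_len=2)
    assert HEADER.unpack_from(blob) == (3, 2, 6)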
  • The above-described processes, from reading sorting target data to storing a sorted string, are repeated by each input node until all the sorting target data stored in its local disk 900 has been processed. After the input module 820 of the input node has processed all the sorting target data stored in the local disk 900 and stored the sorted strings in the shared disks 500 or local disk 900, the merge instructing unit 106 instructs, via the network 710, the output modules 830 of all the output nodes to execute the merging process. [0121]
  • Next, the process of reading a sorted string from the shared disk 500 and local disk 900 will be described. This process is executed by a method similar to that shown in FIG. 9. The merge instruction receiving unit 303 of the output module 830 receives a merge instruction from the input module 820 of each input node via the network 710. Upon reception of the merge instruction from all the input nodes, the shared disk reading unit 304 reads the sorted string storing information from the i shared disks 500 and local disks 900 connected to its own node, to thereby check the total number of sorted strings stored in the shared disks 500 and local disks 900. A quotient is calculated by dividing the capacity of the buffer 810 by the total number of sorted strings; this quotient is used as the capacity of the buffer 810 to be assigned to each sorted string. [0122]
  • The shared disk reading unit 304 refers to the sorted string storing header of each sorted string to check the size thereof. This size is set as the initial value of the unprocessed amount of the sorted string. It is then checked whether the unprocessed amount of the sorted string is larger than the assigned buffer capacity, and in accordance with this check result a sorted segment of an appropriate data amount is read from the shared disk 500 or local disk 900 and written in the buffer 810. An unprocessed sorted string is read from the top of the sorted string toward its end, in the sorted order. In this manner the shared disk reading unit 304 stores sorted segments of all the sorted strings held in the shared disks 500 and local disks 900 in the buffer 810. [0123]
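The buffer division and segment sizing just described amount to the following arithmetic (a sketch; for example, a 1200-byte buffer shared by three sorted strings gives each a 400-byte share, and a string smaller than its share is read whole):

    def plan_segment_reads(buffer_capacity, string_sizes):
        # Equal share of the merge buffer per sorted string; the first
        # sorted segment of each string is at most that share.
        quota = buffer_capacity // len(string_sizes)
        return [min(size, quota) for size in string_sizes]

    assert plan_segment_reads(1200, [900, 100, 700]) == [400, 100, 400]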
  • In reading the sorted string storing information and sorted segments from a shared disk, the shared disk reading unit 304 checks which one of the regions A510 and B520 of the shared disk 500 is assigned as the data transfer path from the other node to its own node, for example by referring to the system configuration information table. Thereafter, the sorted string storing information and sorted segments are read from the region allocated as the data transfer path from the other node to its own node. [0124]
  • Lastly, the merging and sorted result outputting process will be described. The merging and sorted result outputting process executed by each output node is performed by a method similar to that shown in FIG. 10. The merging unit 305 of the output node first sets a pointer to the first record in the sorted segment of each sorted string stored in the buffer 810 (S1330), compares the records designated by the pointers of all the sorted segments in the buffer 810 in accordance with the predetermined sorting rule, and passes the pointer of the record at the earliest position according to the predetermined sorting rule to the sorted result outputting unit 306. The sorted result outputting unit 306 writes the record corresponding to the received pointer in the sorted result storage region of the local disk 900. The merging unit 305 then increments the pointer of the output record by "1" to indicate the next record, and thereafter repeats the record comparison and output record decision. In this manner, the merging unit 305 and sorted result outputting unit 306 output the sorted results of the sorted segments stored in the buffer 810 to the sorted result storage region 930 of the local disk 900. [0125]
  • When all the records contained in one sorted segment stored in the buffer 810 have been output, the merging unit 305 notifies the shared disk reading unit 304 of the sorted string to which the sorted segment belongs. Upon reception of this notice, the shared disk reading unit 304 checks whether there is any unprocessed sorted segment in that sorted string. If there is an unprocessed sorted segment, this segment is read from the shared disk 500 or local disk 900 and overwritten on the already output sorted segment in the buffer 810. The merging unit 305 initializes the pointer for the newly stored sorted segment to indicate the first record in that segment, and the merging unit 305 and sorted result outputting unit 306 repeat the above-described merging process. If there is no unprocessed sorted segment in the sorted string, the shared disk reading unit 304 notifies the merging unit 305 of the completion of that sorted string, and the merging unit 305 excludes the sorted segment of that sorted string from the record comparison, thereafter repeating the merging process for the remaining sorted strings. [0126]
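Putting the last two paragraphs together, the pointer-based merge with segment refill can be sketched as follows; in-memory lists again stand in for the disks, and refilling a segment models re-reading from the shared or local disk.

    def kway_merge(sorted_strings, segment_len=2):
        # Pointer-based merge over fixed-size sorted segments, refilling a
        # segment from its sorted string whenever the segment is exhausted.
        offsets = [0] * len(sorted_strings)            # next refill position
        segments = [s[:segment_len] for s in sorted_strings]
        pointers = [0] * len(sorted_strings)
        result = []
        while True:
            # Compare the records designated by the pointers of live segments.
            live = [i for i, seg in enumerate(segments) if pointers[i] < len(seg)]
            if not live:
                break
            i = min(live, key=lambda i: segments[i][pointers[i]])
            result.append(segments[i][pointers[i]])
            pointers[i] += 1
            if pointers[i] == len(segments[i]):        # segment fully output
                offsets[i] += len(segments[i])
                segments[i] = sorted_strings[i][offsets[i]:offsets[i] + segment_len]
                pointers[i] = 0                        # point at the new start
        return result

    assert kway_merge([[1, 4, 7], [2, 5], [3, 6, 8]]) == [1, 2, 3, 4, 5, 6, 7, 8]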
  • The output node repeats the sorted string reading process and the merging and sorted result outputting process until all the sorted strings stored in the shared disks 500 and local disk 900 connected to the output node have been processed. With the above processes, all the sorted strings stored in the shared disks 500 and local disk 900 connected to each output node are merged, and the sorting process for all the sorting target data stored in the sorting target data storage regions 920 of the local disks 900 of the input nodes is completed. The sorted result, distributed over the output nodes, is thus obtained in the sorted result storage region 930 of the local disk 900 of each output node. [0127]
  • Also in this sorting system, similar to the first and second sorting systems, since the sorted strings are stored in the shared disks 500, it is not necessary for the input nodes to execute the merging process. A high speed sorting process can therefore be realized by shortening the process time required at the input nodes. Furthermore, it is possible to suppress the influence of delay, hindrance and the like upon other tasks executable by the input nodes. Still further, since the sorting target data and sorted strings need not be transferred over the network from the input nodes which execute the internal sorting process to the output nodes, a high speed sorting process can be realized even if the network cannot operate at high speed. [0128]
  • In this sorting system, although the sorting target data is stored in the sorting target data storage region 920 of the local disk 900, it may be stored in other locations. For example, an input node may read via the network 710 sorting target data stored in a remote disk. Similarly, an output node may output the sorted result to a remote disk via the network 710. [0129]
  • Also in this system, the local disk 900 has the self-node storage region 910, sorting target data storage region 920, sorted result storage region 930 and the like. Instead, for example, each node 800 may have a plurality of local disks 900, each separately providing the self-node storage region 910, sorting target data storage region 920 or sorted result storage region 930. Further, although each shared disk 500 has the two regions A510 and B520, two shared disks 500 may instead be provided between a pair of nodes 800, with each of the two shared disks 500 stationarily assigned one unidirectional data path of the two-way data path. [0130]
  • Also in this system, the sorting target data is stored in the sorting target data storage region 920 of each local disk 900 as one file identified by its file name. Instead, for example, the sorting target data may be read by the node 800 from the local disk 900 by directly designating a logical or physical address on the local disk 900. A single node 800 may also store two or more files of sorting target data. [0131]
  • Also in this system, each shared disk 500 is connected to and accessed by two nodes 800. The regions A510 and B520 are, however, each stationarily assigned one unidirectional data transfer path of the two-way data path between the two nodes 800 connected to the shared disk 500. The input module 820 of the input node therefore stores (writes) a sorted string in only one of the two regions of the shared disk 500. Moreover, because the input module 820 of an input node and the output module 830 of an output node synchronize upon the merge instruction, the output module 830 reads a sorted segment from that region of the shared disk 500 only after writing has finished, so that read and write operations on the region are never performed at the same time. Exclusive control between the nodes connected to a shared disk is therefore not necessarily required; even without it, the above processes can be executed without any practical problem. [0132]
  • In the reading, internal sorting and sorted string storing processes for sorting target data, the input module 820 of each node 800 uses the buffer 810, and in the sorted string reading, merging and sorted result outputting processes, the output module 830 of each node 800 uses the buffer 810. However, these two groups of processes are temporally separated at each node 800 by the synchronization of the input and output modules 820 and 830 upon reception of the merge instruction notice. The buffer 810 at each node 800 is therefore never used by the input and output modules 820 and 830 at the same time, and can be shared by both modules without preparing separate buffer regions for them. [0133]
  • In the embodiments described above, the amount of sorting target data, sorted strings, sorted segments and the like to be processed is measured in units of records. This record unit is used in storing the sorting target data in the internal sorting buffer 110 and buffer 810, reading the sorted strings from the shared disk 500 and local disk 900, and storing the sorted segments in the merge buffer 310 and buffer 810. [0134]
  • In each of the embodiments described above, although a plurality of disks are used, a plurality of physical or logical disks constituting a disk array with a plurality of interfaces may also be used as the disks of the embodiment. A disk array manages a plurality of disks operating in parallel and constitutes a disk sub-system. By using the disk array, management of disks becomes easy and a higher speed system can be realized. [0135]
  • The sorting process, sorting method and system of each embodiment described above are applicable to generation and update (load) of databases of a database system. In many cases of database generation and update, the original data of the databases is sorted and then processed. If the amount of original data is large, it takes a long time to sort and load the original data, so that it takes a long time until the database system becomes usable and user convenience is degraded considerably. By applying the above sorting process, sorting method and system to the load process, it becomes possible to shorten the sorting and loading time and to considerably improve the time efficiency and availability of the database system. The sorting process, sorting method and system are particularly suitable for a large scale database dealing with a large amount of data, such as a data warehouse, OLAP (Online Analytical Processing) or decision support system, or for a parallel database system processing data in parallel by using a plurality of nodes. [0136]
  • As described so far, according to the present invention, it is possible to shorten the time required for a merging process when a plurality of computers of a computer system executes the merging process. The sorting process can therefore be executed at high speed. Furthermore, it is possible to suppress the influence of delay, hindrance and the like upon another task executable by the computer system. [0137]
  • Still further, it is possible to shorten the data transfer time of a network and a process time for the sorting process and realize the sorting process capable of being executed at high speed. [0138]

Claims (10)

What is claimed is:
1. A sorting system comprising:
a plurality of input nodes;
one output node; and
a shared external storage unit connected between each of said input nodes and said output node, wherein:
each of said input nodes comprises:
a first buffer;
means for storing sorting target data in said first buffer;
means for internally sorting data in said first buffer in accordance with a predetermined sorting rule; and
means for storing as a sorted string an internally sorted result in said shared external storage unit; and
said output node comprises:
a second buffer;
means for storing in said second buffer the sorted string stored in said shared external storage unit;
means for merging a plurality of sorted strings stored in said second buffer, in accordance with the predetermined sorting rule; and
means for outputting a merged result as a sorted result.
2. A sorting system comprising:
a plurality of input nodes;
a plurality of output nodes; and
a shared external storage unit connected between each of said input nodes and each of said output nodes, wherein:
each of said input nodes comprises:
a first buffer;
means for storing sorting target data in said first buffer;
means for internally sorting data in said first buffer in accordance with a predetermined sorting rule;
means for classifying data in said first buffer in accordance with a predetermined classification rule; and
means for storing as a sorted string an internally sorted result in said shared external storage unit, in accordance with the predetermined classification rule; and
each of said output nodes comprises:
a second buffer;
means for storing in said second buffer the sorted string stored in said shared external storage unit;
means for merging a plurality of sorted strings stored in said second buffer, in accordance with the predetermined sorting rule; and
means for outputting a merged result as a sorted result.
3. A sorting system comprising:
a plurality of nodes;
a dedicated external storage unit provided for each of said plurality of nodes; and
a shared external storage unit provided between each pair of nodes of said plurality of nodes, wherein:
at least one node among said plurality of nodes comprises:
a first buffer;
means for storing sorting target data in said first buffer;
means for internally sorting data in said first buffer in accordance with a predetermined sorting rule;
means for storing as a sorted string an internally sorted result in said dedicated external storage unit;
a second buffer;
means for storing in said second buffer the sorted string stored in said shared external storage unit and said dedicated external storage unit;
means for merging a plurality of sorted strings stored in said second buffer, in accordance with the predetermined sorting rule; and
means for outputting a merged result as a sorted result.
4. A sorting system comprising:
a plurality of nodes;
a dedicated external storage unit provided for each of said plurality of nodes; and
a shared external storage unit provided between each pair of nodes of said plurality of nodes, wherein:
at least one node among said plurality of nodes comprises:
a first buffer;
means for storing sorting target data in said first buffer;
means for internally sorting data in said first buffer in accordance with a predetermined sorting rule;
means for classifying data in said first buffer in accordance with a predetermined classification rule;
means for storing as a sorted string an internally sorted result in said dedicated external storage unit, in accordance with the predetermined classification rule;
a second buffer;
means for storing in said second buffer the sorted string stored in said shared external storage unit and said dedicated external storage unit;
means for merging a plurality of sorted strings stored in said second buffer, in accordance with the predetermined sorting rule; and
means for outputting a merged result as a sorted result.
5. A sorting system according to claim 4, wherein said first and second buffers are a same buffer.
6. A sorting method for a sorting system having a plurality of input nodes, one output node, and a shared external storage unit connected between each of the input nodes and the output node, the method comprising the steps of:
storing sorting target data in each of the input nodes and internally sorting the sorting target data in accordance with a predetermined sorting rule;
storing as a sorted string an internally sorted result in the shared external storage unit; and
reading the sorted string from the shared external storage unit, storing the sorted string in the output node, and merging the sorted string in accordance with the predetermined sorting rule.
7. A sorting method for a sorting system having a plurality of input nodes, a plurality of output nodes, and a shared external storage unit connected between each of the input nodes and each of the output nodes, the method comprising the steps of:
storing sorting target data in each of the input nodes;
classifying and internally sorting the read sorting target data in accordance with a predetermined classification rule and a predetermined sorting rule;
distributing and storing a classified and sorted result in the shared external storage units shared with the output nodes, in accordance with the predetermined classification rule;
performing a process from said storing step to said distributing and storing step for all sorting target data; and
thereafter reading the sorted string from the shared external storage units shared with the input nodes, storing the sorted string in each output node, and merging the sorted string in accordance with the predetermined sorting rule.
8. A sorting method for a sorting system having a plurality of nodes, a dedicated external storage unit provided for each of the plurality of nodes, and a shared external storage unit provided between each pair of nodes of the plurality of nodes, the method comprising the steps of:
storing sorting target data in each of the input nodes and internally sorting the sorting target data in accordance with a predetermined sorting rule;
storing as a sorted string an internally sorted result in the dedicated or shared external storage unit;
performing said internal sorting step and said sorted string storing step for all sorting target data; and
thereafter reading each sorted string from the dedicated and shared external storage units at one of the plurality of nodes and merging each sorted string in accordance with the predetermined sorting rule.
9. A sorting method for a sorting system having a plurality of nodes, a dedicated external storage unit provided for each of the plurality of nodes, and a shared external storage unit provided between each pair of nodes of the plurality of nodes, the method comprising the steps of:
storing sorting target data in at least one of the plurality of nodes;
classifying and internally sorting the read sorting target data in accordance with a predetermined classification rule and a predetermined sorting rule;
distributing and storing as a sorted string a classified and internally sorted result in the dedicated or shared external storage unit, in accordance with the classification rule;
performing a process from said storing step to said distributing and storing step for all sorting target data; and
thereafter reading each sorted string from the dedicated and shared external storage units at one of the plurality of nodes and merging each sorted string in accordance with the predetermined sorting rule.
10. A sorting method according to claim 9, wherein at least one node has a same buffer to be used for storing the sorting target data and storing the sorted string.
US09/366,320 1998-08-03 1999-08-02 Sorting system and method executed by plural computers for sorting and distributing data to selected output nodes Expired - Lifetime US6424970B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP10-219253 1998-08-03
JP21925398A JP3774324B2 (en) 1998-08-03 1998-08-03 Sort processing system and sort processing method

Publications (2)

Publication Number Publication Date
US20020065793A1 true US20020065793A1 (en) 2002-05-30
US6424970B1 US6424970B1 (en) 2002-07-23

Family

ID=16732631

Family Applications (1)

Application Number Title Priority Date Filing Date
US09/366,320 Expired - Lifetime US6424970B1 (en) 1998-08-03 1999-08-02 Sorting system and method executed by plural computers for sorting and distributing data to selected output nodes

Country Status (4)

Country Link
US (1) US6424970B1 (en)
EP (1) EP0978782B1 (en)
JP (1) JP3774324B2 (en)
DE (1) DE69913050T2 (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050144167A1 (en) * 2002-04-26 2005-06-30 Nihon University School Juridical Person Parallel merge/sort processing device, method, and program
US7032085B2 (en) 2003-12-25 2006-04-18 Hitachi, Ltd. Storage system with a data sort function
US20060101086A1 (en) * 2004-11-08 2006-05-11 Sas Institute Inc. Data sorting method and system
US7058952B1 (en) * 1998-04-16 2006-06-06 International Business Machines Corporation Technique for determining an optimal number of tasks in a parallel database loading system with memory constraints
US20110131033A1 (en) * 2009-12-02 2011-06-02 Tatu Ylonen Oy Ltd Weight-Ordered Enumeration of Referents and Cutting Off Lengthy Enumerations
US8949249B2 (en) 2010-06-15 2015-02-03 Sas Institute, Inc. Techniques to find percentiles in a distributed computing environment
US20150278299A1 (en) * 2014-03-31 2015-10-01 Research & Business Foundation Sungkyunkwan University External merge sort method and device, and distributed processing device for external merge sort
US20160188643A1 (en) * 2014-12-31 2016-06-30 Futurewei Technologies, Inc. Method and apparatus for scalable sorting of a data set
US10642901B2 (en) * 2014-12-12 2020-05-05 International Business Machines Corporation Sorting an array consisting of a large number of elements
US20230024024A1 (en) * 2021-07-22 2023-01-26 Disney Enterprises, Inc. Fully traceable and intermediately deterministic rule configuration and assessment framework

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
NO992269D0 * 1999-05-10 1999-05-10 Fast Search & Transfer Asa Search engine with two-dimensional scalable, parallel architecture
US7689623B1 (en) 2002-04-08 2010-03-30 Syncsort Incorporated Method for performing an external (disk-based) sort of a large data file which takes advantage of “presorted” data already present in the input
US20050050113A1 (en) * 2003-08-21 2005-03-03 International Business Machines Corporation System and method for facilitating data flow between synchronous and asynchronous processes
KR101167349B1 (en) * 2004-11-12 2012-07-19 에스티 에릭슨 에스에이 A power converter
JP5276392B2 (en) * 2007-09-21 2013-08-28 住友電気工業株式会社 Cutting tool and method of manufacturing cutting tool
US10089379B2 (en) * 2008-08-18 2018-10-02 International Business Machines Corporation Method for sorting data
US8311987B2 (en) * 2009-08-17 2012-11-13 Sap Ag Data staging system and method
WO2015180793A1 (en) * 2014-05-30 2015-12-03 Huawei Technologies Co.,Ltd Parallel mergesorting

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5084815A (en) * 1986-05-14 1992-01-28 At&T Bell Laboratories Sorting and merging of files in a multiprocessor
US5410689A (en) * 1991-06-13 1995-04-25 Kabushiki Kaisha Toshiba System for merge sorting that assigns an optical memory capacity to concurrent sort cells
US5794240A (en) * 1992-05-26 1998-08-11 Fujitsu Limited Multi-threaded sorting system for a data processing system
JP3202074B2 (en) * 1992-10-21 2001-08-27 富士通株式会社 Parallel sort method
JP3415914B2 (en) * 1993-10-12 2003-06-09 富士通株式会社 Parallel merge sort processing method
US5613085A (en) 1994-12-27 1997-03-18 International Business Machines Corporation System for parallel striping of multiple ordered data strings onto a multi-unit DASD array for improved read and write parallelism
US5671405A (en) * 1995-07-19 1997-09-23 International Business Machines Corporation Apparatus and method for adaptive logical partitioning of workfile disks for multiple concurrent mergesorts
US5852826A (en) * 1996-01-26 1998-12-22 Sequent Computer Systems, Inc. Parallel merge sort method and apparatus

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7058952B1 (en) * 1998-04-16 2006-06-06 International Business Machines Corporation Technique for determining an optimal number of tasks in a parallel database loading system with memory constraints
US7536432B2 (en) * 2002-04-26 2009-05-19 Nihon University School Juridical Person Parallel merge/sort processing device, method, and program for sorting data strings
US20050144167A1 (en) * 2002-04-26 2005-06-30 Nihon University School Juridical Person Parallel merge/sort processing device, method, and program
US7032085B2 (en) 2003-12-25 2006-04-18 Hitachi, Ltd. Storage system with a data sort function
US20060101086A1 (en) * 2004-11-08 2006-05-11 Sas Institute Inc. Data sorting method and system
US20080208861A1 (en) * 2004-11-08 2008-08-28 Ray Robert S Data Sorting Method And System
US7454420B2 (en) 2004-11-08 2008-11-18 Sas Institute Inc. Data sorting method and system
US20110131033A1 (en) * 2009-12-02 2011-06-02 Tatu Ylonen Oy Ltd Weight-Ordered Enumeration of Referents and Cutting Off Lengthy Enumerations
US8949249B2 (en) 2010-06-15 2015-02-03 Sas Institute, Inc. Techniques to find percentiles in a distributed computing environment
US20150278299A1 (en) * 2014-03-31 2015-10-01 Research & Business Foundation Sungkyunkwan University External merge sort method and device, and distributed processing device for external merge sort
US10642901B2 (en) * 2014-12-12 2020-05-05 International Business Machines Corporation Sorting an array consisting of a large number of elements
US11372929B2 (en) 2014-12-12 2022-06-28 International Business Machines Corporation Sorting an array consisting of a large number of elements
US20160188643A1 (en) * 2014-12-31 2016-06-30 Futurewei Technologies, Inc. Method and apparatus for scalable sorting of a data set
US20230024024A1 (en) * 2021-07-22 2023-01-26 Disney Enterprises, Inc. Fully traceable and intermediately deterministic rule configuration and assessment framework
US11734155B2 (en) * 2021-07-22 2023-08-22 Disney Enterprises, Inc. Fully traceable and intermediately deterministic rule configuration and assessment framework

Also Published As

Publication number Publication date
EP0978782B1 (en) 2003-11-26
DE69913050D1 (en) 2004-01-08
JP2000056947A (en) 2000-02-25
JP3774324B2 (en) 2006-05-10
US6424970B1 (en) 2002-07-23
DE69913050T2 (en) 2004-07-29
EP0978782A1 (en) 2000-02-09

Similar Documents

Publication Publication Date Title
US6424970B1 (en) Sorting system and method executed by plural computers for sorting and distributing data to selected output nodes
US4514826A (en) Relational algebra engine
US5640554A (en) Parallel merge and sort process method and system thereof
US5943683A (en) Data processing method using record division storing scheme and apparatus therefor
US5488717A (en) MTree data structure for storage, indexing and retrieval of information
US10831773B2 (en) Method and system for parallelization of ingestion of large data sets
US20030033278A1 (en) Data sort method, data sort apparatus, and data sort program
JP2000187668A (en) Grouping method and overlap excluding method
JPH02178730A (en) Internal sorting system using dividing method
US7890705B2 (en) Shared-memory multiprocessor system and information processing method
US8515976B2 (en) Bit string data sorting apparatus, sorting method, and program
JP3515810B2 (en) Sort processing method and apparatus
US5918231A (en) Object-oriented database management system with improved usage efficiency of main memory
CN109241058A (en) A kind of method and apparatus from key-value pair to B+ tree batch that being inserted into
US20220188316A1 (en) Storage device adapter to accelerate database temporary table processing
JP4620593B2 (en) Information processing system and information processing method
US10990575B2 (en) Reorganization of databases by sectioning
US11734282B1 (en) Methods and systems for performing a vectorized delete in a distributed database system
JP3151820B2 (en) Sorting method based on count classification using relative keys
JP4559971B2 (en) Distributed memory information processing system
JP4772506B2 (en) Information processing method, information processing system, and program
JPH0581337A (en) Data processor
US20220138338A1 (en) Data replacement apparatus, data replacement method, and program
JPH10307828A (en) Clustering processing method and its system
JPS6134626A (en) Method of executing external distribution and sorting

Legal Events

Date Code Title Description
AS Assignment

Owner name: HITACHI, LTD., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ARAKAWA, HIROSHI;YAMAMOTO, AKIRA;HONMA, SHIGEO;AND OTHERS;REEL/FRAME:010293/0743

Effective date: 19990719

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 8

AS Assignment

Owner name: GOOGLE INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HITACHI, LTD.;REEL/FRAME:030555/0554

Effective date: 20121016

FPAY Fee payment

Year of fee payment: 12

AS Assignment

Owner name: GOOGLE LLC, CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:GOOGLE INC.;REEL/FRAME:044127/0735

Effective date: 20170929