
Publication number: US 20100005331 A1
Publication type: Application
Application number: US 12/168,504
Publication date: Jan 7, 2010
Filing date: Jul 7, 2008
Priority date: Jul 7, 2008
Also published as: CA2730165A1, CN102165644A, CN102165644B, EP2311145A1, US8886985, WO2010005429A1
Inventors: Siva Somasundaram, Allen Yang
Original Assignee: Siva Somasundaram, Allen Yang
Automatic Discovery of Physical Connectivity Between Power Outlets and IT Equipment
US 20100005331 A1
Abstract
The invention relates generally to the field of power management in data centers and more specifically to the automatic discovery and association of connectivity relationships between power outlets and IT equipment, and to methods of operating data centers having automatic connectivity discovery capabilities.
Claims (10)
1. A method for operating a data center having a plurality of servers powered via a plurality of power supply outlets, the method comprising the steps of:
determining a first set of candidate power outlets for at least one of the servers connected to at least one of the power outlets;
collecting power consumption data for the candidate power outlets over time and central processing unit utilization data for the at least one server during an overlapping time; and
correlating the CPU utilization data to the power consumption data to determine a second set of candidate power supplies.
2. The method of claim 1 wherein the determining step includes the substep of determining candidate power outlets located within a specified distance from the at least one server.
3. The method of claim 1 further comprising the step of correlating the power consumption data to theoretical power consumption data for the at least one server.
4. The method of claim 1 wherein the collecting step includes the substep of specifying an IP address for the at least one server.
5. The method of claim 1 wherein the correlating step includes the substep of quantizing the CPU utilization data.
6. The method of claim 1 wherein the correlating step includes the substep of time-stamping the power consumption data and the CPU utilization data.
7. The method of claim 1 wherein the correlating step includes the substep of correlating state changes between the CPU utilization data and the power consumption data.
8. The method of claim 1 further comprising the step of correlating the power consumption data to theoretical power consumption data for the at least one server,
wherein the determining step includes the substep of determining candidate power outlets located within a distance from the at least one server,
wherein the collecting step includes the substep of specifying an IP address for the at least one server and
wherein the correlating step includes the substeps of
quantizing the CPU utilization data,
time-stamping the power consumption data and the CPU utilization data, and
correlating state changes between the quantized CPU utilization data and the power consumption data.
9. A system for automatically discovering the connectivity of servers to power outlets in a data center comprising:
a data collection module interfaced with power supply outlets and IT equipment, the data collection module operable to collect actual power usage for power supply outlets and CPU usage from IT equipment;
a data store having the information collected by the data collection module; and
a correlation engine operable to correlate the CPU usage data with actual power usage data to identify IT equipment connected to power supply outlets on a one-to-one basis.
10. A method for monitoring racks of IT equipment comprising the steps of:
aggregating CPU usage data for the data equipment in a database;
determining candidate power outlet and IT equipment connectivity pairs, including the substep of determining a maximum distance from a candidate server to a candidate power supply strip;
correlating CPU usage for a candidate IT server with actual power usage of a candidate power strip, including the substep of identifying state changes for the IT equipment;
and matching the state changes to the actual power usage profile.
Description
    CROSS-REFERENCE TO RELATED APPLICATIONS
  • [0001]
    This application relates to U.S. patent application Ser. No. 12/112,435, entitled, “System and Method for Efficient Association of a Power Outlet and Device,” filed on Apr. 30, 2008, and to U.S. patent application Ser. No. 12/044,530, entitled, “Environmentally Cognizant Power Management”, filed on Mar. 7, 2008, both of which are assigned to the same assignee and which are incorporated herein by reference.
  • TECHNICAL FIELD
  • [0002]
    The invention relates generally to the field of power management in data centers and more specifically to the automatic discovery of connectivity relationships between power outlets and IT equipment, and to methods of operating data centers having automatic connectivity discovery capabilities.
  • BACKGROUND
  • [0003]
    Intelligent power distribution devices offer enhanced power distribution and monitoring capabilities for certain sensitive electrical and electronic applications. An exemplary application where deployment of intelligent power distribution devices proves useful is the powering, at predefined schedules based on power management policies, of multiple computer servers involved in the provision of network services. Here, the ability to control and monitor power distribution is an invaluable tool for computer network operators and IT personnel, and for use in comprehensive power optimization.
  • [0004]
    One intelligent power device of the above-described type is the Dominion PX Intelligent Power Distribution Unit (IPDU), developed and sold by Raritan Corp. of Somerset, N.J. The Dominion PX IPDU offers increased operational and monitoring capabilities at each of the AC power outlets included in the device. Generally, these capabilities will include the ability to turn an outlet on and off, and also provide power consumption measurements for that outlet, among other features. It is desirable for the intelligent power device or equipment monitoring the intelligent power device to know what specific equipment is at the other end of a power cable plugged into each outlet of the intelligent power device.
  • [0005]
    Further, network administrators are often required to maintain the power connectivity topology of a data center. One method for maintaining a power connectivity topology is with a spreadsheet or in a centralized configuration database, which the network administrator updates from time to time. Other data center asset management systems are also available to track the physical power connectivity relationship, relying on manual input of physical connections using bar code readers and serial numbers on the nameplate. Once input, the data can be presented to topology rendering engines, which can present topologies as reports or as topology maps for intuitive visualization. In large data centers, which can contain thousands of servers, manually maintaining the data center power topology is a tedious and error-prone task.
  • [0006]
    Nevertheless, the importance of maintaining accurate and up-to-date power topologies is increasing in the field of network administration and management. As the cost of computing decreases, the cost of power usage by the data center becomes a cost-driver. Reducing power consumption is, therefore, an object of concern for network administrators. Likewise, recent green initiatives have provided incentive to reduce power usage in the data center. Organizations, such as Green Grid, publish data center energy efficiency metrics. Data centers measure themselves against these metrics in evaluating efficiency. All of these data center management requirements benefit from a highly accurate data center power topology.
  • [0007]
    Certain automatic topology discovery tools for networks are known. Tools such as ping, tracert, and mping reveal logical connectivity maps for networks; however, they do not provide for automatic discovery of physical connectivity between IT equipment and power outlets. At present, the only way to determine what equipment is associated with specific outlets of a power distribution device is to have that information manually entered.
  • SUMMARY OF THE INVENTION
  • [0008]
    A system and method according to the principles of the invention automatically discovers a physical connectivity topology for information technology (IT) equipment in a data center. The topology displays the connection between IT equipment and power outlets. A system according to the principles of the invention applies a set of heuristics to identify candidate power outlets for individual servers or other IT equipment. In one aspect, for a particular piece of equipment, the candidate outlets are selected based upon physical proximity to the IT equipment. These candidates are iteratively narrowed based upon theoretical power consumption data, actual power consumption data, CPU utilization, and correlation of state change events.
  • [0009]
    Physical location can be determined using various technologies, such as ultrasound sensing or RFID. This information can then be used to augment the physical connectivity between the server and power outlets. In a typical situation, the power consumption data as provided by the IT equipment vendors can be used to narrow candidate outlets by systematically comparing the outlets that fall within the operating range provided by the vendor. This name plate data typically exceeds the actual power consumption and may not narrow the candidate outlets to a conclusive mapping. In these cases, actual consumption data can further narrow the candidate outlets. CPU utilization data for the servers can be collected over a time interval and quantized to reduce noise and other artifacts. Actual power consumption over the same time period is collected from the candidate power outlets using an appropriate IPDU. Pattern matching between quantized CPU utilization and power consumption graphs identifies matches. Further, state changes reflected in power and CPU utilization data further narrow the candidate power outlets for given IT equipment. Quantized CPU utilization and power consumption data can also be used for these comparisons. Where heuristics narrow the candidates, but do not converge, the administrator can view utilization graphs and other data outputs to make subjective conclusions as to the best outlet candidate for a piece of IT equipment.
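    As a rough illustration of the quantization and pattern-matching step described above, the following sketch quantizes a CPU-utilization series and ranks candidate outlets by how well their measured power series track it. The function names, the quantization levels, and the use of a Pearson correlation score are illustrative assumptions, not the specific heuristics of the disclosure.

```python
import numpy as np

def quantize(series, levels=3):
    """Map a noisy utilization or power series onto a small number of discrete levels."""
    edges = np.linspace(np.min(series), np.max(series), levels + 1)[1:-1]
    return np.digitize(series, edges)

def rank_candidate_outlets(cpu_util, outlet_power):
    """Score each candidate outlet by how well its power draw tracks the server's CPU use.

    cpu_util:     1-D array of CPU utilization samples for one server
    outlet_power: dict mapping outlet id -> 1-D array of watt readings collected
                  over the same, time-aligned interval
    """
    q_cpu = quantize(cpu_util)
    scores = {oid: float(np.corrcoef(q_cpu, quantize(watts))[0, 1])
              for oid, watts in outlet_power.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```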
  • [0010]
    A system and method for providing automatic identity association between an outlet of an intelligent power distribution unit and a target device, such as a computer server, which is powered by that outlet, can include a power management unit or power distribution unit that implements data collection at the power outlet. The IT equipment's power requirement profiles prescribed by the equipment vendors as well as the actual usage patterns measured over time are correlated with power consumption patterns detected on the candidate power outlets. Further correlations are made between the time sequence of certain state changes on the IT equipment, such as server turn-on and turn-off, server computing workload changes, and virtual machine migration. These state changes can be detected by a monitoring system and are reflected in actual power utilization changes on the power outlets. The heuristic rules and indicators are applied iteratively until the candidate number of power outlets matches the number of power supply units on the IT equipment.
  • [0011]
    The discovery of physical connectivity topology according to the principles of the invention maintains a high degree of integrity. In addition to key indicators such as actual CPU utilization and power consumption, other indicators characteristic of the particular functionality of given IT equipment can further identify candidate power outlets. Furthermore, interfaces can be used to permit administrators to verify the power matching by actual inspection of CPU utilization and power consumption usage graphs for the IT equipment and the discovered power outlet.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • [0012]
    In the drawings:
  • [0013]
    FIG. 1 illustrates a system according to the principles of the invention;
  • [0014]
    FIG. 2 shows another system according to the principles of the invention;
  • [0015]
    FIG. 3 shows exemplary graphs for implementing aspects of heuristic rules according to the principles of the invention;
  • [0016]
    FIG. 4 shows other exemplary graphs for implementing aspects of heuristic rules according to the principles of the invention;
  • [0017]
    FIG. 5 shows an exemplary graph for a single intelligent power unit over a twenty-four hour period according to the principles of the invention;
  • [0018]
    FIG. 6 shows an exemplary graph of CPU and power utilization for a single intelligent power unit over a three hour period according to the principles of the invention;
  • [0019]
    FIG. 7 shows an exemplary graph of CPU utilization and a processed view of the data for a single intelligent power unit over a three hour period according to the principles of the invention;
  • [0020]
    FIG. 8 shows an exemplary histogram translation of PDU utilization at the socket level according to the principles of the invention;
  • [0021]
    FIG. 9 shows an exemplary flow diagram of the auto association framework according to the principles of the invention, and
  • [0022]
    FIG. 10 shows an exemplary flow diagram of an auto association algorithm according to the principles of the invention.
  • DETAILED DESCRIPTION OF THE DRAWINGS
  • [0023]
    FIG. 1 discloses a system 100 according to the principles of the invention. The system 100 includes N racks of IT equipment, of which three racks 102, 104, 106 are illustrated, of the type typically employed in a data center. These racks can hold any number of various types of IT equipment including servers, routers, and gateways. By way of example, rack 102 illustrates two vertically mounted power strips 114, 116, each of which includes eight power receptacles, and to which the power supplies of the IT equipment are physically connected. Other racks in the data center have similar power outlet units, which can be mounted in a variety of configurations.
  • [0024]
    In this exemplary system 100, these power strips are of the type that can provide power consumption data and other functionality, such as the Dominion PX IPDU provided by Raritan Corp. of Somerset, N.J. Alternatively, these units can be referred to as power distribution units or PDUs. These power distribution units provide TCP/IP access to power consumption data and outlet level switching, and can provide alerts via SNMP and email for events such as an exceeded threshold or on/off power cycling. PDUs integrate with a wide variety of KVM switch solutions, such as the Dominion KX2 and Paragon II KVM switches provided by Raritan Corp. Racks 104, 106 may be similarly equipped. PDUs are often highly configurable, and these exemplary power distribution units 114, 116 interface directly with a Power Manager 108. Power Manager 108 may be an element management system that can configure multiple IPDUs in the electrical power distribution network. The Power Manager can also collect the IT utilization information provided by the IPDU. The exemplary Power Manager 108 may be equipped to provide remote access to the Administrator 112 and can address power distribution units 114, 116 through Internet Protocols. The Power Manager 108 can be configured to discover and aggregate data in Database 110, which provides data for the heuristics applied according to the principles of the invention. As will be explained below, this data includes actual power consumption data, IT equipment specifications, CPU utilization data, theoretical power consumption data, and state change events on the IT equipment.
  • [0025]
    FIG. 2 illustrates another exemplary system 200 with a data center including N racks of IT equipment. Three racks 220, 222, 224 accessible to an Administrator 208 over an IP network 212 are disclosed for illustrative purposes. Rack 224 includes IT equipment as well as a power distribution unit having intelligent power capabilities. Among these capabilities are the gathering of data such as actual power consumption data at the outlet level. Racks 220, 222 are similarly equipped, and further include an environmental sensor 228 operable to sense environmental conditions in the data center. An optional power data aggregator 226 interfaces with power distribution units and aggregates data from the outlets. These several racks 220, 222, 224 are further equipped with sensors and circuitry for determining physical proximity to power outlets. The sensors mounted in the racks can be monitored by the IPDUs to infer the amount of power dissipation in terms of temperature rise. The amount of temperature rise directly correlates to the amount of power consumed by the server and thus can be used in the correlation. The system 200 includes an authentication server 214 and a remote access switch 204, such as the Dominion KX KVM over IP switch, which interfaces with the Administrator 208.
  • [0026]
    The switch 204 is further interconnected with a data store 202 for storing and retrieving data useful in determining the physical connectivity of IT equipment to power outlets. This data includes but is not limited to power failure reference signatures, theoretical power signatures, actual power signatures, actual power data and other associations. The power distribution manager 206 further interfaces with the KVM switch 204, providing the Administrator 208 with the ability to access, from a remote location, power distribution unit data from the various power distribution units located on racks 220, 222, 224. Another database 216 is accessible over the IP network 212 to store physical location data pertaining to IT equipment and power outlets. A change alert server 218 is also optionally connected and accessible over the KVM switch 204. In operation, data from the racks and power distribution units in the data center is collected and stored over the IP network and is selectively accessible to the Administrator 208. The power distribution manager and reporting equipment access the data and implement the methods according to the invention to identify physical connectivity between IT equipment and power outlets. The KVM switch 204 can be used to actively connect to the server to be associated, as this will increase utilization at the server. Administrators can use this KVM approach to improve the connectivity discovery on selected servers that may provide similar power signatures in regular operation.
  • [0027]
    In each of the above systems 100, 200, power distribution units and KVM switches and/or other administrator appliances or servers are programmed to collect data for storing in databases for later use and for applying correlation heuristics. The data acquired through the monitoring can be classified into two major categories. The first is time series information that provides the value of the data at any instant of time. The second is time-stamped events that affect both the IT and power systems. Examples of the latter include the reboot of the server machine and the startup of the server. Among the different data attributes useful to a correlation method according to the principles of the invention are data related to the theoretical power usage requirements of particular IT equipment, the actual power consumption data at particular power outlets as measured over time, actual CPU utilization data for servers in the data center collected over time, and physical distance relationships between identified servers and identified power outlets. In addition to this data, other useful characterizing data can be obtained and stored in the data stores. This data could include data characteristics for a particular type of IT equipment found in the data center. For example, email servers, web servers, routers and the like often have identifiable characteristics depending upon their particular usage in the data center, which include data related to temperature, CPU utilization, changes of state from on to off, and any other characteristic that may be identifying either alone or in combination with other server characteristics.
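    A minimal sketch of how the two categories of monitoring data described above might be represented in software; the class and field names are illustrative assumptions only.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Sample:
    timestamp: float          # epoch seconds when the reading was taken
    value: float              # e.g. watts for an outlet, percent for a CPU

@dataclass
class StateChangeEvent:
    timestamp: float          # epoch seconds when the event was observed
    source_id: str            # server or outlet identifier
    kind: str                 # e.g. "reboot", "startup", "vm_migration"

@dataclass
class MonitoredEntity:
    entity_id: str
    time_series: List[Sample] = field(default_factory=list)
    events: List[StateChangeEvent] = field(default_factory=list)
```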
  • [0028]
    A correlation engine can be implemented in either a power management unit, a general purpose computer, or a dedicated server accessible to the data stores to run the heuristics and to develop a connectivity map for the entire data center. As heuristics are applied, the number of outlets that can connect to a particular possible server is narrowed and in the general case converges to an identified outlet for the server. Where heuristics are applied but cannot reduce the possible candidates to a correspondence, the administrator may access graphical renderings of particular characteristics such as CPU utilization graphs, power consumption graphs, and the like to make a subjective assessment of the likelihood that a particular server is physically connected to a particular outlet. Databases and rendering engines can be implemented using known data structures and rendering software such that topologies of the data center's physical connectivity can be rendered.
  • [0029]
    Any particular heuristic is optional, and additional heuristic rules and indicators can be added to a process for identifying a physical connection between a server and an outlet. In one exemplary method, a set of power outlets is identified as the probable candidates for a particular IT device. These probable candidates can be based upon previously provided connectivity data, association clustering, physical location, or best guess candidates input by a data administrator. The additional information helps convergence by matching the likely set of unknowns as opposed to applying decisions to completely unknown sets of power and IT end points (pairs). With respect to these candidates, a set of heuristic rules is applied to attempt to map the IT equipment to a particular outlet or outlets. The heuristic process concludes when the number of candidate power outlets matches the number of power supply units on the IT equipment or when all heuristics are exhausted. In the case where all heuristics are exhausted, the administrator may make a subjective selection based upon viewing data of the remaining candidate outlets.
  • [0030]
    A number of indicators that can be used in the heuristic process include power usage name plate values, actual power consumption patterns, the time sequence of IT equipment state change events, and the physical location of the IT equipment in relation to the power outlets. So, for example, assuming a set of 20 candidate power outlets for a given piece of IT equipment, a subset is eliminated because those outlets are not within a certain physical distance of the IT equipment. This indicator leverages the typical practice of locating a server within a specified maximum distance of its outlet. The name plate information is used to group the servers by their average power consumption levels, and the pattern matching algorithm can match the selected subset of servers to determine the electrical power outlets only if the power values overlap. For example, if a power outlet has delivered M watts of power and the server has a maximum name plate power of N watts, and if M>>N, then there is no correlation between the power outlet in question and the server. Of the remaining candidate outlets, a heuristic is applied to identify and correlate actual CPU utilization with actual power consumption at the power outlet. This reduces the number of candidate outlets to an identified set. If it does not, then an additional heuristic is applied to determine actual state changes as reflected in CPU utilization graphs and power consumption graphs. Additional heuristics could be applied by analyzing IT utilization over a day with a histogram. The time series data can be transformed into other domains, such as the frequency or spatial domain, to improve the correlation within the context of power characteristics.
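    The narrowing described above can be viewed as successive filters over the candidate set. The sketch below applies a physical-distance filter followed by a name-plate power filter; the numeric limits and the margin used to decide that delivered power far exceeds the name-plate rating (M >> N) are assumptions chosen for illustration.

```python
def filter_by_distance(candidate_outlets, max_distance_m=3.0):
    """Keep only outlets within a specified maximum distance of the server under test.

    Each candidate carries a precomputed 'distance_m' from that server.
    """
    return [o for o in candidate_outlets if o["distance_m"] <= max_distance_m]

def filter_by_nameplate(candidate_outlets, nameplate_watts, margin=1.5):
    """Drop outlets whose delivered power far exceeds the server's name-plate rating.

    If an outlet delivered M watts and the server's maximum name-plate power is
    N watts with M >> N, that outlet cannot be the one feeding this server.
    """
    return [o for o in candidate_outlets
            if o["delivered_watts"] <= margin * nameplate_watts]
```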
  • [0031]
    In one aspect of the invention, a first set of candidate outlets for a particular server is identified through IP addressing. The number of IPDUs in the electrical distribution network can be discovered using different methods based on their capabilities. In the case of a Raritan DPX, the IPMI discovery will provide enough information about the presence and configuration of these units. Similarly, network management technologies provide capabilities to discover the server system details, including the network IP address, that can be used to monitor and measure the IT utilization over a network. Using the IP address, data is collected from servers and from power outlet units. The data is aggregated in the data store. The data collection methodologies available for the proposed invention include SNMP, IPMI, WMI and WS-MAN. All these standard management interfaces provide remote monitoring capabilities useful for this invention. The data is time-stamped so that power usage, CPU usage and events can be correlated between different candidate power outlets and different IT equipment.
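    A simplified sketch of the time-stamped collection loop implied above. The polling callables stand in for whatever SNMP, IPMI, WMI, or WS-MAN queries a given deployment actually uses; the names and the sampling interval are assumptions.

```python
import time

def collect(poll_cpu, poll_outlet_power, store, interval_s=60, iterations=60):
    """Poll CPU utilization and outlet power on a shared clock so samples can later be aligned.

    poll_cpu:          callable returning {server_ip: cpu_percent}
    poll_outlet_power: callable returning {outlet_id: watts}
    store:             list (or database adapter) receiving time-stamped rows
    """
    for _ in range(iterations):
        ts = time.time()
        for ip, cpu in poll_cpu().items():
            store.append({"ts": ts, "kind": "cpu", "id": ip, "value": cpu})
        for outlet, watts in poll_outlet_power().items():
            store.append({"ts": ts, "kind": "power", "id": outlet, "value": watts})
        time.sleep(interval_s)
```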
  • [0032]
    FIGS. 3A, 3B and 3C show three exemplary graphs 302, 304, 306 demonstrating one aspect of the heuristics that can be applied according to the principles of the invention. The graph 302 of FIG. 3A shows CPU utilization (Y axis) over time (X axis). The CPU utilization data is raw, unquantized data, and represents all cores in the candidate IT equipment under consideration. The unquantized data is somewhat noisy, and may be suboptimal for correlating with other data. Graph 304 of FIG. 3B shows the same data as quantized to remove artifacts and noise. In this example, the usage values are approximately quantized to integer values 1 and 2, although other quantization methods can be employed without departing from the principles of the invention. Here again the usage data corresponds to all cores for the candidate IT equipment. FIG. 3C graph 306 shows the actual power consumption of the candidate outlet over the same time period with time tracked using time stamps applied during data collection. There is an event change demonstrating a change in CPU utilization, as shown by arrows 308 and 310. Likewise, in the power consumption graph 306, the data reveals a power spike at 312. This spike 312 potentially correlates with the events in core utilization 308, 310 for the unquantized and quantized graphs. Time stamp comparison of the events is another data indicator that can be used to correlate this candidate IT equipment to the candidate power outlet.
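    One way to make the time-stamp comparison of FIG. 3 concrete is to detect step changes in each series and check whether they occur within a small window of one another. The thresholds and the matching window in this sketch are illustrative assumptions.

```python
import numpy as np

def change_times(timestamps, values, threshold):
    """Return the timestamps at which the series jumps by more than `threshold`."""
    jumps = np.abs(np.diff(values))
    return [timestamps[i + 1] for i in np.where(jumps > threshold)[0]]

def events_align(cpu_ts, cpu_vals, pwr_ts, pwr_vals,
                 cpu_threshold=0.5, pwr_threshold=20.0, window_s=120.0):
    """True if every detected CPU state change has a power change within `window_s` seconds."""
    cpu_events = change_times(cpu_ts, cpu_vals, cpu_threshold)
    pwr_events = change_times(pwr_ts, pwr_vals, pwr_threshold)
    return all(any(abs(c - p) <= window_s for p in pwr_events) for c in cpu_events)
```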
  • [0033]
    FIG. 4 shows exemplary utilization data graphs 402, 404 and corresponding histograms 406, 408, which can be used to correlate candidate power outlets to IT equipment in the heuristics according to the principles of the invention. Graph 402 represents raw utilization data for all cores of a piece of IT equipment over a whole day, where the utilization values range from approximately zero to approximately 100. The raw utilization data is not easily mined for indicators that can be used to correlate to candidate power outlets. The utilization histogram 406 categorizes the utilization based upon the frequency of the utilization at particular selected values. The histogram, therefore, depicts how often the IT equipment was used at a particular level over a given period.
  • [0034]
    Graph 406 details how often the processor cores of given IT equipment switch to different utilization levels. In this example, the graph 404 is obtained by decimating raw utilization data by two over a given period. Because the graph 404 shows changes to lower or higher utilization from a current utilization status, the graph is normalized around zero on the vertical axis. Histogram 408 is an analysis showing frequency of utilization change on the X axis versus frequency of usage on the Y axis. This data can be used in the correlation techniques of the invention by preparing similar graphical histograms and spectra for candidate power outlets and then examining them using computer-implemented power matching or manually if necessary.
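    The histogram comparison described for FIG. 4 might be approximated as follows; the bin count and the chi-square-style distance are assumptions made for illustration, and in practice each series would be binned over its own value range before comparison.

```python
import numpy as np

def utilization_histogram(values, bins=20):
    """Normalized frequency of operation at each level over the sampled period."""
    hist, _ = np.histogram(values, bins=bins, density=True)
    return hist

def histogram_distance(h1, h2, eps=1e-9):
    """Chi-square-style distance; smaller means the two utilization profiles look more alike."""
    return float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))
```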
  • [0035]
    FIG. 5 shows a power utilization graph of a single Dominion PX over a twenty-four hour period. Data 502 indicates that the power utilization of one of the sockets dropped to zero at a particular time 501, which corresponds to the CPU utilization being zero (or not available). If events such as power recycling and shutdown are not simultaneously present (as they are not in FIG. 5), then there is a low probability of achieving correlation based on events. Available PDUs are not currently equipped with an event logging feature for their individual sockets. A PDU according to the principles of the invention extends such logging for the purposes of correlating events between servers and PDU sockets. Because the order of power recycling can be controlled, the delays required to associate servers with PDU sockets are achievable.
  • [0036]
    FIG. 6 shows an example of CPU and power utilization for a three-hour period. Data 601 represents the CPU utilization over the three-hour period. In this exemplary embodiment, the CPU utilization is the sum over all processor cores in the server, which has four cores, so the total value is divided by four to represent utilization as a percentage. Data 602 represents the power utilization for the server over the same period as logged by a PDU. As can be seen from data 601 and 602, both the CPU utilization and power steadily increase over time. As seen from data 602, the server consumes an average of 178 Watts for the average CPU utilization of 27.90% indicated by data 601.
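    A small worked example of the per-core normalization mentioned above, assuming a four-core server with illustrative raw readings:

```python
core_utilization = [30.0, 25.0, 28.0, 28.6]         # percent busy for each of four cores (illustrative values)
total = sum(core_utilization)                        # 111.6 on a 0-400 scale when four cores are summed
server_utilization = total / len(core_utilization)   # 27.9 percent, comparable to the ~27.90% average in FIG. 6
```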
  • [0037]
    FIG. 7 shows an example of CPU utilization and a corresponding histogram of processed data, emphasizing the low utilization of the server. Data 701 shows the server activity and how active the server is over a given period of time. According to the principles of the current invention, and as can be seen from data 701, the transformation of the time series information from the server utilization or PDU can be useful when correlating based on data values. Data 702 is exemplary of the histogram-based approach for converting the time series data 701 into a utilization context. Histogram data 702 may be correlated with a histogram of PDU utilization in accordance with the principles of the present invention.
  • [0038]
    FIG. 8 shows exemplary PDU utilization of a single outlet and the corresponding histogram view. Data 801 represents the power in watts of a given power outlet over a given period of time. As indicated, the average power at the outlet is 137.27 watts. Data 802, a histogram translation of PDU utilization at the socket level, indicates that the majority of power activity at the socket corresponds to the average consumed power over the same period.
  • [0039]
    FIG. 9 shows an exemplary flow diagram 900 of the heuristic auto-association framework in accordance with an embodiment of the present invention. Once started, step 901 retrieves environmental components of the system. Specifically, at step 901, the auto-association framework gathers configuration information regarding the servers and PDUs in the system and downloads that configuration information for storage in step 902. Step 903 determines if all configuration information has been collected. If there is additional configuration information to gather, steps 901 and 902 are repeated until the process is complete. During step 904, the utilization measurements from the identified servers and PDUs are collected and stored in a database at step 906. Steps 904 and 906 will be repeated until terminated by a user in step 905.
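    A high-level sketch of the FIG. 9 framework as described above: a configuration-discovery loop (steps 901-903) followed by a measurement loop (steps 904-906) that runs until terminated. All of the callables are placeholders, since the framework's actual interfaces are not specified here.

```python
def discovery_framework(discover_next_component, store_config,
                        poll_measurements, store_measurements, stop_requested):
    """Gather server/PDU configuration, then collect utilization until told to stop.

    Every argument is a placeholder callable; the names are assumptions for this sketch.
    """
    # Steps 901-903: walk the environment until all configuration information is captured.
    while True:
        component = discover_next_component()
        if component is None:
            break
        store_config(component)
    # Steps 904-906: collect and store utilization measurements until terminated (step 905).
    while not stop_requested():
        store_measurements(poll_measurements())
```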
  • [0040]
    FIG. 10 shows an exemplary flow diagram 1000 of a heuristic auto-association algorithm in accordance with the principles of the invention. In step 1001, the system determines if server asset information is available for analysis. If the information is available, then the data is filtered at step 1002 based on the server maximum and average power information. The filtered information from step 1002, as well as the utilization data stored in the database of step 906 of FIG. 9, is passed along for analysis at step 1003. During step 1003, the metrics derived from the server and PDU utilization data (i.e., sum, histogram, max., and min.) are computed. Similarly, at step 1004, an event analysis is performed to detect the timing of specific events on the various PDUs and servers and to group them based on relative occurrences. This may be based on server asset information from the various server manufacturers as supplied by database 1011 and input into step 1004 to further this analysis. The analyzed data from steps 1003 and 1004 are passed through first level heuristics at step 1005. During step 1005, servers and PDUs are grouped into pairs based on the data and/or event matching. During step 1006, it is determined whether a pairing from step 1005 is a correct association between server and PDU. If it is determined to be correct, the information is passed on to a server and PDU association database and stored in step 1007. If step 1006 determines that the server-PDU association of step 1005 is not correct, then the process moves to step 1008 to further classify the server-PDU pair with second level metrics (i.e., detail wavelets, processor characteristics, quantization, etc.). Step 1009 performs higher-level heuristics and attempts to group the servers and PDU devices based on the second level metrics and classifications. If it is determined in step 1006 that the association is correct, then the server-PDU association information is stored in the database at step 1007. Once it is determined, via step 1012, that all servers have been associated with PDUs, the algorithm exits.
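    A high-level sketch of the two-level association flow of FIG. 10. Which first-level and second-level metrics are used, and how a pairing is judged decisive, are left as placeholder callables; the structure below is an assumption about how the grouping and fallback might be organized.

```python
def auto_associate(servers, outlets, first_metric, second_metric, decisive):
    """Pair servers with outlets, refining only the pairs the first pass cannot settle.

    servers, outlets:             hashable identifiers; the metric callables look up their data
    first_metric, second_metric:  callables scoring a (server, outlet) pair, higher is better
    decisive:                     callable deciding whether a best score clearly beats the rest
    """
    associations, unresolved = {}, []
    for srv in servers:
        scores = sorted(((first_metric(srv, o), o) for o in outlets), reverse=True)
        best_score, best_outlet = scores[0]
        if decisive(best_score, [s for s, _ in scores[1:]]):
            associations[srv] = best_outlet   # analogous to storing in the association database (step 1007)
        else:
            unresolved.append(srv)            # fall through to second level metrics (steps 1008-1009)
    for srv in unresolved:
        associations[srv] = max(outlets, key=lambda o: second_metric(srv, o))
    return associations
```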
  • [0041]
    These and other aspects of the invention can be implemented in existing power management topologies. Data acquisition capabilities for aggregating CPU utilization, actual power utilization, name plate specifications, and other data are currently known and in use. The data related to the assets can be acquired from the vendor list or can be imported from enterprise asset management tools. Basic data schemes, including tables or hierarchical data structures, may be used to aggregate the data. The heuristic process can be implemented on a general purpose computer or as a separate functionality within existing power management units. Rendering engines with front end interface capabilities for rendering graphs and/or interfaces are also known within the art.
Classifications
U.S. Classification: 713/340, 713/300
International Classification: G06F1/26, G06F11/30
Cooperative Classification: H02J3/14, G06F1/26, G06F1/3203, G06F1/28, G06F11/3093, G06F11/3062, G06F11/3051
European Classification: G06F11/30E1, G06F11/30S1, G06F11/30C, G06F1/28, G06F1/32P
Legal Events
Date / Code / Event / Description
Jul 8, 2008 / AS / Assignment
Owner name: RARITAN AMERICAS, INC., NEW JERSEY
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SOMASUNDARAM, SIVA;YANG, ALLEN;REEL/FRAME:021203/0125;SIGNING DATES FROM 20080703 TO 20080707
Owner name: RARITAN AMERICAS, INC., NEW JERSEY
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SOMASUNDARAM, SIVA;YANG, ALLEN;SIGNING DATES FROM 20080703 TO 20080707;REEL/FRAME:021203/0125
May 11, 2012 / AS / Assignment
Owner name: WELLS FARGO BANK, NATIONAL ASSOCIATION (SUCCESSOR
Free format text: AMENDMENT NO. 1 TO PATENT SECURITY AGREEMENT;ASSIGNORS:RARITAN AMERICAS, INC.;RARITAN, INC.;RIIP, INC.;AND OTHERS;REEL/FRAME:028192/0318
Effective date: 20120430
Sep 10, 2012 / AS / Assignment
Owner name: PNC BANK, NATIONAL ASSOCIATION, PENNSYLVANIA
Free format text: SECURITY AGREEMENT;ASSIGNORS:RARITAN, INC.;RARITAN AMERICAS, INC.;RARITAN TECHNOLOGIES, INC.;AND OTHERS;REEL/FRAME:028924/0527
Effective date: 20120907
Owner name: RARITAN AMERICAS, INC., NEW JERSEY
Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:WELLS FARGO BANK, NATIONAL ASSOCIATION;REEL/FRAME:028924/0272
Effective date: 20120907
Owner name: RARITAN, INC., NEW JERSEY
Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:WELLS FARGO BANK, NATIONAL ASSOCIATION;REEL/FRAME:028924/0272
Effective date: 20120907
Owner name: RIIP, INC., NEW JERSEY
Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:WELLS FARGO BANK, NATIONAL ASSOCIATION;REEL/FRAME:028924/0272
Effective date: 20120907
Oct 8, 2015 / AS / Assignment
Owner name: RARITAN AMERICAS, INC.,, NEW JERSEY
Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:PNC BANK NATIONAL ASSOCIATION;REEL/FRAME:036819/0205
Effective date: 20151008
Owner name: RIIP, INC., NEW JERSEY
Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:PNC BANK NATIONAL ASSOCIATION;REEL/FRAME:036819/0205
Effective date: 20151008
Owner name: RARITAN INC, NEW JERSEY
Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:PNC BANK NATIONAL ASSOCIATION;REEL/FRAME:036819/0205
Effective date: 20151008
Owner name: RARITAN TECHNOLOGIES, INC.,, NEW JERSEY
Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:PNC BANK NATIONAL ASSOCIATION;REEL/FRAME:036819/0205
Effective date: 20151008