Publication number: US 20100220622 A1
Publication type: Application
Application number: US 12/714,480
Publication date: Sep 2, 2010
Filing date: Feb 27, 2010
Priority date: Feb 27, 2009
Also published as: EP2401841A2, EP2401841A4, US8209415, US20100223378, WO2010099513A2, WO2010099513A3, WO2010099514A2, WO2010099514A3
Inventors: Coach Wei
Original Assignee: Yottaa Inc
Adaptive network with automatic scaling
US 20100220622 A1
Abstract
A method for automatically scaling the processing capacity and bandwidth capacity of a network includes providing a network comprising a plurality of traffic processing units and a plurality of network links, providing monitoring means for monitoring processing capacity demand and bandwidth capacity demand of the network, and providing managing means for adding traffic processing units to the network, removing traffic processing units from the network, connecting links to the network and disconnecting links from the network. The method further includes monitoring processing capacity demand and bandwidth capacity demand of the network via the monitoring means, and dynamically adjusting the processing capacity of the network by selectively adding or removing traffic processing units in the network via the managing means upon observation of a processing capacity demand increase or decrease, respectively. The method also includes dynamically adjusting the bandwidth capacity of the network by selectively connecting or disconnecting links in the network via the managing means upon observation of a bandwidth capacity demand increase or decrease, respectively.
Claims (20)
1. A method for automatic scaling of processing capacity and bandwidth capacity of a network comprising:
providing a network comprising a plurality of traffic processing units and a plurality of network links;
providing monitoring means for monitoring processing capacity demand and bandwidth capacity demand of said network;
providing managing means for adding traffic processing units to said network, removing traffic processing units from said network, connecting links to said network and disconnecting links from said network;
monitoring processing capacity demand and bandwidth capacity demand of said network via said monitoring means;
dynamically adjusting processing capacity of said network by selectively adding or removing traffic processing units in said network via said managing means upon observation of processing capacity demand increase or processing capacity demand decrease, respectively; and
dynamically adjusting bandwidth capacity of said network by selectively connecting or disconnecting links in said network via said managing means upon observation of bandwidth capacity demand increase or bandwidth capacity demand decrease, respectively.
2. The method of claim 1, wherein said traffic processing units comprise virtual machines.
3. The method of claim 2, wherein said virtual machines comprise virtual computing instances provided by commercial cloud computing providers.
4. The method of claim 1, wherein said traffic processing units comprise physical machines.
5. The method of claim 1, wherein said network comprises an overlay network superimposed over an underlying network.
6. The method of claim 5, wherein said network links comprise network links of said underlying network.
7. The method of claim 5, wherein said underlying network comprises one of the Internet, a WAN, a wireless network, or a private network.
8. The method of claim 1, wherein said traffic processing units are distributed at different geographic locations.
9. The method of claim 1, wherein said traffic processing units are added or removed via an Application Programming Interface (API).
10. The method of claim 1, wherein said traffic processing units comprise specially designed traffic processing hardware and general purpose computers running specially designed traffic processing software, and wherein said traffic processing hardware comprises at least one of a router, a switch, or a hub.
11. A system for automatic scaling of processing capacity and bandwidth capacity of a network comprising:
a network comprising a plurality of traffic processing units and a plurality of network links;
monitoring means for monitoring processing capacity demand and bandwidth capacity demand of said network;
managing means for adding traffic processing units to said network, removing traffic processing units from said network, connecting links to said network and disconnecting links from said network;
wherein said monitoring means monitor processing capacity demand and bandwidth capacity demand of said network and provide processing capacity demand information and bandwidth capacity demand information to said managing means;
wherein said managing means dynamically adjust said processing capacity of said network by selectively adding or removing traffic processing units in said network upon receiving information of processing capacity demand increase or processing capacity demand decrease, respectively; and
wherein said managing means dynamically adjust bandwidth capacity of said network by selectively connecting or disconnecting links in said network upon receiving information of bandwidth capacity demand increase or bandwidth capacity demand decrease, respectively.
12. The system of claim 11, wherein said traffic processing units comprise virtual machines.
13. The system of claim 12, wherein said virtual machines comprise virtual computing instances provided by commercial cloud computing providers.
14. The system of claim 11, wherein said traffic processing units comprise physical machines.
15. The system of claim 11, wherein said network comprises an overlay network superimposed over an underlying network.
16. The system of claim 15, wherein said network links comprise network links of said underlying network.
17. The system of claim 15, wherein said underlying network comprises one of the Internet, a WAN, a wireless network, or a private network.
18. The system of claim 11, wherein said traffic processing units are distributed at different geographic locations.
19. The system of claim 11, wherein said traffic processing units are added or removed via an Application Programming Interface (API).
20. The system of claim 11, wherein said traffic processing units comprise specially designed traffic processing hardware and general purpose computers running specially designed traffic processing software, and wherein said traffic processing hardware comprises at least one of a router, a switch, or a hub.
Description
    CROSS REFERENCE TO RELATED CO-PENDING APPLICATIONS
  • [0001]
    This application claims the benefit of U.S. provisional application Ser. No. 61/156,069 filed on Feb. 27, 2009 and entitled METHOD AND SYSTEM FOR COMPUTER CLOUD MANAGEMENT, which is commonly assigned and the contents of which are expressly incorporated herein by reference.
  • [0002]
    This application claims the benefit of U.S. provisional application Ser. No. 61/165,250 filed on Mar. 31, 2009 and entitled CLOUD ROUTING NETWORK FOR BETTER INTERNET PERFORMANCE, RELIABILITY AND SECURITY, which is commonly assigned and the contents of which are expressly incorporated herein by reference.
  • FIELD OF THE INVENTION
  • [0003]
    The present invention relates to network design and management and in particular to a system and a method for an adaptive network with automatic capacity scaling in response to load demand changes.
  • BACKGROUND OF THE INVENTION
  • [0004]
    Networking changed the information technology industry by enabling different computing systems to communicate, collaborate and interact. There are many types of networks. The Internet is probably the biggest network on earth. It connects millions of computers all over the world. Wide Area Networks (WAN) are networks that are typically used to connect the computer systems of a corporation located in different geographies. Local Area Networks (LAN) are networks that typically provide connectivity in an office environment.
  • [0005]
    The purpose of a network is to enable communications between the systems connected to it by delivering information from its source to its destination. To fulfill this mission, the network itself needs sufficient processing capacity and bandwidth capacity to deliver traffic and to perform various processing tasks, including determining an appropriate route for the traffic to travel, handling errors and failures, and enforcing the necessary security measures, among others.
  • [0006]
    A typical network includes two types of components: traffic processing components and connectivity components. Traffic processing components include the various types of networking devices such as routers, switches and hubs, among others. The connectivity components, typically called “links”, interconnect two processing components or end points. There are many ways to classify network links. Physical network links include those over Ethernet cable, wireless connectivity, satellite connectivity, optical fiber connections, dial-up phone lines and so on. Virtual network links are logical links formed between two entities and may include many physical links as well as various processing components along the way. The combined processing capacity of the traffic processing components of a network determines the network's processing capacity. The bandwidth capacity of the various links together ultimately determines the bandwidth capacity of the network.
  • [0007]
    FIG. 1 shows a typical network 90 with many traffic processing components 105, 115, 125, 135 labeled as “router” as well as many links 101, 111, 121, 131, 141, 151. Through this network 90, traffic is sent from source 100 to destination 150. When designing and managing a network, it is crucial to provision sufficient capacity. When there is not enough capacity, problems ranging from slowness and congestion to packet loss and malfunctioning occur.
  • [0008]
    In the prior art, network design and management are based on a fixed amount of capacity provisioned beforehand. One acquires all the necessary hardware and software components, configures them, and then builds connectivity between them. This fixed infrastructure provides a fixed amount of capacity. The problems of such approaches include high acquisition cost and over-provisioning or under-provisioning of capacity. Acquiring all the traffic processing components and setting up the links upfront can be very expensive for a large-scale network; the cost can range from millions of dollars to far more. An example is the Internet itself, which cost billions of dollars to build and into which millions of dollars are still being invested to improve capacity. An important characteristic of a network is that traffic demand varies: peak demand can be several hundred percent of average demand, or even higher. In order to meet peak demand, the capacity of the network has to be over-provisioned. For example, a rule of thumb in designing a network is to provision 3-5 times the capacity of its normal demand. Such over-provisioning is necessary for the network to function properly and to meet its service agreements. However, normal bandwidth demand and processing demand are significantly lower than peak demand; it is not unusual for a typical network's utilization rate to be only 20%. Thus a significant portion of capacity is wasted. For large-scale networks, such waste is significant and ranges from thousands of dollars to millions of dollars or more. Further, such over-provisioning creates a significant carbon footprint: today's telecommunication networks are responsible for 1% to 5% of the global carbon footprint, and this percentage has been rising rapidly with the growth and adoption of information technology. FIG. 1A shows the discrepancy for typical networks between the provisioned capacity and the actual capacity demand. Because prior art networks are based on fixed capacity, service suffers when capacity demand overwhelms the fixed capacity, and waste occurs when demand is below the provisioned capacity.
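    By way of illustration, the following sketch (an editorial example, not part of the original disclosure) works out the average utilization implied by the rule of thumb above: provisioning 3-5 times normal demand corresponds to roughly 20-33% average utilization.
```python
# Average utilization of a fixed-capacity network provisioned at a
# multiple of its normal (average) demand.
def average_utilization(overprovision_factor: float) -> float:
    """Fraction of provisioned capacity actually used on average."""
    return 1.0 / overprovision_factor


for factor in (3.0, 5.0):
    print(f"{factor:.0f}x over-provisioning -> "
          f"{average_utilization(factor):.0%} average utilization")
# 3x -> 33% average utilization; 5x -> 20%, consistent with the
# ~20% utilization rate cited above.
```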
  • [0009]
    Thus there is an unfulfilled need for new approaches to building and managing a network that eliminate the expensive upfront costs, reduce capacity waste, and improve utilization efficiency.
  • SUMMARY OF THE INVENTION
  • [0010]
    In general, in one aspect, the invention features a method for automatically scaling the processing capacity and bandwidth capacity of a network. The method includes providing a network comprising a plurality of traffic processing units and a plurality of network links, providing monitoring means for monitoring processing capacity demand and bandwidth capacity demand of the network, and providing managing means for adding traffic processing units to the network, removing traffic processing units from the network, connecting links to the network, and disconnecting links from the network. The method further includes monitoring processing capacity demand and bandwidth capacity demand of the network via the monitoring means, and dynamically adjusting the processing capacity of the network by selectively adding or removing traffic processing units in the network via the managing means upon observation of a processing capacity demand increase or decrease, respectively. The method also includes dynamically adjusting the bandwidth capacity of the network by selectively connecting or disconnecting links in the network via the managing means upon observation of a bandwidth capacity demand increase or decrease, respectively.
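    The monitor-and-adjust loop described above can be summarized in the following sketch (an editorial illustration only; the interface names and the 50% scale-down threshold are assumptions, not part of the disclosure):
```python
# Minimal sketch of the claimed monitor-and-adjust loop.
from dataclasses import dataclass


@dataclass
class Demand:
    processing: float   # e.g. requests/sec the network must handle
    bandwidth: float    # e.g. Mbps the network must carry


class MonitoringMeans:
    """Monitoring means: reports observed demand (implementation elsewhere)."""
    def current_demand(self) -> Demand:
        raise NotImplementedError


class ManagingMeans:
    """Managing means: adds/removes TPUs and connects/disconnects links."""
    def add_tpu(self) -> None: ...
    def remove_tpu(self) -> None: ...
    def connect_link(self) -> None: ...
    def disconnect_link(self) -> None: ...


def scale_once(monitor: MonitoringMeans, manager: ManagingMeans,
               processing_capacity: float, bandwidth_capacity: float) -> None:
    """One iteration: adjust capacity toward observed demand."""
    demand = monitor.current_demand()
    if demand.processing > processing_capacity:
        manager.add_tpu()             # demand increase -> add processing units
    elif demand.processing < 0.5 * processing_capacity:
        manager.remove_tpu()          # demand decrease -> remove processing units
    if demand.bandwidth > bandwidth_capacity:
        manager.connect_link()        # demand increase -> connect links
    elif demand.bandwidth < 0.5 * bandwidth_capacity:
        manager.disconnect_link()     # demand decrease -> disconnect links
```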
  • [0011]
    Implementations of this aspect of the invention may include one or more of the following. The traffic processing units include specially designed traffic processing hardware, such as routers, switches, and hubs, among others. The traffic processing units also include general purpose computers running specially designed traffic processing software. The traffic processing units utilize virtual machines and physical machines. The virtual machines are based on virtualization technology including VMware, Xen and Microsoft Virtualization. The virtual machines are virtual computing instances provided by commercial cloud computing providers. The cloud computing providers include Amazon.com's EC2, RackSpace, SoftLayer, AT&T, GoGrid, Verizon, Fujitsu, Voxel, Google, Microsoft, FlexiScale, among others. The network is an overlay network superimposed over an underlying network. The network links are virtual network links of the underlying network. The underlying network may be the Internet, a WAN, a wireless network or a private network. The traffic processing units are distributed at different geographic locations. The traffic processing units are added or removed via an Application Programming Interface (API).
  • [0012]
    In general, in another aspect, the invention features a system for automatic scaling of the processing capacity and bandwidth capacity of a network. The system includes a network comprising a plurality of traffic processing units and a plurality of network links, monitoring means for monitoring processing capacity demand and bandwidth capacity demand of the network, and managing means for adding traffic processing units to the network, removing traffic processing units from the network, connecting links to the network and disconnecting links from the network. The monitoring means monitor processing capacity demand and bandwidth capacity demand of the network and provide processing capacity demand information and bandwidth capacity demand information to the managing means. The managing means dynamically adjust the processing capacity of the network by selectively adding or removing traffic processing units in the network upon receiving information of a processing capacity demand increase or decrease, respectively. The managing means also dynamically adjust the bandwidth capacity of the network by selectively connecting or disconnecting links in the network upon receiving information of a bandwidth capacity demand increase or decrease, respectively.
  • [0013]
    Among the advantages of the invention may be one or more of the following. The network system is adaptive, so that it always “provisions” optimal capacity in response to demand, eliminating capacity waste without sacrificing service quality, as shown in FIG. 2B. The network system is horizontally scalable: its capacity increases linearly simply by adding more traffic processing nodes to the system. It is also fault-tolerant; failure of individual components within the system does not cause system failure. In fact, the system treats component failures as common occurrences and is able to run on commodity hardware to deliver high performance and high availability services.
  • [0014]
    The details of one or more embodiments of the invention are set forth in the accompanying drawings and description below. Other features, objects and advantages of the invention will be apparent from the following description of the preferred embodiments, the drawings and from the claims.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • [0015]
    FIG. 1 shows the current Internet routing (prior art);
  • [0016]
    FIG. 1A is a graph of the network capacity demand versus time in a prior art network with fixed capacity;
  • [0017]
    FIG. 2 shows a cloud routing network of the present invention;
  • [0018]
    FIG. 2A shows the global locations of a geographically distributed network;
  • [0019]
    FIG. 2B is a graph of the network capacity demand versus time in an adaptive network that changes its capacity based on demand;
  • [0020]
    FIG. 3 shows the functional blocks of the cloud routing system of FIG. 2;
  • [0021]
    FIG. 4 shows the traffic processing pipeline in the cloud routing network of FIG. 2;
  • [0022]
    FIG. 5 shows the cloud routing workflow of the present invention;
  • [0023]
    FIG. 6 shows the process of network capacity auto-scaling and route convergence of the present invention;
  • [0024]
    FIG. 7 shows the node management workflow of the present invention;
  • [0025]
    FIG. 8 shows various components in a cloud routing network;
  • [0026]
    FIG. 9 shows a traffic management unit (TMU); and
  • [0027]
    FIG. 10 shows the various sub-components of a traffic processing unit (TPU).
  • DETAILED DESCRIPTION OF THE INVENTION
  • Cloud Routing Network
  • [0028]
    The present invention describes a cloud routing network that is implemented as an overlay virtual network or as a physical network. By way of background, we use the term “cloud routing network” to refer to a network (virtual or physical) that includes traffic processing nodes (TPUs) deployed at various locations inter-connected by network links, through which client traffic travels to destinations. A cloud routing network can be a virtual overlay network superimposed on an underlying physical network, a physical network or a combination of both. Referring to FIG. 2, the cloud routing network 300 includes router clouds 340, 350 and 360, which are superimposed over a physical network 370, which in this case is the Internet. Cloud 340 includes TPUs 342, 344, 346. Cloud 350 includes TPUs 352, 354 and cloud 360 includes TPUs 362, 364. Each TPU has a certain amount of processing capacity. The TPUs are connected to each other via network links. Each link possesses a certain amount of bandwidth. The processing capacity of the cloud network is the combined processing capacities of all the TPUs. The bandwidth capacity of the cloud network is the combined bandwidth capacity of all the links.
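    The aggregate-capacity relationship described above can be modeled with a small data structure (an editorial sketch; the class and field names are assumptions):
```python
# Sketch: a cloud routing network's capacity is the sum of its parts.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class TPU:
    node_id: str
    processing_capacity: float   # e.g. requests/sec this node can process


@dataclass
class Link:
    endpoints: Tuple[str, str]   # node ids of the two connected TPUs
    bandwidth: float             # e.g. Mbps this link can carry


@dataclass
class CloudRoutingNetwork:
    tpus: List[TPU] = field(default_factory=list)
    links: List[Link] = field(default_factory=list)

    def processing_capacity(self) -> float:
        # Combined processing capacity of all TPUs across all clouds.
        return sum(t.processing_capacity for t in self.tpus)

    def bandwidth_capacity(self) -> float:
        # Combined bandwidth capacity of all links.
        return sum(l.bandwidth for l in self.links)
```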
  • [0029]
    Cloud network 300 also includes a traffic management system 330, a traffic processing system 334, a data processing system 332 and a monitoring system 336. These systems are specialized software that the traffic processing nodes run in order to perform functions such as traffic monitoring, TPU node management, traffic re-direction, traffic splitting, load balancing, traffic inspection, traffic cleansing, traffic optimization, route selection, route optimization, among others. In one example, cloud network 300 is implemented as a virtual network that includes virtual machines at various commercially available cloud computing data centers, such as Amazon.com's Elastic Compute Cloud (EC2), SoftLayer, RackSpace, GoGrid, FlexiScale, AT&T, Verizon, Fujitsu, Voxel, among others. These cloud computing data centers provide the physical infrastructure to add or remove TPU nodes dynamically, which further enables the virtual network to scale both its processing capacity and network bandwidth capacity. When traffic grows to a certain level, the network starts up more TPUs, adds links to these new TPU nodes and thus increases the network's processing power as well as bandwidth capacity. When traffic decreases to a certain threshold, the network shuts down certain TPUs to reduce its processing and bandwidth capacity.
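    A hypothetical threshold policy for this scale-up/scale-down behavior might look like the following (illustrative only; the headroom factor and per-node capacity are assumptions):
```python
import math


def desired_tpu_count(traffic_level: float, capacity_per_tpu: float,
                      headroom: float = 1.2) -> int:
    """TPU nodes needed to carry the observed traffic with some headroom."""
    return max(1, math.ceil(traffic_level * headroom / capacity_per_tpu))


def rescale_actions(current_nodes: int, traffic_level: float,
                    capacity_per_tpu: float) -> list:
    """Scaling actions implied by the current traffic level."""
    target = desired_tpu_count(traffic_level, capacity_per_tpu)
    if target > current_nodes:
        # Traffic grew past the threshold: start new TPUs and activate links.
        return ["start_tpu_and_link"] * (target - current_nodes)
    if target < current_nodes:
        # Traffic fell below the threshold: drain and shut down surplus TPUs.
        return ["stop_tpu"] * (current_nodes - target)
    return []


# Usage: 3 nodes of 1000 req/s each, traffic jumps to 4200 req/s -> 3 new TPUs.
print(rescale_actions(3, 4200, 1000))
```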
  • [0030]
    The traffic management system 330 directs network traffic to the traffic processing units (TPUs). The traffic monitoring system 336 monitors the network traffic, the traffic processing system 334 inspects and processes the network traffic, and the data processing system 332 gathers data from different sources and provides global decision support as well as means to configure and manage the system. Referring to FIG. 3, the functional components of the cloud routing system 300 include a traffic management interface unit 410, a traffic redirection unit 420, a traffic routing unit 430, a node management unit 440, a monitoring unit 450 and a data repository 460. The traffic management interface unit 410 includes a management user interface (UI) 412 and a management API 414.
  • [0031]
    For a cloud routing network based on a virtual overlay network, most TPU nodes are virtual machines running specialized traffic handling software. Various TPU nodes may belong to different clouds. Each cloud itself is a collection of nodes located in the same data center (or the same geographic location). Some nodes perform traffic management. Some nodes perform traffic processing. Some nodes perform monitoring and data processing. Some nodes perform management functions to adjust the network's capacity. Some nodes perform access management and security control. These nodes are connected to each other via the underlying network 370. The connection between two nodes may contain many physical links and hops in the underlying network, but these links and hops together form a “virtual link” that conceptually connects the two nodes directly. All these virtual links together with the TPU nodes form a virtual network. Each node has only a fixed amount of bandwidth and processing capacity. The capacity of the network is the sum of the capacities of all nodes, and thus a cloud routing network has only a fixed amount of processing and network capacity at any given moment. This fixed amount of capacity may be insufficient or excessive for the traffic demand. By adjusting the capacity of individual nodes or by adding or removing nodes, the network is able to adjust its processing power as well as its bandwidth capacity.
  • [0032]
    In the case where a cloud routing network is primarily a physical network, most TPU nodes are physical machines running specialized traffic handling software, including general purpose computers as well as specially designed hardware appliances. Again, various TPU nodes may belong to different clouds. In each cloud, some nodes perform traffic management. Some nodes perform traffic processing. Some nodes perform monitoring and data processing. Some nodes perform management functions to adjust the network's capacity. Some nodes perform access management and security control. These nodes are connected to each other via network links. These links together with the TPU nodes form a network. Each node has only a fixed amount of bandwidth and processing capacity. The capacity of this network is the sum of the capacities of all nodes, and thus a cloud routing network has only a fixed amount of processing and network capacity at any given moment. This fixed amount of capacity may be insufficient or excessive for the traffic demand. By adjusting the capacity of individual nodes or by adding or removing nodes, the network is able to adjust its processing power as well as its bandwidth capacity.
  • Traffic Processing
  • [0033]
    The invention uses a cloud routing network service to process traffic and thus deliver “conditioned” traffic from source to destination according to delivery requirements. FIG. 2 shows a typical traffic processing service. When a client 305 issues a request to a network service running on servers 550 and 560, the cloud routing network 300 processes the request by performing the following steps (sketched in code after the list):
      • 1. Traffic management service 330 intercepts the requests and routes the request to a TPU node;
      • 2. The TPU node checks the service's specific policy and performs the pipeline processing shown in FIG. 4;
      • 3. If necessary, a global data repository 332 is used for data collection and data analysis for decision support;
      • 4. If necessary, the client request is routed to the next TPU node, i.e., from TPU 342 to 352; and then
      • 5. The request is sent to an “optimal” server 550 for processing.
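    A compressed, runnable sketch of these five steps (an editorial illustration; the class and function names are hypothetical, and step 3, data collection into the global repository, is reduced to a comment):
```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Request:
    client: str
    service: str
    payload: str


# Each pipeline stage is a function applied to the request
# (e.g. inspection, cleansing, optimization per FIG. 4).
Stage = Callable[[Request], Request]


@dataclass
class TPUNode:
    name: str
    stages: List[Stage]

    def process(self, req: Request) -> Request:
        for stage in self.stages:
            req = stage(req)
        return req


def handle_request(req: Request, path: List[TPUNode],
                   server: Callable[[Request], str]) -> str:
    """Steps 1, 2 and 4: route the request through a chain of TPU nodes
    (step 3, logging into the global data repository, is omitted here),
    then step 5: hand the conditioned request to an 'optimal' server."""
    for tpu in path:
        req = tpu.process(req)
    return server(req)


# Usage: two TPUs (e.g. 342 -> 352) and a trivial origin server.
identity: Stage = lambda r: r
path = [TPUNode("TPU-342", [identity]), TPUNode("TPU-352", [identity])]
print(handle_request(Request("client-305", "web", "GET /"), path,
                     server=lambda r: f"served {r.payload}"))
```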
  • [0039]
    More specifically, when a client issues a request to a server (for example, a consumer enters a web URL into a web browser to access a web site), the default Internet routing mechanism would route the request through the network hops along a certain network path from the client to the target server (the “default path”). Using a cloud routing network, if there are multiple server nodes, the cloud routing network first selects an “optimal” server node from the multiple server nodes as the target server node to serve the request. This server node selection process takes into consideration factors including load balancing, performance, cost, and geographic proximity, among others. Secondly, instead of going through the default path, the traffic management service redirects the request to an “optimal” TPU within the overlay network (“optimal” is defined by the system's routing policy, such as being geographically nearest, most cost effective, or a combination of several factors). This “optimal” TPU further routes the request to a second “optimal” TPU within the cloud routing network if necessary. For performance and reliability reasons, these two TPU nodes communicate with each other using either the best available or an optimized transport mechanism. Then the second “optimal” node may route the request to a third “optimal” node and so on. This process can be repeated within the cloud routing network until the request finally arrives at the target. The set of “optimal” TPU nodes together forms a “virtual” path along which traffic travels. This virtual path is chosen in such a way that a certain routing measure (such as performance, cost, carbon footprint, or a combination of several factors) is optimized. When the server responds, the response goes through a similar pipeline process within the cloud routing network until it reaches the client.
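    The weighted routing measure mentioned above could be sketched as follows (an editorial example; the factor names, weights, and units are assumptions):
```python
from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    latency_ms: float      # performance factor
    cost_per_gb: float     # cost factor
    distance_km: float     # geographic proximity factor


def routing_score(c: Candidate, w_latency: float = 0.5,
                  w_cost: float = 0.3, w_distance: float = 0.2) -> float:
    """Lower is better: a weighted combination of routing factors."""
    return (w_latency * c.latency_ms
            + w_cost * c.cost_per_gb * 1000
            + w_distance * c.distance_km / 100)


def select_optimal(candidates):
    """Pick the candidate node that minimizes the routing measure."""
    return min(candidates, key=routing_score)


# Usage: choose between two next-hop TPU nodes.
print(select_optimal([Candidate("TPU-352", 40, 0.02, 800),
                      Candidate("TPU-362", 90, 0.01, 200)]).name)
```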
  • [0040]
    FIG. 5 shows a typical network routing process. In this embodiment, the traffic management service utilizes a Domain Name Server (DNS) mechanism. The customer 801 configures the DNS record for an application so that DNS queries are processed by the cloud routing network 800, as shown in FIG. 8. Typical ways of configuring DNS records include setting the DNS server, the CNAME record or the “A” record of the application to a DNS server provided by the cloud routing network. When a client wants to access the application (e.g., www.somesite.com), the client needs to resolve the hostname to an IP address. The cloud routing network receives the DNS query. Based on the current routing policy, the network 800 first selects an “optimal” server node among the plurality of server nodes that the application is running on, and then selects an entry router 803. The IP address of the entry router node 803 is returned as the result of the DNS query. When the entry router 803 receives a message from the client 801, it selects an optimal exit router node 804, an optimal path 805, as well as an optimal transport mechanism to deliver the message. The exit router node 804 receives the message and further delivers it to the target server node 820. In this process, client IP, path information and performance metrics data are collected and logged in the data processing unit (DPU) 806, which can be used for future path selection and node selection.
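    A minimal sketch of this DNS-based redirection (editorial; the “prefer the client's region, then the least-loaded node” criterion is an assumed policy, not the patent's required policy):
```python
from dataclasses import dataclass
from typing import List


@dataclass
class EntryRouter:
    ip: str
    region: str
    load: float   # 0.0 (idle) .. 1.0 (saturated)


def resolve(hostname: str, routers: List[EntryRouter],
            client_region: str) -> str:
    """Answer a DNS query for the application's hostname with the IP of an
    entry router chosen by the routing policy (here: prefer routers in the
    client's region, then the least-loaded one)."""
    local = [r for r in routers if r.region == client_region] or routers
    best = min(local, key=lambda r: r.load)
    return best.ip   # returned as the address record for `hostname`


# Usage: a client in "us-east" resolving www.somesite.com.
routers = [EntryRouter("198.51.100.10", "us-east", 0.4),
           EntryRouter("203.0.113.20", "eu-west", 0.1)]
print(resolve("www.somesite.com", routers, "us-east"))
```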
  • Processing Capacity Scaling and Bandwidth Capacity Scaling
  • [0041]
    The invention enables a network to adjust its processing capacity and bandwidth capacity in response to traffic demand variations. The cloud routing network 300 monitors traffic demand, load conditions, network performance and various other factors via its monitoring service 336. When certain conditions are met, it dynamically launches new nodes at appropriate locations, activates links to these new nodes and spreads traffic to them in response to increased demand, or shuts down some existing nodes in response to decreased traffic demand. The net result is that the cloud routing network dynamically adjusts its processing and network capacity to deliver optimal results while eliminating unnecessary capacity waste and carbon footprint.
  • [0042]
    A cloud routing network utilizes an Application Programming Interface (API) from individual nodes to add or remove nodes from the network. Cloud computing providers typically provide APIs that allow a third party to manage machine instances. For example, Amazon.com's EC2 provides Amazon Web Services (AWS) based APIs through which a third party can send web services messages to interact with and manage virtual machine instances, such as starting a new node, shutting down an existing node, or checking the status of a node. The managing means of the cloud routing network typically utilize such APIs to add or remove traffic processing nodes and links, thus adjusting the network's capacity.
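    The managing means could wrap such provider APIs behind a small interface like the one below (an editorial sketch; start_instance, terminate_instance and instance_status are placeholder names, not the actual calls of any particular provider's API):
```python
from abc import ABC, abstractmethod
from typing import List


class CloudProviderAPI(ABC):
    """Thin wrapper around a provider's machine-management API
    (e.g. web-service calls that start, stop or query instances)."""

    @abstractmethod
    def start_instance(self, image_id: str, region: str) -> str:
        """Launch a new TPU node; return its instance id."""

    @abstractmethod
    def terminate_instance(self, instance_id: str) -> None:
        """Shut down an existing TPU node."""

    @abstractmethod
    def instance_status(self, instance_id: str) -> str:
        """Return the node's status, e.g. 'pending', 'running', 'stopped'."""


class NodeManager:
    """Managing means that uses the provider API to adjust capacity."""

    def __init__(self, provider: CloudProviderAPI, image_id: str):
        self.provider = provider
        self.image_id = image_id
        self.nodes: List[str] = []

    def add_tpu(self, region: str) -> None:
        # Scale up: launch a node, then activate links to it elsewhere.
        self.nodes.append(self.provider.start_instance(self.image_id, region))

    def remove_tpu(self) -> None:
        # Scale down: shut down the most recently added node.
        if self.nodes:
            self.provider.terminate_instance(self.nodes.pop())
```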
  • [0043]
    FIG. 6 depicts two important aspects of the cloud routing network: adaptive scaling and path convergence. Based on the continuously collected metrics data from monitor nodes and logs, the node management module 440 (shown in FIG. 3) checks the current capacity and takes action. When it detects that capacity is “insufficient” according to a certain measure, it starts new router nodes. The router table is updated to include the new routers and thus spreads traffic to them. When too much capacity is detected, the node management module selectively shuts down some of the router nodes after traffic to these nodes has been drained. The router tables are updated by removing these router nodes from the tables. At any time, when an event such as a router failure or a path condition change occurs, the router table is updated to reflect the change. The updated router table is used for subsequent routing.
  • [0044]
    Further, the cloud routing network can quickly recover from faults. When a fault such as a node failure or a link failure occurs, the system detects the problem and recovers from it by either starting a new node or selecting an alternative route. As a result, though individual components may not be reliable, the overall system is highly reliable.
  • Traffic Processing Unit Node Management
  • [0045]
    Node management module 440 provides services for managing the TPU nodes, such as starting a virtual machine (VM) instance, stopping a VM instance and recovering from a node failure, among others. In accordance with the node management policies in the system, this service launches new nodes when traffic demand is high and shuts down nodes when it detects that they are no longer necessary.
  • [0046]
    The node monitoring module 450 monitors the TPU nodes over the network, collects performance and availability data, and provides feedback to the cloud routing system 300. This feedback is then used to make decisions such as when to scale up and when to scale down. Data repository 460 contains data for the cloud routing system, such as Virtual Machine Image (VMI), application artifacts (files, scripts, and configuration data), routing policy data, and node management policy data, among others.
  • [0047]
    FIG. 7 shows the node management workflow. When the system receives a node status change event from its monitoring agents, it first checks whether the event signals that a node is down. If so, the node is removed from the system. If the system policy says “re-launch failed nodes”, the node controller will try to launch a new node. The system then checks whether the event indicates that the current set of server nodes is getting overloaded. If so, at a certain threshold, and if the system's policy permits, a node manager will launch new nodes and notify the traffic management service to spread load to the new nodes. Finally, the system checks whether it is in the state of “having too much capacity”. If so, and the node management policy permits, a node controller will try to shut down a certain number of nodes to eliminate capacity waste.
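    The workflow of FIG. 7 might be expressed roughly as follows (an editorial sketch; the event kinds, policy fields and manager methods are assumptions):
```python
from dataclasses import dataclass


@dataclass
class NodeEvent:
    node_id: str
    kind: str            # "node_down", "overloaded", or "excess_capacity"


@dataclass
class Policy:
    relaunch_failed_nodes: bool = True
    allow_scale_up: bool = True
    allow_scale_down: bool = True


def handle_node_event(event: NodeEvent, policy: Policy, manager) -> None:
    """Dispatch one monitoring event per the node-management workflow."""
    if event.kind == "node_down":
        manager.remove_node(event.node_id)           # drop the failed node
        if policy.relaunch_failed_nodes:
            manager.launch_node()                    # re-launch a replacement
    elif event.kind == "overloaded" and policy.allow_scale_up:
        new_node = manager.launch_node()             # add capacity
        manager.notify_traffic_management(new_node)  # spread load to it
    elif event.kind == "excess_capacity" and policy.allow_scale_down:
        manager.shutdown_surplus_nodes()             # eliminate capacity waste


# Usage with a stand-in manager object.
class _StubManager:
    def remove_node(self, node_id): print("remove", node_id)
    def launch_node(self): print("launch"); return "tpu-new"
    def notify_traffic_management(self, node): print("notify", node)
    def shutdown_surplus_nodes(self): print("shutdown surplus")


handle_node_event(NodeEvent("tpu-3", "overloaded"), Policy(), _StubManager())
```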
  • [0048]
    In launching new nodes, the system picks the best geographic region in which to launch each new node. Globally distributed cloud environments such as Amazon.com's EC2 cover several continents, as shown in FIG. 2A. Launching new nodes at appropriate geographic locations helps spread application load globally, reduces network traffic and improves application performance. In shutting down nodes to reduce capacity waste, the system checks whether session stickiness is required for the application. If so, shutdown is deferred until all current sessions on these nodes have expired.
  • Monitoring
  • [0049]
    The cloud routing network contains a monitoring service 336 (which includes monitoring module 450) that provides the data on which the cloud routing network 300 bases its decisions. Various embodiments implement a variety of techniques for monitoring. The following lists a few examples of monitoring techniques (a sketch of the web performance monitor, item 4, follows the list):
      • 1. Internet Control Message Protocol (ICMP) Ping: A small IP packet that is sent over the network to detect route and node status;
      • 2. traceroute: a technique commonly used to check network route conditions;
      • 3. Host agent: an embedded agent running on host computers that collects data about the host;
      • 4. Web performance monitoring: a monitor node, acting as a normal user agent, periodically sends HTTP requests to a web server and processes the HTTP responses from the web server. The monitor node records metrics along the way, such as DNS resolution time, request time, response time, page load time, number of requests, number of JavaScript files, or page footprint, among others;
      • 5. Security monitoring: a monitor node periodically scans a target system for security vulnerabilities, for example using network port scanning and network service scanning to determine which ports are publicly accessible and which network services are running, and further whether those services have vulnerabilities;
      • 6. Content security monitoring: a monitor node periodically crawls a web site and scans its content to detect infected content, such as malware, spyware, undesirable adult content, or viruses, among others.
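    A bare-bones version of the web performance monitor (technique 4) is sketched below (editorial; it uses only the Python standard library and records a small subset of the metrics listed above):
```python
import time
import urllib.request


def probe(url: str) -> dict:
    """Fetch a page as a normal user agent would and record basic metrics."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as resp:
        status = resp.status
        body = resp.read()
    elapsed = time.monotonic() - start
    return {
        "url": url,
        "status": status,
        "response_time_s": round(elapsed, 3),   # request + response time
        "page_bytes": len(body),                # rough page footprint
    }


# Usage (hypothetical target): print(probe("http://example.com/"))
```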
  • [0056]
    The above examples are for illustration purposes. The present invention is agnostic to the monitoring technique and accommodates a wide variety of ways of monitoring. An embodiment of the present invention employs all of the above techniques for monitoring different target systems: ICMP, traceroute and host agents are used to monitor the cloud routing network itself, while web performance monitoring, security monitoring and content security monitoring are used to monitor the availability, performance and security of target network services such as web applications. A data processing system (DPS) aggregates data from these monitoring services and provides all other services with global visibility into such data.
  • [0057]
    Several embodiments of the present invention have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the invention. Accordingly, other embodiments are within the scope of the following claims.
Referenced by
Citing PatentFiling datePublication dateApplicantTitle
US8079060 *Feb 24, 2011Dec 13, 2011Kaspersky Lab ZaoSystems and methods for policy-based program configuration
US8108912May 29, 2008Jan 31, 2012Red Hat, Inc.Systems and methods for management of secure data in cloud-based network
US8122282 *Mar 12, 2010Feb 21, 2012International Business Machines CorporationStarting virtual instances within a cloud computing environment
US8239509May 28, 2008Aug 7, 2012Red Hat, Inc.Systems and methods for management of virtual appliances in cloud-based network
US8255529Feb 26, 2010Aug 28, 2012Red Hat, Inc.Methods and systems for providing deployment architectures in cloud computing environments
US8271653Aug 31, 2009Sep 18, 2012Red Hat, Inc.Methods and systems for cloud management using multiple cloud management schemes to allow communication between independently controlled clouds
US8316125Aug 31, 2009Nov 20, 2012Red Hat, Inc.Methods and systems for automated migration of cloud processes to external clouds
US8341625May 29, 2008Dec 25, 2012Red Hat, Inc.Systems and methods for identification and management of cloud-based virtual machines
US8364819May 28, 2010Jan 29, 2013Red Hat, Inc.Systems and methods for cross-vendor mapping service in cloud networks
US8375223Oct 30, 2009Feb 12, 2013Red Hat, Inc.Systems and methods for secure distributed storage
US8402139Feb 26, 2010Mar 19, 2013Red Hat, Inc.Methods and systems for matching resource requests with cloud computing environments
US8458658Feb 29, 2008Jun 4, 2013Red Hat, Inc.Methods and systems for dynamically building a software appliance
US8504443Aug 31, 2009Aug 6, 2013Red Hat, Inc.Methods and systems for pricing software infrastructure for a cloud computing environment
US8504689May 28, 2010Aug 6, 2013Red Hat, Inc.Methods and systems for cloud deployment analysis featuring relative cloud resource importance
US8606667 *Feb 26, 2010Dec 10, 2013Red Hat, Inc.Systems and methods for managing a software subscription in a cloud network
US8606897May 28, 2010Dec 10, 2013Red Hat, Inc.Systems and methods for exporting usage history data as input to a management platform of a target cloud-based network
US8612566Jul 20, 2012Dec 17, 2013Red Hat, Inc.Systems and methods for management of virtual appliances in cloud-based network
US8612577Nov 23, 2010Dec 17, 2013Red Hat, Inc.Systems and methods for migrating software modules into one or more clouds
US8612615Nov 23, 2010Dec 17, 2013Red Hat, Inc.Systems and methods for identifying usage histories for producing optimized cloud utilization
US8631099May 27, 2011Jan 14, 2014Red Hat, Inc.Systems and methods for cloud deployment engine for selective workload migration or federation based on workload conditions
US8639950Dec 22, 2011Jan 28, 2014Red Hat, Inc.Systems and methods for management of secure data in cloud-based network
US8706852 *Aug 23, 2011Apr 22, 2014Red Hat, Inc.Automated scaling of an application and its support components
US8713147Nov 24, 2010Apr 29, 2014Red Hat, Inc.Matching a usage history to a new cloud
US8762501Aug 29, 2011Jun 24, 2014Telefonaktiebolaget L M Ericsson (Publ)Implementing a 3G packet core in a cloud computer with openflow data and control planes
US8769083Aug 31, 2009Jul 1, 2014Red Hat, Inc.Metering software infrastructure in a cloud computing environment
US8782192May 31, 2011Jul 15, 2014Red Hat, Inc.Detecting resource consumption events over sliding intervals in cloud-based network
US8782233Nov 26, 2008Jul 15, 2014Red Hat, Inc.Embedding a cloud-based resource request in a specification language wrapper
US8825791Nov 24, 2010Sep 2, 2014Red Hat, Inc.Managing subscribed resource in cloud network using variable or instantaneous consumption tracking periods
US8832219Mar 1, 2011Sep 9, 2014Red Hat, Inc.Generating optimized resource consumption periods for multiple users on combined basis
US8832459Aug 28, 2009Sep 9, 2014Red Hat, Inc.Securely terminating processes in a cloud computing environment
US8849971May 28, 2008Sep 30, 2014Red Hat, Inc.Load balancing in cloud-based networks
US8862720Aug 31, 2009Oct 14, 2014Red Hat, Inc.Flexible cloud management including external clouds
US8867361Jun 28, 2012Oct 21, 2014Telefonaktiebolaget L M Ericsson (Publ)Implementing EPC in a cloud computer with OpenFlow data plane
US8873398 *May 23, 2011Oct 28, 2014Telefonaktiebolaget L M Ericsson (Publ)Implementing EPC in a cloud computer with openflow data plane
US8904005Nov 23, 2010Dec 2, 2014Red Hat, Inc.Indentifying service dependencies in a cloud deployment
US8909783May 28, 2010Dec 9, 2014Red Hat, Inc.Managing multi-level service level agreements in cloud-based network
US8909784Nov 30, 2010Dec 9, 2014Red Hat, Inc.Migrating subscribed services from a set of clouds to a second set of clouds
US8924539Nov 24, 2010Dec 30, 2014Red Hat, Inc.Combinatorial optimization of multiple resources across a set of cloud-based networks
US8935692May 22, 2008Jan 13, 2015Red Hat, Inc.Self-management of virtual machines in cloud-based networks
US8937866Oct 25, 2012Jan 20, 2015Fourthwall MediaNetwork bandwidth regulation using traffic scheduling
US8943497 | May 29, 2008 | Jan 27, 2015 | Red Hat, Inc. | Managing subscriptions for cloud-based virtual machines
US8949426 | Nov 24, 2010 | Feb 3, 2015 | Red Hat, Inc. | Aggregation of marginal subscription offsets in set of multiple host clouds
US8954564 | May 28, 2010 | Feb 10, 2015 | Red Hat, Inc. | Cross-cloud vendor mapping service in cloud marketplace
US8959221 | Mar 1, 2011 | Feb 17, 2015 | Red Hat, Inc. | Metering cloud resource consumption using multiple hierarchical subscription periods
US8977750 | Feb 24, 2009 | Mar 10, 2015 | Red Hat, Inc. | Extending security platforms to cloud-based networks
US8984104 | May 31, 2011 | Mar 17, 2015 | Red Hat, Inc. | Self-moving operating system installation in cloud-based network
US8984505 | Nov 26, 2008 | Mar 17, 2015 | Red Hat, Inc. | Providing access control to user-controlled resources in a cloud computing environment
US9037692 | Nov 26, 2008 | May 19, 2015 | Red Hat, Inc. | Multiple cloud marketplace aggregation
US9037723 | May 31, 2011 | May 19, 2015 | Red Hat, Inc. | Triggering workload movement based on policy stack having multiple selectable inputs
US9053472 | Feb 26, 2010 | Jun 9, 2015 | Red Hat, Inc. | Offering additional license terms during conversion of standard software licenses for use in cloud computing environments
US9092243 | May 28, 2008 | Jul 28, 2015 | Red Hat, Inc. | Managing a software appliance
US9100311 | Jun 2, 2014 | Aug 4, 2015 | Red Hat, Inc. | Metering software infrastructure in a cloud computing environment
US9104407 | May 28, 2009 | Aug 11, 2015 | Red Hat, Inc. | Flexible cloud management with power management support
US9112836 | Jan 14, 2014 | Aug 18, 2015 | Red Hat, Inc. | Management of secure data in cloud-based network
US9152640 | May 10, 2012 | Oct 6, 2015 | Hewlett-Packard Development Company, L.P. | Determining file allocation based on file operations
US9167501 | May 5, 2014 | Oct 20, 2015 | Telefonaktiebolaget L M Ericsson (Publ) | Implementing a 3G packet core in a cloud computer with openflow data and control planes
US9201485 | May 29, 2009 | Dec 1, 2015 | Red Hat, Inc. | Power management in managed network having hardware based and virtual resources
US9202225 | May 28, 2010 | Dec 1, 2015 | Red Hat, Inc. | Aggregate monitoring of utilization data for vendor products in cloud networks
US9210173 | Nov 26, 2008 | Dec 8, 2015 | Red Hat, Inc. | Securing appliances for use in a cloud computing environment
US9219669 | Jul 10, 2014 | Dec 22, 2015 | Red Hat, Inc. | Detecting resource consumption events over sliding intervals in cloud-based network
US9306868 | Jan 5, 2015 | Apr 5, 2016 | Red Hat, Inc. | Cross-cloud computing resource usage tracking
US9311162 | May 27, 2009 | Apr 12, 2016 | Red Hat, Inc. | Flexible cloud management
US9354939 | May 28, 2010 | May 31, 2016 | Red Hat, Inc. | Generating customized build options for cloud deployment matching usage profile against cloud infrastructure options
US9363198 | Sep 11, 2014 | Jun 7, 2016 | Red Hat, Inc. | Load balancing in cloud-based networks
US9386086 | Sep 11, 2013 | Jul 5, 2016 | Cisco Technology Inc. | Dynamic scaling for multi-tiered distributed systems using payoff optimization of application classes
US9389980 | Nov 30, 2009 | Jul 12, 2016 | Red Hat, Inc. | Detecting events in cloud computing environments and performing actions upon occurrence of the events
US9398082 | Sep 19, 2014 | Jul 19, 2016 | Red Hat, Inc. | Software appliance management using broadcast technique
US9407572 | Apr 20, 2015 | Aug 2, 2016 | Red Hat, Inc. | Multiple cloud marketplace aggregation
US9419913 | Jul 15, 2013 | Aug 16, 2016 | Red Hat, Inc. | Provisioning cloud resources in view of weighted importance indicators
US9436459 | May 28, 2010 | Sep 6, 2016 | Red Hat, Inc. | Generating cross-mapping of vendor software in a cloud computing environment
US9438484 | Nov 24, 2014 | Sep 6, 2016 | Red Hat, Inc. | Managing multi-level service level agreements in cloud-based networks
US9442771 | Nov 24, 2010 | Sep 13, 2016 | Red Hat, Inc. | Generating configurable subscription parameters
US9450783 | May 28, 2009 | Sep 20, 2016 | Red Hat, Inc. | Abstracting cloud management
US9473376 * | Mar 11, 2013 | Oct 18, 2016 | Alcatel Lucent | Method and server for determining home network quality
US9485117 | Feb 23, 2009 | Nov 1, 2016 | Red Hat, Inc. | Providing user-controlled resources for cloud computing environments
US9497661 * | Sep 15, 2014 | Nov 15, 2016 | Telefonaktiebolaget L M Ericsson (Publ) | Implementing EPC in a cloud computer with openflow data plane
US20090300149 * | May 28, 2008 | Dec 3, 2009 | James Michael Ferris | Systems and methods for management of virtual appliances in cloud-based network
US20090300210 * | May 28, 2008 | Dec 3, 2009 | James Michael Ferris | Methods and systems for load balancing in cloud-based networks
US20090300423 * | May 28, 2008 | Dec 3, 2009 | James Michael Ferris | Systems and methods for software test management in cloud-based network
US20090300607 * | May 29, 2008 | Dec 3, 2009 | James Michael Ferris | Systems and methods for identification and management of cloud-based virtual machines
US20090300635 * | May 30, 2008 | Dec 3, 2009 | James Michael Ferris | Methods and systems for providing a marketplace for cloud-based networks
US20090300719 * | May 29, 2008 | Dec 3, 2009 | James Michael Ferris | Systems and methods for management of secure data in cloud-based network
US20100050172 * | Aug 22, 2008 | Feb 25, 2010 | James Michael Ferris | Methods and systems for optimizing resource usage for cloud-based networks
US20100057831 * | Aug 28, 2008 | Mar 4, 2010 | Eric Williamson | Systems and methods for promotion of calculations to cloud-based computation resources
US20100131624 * | Nov 26, 2008 | May 27, 2010 | James Michael Ferris | Systems and methods for multiple cloud marketplace aggregation
US20100131649 * | Nov 26, 2008 | May 27, 2010 | James Michael Ferris | Systems and methods for embedding a cloud-based resource request in a specification language wrapper
US20100131948 * | Nov 26, 2008 | May 27, 2010 | James Michael Ferris | Methods and systems for providing on-demand cloud computing environments
US20100131949 * | Nov 26, 2008 | May 27, 2010 | James Michael Ferris | Methods and systems for providing access control to user-controlled resources in a cloud computing environment
US20100132016 * | Nov 26, 2008 | May 27, 2010 | James Michael Ferris | Methods and systems for securing appliances for use in a cloud computing environment
US20100217864 * | Feb 23, 2009 | Aug 26, 2010 | James Michael Ferris | Methods and systems for communicating with third party resources in a cloud computing environment
US20100306354 * | May 28, 2009 | Dec 2, 2010 | Dehaan Michael Paul | Methods and systems for flexible cloud management with power management support
US20100306767 * | May 29, 2009 | Dec 2, 2010 | Dehaan Michael Paul | Methods and systems for automated scaling of cloud computing systems
US20100318609 * | Jun 15, 2009 | Dec 16, 2010 | Microsoft Corporation | Bridging enterprise networks into cloud
US20110055034 * | Aug 31, 2009 | Mar 3, 2011 | James Michael Ferris | Methods and systems for pricing software infrastructure for a cloud computing environment
US20110055377 * | Aug 31, 2009 | Mar 3, 2011 | Dehaan Michael Paul | Methods and systems for automated migration of cloud processes to external clouds
US20110055396 * | Aug 31, 2009 | Mar 3, 2011 | Dehaan Michael Paul | Methods and systems for abstracting cloud management to allow communication between independently controlled clouds
US20110055398 * | Aug 31, 2009 | Mar 3, 2011 | Dehaan Michael Paul | Methods and systems for flexible cloud management including external clouds
US20110078303 * | Sep 30, 2009 | Mar 31, 2011 | Alcatel-Lucent Usa Inc. | Dynamic load balancing and scaling of allocated cloud resources in an enterprise network
US20110107103 * | Oct 30, 2009 | May 5, 2011 | Dehaan Michael Paul | Systems and methods for secure distributed storage
US20110131134 * | Nov 30, 2009 | Jun 2, 2011 | James Michael Ferris | Methods and systems for generating a software license knowledge base for verifying software license compliance in cloud computing environments
US20110131306 * | Nov 30, 2009 | Jun 2, 2011 | James Michael Ferris | Systems and methods for service aggregation using graduated service levels in a cloud network
US20110131315 * | Nov 30, 2009 | Jun 2, 2011 | James Michael Ferris | Methods and systems for verifying software license compliance in cloud computing environments
US20110131316 * | Nov 30, 2009 | Jun 2, 2011 | James Michael Ferris | Methods and systems for detecting events in cloud computing environments and performing actions upon occurrence of the events
US20110131499 * | Nov 30, 2009 | Jun 2, 2011 | James Michael Ferris | Methods and systems for monitoring cloud computing environments
US20110213686 * | Feb 26, 2010 | Sep 1, 2011 | James Michael Ferris | Systems and methods for managing a software subscription in a cloud network
US20110213687 * | Feb 26, 2010 | Sep 1, 2011 | James Michael Ferris | Systems and methods for or a usage manager for cross-cloud appliances
US20110213691 * | Feb 26, 2010 | Sep 1, 2011 | James Michael Ferris | Systems and methods for cloud-based brokerage exchange of software entitlements
US20110213719 * | Feb 26, 2010 | Sep 1, 2011 | James Michael Ferris | Methods and systems for converting standard software licenses for use in cloud computing environments
US20110213875 * | Feb 26, 2010 | Sep 1, 2011 | James Michael Ferris | Methods and Systems for Providing Deployment Architectures in Cloud Computing Environments
US20110213884 * | Feb 26, 2010 | Sep 1, 2011 | James Michael Ferris | Methods and systems for matching resource requests with cloud computing environments
US20110214124 * | Feb 26, 2010 | Sep 1, 2011 | James Michael Ferris | Systems and methods for generating cross-cloud computing appliances
US20110225467 * | Mar 12, 2010 | Sep 15, 2011 | International Business Machines Corporation | Starting virtual instances within a cloud computing environment
US20110239039 * | Mar 26, 2010 | Sep 29, 2011 | Dieffenbach Devon C | Cloud computing enabled robust initialization and recovery of it services
US20110289585 * | Feb 24, 2011 | Nov 24, 2011 | Kaspersky Lab Zao | Systems and Methods for Policy-Based Program Configuration
US20120297059 * | Mar 26, 2012 | Nov 22, 2012 | Silverspore Llc | Automated creation of monitoring configuration templates for cloud server images
US20120303835 * | May 23, 2011 | Nov 29, 2012 | Telefonaktiebolaget Lm Ericsson (Publ) | Implementing EPC in a Cloud Computer with Openflow Data Plane
US20130054776 * | Aug 23, 2011 | Feb 28, 2013 | Tobias Kunze | Automated scaling of an application and its support components
US20130160129 * | Dec 19, 2011 | Jun 20, 2013 | Verizon Patent And Licensing Inc. | System security evaluation
US20130191499 * | Nov 1, 2012 | Jul 25, 2013 | Akamai Technologies, Inc. | Multi-domain configuration handling in an edge network server
US20130258901 * | Mar 22, 2013 | Oct 3, 2013 | Fujitsu Limited | Communication interface apparatus, computer-readable recording medium for recording communication interface program, and virtual network constructing method
US20140208214 * | Jan 23, 2013 | Jul 24, 2014 | Gabriel D. Stern | Systems and methods for monitoring, visualizing, and managing physical devices and physical device locations
US20150036523 * | Mar 11, 2013 | Feb 5, 2015 | Alcatel Lucent | Method and server for determining home network quality
US20150071053 * | Sep 15, 2014 | Mar 12, 2015 | Telefonaktiebolaget L M Ericsson (Publ) | Implementing epc in a cloud computer with openflow data plane
US20150195164 * | Jan 7, 2014 | Jul 9, 2015 | International Business Machines Corporation | Scalable software monitoring infrastructure, using parallel task queuing, to operate in elastic cloud environments
US20150215228 * | Jan 28, 2014 | Jul 30, 2015 | Oracle International Corporation | Methods, systems, and computer readable media for a cloud-based virtualization orchestrator
US20150365311 * | Aug 26, 2015 | Dec 17, 2015 | International Business Machines Corporation | Scalable software monitoring infrastructure, using parallel task queuing, to operate in elastic cloud environments
CN102158475A * | Feb 22, 2011 | Aug 17, 2011 | 山东大学 | System architecture based on student dormitory passageway system and system data synchronization method thereof
CN102215163A * | Mar 24, 2011 | Oct 12, 2011 | 东莞中山大学研究院 | Multi-server video on demand processing method
CN102932478A * | Nov 15, 2012 | Feb 13, 2013 | 北京搜狐新媒体信息技术有限公司 | Cloud platform node selection method and system
CN103312823A * | Jul 9, 2013 | Sep 18, 2013 | 苏州市职业大学 | Cloud computing system
CN103581340A * | Nov 25, 2013 | Feb 12, 2014 | 星云融创(北京)信息技术有限公司 | Method and device for accessing domain name to proxy gateway
CN103973682A * | Apr 30, 2014 | Aug 6, 2014 | 北京奇虎科技有限公司 | Method and device for having access to webpage
WO2012087105A1 * | Jun 13, 2011 | Jun 28, 2012 | Mimos Berhad | Method and system for cloud computing infrastructure monitoring
WO2013063218A1 * | Oct 25, 2012 | May 2, 2013 | Fourth Wall Media, Inc. | Network bandwidth regulation using traffic scheduling
WO2015031866A1 * | Aug 29, 2014 | Mar 5, 2015 | Clearpath Networks, Inc. | System and method of network functions virtualization of network services within and across clouds
WO2016081910A1 * | Nov 20, 2015 | May 26, 2016 | Huawei Technologies Co., Ltd. | System and method for modifying a service-specific data plane configuration
WO2016114866A1 * | Dec 9, 2015 | Jul 21, 2016 | Intel IP Corporation | Techniques for monitoring virtualized network functions or network functions virtualization infrastructure
Classifications
U.S. Classification: 370/252, 370/468
International Classification: H04L12/26, H04J3/22
Cooperative Classification: H04L41/145, H04L47/781, H04L41/0896, H04L47/822
European Classification: H04L41/08G, H04L47/82B, H04L47/78A
Legal Events
Date | Code | Event | Description
Jan 17, 2011 | AS | Assignment
Owner name: YOTTAA INC, MASSACHUSETTS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WEI, COACH;BUFFONE, ROBERT;STATA, RAYMOND;SIGNING DATES FROM 20110105 TO 20110107;REEL/FRAME:025648/0944
Jun 21, 2016 | AS | Assignment
Owner name: COMERICA BANK, MICHIGAN
Free format text: SECURITY INTEREST;ASSIGNOR:YOTTAA, INC.;REEL/FRAME:038973/0307
Effective date: 20160601