WO2005054980A2 - Remote network management system - Google Patents
- Publication number
- WO2005054980A2 (PCT/US2004/035943)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video
- circuit
- signals
- remote
- pixel
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/12—Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
- H04L67/125—Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks involving control of end-device applications over a network
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2370/00—Aspects of data communication
- G09G2370/24—Keyboard-Video-Mouse [KVM] switch
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/02—Standardisation; Integration
- H04L41/0246—Exchanging or transporting network management information using the Internet; Embedding network management web servers in network elements; Web-services-based protocols
- H04L41/0253—Exchanging or transporting network management information using the Internet; Embedding network management web servers in network elements; Web-services-based protocols using browsers or web-pages for accessing management information
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
Definitions
- the present invention relates generally to a remote network management system for remotely controlling network and computer equipment from one or more local user workstations through a remote control device.
- a keyboard, video monitor, and cursor control device attached to a user workstation are utilized to remotely control domain servers, file/print servers, headless servers, network appliances, serial IT equipment, switches, routers, firewalls, security interfaces, application servers, load balancers, and environmental controls as their associated power supplies are connected to a remote control device.
- a software program such as pcAnywhere may be utilized to access a remote computer over the Internet or a LAN utilizing the keyboard, video monitor, and cursor control device attached to a local user workstation.
- Remote computer access programs such as pcAnywhere, typically require that host software is installed on the remote computer and client software is installed on the user workstation.
- a user of the user workstation selects the desired remote computer from a list and enters the appropriate username and password.
- Hardware solutions also exist for operating a remote computer from a user workstation over the Internet or via a modem. In contrast to software solutions, hardware solutions do not typically require host and/or client software. Instead, hardware solutions typically utilize a keyboard, video monitor, and mouse ("KVM") switch which is accessible over the Internet or LAN via a common protocol, such as TCP/IP. The hardware solutions may also utilize a modem to connect to the Internet. Generally, a user or system administrator accesses the remote computers attached to the KVM switch utilizing an Internet web-browser or client software associated with the KVM switch.
- KVM keyboard, video monitor, and mouse
- the remote computer's video signal is routed to the user workstation's video monitor and a user may then utilize a keyboard and/or mouse to control the remote computer.
- the KVM switch may additionally include a connection to the power source of the remote computer for a hard reboot in case of system failure.
- the aforementioned hardware and software solutions generally utilize compression algorithms to reduce the necessary bandwidth required to transmit the video signals.
- the remote network management system of the present invention uses the compression algorithm disclosed in application serial no. 10/233,299, which is incorporated herein by reference, to reduce and compress the digital data that must be transmitted to the remote computers and/or video display devices.
- video signals generated by a personal computer have both spatial and interframe redundancies.
- the compression algorithm used by the present invention takes advantage of these redundancies, both between successive frames of video and within each individual frame, to reduce the amount of digital video signal data that is transmitted to the remote computers and/or video display devices. Reducing the amount of digital data transmitted over the communication medium decreases communication time and decreases the required bandwidth.
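The interframe-redundancy idea above can be sketched in a few lines. This is an illustrative simplification (block size and frame shape are assumptions, not the patent's parameters): only blocks of pixels that differ from the previous frame need to be compressed and transmitted.

```python
# Hypothetical sketch: exploit interframe redundancy by finding only the
# blocks of pixels that changed between two successive frames.
def changed_blocks(prev_frame, curr_frame, block_size=16):
    """Yield (row, col) of each block that differs between two frames.

    Frames are 2-D lists of pixel values; unchanged blocks need not be
    retransmitted, which reduces bandwidth and communication time.
    """
    rows = len(curr_frame)
    cols = len(curr_frame[0])
    for r in range(0, rows, block_size):
        for c in range(0, cols, block_size):
            prev_blk = [row[c:c + block_size] for row in prev_frame[r:r + block_size]]
            curr_blk = [row[c:c + block_size] for row in curr_frame[r:r + block_size]]
            if prev_blk != curr_blk:
                yield (r, c)

prev = [[0] * 32 for _ in range(32)]
curr = [row[:] for row in prev]
curr[20][5] = 7                          # a single pixel changes...
print(list(changed_blocks(prev, curr)))  # → [(16, 0)]  (only that block is sent)
```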
- Most forms of video compression known in the art require complicated calculations. For example, Moving Pictures Experts Group ("MPEG") video compression algorithms use the discrete cosine transform as part of their algorithms.
- MPEG Moving Pictures Experts Group
- the MPEG standard relies on the recognition of "motion" between frames, which requires calculation of motion vectors that describe how portions of the video image have changed over a period of time. Since these algorithms are calculation intensive, they either require expensive hardware or extended transmission times that allow sufficient time for slower hardware to complete the calculations.
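The calculation load described above can be illustrated with a naive discrete cosine transform. This is a generic sketch for illustration, not the patent's algorithm: a 1-D DCT-II costs O(N²) per row or column, which is why DCT-based schemes need either fast hardware or long transmission times.

```python
import math

def dct(signal):
    """Naive 1-D DCT-II: O(N^2) multiplies and cosine evaluations."""
    n = len(signal)
    return [sum(signal[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n))
            for k in range(n)]

# A constant signal concentrates all energy in the first (DC) coefficient.
coeffs = dct([1.0, 1.0, 1.0, 1.0])
print(coeffs[0])  # → 4.0 (remaining coefficients are ~0)
```

Even this toy version performs N² cosine evaluations per transform; MPEG additionally requires motion-vector searches, compounding the cost the text refers to.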
- many existing video compression techniques are lossy (i.e., they do not transmit all of the video signal information in order to reduce the required bandwidth). Typically, such lossy techniques either reduce the detail of a video image or reduce the number of colors utilized. Although reducing the number of colors could be part of an adequate compression solution for some computer management systems applications, in many other applications, such a result defeats the intended purposes of the computer management system.
- a system known in the art discloses a method and apparatus for coupling a local user workstation, including a keyboard, mouse, and/or video monitor, to a remote computer.
- the claimed invention discloses a system wherein the remote computer is selected from a menu displayed on a standard size personal computer video monitor. Upon selection of a remote computer by the system user, the remote computer's video signals are transmitted to the local user workstation's video monitor. The system user may also control the remote computer utilizing the local user workstation's keyboard and monitor. The system is also capable of bi-directionally transmitting mouse and keyboard signals between the local user workstation and the remote computer.
- the remote computer and the local user workstation may be connected either via the Public Switched Telephone System ("PSTN”) and modems or via direct cabling.
- PSTN Public Switched Telephone System
- a first signal conditioning unit includes an on-screen programming circuit that displays a list of connected remote computers on the local video monitor. To activate the menu, a user depresses, for example, the "print screen" key on the local keyboard. The user selects the desired computer from the list using the local keyboard and/or mouse.
- the on-screen programming circuit requires at least two sets of tri-state buffers, a single on-screen processor, an internal synchronization generator, a synchronization switch, a synchronization polarizer, and overlay control logic.
- the first set of tri-state buffers couples the red, green, and blue components of the video signals received from the remote computer to the video monitor. That is, when the first set of tri-state buffers are energized, the red, green, and blue video signals are passed from the remote computer to the local video monitor through the tri-state buffers. When the first set of tri-state buffers are not active, the video signals from the remote computer are blocked.
- the second set of tri-state buffers couples the outputs of the single on-screen processor to the video monitor. When the second set of tri-state buffers is energized, the video output of the on-screen programming circuit is displayed on the local video monitor. When the second set of tri-state buffers is not active, the video output from the on-screen programming circuit is blocked.
- the remote computer video signals are combined with the video signals generated by the on-screen processor prior to display on the local video monitor.
- the on-screen programming circuit disclosed in the invention also produces its own horizontal and vertical synchronization signals.
- the CPU sends instructional data to the on-screen processor. This causes the on-screen processor to retrieve characters from an internal video RAM for display on the local video monitor.
- the overlaid video image produced by the on-screen processor, namely a Motorola MC141543 on-screen processor, is limited to the size and quantity of colors and characters that are available with the single on-screen processor.
- the system is designed to produce an overlaid video that is sized for a standard size computer monitor (i.e., not a wall-size or multiple monitor type video display) and is limited to the quantity of colors and characters provided by the single on-screen processor.
- a remote computer is chosen from the overlaid video display.
- the first signal conditioning unit receives keyboard and mouse signals from the local keyboard and mouse and generates a data packet for transmission to a central cross point switch.
- the cross point switch routes the data packet to the second signal conditioning unit, which is coupled to the selected remote computer.
- the second signal conditioning unit then routes the keyboard and mouse command signals to the keyboard and mouse connectors of the remote computer.
- video signals produced by the remote computer are routed from the remote computer through the second signal conditioning unit, the cross point switch, and the first signal conditioning unit to the local video monitor.
- the horizontal and vertical synchronization video signals received from the remote computer are encoded on one of the red, green or blue video signals. This encoding reduces the quantity of cables required to transmit the video signals from the remote computer to the local video monitor.
- a keyboard, video, mouse (“KVM") switching system capable of coupling to a standard network (e.g., a Local Area Network) operating with a standard network protocol (e.g., Ethernet, TCP/IP, etc.).
- the system couples a central switch to a plurality of computers and at least one user station having a keyboard, video monitor, and mouse.
- the central switch includes a network interface card ("NIC") for connecting the central switch to a network, which may include a number of additional computers or remote terminals.
- NIC network interface card
- a user located at a remote terminal attached to the network may control any of the computers coupled to the central switch.
- Still another system known in the art discloses a computer system having remotely located I/O devices.
- the system of Thornton includes a computer, a first interface device, and a remotely located second interface device.
- the first interface device is coupled to the computer and the second interface device is coupled to a video monitor and as many as three I/O devices (e.g., keyboard, mouse, printer, joystick, trackball, etc.) such that a human interface is created.
- the first and second interface devices are coupled to each other via a four wire cable.
- the first interface device receives video signals from the connected computer and encodes the horizontal and vertical synchronization signals of the received video signals onto at least one of the red, green, and blue components of the video signal.
- the first interface device also encodes the I/O signals received from the connected computer into a data packet for transmission over the fourth wire in the four wire cable. Thereafter, the encoded red, green, and blue components of the video signals and the data packet are transmitted to the second interface device located at the human interface.
- the second interface device decodes the encoded red, green, and blue components of the video signal, separates the encoded horizontal and vertical synchronization signals, and decodes the I/O signal data packet.
- the video signal and the synchronization signals are then output to the video monitor attached to the second interface and the decoded I/O signals are routed to the proper I/O device, also attached to the second interface.
- the second interface device may optionally include circuitry to encode I/O signals received from the I/O devices attached to the second interface for transmission to the first interface device.
- KVMP keyboard, video, mouse, and power switching
- OSD On screen display circuitry embedded within the KVMP switching apparatus allows a user located at a user station to select and operate any one of the computers utilizing the keyboard, video monitor, and mouse attached to the user station.
- Secondary switching circuitry located within the KVMP switching apparatus allows a user located at a user station to additionally control the electrical power supply supplying each computer.
- a need clearly exists for a self-contained remote network management system capable of operating and controlling networking equipment, servers, and computers connected to a remote control switching unit.
- a system should allow a user to control the power supply attached to the remote networking equipment, servers, and computers.
- the system should aid in managing remote network environments, thereby reducing the need to have an on-site system administrator.
- the present invention provides a self-contained remote network management system for administrating a remote computer networking environment from one or more local user workstations with attached peripheral devices (i.e., keyboard, video monitor, cursor control device, etc.).
- the remote network management system of the present invention allows a user located at a user workstation to access, operate, and control networking equipment, servers, and computers located at a remote location.
- the remote network management system also allows a user to control the power supply to each piece of remote equipment.
- the networking equipment e.g., hubs, switches, routers, etc.
- servers and computers are controlled and operated utilizing a keyboard, video monitor, and mouse.
- the remote networking equipment, servers, and computers are all connected to a central remote management unit ("RMU"), and in turn, the RMU is connected to the Internet or a LAN via an Ethernet or modem connection.
- the RMU has serial ports for connection to the networking equipment as well as keyboard, video, and cursor control device ports for connection to the servers and computers.
- the RMU additionally contains a port for connection to a power supply capable of controlling the power to the networking equipment, servers, and computers. Standard cabling is utilized to connect the networking equipment, servers, and computers to the appropriate ports on the RMU.
- the RMU also provides compatibility between various operating systems and/or communication protocols, including but not limited to, those manufactured by Microsoft Corporation (“Microsoft”) (Windows), Apple Computer, Inc.
- a user To utilize the remote network management system of the present invention, a user first initiates a management session by utilizing client software located on a user workstation to connect to the RMU. Alternatively, the user may utilize an Internet browser to connect to the RMU. The user is then prompted by the RMU to provide a user name and a password.
- the RMU is capable of storing multiple profiles and different levels of access for each profile.
- the user is provided an option menu on the user workstation's monitor produced by option menu circuitry located in the RMU.
- the option menu consists of a menu listing all the networking equipment, servers, and computers at the remote location.
- the option menu additionally contains a menu allowing a user to control the power to each piece of remote equipment.
- the user selects the desired networking equipment, server, or computer by utilizing the keyboard and/or cursor control device attached to the user workstation. Once a user makes a selection, the user is provided access to the remote equipment as if the user is physically located at the remote site.
- the RMU and the user workstation communicate via TCP/IP.
- the unidirectional video signals are digitized by a frame grabber.
- This circuit captures video output from the initiating computer at a speed of at least 20 frames/second and converts the captured analog video signals to a digital representation of pixels.
- Each pixel is digitally represented with 5 bits for red, 5 bits for green, and 5 bits for blue.
- the digital representation is then stored in a raw frame buffer.
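The 5-5-5 digitization described above can be sketched as packing each captured pixel into a 15-bit value before it enters the raw frame buffer. The helper names and the 8-bit input assumption are illustrative, not from the patent:

```python
def pack_rgb555(r, g, b):
    """Reduce 8-bit-per-channel RGB to a packed 5-5-5 (15-bit) pixel."""
    return ((r >> 3) << 10) | ((g >> 3) << 5) | (b >> 3)

def unpack_rgb555(pixel):
    """Recover approximate 8-bit channels from a packed 5-5-5 pixel."""
    r = ((pixel >> 10) & 0x1F) << 3
    g = ((pixel >> 5) & 0x1F) << 3
    b = (pixel & 0x1F) << 3
    return r, g, b

pixel = pack_rgb555(255, 128, 0)
print(hex(pixel))            # → 0x7e00
print(unpack_rgb555(pixel))  # → (248, 128, 0)  (low 3 bits per channel lost)
```

Dropping the low three bits of each channel is what makes the raw frame buffer 15 bits per pixel instead of 24.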
- the compression algorithm then processes the digital data contained in the raw frame buffer.
- the compression algorithm is actually a combination of four sub-algorithms (i.e., the Noise Reduction and Difference Test ("NRDT"), Smoothing, Caching, and Bit Splicing/Compression sub-algorithms) as described in greater detail below.
- NRDT Noise Reduction and Difference Test
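The NRDT sub-algorithm named above can be sketched as a thresholded difference test. This is a hypothetical reconstruction: the block size, the distortion measure, and the threshold value here are illustrative assumptions, not the patent's exact parameters.

```python
NOISE_THRESHOLD = 100  # assumed value, tuned per capture hardware

def nrdt_block_changed(prev_block, curr_block, threshold=NOISE_THRESHOLD):
    """Return True only if the accumulated pixel difference exceeds a
    noise threshold. Small differences are treated as analog-capture
    noise and ignored, suppressing needless retransmission."""
    distortion = sum((p - c) ** 2 for p, c in zip(prev_block, curr_block))
    return distortion > threshold

prev    = [100] * 256                    # a flat 16x16 block, flattened
noisy   = [100] * 250 + [101] * 6        # tiny jitter from the A/D converter
changed = [100] * 128 + [160] * 128      # a genuine screen update
print(nrdt_block_changed(prev, noisy))    # → False (noise filtered out)
print(nrdt_block_changed(prev, changed))  # → True  (block is retransmitted)
```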
- the user workstation operates as a decompression device by executing a decompression algorithm.
- the RMU transmits messages to the decompression devices regarding the portions of the video that yielded "cache" hits (i.e., portions of unchanged video).
- the decompression device constructs the video frame based upon the transmitted video signals and the blocks of pixels contained in its local cache.
- the decompression device updates its local cache with the new blocks of pixels received from the RMU. In this manner, the decompression device caches remain synchronized with the compression device cache. Both the compression device and the decompression device update their respective cache by replacing older video data with newer video data. Furthermore, the video signals transmitted by the RMU have been compressed using a lossless compression algorithm. Therefore, the decompression device (e.g., software on the user workstation) must reverse this lossless compression. This is done by identifying the changed portions of the video image, based upon flags transmitted by the RMU. From this flag information, the decompression device is able to reconstruct full frames of video.
- the decompression device e.g., software on the user workstation
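The cache-synchronized reconstruction described above can be sketched as follows. The message format and names are assumptions for illustration: blocks flagged as cache hits are copied from the local cache, while newly received blocks both fill the frame and replace the cached entries, keeping the two caches in step.

```python
def reconstruct_frame(messages, cache):
    """messages: list of ('hit', block_id) or ('new', block_id, pixels)."""
    frame = {}
    for msg in messages:
        if msg[0] == 'hit':            # unchanged portion: reuse local cache
            frame[msg[1]] = cache[msg[1]]
        else:                          # changed portion: take transmitted data
            _, block_id, pixels = msg
            cache[block_id] = pixels   # update cache to mirror the compressor
            frame[block_id] = pixels
    return frame

cache = {0: [1, 1], 1: [2, 2]}
frame = reconstruct_frame([('hit', 0), ('new', 1, [9, 9])], cache)
print(frame)  # → {0: [1, 1], 1: [9, 9]}
print(cache)  # → {0: [1, 1], 1: [9, 9]}  (cache now matches the new frame)
```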
- the decompression device converts the video frame to its original color scheme by reversing a color code table ("CCT") conversion.
- the decompression device like the RMU, locally stores a copy of the same CCT used to compress the video data.
- the CCT is then used to convert the video data received from the RMU to a standard RGB format that may be displayed on the monitor attached to the user workstation.
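Since both the RMU and the decompression device hold identical copies of the CCT, reversing the conversion is a simple table lookup. The table contents below are illustrative placeholders, not the patent's actual color table:

```python
# Assumed example table: index → (R, G, B)
CCT = [(0, 0, 0), (255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 255)]

def cct_decode(codes, table=CCT):
    """Map a stream of received CCT indices back to displayable RGB."""
    return [table[code] for code in codes]

print(cct_decode([4, 1, 0]))  # → [(255, 255, 255), (255, 0, 0), (0, 0, 0)]
```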
- the decompression algorithm can be implemented in the remote network management system of the present invention in a variety of embodiments. For example, in one embodiment, it can be implemented as a software application that is executed by the user workstation. In an alternate embodiment, the decompression algorithm can be implemented to execute within a web browser such as Internet Explorer or Netscape® Navigator®.
- Such an embodiment eliminates the need for installation of application specific software on the user workstation. Also, this embodiment allows the RMU to easily transmit the video signals to any user workstation with Internet capabilities, regardless of the distance at which the computer is located from the initiating computer. This feature reduces the cabling cost associated with the remote network management system of the present invention. Since the present invention can be used to display video signals at locations that may be at a great distance from the RMU, it is important to ensure that the video signal transmission is secure. If the transmission is not secure, hackers, competitors, or other unauthorized users could potentially view confidential information contained within the video signals. Therefore, the remote network management system of the present invention is designed to easily integrate with digital encryption techniques known in the art.
- a 128-bit encryption technique is used both to verify the identity of the RMU and to encrypt and decrypt the transmitted video and data signals.
- a 128-bit public key RSA encryption technique is used to verify the remote participant, and a 128-bit RC4 private key encryption is used to encrypt and decrypt the transmitted signals.
- other encryption techniques or security measures may be used.
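For illustration of the symmetric stream cipher the text describes, here is a minimal RC4 sketch. Note that RC4 is no longer considered secure and this is not the patent's implementation; it only shows that the same keystream both encrypts and decrypts the transmitted signals.

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA): permute the 256-byte state by the key.
    s = list(range(256))
    j = 0
    for i in range(256):
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    # Pseudo-random generation algorithm (PRGA): XOR keystream with data.
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        out.append(byte ^ s[(s[i] + s[j]) % 256])
    return bytes(out)

key = b"session-key"
ciphertext = rc4(key, b"video frame data")
print(rc4(key, ciphertext))  # → b'video frame data' (same call decrypts)
```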
- the compression algorithm utilized does not employ operating system specific hooks, nor does it use platform specific GDI calls.
- the compression algorithm described herein and in co-pending application serial no. 10/233,299 is used to transmit the video signals.
- the video transmission system is not limited to such an embodiment. Rather, this system may be employed with any compression algorithm without departing from the spirit of the invention. Therefore, it is an object of the present invention to provide an improved, remote network management system that enables a user to control a remote networking environment from one or more local user workstations.
- a remote networking environment may include domain servers, file/print servers, headless servers, network appliances, serial IT equipment, switches, routers, firewalls, security interfaces, application servers, load balancers, and environmental controls.
- FIG. 1 A further understanding of the present invention can be obtained by reference to a preferred embodiment set forth in the illustrations of the accompanying drawings.
- FIG. 1 is a schematic representation of a remote network management system according to the preferred embodiment of the invention illustrating the connection of a user workstation that includes a keyboard, video monitor, and cursor control device to networking equipment, servers, and computers through a remote management unit ("RMU").
- FIG. 2 is a screen-shot of an example option menu utilized to control the networking equipment, servers and computers.
- FIG. 3A is a block diagram of the RMU shown in FIG. 1 according to the preferred embodiment of the present invention, illustrating the internal structure of the RMU and connectors for serial devices, keyboards, video monitors, cursor control devices, and a power supply.
- FIG. 3B is a detailed block diagram of the serial card shown in FIG. 3A.
- FIG. 3C is a detailed block diagram of the KVM port header shown in FIG. 3A.
- FIG. 3D is a detailed block diagram of the video processor shown in FIG. 3A.
- FIG. 4 depicts a flowchart of the compression algorithm utilized by the preferred embodiment of the RMU in accordance with the present invention.
- FIG. 5A depicts a flowchart detailing the Noise Reduction and Difference Test and smoothing sub-algorithms of the compression algorithm utilized by the preferred embodiment of the present invention.
- FIG. 5B depicts a flowchart that details the caching and bit splicing/compression sub-algorithms of the compression algorithm utilized by the preferred embodiment of the present invention.
- FIG. 6 depicts a flowchart that details the nearest match function and its integration with the CCT of the compression algorithm utilized by the preferred embodiment of the present invention.
- FIG. 7 depicts a flowchart that details the Noise Reduction and Difference Test sub-algorithm of the compression algorithm utilized by the preferred embodiment of the present invention.
- FIG. 8 depicts an example application of the Noise Reduction and Difference Test sub-algorithm to a sample block of pixels as performed by the compression algorithm utilized by the preferred embodiment of the present invention.
- FIG. 9 depicts a detailed flowchart of the operation of the decompression algorithm used by the preferred embodiment of the present invention.
- a remote network management system comprising user workstation 101 including keyboard 103, video monitor 105, and cursor control device 107, remote management unit ("RMU") 109, Internet/LAN/WAN 108, public switched telephone network ("PSTN") 106, serial devices 111a and 111b, servers 113a and 113b, remote computers 115a and 115b, and power supply 117.
- RMU remote management unit
- PSTN public switched telephone network
- serial devices 111a and 111b, servers 113a and 113b
- remote computers 115a and 115b and power supply 117.
- user workstation 101 and RMU 109 are connected to Internet/LAN/WAN 108 via communication lines 119 and 121, respectively.
- CAT 5 cabling is the preferred cabling for communication lines 119 and 121
- other cabling may be used, such as coaxial, fiber optic or multiple CAT 5 cables.
- CAT 5 cabling is preferred because it reduces cabling cost while maintaining the strength of signals that are transmitted over an extended distance.
- wireless networking equipment may also be utilized to connect RMU 109 to Internet/LAN/WAN 108 and serial devices 111a and 111b, servers 113a and 113b, computers 115a and 115b, and power supply 117.
- wireless networking equipment may also be utilized to connect user workstation 101 to Internet/LAN/WAN 108.
- user workstation 101 may utilize PSTN 106 to connect to RMU 109.
- PSTN 106 is utilized to connect to RMU 109
- communication lines 120 and 122 would preferably be CAT 3 cables. As an example, this means of communication may be utilized in emergency situations, such as if Internet/LAN/WAN 108 is not functioning properly.
- Communication lines 119 and 121 are connected to user workstation 101 and RMU 109 by plugging each end into a RJ-45 socket located on the respective pieces of equipment to be coupled by the CAT 5 cable.
- RJ-45 sockets and plugs are preferred; however, other types of connectors may be used, including but not limited to RJ-11, RG-58, RG-59, British Naval Connector ("BNC"), and ST connectors.
- the remote management system includes local user workstation 101, preferably comprising dedicated peripheral devices such as keyboard 103, video monitor 105 and/or cursor control device 107. Other peripheral devices may also be located at workstation 101, such as a printer, scanner, video camera, biometric scanning device, microphone, etc. Each peripheral device is directly or indirectly connected to user workstation 101, which is attached to Internet/LAN/WAN 108 via communication line 119. Of course, wireless peripheral devices may also be used with this system. In a preferred mode of operation, all electronic signals (i.e., keyboard signals and cursor control device signals) received at user workstation 101 from attached peripheral devices are transmitted to Internet/LAN/WAN 108 via communication line 119. Thereafter, the signals are transmitted to RMU 109 via communication line 121.
- dedicated peripheral devices such as keyboard 103, video monitor 105 and/or cursor control device 107.
- Other peripheral devices may also be located at workstation 101, such as a printer, scanner, video camera, biometric scanning device, microphone, etc.
- Each peripheral device is directly or indirectly connected
- RMU 109 transmits the received signals to the respective remote equipment, which, in this figure, includes serial devices 111a and 111b, servers 113a and 113b, computers 115a and 115b, and power supply 117.
- RMU 109 may be compatible with all commonly used, present day computer operating systems and protocols, including, but not limited to, those manufactured by Microsoft (Windows), Apple (Macintosh), Sun (Solaris), DEC, Compaq (Alpha), IBM (RS/6000), HP (HP9000) and SGI (IRIX). Additionally, local devices may communicate with remote computers via a variety of protocols including Universal Serial Bus ("USB"), American Standard Code for Information Interchange (“ASCII”) and Recommended Standard-232 ("RS-232").
- USB Universal Serial Bus
- ASCII American Standard Code for Information Interchange
- RS-232 Recommended Standard-232
- Serial devices 111a and 111b are connected to RMU 109 via communication lines 112a and 112b, respectively.
- communication lines 112a and 112b are CAT 5 cables terminated with RJ-45 connectors.
- a special adapter may be required to properly connect communication lines 112a and 112b to serial devices 111a and 111b since not all serial devices are outfitted with RJ-45 ports. For example, if serial device 111a only contained a serial port, the adapter would interface the RJ-45 connector of communication line 112a to the serial port located on serial device 111a.
- power supply 117 is connected to RMU 109 via communication line 118.
- communication line 118 is a CAT 5 cable terminated with an RJ-45 connector on each end.
- Servers 113a and 113b and computers 115a and 115b are connected to RMU 109 via communication lines 114a, 114b, 116a, and 116b, respectively.
- communication lines 114a, 114b, 116a, and 116b are three-to-one coaxial cables which allow the keyboard, video, and cursor control device ports of servers 113a and 113b and computers 115a and 115b to be connected to a single port on RMU 109 as shown.
- a user initiates a remote management session at user workstation 101.
- the user first accesses client software located on user workstation 101, which prompts the user for a user name and password.
- the system may utilize any combination of identification data to identify and/or authenticate a particular user.
- Utilizing the attached keyboard 103, cursor control device 107 or other peripheral device the user enters the user name and password.
- user workstation 101 connects to Internet/LAN/WAN 108 via communication line 119.
- User workstation 101 may connect to Internet/LAN/WAN 108 in a variety of ways.
- user workstation 101 may be connected to Internet/LAN/WAN 108 through an Ethernet connection.
- communication line 119 would be a CAT 5 cable.
- the connection to Internet/LAN/WAN 108 may also be accomplished through a wireless connection which precludes the need for communication line 119.
- RMU 109 may utilize standard Wireless Fidelity ("Wi-Fi") networking equipment to communicate with Internet/LAN/WAN 108.
- user workstation 101 may connect to RMU 109 via PSTN 106 by utilizing a modem connection.
- communication lines 120 and 122 would be CAT 3 cables.
- the username and password are then routed through Internet/LAN/WAN 108 to RMU 109 via communication line 121.
- RMU 109 receives the username and password and authenticates the user located at user workstation 101.
- an option menu circuit located in RMU 109 provides an option menu to the user at workstation 101 via monitor 105 listing all the devices accessible through RMU 109. The user makes selections from this option menu utilizing keyboard 103, cursor control device 107, or some other peripheral device attached to user workstation 101.
- option menu 201 consists of device list 203, first desktop window 205, power control window 207, second desktop window 209, and serial device window 211.
- Device list 203 lists all active and inactive devices connected to RMU 109. A user utilizes this menu to select the desired device for control.
- first desktop window 205 displays the desktop of one of the remote computers.
- By selecting first desktop window 205, a user may utilize keyboard 103, cursor control device 107, or some other peripheral device to control the displayed remote computer.
- a user may utilize power control window 207 to access and operate power supply 117.
- Power control window 207 displays a list of all devices connected to power supply 117 as well as the status of each attached device such as average power utilized, RMS current, RMS voltage, internal temperature, etc.
- Power control window 207 is primarily utilized to cycle the power to the devices attached to power supply 117. However, since power supply 117 is programmable, power control window 207 may be utilized to perform any functions possible with power supply 117.
- Second desktop window 209 is utilized to access and operate a second remote computer or server.
- Serial device window 211 is utilized to operate and access any remote serial device attached to remote management unit 109.
- Serial device window 211 displays the current output produced by the serial device as well as the previous output produced by the serial device.
- the previous output of the serial device is stored in a buffer located in RMU 109.
- option menu 201 consists of a menu in which the attached devices are arranged by their connection to RMU 109.
- serial devices 111a and 111b preferably would be listed in a menu different from servers 113a and 113b and computers 115a and 115b.
- the option menu also consists of a sub-menu for controlling power supply 117.
- RMU 109 may additionally contain an attached keyboard 123, cursor control device 125, and video monitor 127 which allow a user local to RMU 109 to control the attached serial devices 111a and 111b, servers 113a and 113b, computers 115a and 115b, power supply 117, etc.
- Keyboard 123, cursor control device 125, and video monitor 127 may also be utilized to configure RMU 109 locally.
- Keyboard 123, cursor control device 125, and video monitor 127 are connected to RMU 109 via interface cable 129.
- keyboard 123, cursor control device 125, and video monitor 127 may be connected to RMU 109 via standard keyboard, cursor control device, and video monitor connectors.
- RMU 109 depicted is the preferred embodiment of RMU 109 according to the present invention.
- Keyboard and mouse signals arrive at RJ-45 port 201 from Internet/LAN/WAN 108 via communication line 121.
- RMU 109 consists of RJ-45 port 201, RJ-11 port 202, Ethernet connector 205, modem module 204, communications port connector 206, CPU 207, communications port connector 208, PCI riser card 209, serial card 211, video processor 212, serial ports 213, frame grabber 215, KVM port header 217, KVM ports 219, power supply 221, power port 223, reset circuitry 225, local KVM port 227, and option menu circuit 229.
- the keyboard and/or cursor control device signals initially arrive at RJ-45 port 201 if RMU 109 is connected to Internet/LAN/WAN 108 via an Ethernet connection.
- the signals are then transmitted to Ethernet connector 205 which depacketizes the signals.
- the signals may arrive from PSTN 106 at RJ-11 port 202 if the keyboard and/or cursor control device signals were transmitted via a modem.
- the signals are transmitted to modem module 204, which demodulates the received signals, and subsequently to communications port connector 206 which depacketizes the signals. From Ethernet connector 205 or communications port connector 206, the keyboard and/or cursor control device signals are then transmitted to CPU 207 via video processor 212.
- CPU 207 utilizes routing information contained within the keyboard and/or cursor control device signals to determine the proper destination for the keyboard and cursor control device signals. If the keyboard and cursor control device signals specify a command to power supply 117, CPU 207 interprets the received command (e.g., utilizing a look-up table) and sends the proper command to power supply 117 via communications port connector 208 and power port 210.
- power port 210 is an RJ-45 connector to allow the RMU to interface with a power strip and control it as if it were a serial device. If CPU 207 determines that the keyboard and cursor control device signals contain a serial device routing instruction, the keyboard and cursor control device signals are transmitted to serial card 211 through PCI riser card 209.
- serial card 211 consists of UART/switch 301, serial transceivers 303, and programmable memory 305.
- Serial card 211 is capable of bidirectional signal transmission. When keyboard and/or cursor control device signals are being transmitted from PCI riser card 209 to serial port 213, the signals are initially transmitted to UART/switch 301 which, utilizing data and logic stored in memory 305, determines the proper serial transceiver 303 to which the keyboard and/or cursor control device signals are to be sent.
- UART/switch 301 is an EXAR XR17C158.
- serial transceiver 303 which converts the signals from a parallel format to a serial format.
- Serial transceiver 303 is preferably a HIN23E serial transceiver from Intersil.
- the keyboard and/or cursor control device signals are then transmitted to serial port 213.
- when commands from serial device 111a or 111b are transmitted to CPU 207 via serial port 213, serial card 211, and PCI riser card 209, the commands are initially transmitted to serial transceiver 303, which converts the serial commands to a parallel format. Subsequently, the commands are transmitted to UART/switch 301 which re-transmits the commands to CPU 207 via PCI riser card 209.
- CPU 207 interprets the received commands and emulates a virtual terminal for display on video monitor 105.
- the present invention may incorporate any number of serial ports 213.
- two serial devices, 111a and 111b, are connected to serial ports 213a and 213b, respectively.
- if CPU 207 determines that the keyboard and/or cursor control device signals are meant for servers 113a and 113b or computers 115a and 115b, CPU 207 transmits the keyboard and cursor control device signals through PCI riser card 209 and frame grabber 215 to KVM port header 217 which transmits the signals to the appropriate KVM port 219.
- as shown in FIG. 3A, two serial devices, 111a and 111b, are connected to serial ports 213a and 213b, respectively.
- KVM port header 217 consists of switch 350, video switch 352, and UARTs 354.
- keyboard and/or cursor control device signals are transmitted from KVM port 219 to KVM port header 217, the signals are initially received at UART 354.
- UART 354 converts the received serial keyboard and/or cursor control device signals to a parallel format.
- the converted keyboard and/or cursor control device signals are then transmitted to switch 350 which retransmits the signals to frame grabber 215.
- bi-directional keyboard and/or cursor control device signals are also transmitted from frame grabber 215 to KVM port 219. Keyboard and/or cursor control device signals received from frame grabber 215 are transmitted to switch 350 located in KVM port header 217.
- switch 350 transmits the received keyboard and/or cursor control device signals to the appropriate UART 354.
- UART 354 then converts the keyboard and/or cursor control device signals from a parallel format to a serial format for transmission to KVM port 219.
- KVM port header 217 also transmits uni-directional video signals received at KVM port 219 to frame grabber 215.
- the analog video signals received from KVM port 219 initially are transmitted to video switch 352.
- Video switch 352 then retransmits the video signals to frame grabber 215 which converts the received analog video signals to a digital format.
- Video processor 212 consists of video-in port 370, R-out 376a, G-out 376b, B-out 376c, pixel pusher 378, frame buffers 380, compression device 382, flash memory 384, RAM 386, microprocessor 388, and switch 390. Shown at the top of FIG. 3D, video-in port 370 receives the digitized video signals from CPU 207.
- Video-in port 370 outputs these digitized video signal components in the form of pixels, which are transmitted to and stored in pixel pusher 378.
- Pixel pusher 378, flash memory 384, and Random Access Memory (“RAM") 386 communicate with microprocessor 388 via communication bus 387.
- Pixel pusher 378 also communicates with frame buffers 380 (e.g., raw frame buffer, compare frame buffer, etc.) and compression device 382 via communication buses 379 and 381, respectively.
- the compression algorithm is executed by microprocessor 388.
- the compression operates as follows: Noise Reduction and Difference Test: As discussed above, digitization of the analog video signals is necessary to allow these signals to be transmitted via a digital communication medium (e.g., a network, LAN, WAN, Internet, etc.). However, a detrimental side effect of the digitization process is the introduction of quantization errors and noise into the video signals. Therefore, the Noise Reduction and Difference Test sub-algorithm (“NRDT sub-algorithm”) is designed to reduce the noise introduced during the digitization of the video signals. In addition, the NRDT sub-algorithm simultaneously determines the differences between the recently captured frame of video (i.e., the "current frame") and the previously captured frame of video (i.e., the "compare frame").
- a digital communication medium e.g., a network, LAN, WAN, Internet, etc.
- NRDT sub-algorithm the Noise Reduction and Difference Test sub-algorithm
- the NRDT sub-algorithm simultaneously determines the differences between the recently
- the NRDT sub-algorithm divides the current frame, which is contained in the raw frame buffer, into 64 x 32 blocks of pixels. Alternatively, other sizes of blocks may be used (e.g., 8x8 pixels, 16x16 pixels, 32x32 pixels, etc.) based upon criteria such as the size of the entire video frame, the bandwidth of the communication medium, desired compression yield, etc. After the current frame is divided into blocks, a two-level threshold model is applied to the block of pixels to determine whether it has changed with respect to the compare frame.
- the “pixel threshold” and the “block threshold.”
- pixel threshold i.e., the first threshold of the two-level threshold
- this distance value is added to a distance sum. This process is performed for each pixel in the block.
- the resulting value of the distance sum is compared to the block threshold (i.e., the second threshold of the two-level threshold).
- this block of pixels is considered changed in comparison to the corresponding block of pixels in the compare frame. If a change is determined, the compare frame, which is stored in the compare frame buffer, will be updated with the new block of pixels. Furthermore, the new block of pixels will be further processed and transmitted in a compressed format to the user workstation. In contrast, if the distance sum is not greater than the block threshold, the block of pixels is determined to be unchanged. Consequently, the compare frame buffer is not updated, and this block of pixels is not transmitted to the user workstation. Eliminating the transmission of unchanged blocks of pixels reduces the overall quantity of data to be transmitted, thereby decreasing transmission time and the required bandwidth.
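- the two-level threshold decision described above can be sketched as follows. This is an illustrative sketch only; the block size, threshold values, distance metric, and function name are assumptions for demonstration and are not taken from the preferred embodiment.

```python
def block_changed(current, compare, pixel_threshold=100, block_threshold=2000):
    """Return True if a block of pixels is considered changed.

    `current` and `compare` are same-length lists of (R, G, B) tuples for
    corresponding blocks in the current frame and the compare frame.
    Threshold values here are illustrative assumptions.
    """
    distance_sum = 0
    for (r1, g1, b1), (r2, g2, b2) in zip(current, compare):
        # Distance between corresponding pixels across all three colors.
        d = (r1 - r2) ** 2 + (g1 - g2) ** 2 + (b1 - b2) ** 2
        # First threshold: per-pixel differences below the pixel threshold
        # are treated as digitization noise and contribute nothing.
        if d > pixel_threshold:
            distance_sum += d
    # Second threshold: the block is changed only if the accumulated
    # distance of significant pixel differences exceeds the block threshold.
    return distance_sum > block_threshold
```

- note how this captures both cases discussed above: a few drastically changed pixels (e.g., black to white) can by themselves push the distance sum over the block threshold, while widespread low-level jitter below the pixel threshold is discarded entirely.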
- the NRDT sub-algorithm is ideal for locating both a large change in a small quantity of pixels and a small change in a large quantity of pixels. Consequently, the NRDT sub-algorithm is more efficient and more accurate than known percentage threshold algorithms that simply count the number of changed pixels in a block of pixels. With such an algorithm, if a few pixels within the block of pixels have changed drastically (e.g., from black to white), the algorithm would consider the block of pixels to be unchanged since the total number of changed pixels would not exceed the percentage threshold value. This result will often lead to display errors in the transmission of computer video. Consider, for example, a user that is editing a document.
- a percentage threshold algorithm would not register this change and, therefore, would lead to a display error.
- a percentage threshold algorithm by only looking at the number of pixels within a block that have changed, generally fails to recognize a video image change in which a few pixels have changed substantially.
- the NRDT sub-algorithm used by the present invention by virtue of its two-level threshold, will recognize that such a block of pixels has significantly changed between successive frames of video.
- each digital pixel representation is converted to a representation that uses a lower quantity of bits for each pixel. It is known in the art to compress color video by using a fewer number of bits to represent each color of each pixel. For example, a common video standard uses 8 bits to represent each of the red, green, and blue components of a video signal. Because 24 total bits are used to represent a pixel, this representation is commonly referred to as "24 bit RGB representation".
- the smoothing sub-algorithm of the present invention incorporates a more intelligent method of compression.
- This method uses a Color Code Table ("CCT") to map specific RGB representations to more compact RGB representations.
- CCT Color Code Table
- Both the compression and decompression algorithms of the present invention use the same CCT. However, different color code tables may be chosen depending on the available bandwidth, the capabilities of the local display device, etc.
- a histogram of pixel values is created and sorted by frequency such that the smoothing sub-algorithm may determine how often each pixel value occurs. Pixel values that occur less frequently are compared to pixel values that occur more frequently. To determine how similar pixel values are, a distance value is calculated based upon the color values of the red, green, and blue ("RGB") components of each pixel.
- RGB red, green, and blue
- a map of RGB values to color codes i.e., a CCT
- the CCT is used to map the less frequently occurring pixel value to the color code of the more frequently occurring pixel value.
- the noise is efficiently removed from each block and the number of bits used to represent each pixel is reduced.
- an 8x8 pixel block is being processed.
- 59 are blue
- 4 are red
- 1 is light blue.
- a low frequency threshold of 5 and a high frequency threshold of 25 are used. In other words, if a pixel value occurs less than 5 times within a block, it is considered to have a low frequency. Similarly, if a pixel value occurs more than 25 times within a block, it is considered to have a high frequency.
- the smoothing sub-algorithm ignores pixel values occurring between these two thresholds.
- the smoothing sub-algorithm determines that the red and light blue pixels occur with low frequency, and the blue pixels occur with high frequency.
- the values of the 4 red pixels and the 1 light blue pixel are compared with the value of the 59 blue pixels.
- a pre-determined distance threshold is used. If the distance between the less frequent pixel value and the more frequent pixel value is within this distance threshold, then the less frequent pixel value is converted to the more frequent pixel value. Therefore, in our present example, it is likely that the light blue pixel is close enough in value to the blue pixel that its distance is less than the distance threshold. Consequently, the light blue pixel is mapped to the blue pixel.
- the smoothing sub-algorithm of the present invention increases the redundancy in compared images by eliminating changes caused by superfluous noise introduced during the analog-to-digital conversion while retaining real changes in the video image.
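- the smoothing steps above (frequency histogram, low/high frequency classification, and distance-threshold remapping) can be sketched as follows. The threshold values and the function name are illustrative assumptions, not values given by the preferred embodiment.

```python
from collections import Counter

def smooth_block(pixels, low_freq=5, high_freq=25, dist_threshold=5000):
    """Map rare pixel values in one block to nearby common values.

    `pixels` is a list of (R, G, B) tuples for one block (e.g. 8x8 = 64
    pixels). Pixel values occurring fewer than `low_freq` times are rare;
    values occurring more than `high_freq` times are common.
    """
    counts = Counter(pixels)
    common = [p for p, n in counts.items() if n > high_freq]
    rare = [p for p, n in counts.items() if n < low_freq]

    mapping = {}
    for r in rare:
        # Find the closest high-frequency value by RGB distance.
        best, best_d = None, None
        for c in common:
            d = sum((a - b) ** 2 for a, b in zip(r, c))
            if best_d is None or d < best_d:
                best, best_d = c, d
        # Only remap when the rare value is close enough to a common one;
        # a genuinely different rare color (e.g., red among blue) survives.
        if best is not None and best_d < dist_threshold:
            mapping[r] = best
    return [mapping.get(p, p) for p in pixels]
```

- in the 8x8 example above, the single light blue pixel would be remapped to blue (small distance), while the 4 red pixels would remain red (large distance), removing noise without destroying real image content.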
- an optional caching sub-algorithm may be applied to further minimize the bandwidth required for transmitting the video images.
- the caching sub-algorithm uses a cache of previously transmitted blocks of pixels. Similar to the NRDT sub-algorithm, the caching sub-algorithm is performed on a block of pixels within the video frame. Again, any block size may be used (e.g., 8x8, 16x16, 32x32 or 64x32).
- the caching sub-algorithm performs a cache check, which compares the current block of pixels with blocks of pixels stored in the cache.
- the size of the cache may be arbitrarily large. Large caches generally yield a higher percentage of "cache hits."
- memory and hardware requirements increase when the size of the cache is increased.
- the number of comparisons, and thus the processing power requirements also increases when the size of the cache increases.
- a "cache hit” occurs when a matching block of pixels is located within the cache.
- a "cache miss” occurs if a matching block of pixels is not found in the cache.
- the new block of pixels does not have to be retransmitted. Instead, a message and a cache entry identification ("ID") are sent to the remote participant equipment. Generally, this message and cache entry ID will consume less bandwidth than that required to transmit an entire block of pixels.
- ID a cache entry identification
- the new block of pixels is compressed and transmitted to the user workstation. Also, both the RMU and user workstation update their respective cache by storing the new block of pixels in the cache. Since the cache is of limited size, older data is overwritten.
- a simple algorithm can be employed to overwrite the oldest block of pixels within the cache, wherein the oldest block is defined as the least recently transmitted block.
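- an eviction policy of this kind (overwrite the least recently transmitted block once the cache is full) can be sketched as below; the class name and capacity are illustrative assumptions.

```python
from collections import OrderedDict

class BlockCache:
    """Fixed-size cache that overwrites the least recently transmitted block."""

    def __init__(self, capacity=16):
        self.capacity = capacity
        self.entries = OrderedDict()  # entry ID -> block data

    def store(self, entry_id, block):
        if entry_id in self.entries:
            # Re-transmitted entry becomes the most recent again.
            self.entries.move_to_end(entry_id)
        self.entries[entry_id] = block
        if len(self.entries) > self.capacity:
            # Evict the oldest (least recently transmitted) block.
            self.entries.popitem(last=False)
```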
- in order to search for a cache hit, the new block of pixels must be compared with all corresponding blocks of pixels located within the cache. There are several ways in which this may be performed.
- a cyclic redundancy check (“CRC”) is computed for the new block of pixels and all corresponding blocks of pixels.
- the CRC is similar to a hash code for the block.
- a hash code is a smaller, yet unique, representation of a larger data source.
- the cache check process can compare CRCs for a match instead of comparing the whole block of pixels. If the CRC of the current block of pixels matches the CRC of any of the blocks of pixels in the cache, a "cache hit" has been found. Because the CRC is a smaller representation of the block, less processing power is needed to compare CRCs. Furthermore, it is possible to construct a cache in which only the CRCs of blocks of pixels are stored at the remote participant locations. Thus, comparing the CRCs in lieu of comparing a full block of pixels saves processor time and thus improves performance.
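- a CRC-based cache check of the kind described might look like the following sketch. The use of `zlib.crc32` as the checksum and the dictionary layout are assumptions for illustration.

```python
import zlib

def cache_check(block_bytes, cache):
    """Look up a block of pixels in the cache by its CRC.

    `cache` maps CRC values to cache entry IDs already known to both the
    RMU and the user workstation. Returns ("hit", entry_id) on a cache
    hit, or ("miss", crc) when the block must be compressed and sent.
    """
    crc = zlib.crc32(block_bytes)
    if crc in cache:
        return ("hit", cache[crc])  # send only the small cache-entry ID
    return ("miss", crc)            # block must be compressed and transmitted
```

- on a miss, both ends would store the new block (or its CRC) under a fresh entry ID so that the next occurrence of the same block produces a hit.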
- each block of pixels that must be transmitted is compressed.
- each block is compressed using the Joint Bi-level Image Group (“JBIG") lossless compression algorithm.
- JBIG Joint Bi-level Image Group
- the JBIG compression algorithm was designed for black and white images, such as those transmitted by facsimile machines.
- the compression algorithm utilized by the present invention can compress and transmit color video images. Therefore, when utilizing the JBIG compression algorithm, the color video image must be bit-sliced, and the resulting bit-planes must be compressed separately.
- a bit plane of a color video image is created by extracting a single bit from each pixel color value in the color video image.
- the color video image is divided into 8 bit planes.
- the compression algorithm in conjunction with the CCT discussed above, transmits the bit plane containing the most significant bits first, the bit plane containing the second most significant bits second, etc.
- the CCT is designed such that the most significant bits of each pixel color are stored first and the lesser significant bits are stored last. Consequently, the bit planes transmitted first will always contain the most significant data, and the bit planes transmitted last will always contain the least significant data.
- the remote video monitor will receive video from the RMU progressively, receiving and displaying the most significant bits of the image before receiving the remaining bits.
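- bit-plane slicing of the kind described, with the plane of most significant bits extracted and transmitted first, can be sketched as follows (the function name is an illustrative assumption):

```python
def bit_planes(color_codes, bits=8):
    """Slice a list of `bits`-wide pixel color codes into bit planes.

    Returns a list of planes ordered most significant bit first, so that
    planes can be compressed and transmitted in order of significance,
    allowing the image to be displayed progressively.
    """
    planes = []
    for i in range(bits - 1, -1, -1):  # most significant bit first
        planes.append([(code >> i) & 1 for code in color_codes])
    return planes
```

- each returned plane is a bi-level (black and white) image, which is the form the JBIG algorithm is designed to compress.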
- RMU 109 also contains a power supply 221 which provides power to RMU 109.
- power supply 221 is a redundant power supply which contains backup circuitry in case the main circuitry fails.
- Power supply 221 receives power through power port 223 from an external power supply. The power to RMU 109 is controlled by reset circuitry 225 which is interfaced directly to CPU 207.
- Reset circuitry 225 is utilized to turn the power on/off and reset RMU 109.
- RMU 109 also contains local KVM port 227 interfaced to CPU 207.
- Local KVM port 227 allows for connection of local keyboard 123, video monitor 127, and cursor control device 125 to RMU 109 via cable 129 (FIG. 1).
- Local keyboard 123, video monitor 127, and cursor control device 125 may be utilized for onsite control of the attached serial devices 111a and 111b, servers 113a and 113b, computers 115a and 115b, and power supply 117.
- Option menu circuit 229 under control of CPU 207, provides the option menu to a user of the present invention.
- the option menu contains menus for selecting a serial device, a remote server or computer, or options to control the power to all devices connected to power supply 117.
- a user first initiates a remote management session at user workstation 101 and enters the required username and password. However, any unique combination of authentication information may be utilized.
- User workstation 101 packetizes the entered information and routes it to Internet/LAN/WAN 108 via communication line 119 and then to RMU 109 via communication line 121.
- the entered data is received at CPU 207 via RJ-45 connector 201 (or alternatively RJ-11 connector 202).
- Ethernet connector 205 removes the network protocol and transmits the received keyboard and or cursor control device signals to CPU 207.
- CPU 207 utilizes a lookup table containing all user profiles stored in the system to authenticate the user. Different user profiles may be given different levels of access to the system. For example, certain users may only be able to access and operate computers 115a and 115b and be restricted from operating servers 113a and 113b, serial devices I l ia and 111b, and power supply 117.
- option menu circuit 229 produces an option menu containing all the devices attached to RMU 109. In this case, the attached devices include serial devices 111a and 111b, servers 113a and 113b, computers 115a and 115b, and power supply 117.
- RMU 109 may accommodate any number of serial devices, servers, computers, and associated power supplies.
- the option menu produced by option menu circuit 229 is compressed by video processor 212 and packetized by Ethernet connector 205 and then transmitted to user workstation 101 through RJ-45 connector 201, communication line 121, Internet/LAN/WAN 108, and communication line 119, in that order.
- the option menu is depacketized and decompressed at user workstation 101 for display on video monitor 105. The user then utilizes keyboard 103 and cursor control device 107 to select the desired device from the option menu.
- the user-entered keyboard and cursor control device signals are then encoded by user workstation 101, transmitted to RMU 109 via Internet/LAN/WAN 108, and subsequently decoded by CPU 207 located in RMU 109.
- CPU 207 interprets the received keyboard and cursor control device signals and interfaces the user with the selected device as previously described. If the user selects to be interfaced with servers 113a or 113b or computers 115a and 115b, the video signal of the selected device is displayed on video monitor 105.
- the video signal initially arrives from the selected device at KVM port 219 and is routed to KVM port header 217.
- the video signal is then routed to frame grabber 215 which converts the analog video signal to a digital signal.
- the resulting digitized video signal is then routed to CPU 207 through PCI riser card 209.
- CPU 207 determines the correct location to transmit the video signal (i.e., to local KVM port 227 or video processor 212). If the video signal is routed to local KVM port 227, the video signal is displayed on local video monitor 127. Alternatively, if the video signal is routed to video processor 212, it is compressed by video processor 212 and packetized by either Ethernet connector 205 or communications port connector 206 for transmission via communication line 121 through either RJ-45 port 201 or RJ-11 port 202.
- Ethernet connector 205 or communications port connector 206 also appends any other signals (i.e., keyboard signals, cursor control device signals, etc.) onto the compressed video signal for transmission to user workstation 101.
- a hotkey such as “printscreen” or “F1” on keyboard 103 attached to user workstation 101 (FIG. 1).
- This causes option menu circuit 229 to open an option menu allowing the user to select a new serial device, server, or computer, or modify the power supply to one of the connected devices.
- FIG. 4 depicted is a flowchart illustrating the operation of the compression algorithm utilized by video processor 212 in the preferred embodiment of the present invention.
- the compression algorithm is executed internal to RMU 109 by video processor 212 (FIG. 3).
- the digitized video signal is initially stored in a raw frame buffer (step 402), which is one of the frame buffers 380 (FIG. 3D).
- the compression algorithm is performed to process the captured video data contained in the raw frame buffer and prepare it for transmission to user workstation 101.
- the first step of the compression algorithm is the NRDT (step 403).
- the NRDT sub-algorithm is also executed internal to RMU 109 by video processor 212 (FIG. 3).
- the NRDT sub-algorithm determines which blocks of pixels, if any, have changed between the current frame and the compare frame, also discussed above.
- the video frame is first divided into 64x32 pixel blocks.
- the NRDT sub-algorithm is applied to each block of pixels independently.
- Alternative embodiments of the present invention may utilize smaller or larger blocks depending on criteria such as desired video resolution, available bandwidth, etc.
- the NRDT sub-algorithm employs a two-threshold model to determine whether differences exist between a block of pixels in the current frame and the corresponding block of pixels in the compare frame. These two thresholds are the pixel threshold and the block threshold.
- each pixel of the pixel block is examined to determine if that pixel has changed relative to the corresponding pixel of the corresponding block in the compare frame.
- the distance value for each of the three colors (i.e., red, green, and blue) of each pixel in relation to the corresponding compare pixel is calculated, as described in greater detail below with respect to FIG. 7. If the distance value is larger than the pixel threshold (i.e., the first threshold of the two-threshold model), this distance value is added to a distance sum value. Then, after all pixels within the pixel block have been examined, if the resulting distance sum value is greater than the block threshold (i.e., the second threshold of the two-threshold model), the block is determined to have changed. Every block of pixels in the video frame undergoes the same process.
- the pixel threshold i.e., the first threshold of the two-threshold model
- the process will have identified all pixel blocks that have changed since the previous video frame.
- the compare frame is updated with the changed pixel blocks.
- the pixel blocks of the compare frame that correspond to unchanged pixel blocks of the current frame will remain unchanged.
- the two-threshold model used by the NRDT sub-algorithm eliminates pixel value changes that are introduced by noise created during the analog-to-digital conversion and also captures the real changes in the video frame.
- the smoothing sub-algorithm is designed to create a smooth, higher-quality video image by reducing the roughness of the video image caused by noise introduced during the analog-to-digital conversion.
- the smoothing sub-algorithm first converts the pixel representation that resulted from the NRDT sub-algorithm into a representation that uses fewer bits per pixel. This is performed using a color code table (CCT) that is specially organized to minimize the size of the pixel representation.
- the smoothing sub-algorithm uses the CCT to assign the color codes with the fewest 1-bits to the most commonly used colors. For example, white and black are assumed to be very common colors. Thus, white is always assigned 0 and black is always assigned 1.
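A minimal sketch of how such a table might be organized (illustrative only; the patent does not spell out the full table layout). Codes are handed out in order of increasing 1-bit count, so the most common colors receive codes with the fewest 1-bits:

```python
def build_cct(colors_by_frequency):
    """Assign each color an 8-bit code, giving codes with the fewest
    1-bits to the most frequently occurring colors (a sketch)."""
    # Order all 256 possible codes by their population count (number of 1-bits).
    codes = sorted(range(256), key=lambda c: bin(c).count("1"))
    return {color: code for color, code in zip(colors_by_frequency, codes)}

# White and black are assumed to be the most common colors,
# so they receive the codes 0 and 1, as in the text above.
cct = build_cct([(255, 255, 255), (0, 0, 0), (128, 128, 128)])
```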
- D is the distance value, computed as D = (R1 - R2)^2 + (G1 - G2)^2 + (B1 - B2)^2, where:
- R1 is the red value of the low frequency pixel
- R2 is the red value of the high frequency pixel
- G1 is the green value of the low frequency pixel
- G2 is the green value of the high frequency pixel
- B1 is the blue value of the low frequency pixel
- B2 is the blue value of the high frequency pixel.
- Caching is a sub-algorithm of the overall compression algorithm executed by video processor 212 of RMU 109 (FIG. 3). Caching requires RMU 109 (FIG. 3) to retain a cache of recently transmitted images. Such a cache can be implemented and stored in RAM 386 (FIG. 3D).
- the caching sub-algorithm compares the most recent block of pixels with the corresponding block of pixels in the video images stored in the cache (step 405). If the most recently transmitted block of pixels is the same as one of the corresponding blocks of pixels stored in the cache, the caching sub-algorithm does not retransmit this portion of the video image. Instead, a "cache hit" message is sent to user workstation 101, which indicates that the most recently transmitted block is already stored in the cache (step 407).
- the "cache hit" message contains information regarding which cache contains the corresponding block of pixels, thereby allowing user workstation 101 to retrieve the block of pixels from its cache and use it to create the video image to be displayed on its attached video display device.
- step 409 determines whether the NRDT determined that the block of pixels has changed relative to the corresponding block of pixels in the compare frame.
- This step can also be implemented before or in parallel with step 405. Also, steps 421, 405, and 407 may be eliminated entirely.
- the main purpose of step 409 is to determine whether the block has changed since the last frame. If the block has not changed, there is no need to send an updated block to user workstation 101. Otherwise, if the block of pixels has changed, it is prepared for compression (step 411).
- step 409 uses a different technique than step 405. With two ways of checking for redundancy, higher compression will result.
- steps 409 and 411 are executed by a caching sub-algorithm executed by microprocessor 388 of video processor 212 (FIG. 3D).
- the cache is updated, and the data is compressed before being sent to the server stack.
- the image is compressed using the IBM JBIG compression algorithm. JBIG is designed to compress black and white images.
- the present invention is designed to transmit color video images. Therefore, bit planes of the image are extracted (step 411), and each bit plane is compressed separately (step 413). Finally, the compressed image is transmitted to server stack 417 (step 415), which transmits the data to switch 390 (FIG. 3D).
- FIGS. 5A and 5B provide detailed flowcharts of a preferred embodiment of the compression process.
- the digital representation of the captured video image is transferred and stored in either frame buffer 0 503 or frame buffer 1 505.
- a frame buffer is an area of memory that is capable of storing one frame of video. The use of two frame buffers allows faster capture of image data.
- the captured frames of video are stored in frame buffer 0 503 and frame buffer 1 505 in an alternating manner. This allows the next frame of video to be captured while compression is being performed on the previous frame of video.
- frame buffer 0 503 and frame buffer 1 505 comprise a portion of frame buffers 380 (FIG. 3D).
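The alternating use of the two frame buffers can be sketched as a simple ping-pong buffer (a minimal illustration; the class and its fields are assumptions that mirror frame buffer 0 503 and frame buffer 1 505):

```python
class DoubleBuffer:
    """Two frame buffers used in alternation: the next frame is
    captured into one buffer while the previously captured frame,
    held in the other buffer, is being compressed."""

    def __init__(self):
        self.buffers = [None, None]  # frame buffer 0 and frame buffer 1
        self.capture_index = 0       # which buffer receives the next capture

    def capture(self, frame):
        self.buffers[self.capture_index] = frame
        self.capture_index ^= 1      # alternate buffers for the next frame

    def frame_to_compress(self):
        # The buffer NOT being captured into holds the last complete frame.
        return self.buffers[self.capture_index ^ 1]
```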
- an NRDT test is performed on each block of pixels stored in frame buffer 0 503 and frame buffer 1 505 (step 519), which compares each block of the captured video image to the corresponding block of the previously captured video image.
- Step 519 compares blocks of pixels from the video image stored in the current raw frame buffer (i.e., frame buffer 0 503 or frame buffer 1 505) with the corresponding block of pixels stored in compare frame buffer 521. This step is discussed in greater detail below with respect to FIGS. 6A and 6B.
- if step 519 determines that the current block of pixels has changed, then the nearest color match function processes the video images contained in frame buffer 0 503 and frame buffer 1 505 (step 509) in conjunction with the information contained in the client color code table ("CCT from client") 511, which is stored in flash memory 239 (FIG. 3).
- the nearest color match function can be executed as software by microprocessor 388. A detailed explanation of the nearest color match function is provided below with respect to FIG. 6.
- the CCT obtained from CCT 513 by the nearest color match function is used for color code translation (step 515), which translates the digital RGB representation of each pixel of the changed block of pixels to reduce the amount of digital data required to represent the video data.
- Color code translation receives blocks of pixels that the NRDT sub-algorithm (step 519) has determined have changed relative to the previously captured video image. Color code translation then translates this digital data into a more compact form and stores the result in coded frame buffer 517.
- Coded frame buffer 517 can be implemented as a portion of RAM 386 (FIG. 3D).
- steps 509 and 515 may be performed in parallel with step 519. Performing these steps in parallel reduces the processing time required for each block of pixels that has changed. In this scenario, steps 509 and 515 are performed in anticipation of the block of pixels having changed. If this is the case, the processing for steps 509 and 515 may be completed at the same time as the processing for step 519 is completed.
- caching begins by performing a cyclic redundancy check (CRC) (step 523).
- Cyclic redundancy check is a method known in the art for producing a checksum or hash code of a particular block of data. The CRCs may be computed for two blocks of data and then compared. If the CRCs match, the blocks are the same. Thus, CRCs are commonly used to check for errors.
- the CRC is used to compare a block of pixels with blocks of pixels stored in a cache.
- the CRC is computed for each block of pixels that was determined to have changed by the NRDT sub-algorithm.
- the array of CRCs is stored in CRC array 525.
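A minimal sketch of the CRC-based cache comparison (zlib's CRC-32 is used here for illustration; the patent does not specify a particular CRC polynomial, and the function names are assumptions):

```python
import zlib

def crc_of_block(block_bytes):
    # The CRC acts as a short fingerprint of a pixel block, so whole
    # blocks need not be compared byte for byte.
    return zlib.crc32(block_bytes)

def cache_lookup(block_bytes, cached_crcs):
    """Compare the changed block's CRC against the cached CRCs.
    Returns the cache index on a hit (a "cache hit" message would be
    sent), or None on a miss (the block would be compressed and sent)."""
    crc = crc_of_block(block_bytes)
    for index, cached_crc in enumerate(cached_crcs):
        if cached_crc == crc:
            return index
    return None
```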
- FIG. 5B depicts an overview of the caching and bit splicing/compression sub-algorithms. This portion of the algorithm begins by waiting for information from coded frame buffer 517 and CRC array 525 (step 527).
- a decision is made as to whether a new video mode has been declared (step 529).
- a new video mode can be declared if, for example, user workstation 101 has different bandwidth or color requirements.
- if a new video mode has been declared, all data is invalidated (step 531) and the sub-algorithm returns to step 527 to wait for new information from coded frame buffer 517 and CRC array 525.
- Downscaler circuit 362 and/or upscaler circuit 364, located in LCD controller 215, may be utilized to adjust the outputted digitized video to be compatible with the new video mode.
- Steps 527, 529, and 531 are all steps of the overall compression algorithm that is executed by microprocessor 388 (FIG. 3D). If in step 529 it is deemed that a new video mode has not been declared, then the current pixel block's CRC is compared with the cached CRCs (step 533).
- Block info array 535 stores the cache of pixel blocks and the CRCs of the pixel blocks and can be implemented as a device in RAM 386 (FIG. 3D). Step 533 is also a part of the overall compression algorithm executed by microprocessor 388 (FIG. 3D).
- if the current block of pixels is located within the pixel block cache contained in block info array 535 (step 537), a cache hit message is sent to user workstation 101 and the block of pixels is marked as complete, or processed (step 539). Since user workstation 101 contains the same pixel block cache as RMU 109 (FIG. 3), the cache hit message simply directs user workstation 101 to use a specific block of pixels contained in its cache to create the portion of the video image that corresponds to the processed block of pixels.
- a check is performed for unprocessed blocks of pixels (step 539). All blocks of pixels that need to be processed, or updated, are combined to compute the next update rectangle. If there is nothing to update (i.e., if the video has not changed between frames), then the algorithm returns to step 527 (step 543). Thus, the current frame will not be sent to the remote participation equipment. By eliminating the retransmission of a current frame of video, the sub-algorithm reduces the bandwidth required for transmitting the video.
- the update rectangle is first compressed.
- the update rectangle must first be bit sliced (step 545).
- a bit plane of the update rectangle is constructed by taking the same bit from each pixel of the update rectangle.
- if the update rectangle includes 8-bit pixels, it can be deconstructed into 8 bit planes.
- the resulting bit planes are stored in bit plane buffer 547.
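Bit slicing as described above can be sketched as follows (a simplified model: planes are held as lists of 0/1 values, whereas the hardware packs them into bytes):

```python
def bit_slice(pixels, bits_per_pixel=8):
    """Deconstruct a rectangle of coded pixels into bit planes:
    plane i collects bit i from every pixel of the update rectangle."""
    planes = []
    for bit in range(bits_per_pixel):
        planes.append([(p >> bit) & 1 for p in pixels])
    return planes
```

Each plane is a black-and-white image, which is why a bi-level compressor such as JBIG can then be applied to every plane separately.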
- steps 541, 543, and 545 are all part of the bit splicing/compression sub-algorithm executed by microprocessor 388 of RMU 109 (FIG. 3).
- Each bit plane is compressed separately by the compression sub-algorithm (step 549).
- compression is performed on each bit plane and the resulting data is sent to server stack 417 (step 551).
- compression is performed by video compression device 382 (FIG. 3) (step 549).
- the compressed bit planes are sent to switch 390 (FIG. 3D). Since the preferred embodiment captures frames 20 times per second, it is necessary to wait 50 ms between video frame captures. Thus, the algorithm waits until 50 ms have passed since the previous frame capture before returning the sub-algorithm to step 527 (step 553).
- FIG. 6 illustrates the nearest color match function (step 509 of FIG. 5A) in further detail.
- Nearest color match function 509 processes each block of pixels of the video image stored in frame buffer 0 503 or frame buffer 1 505 successively. As shown in FIG. 6, a block of pixels is extracted from the video image stored in frame buffer 0 503 or frame buffer 1 505 (step 600). In the preferred embodiment, the extracted block has a size of 64 by 32 pixels; however, any block size may be utilized.
- the nearest color match function eliminates noise introduced by the A/D conversion by converting less frequently occurring pixel values to similar, more frequently occurring pixel values. The function utilizes histogram analysis and difference calculations. First, nearest color match function 509 generates a histogram of pixel values (step 601). The histogram measures the frequency of each pixel value in the block of pixels extracted during step 600.
- the histogram is sorted, such that a list of frequently occurring colors (popular color list 603) and a list of least frequently occurring colors (rare color list 605) are generated.
- the threshold for each list is adjustable.
- nearest color match function 509 analyzes each less frequently occurring pixel to determine if the pixel should be mapped to a value that occurs often.
- a pixel value is chosen from rare color list 605 (step 607).
- a pixel value is chosen from popular color list 603 (step 609).
- The distance between these two values is then computed (step 611). In this process, distance is a metric computed by comparing the separate red, green, and blue values of the two pixels: D = (R1 - R2)^2 + (G1 - G2)^2 + (B1 - B2)^2, where:
- R1 is the red value of the low frequency pixel
- R2 is the red value of the high frequency pixel
- G1 is the green value of the low frequency pixel
- G2 is the green value of the high frequency pixel
- B1 is the blue value of the low frequency pixel
- B2 is the blue value of the high frequency pixel.
- D is a distance value, which indicates the magnitude of the similarity or difference of the colors of two pixels, such as a less frequently occurring pixel versus a more frequently occurring pixel.
- the goal of the sub-algorithm is to find a more frequently occurring pixel having a color that yields the lowest distance value when compared to the color of a less frequently occurring pixel.
- a comparison is performed for each computed distance value (step 613). Every time a distance value is computed that is less than all previous distance values, the distance value is written to the closest distance variable (step 615). Once it is determined that all more frequently occurring pixels have been compared to less frequently occurring pixels (step 617), a computation is performed to determine if the lowest occurring D is within a predefined threshold (step 619). If this D is within the predefined threshold, CCT 513 is updated by mapping the less frequently occurring pixel to the color code value of the more frequently occurring pixel that yielded this D value (step 621). This process is repeated for all less frequently occurring pixels and CCT 513 is updated accordingly. Turning to FIG. 7, RGB NRDT step 519 (FIG. 5A) is illustrated in further detail.
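The nearest color match loop (steps 607 through 621) can be sketched as follows (function names and the way the two color lists are passed in are assumptions; the distance metric is the squared-difference sum given above):

```python
def nearest_color_match(rare_colors, popular_colors, threshold):
    """For each rarely occurring color, find the frequently occurring
    color at the smallest distance; if that distance is within the
    threshold, map the rare color to the popular one. Returns the
    mapping that would be used to update the CCT."""
    def distance(c1, c2):
        # D = (R1 - R2)^2 + (G1 - G2)^2 + (B1 - B2)^2
        return sum((a - b) ** 2 for a, b in zip(c1, c2))

    mapping = {}
    for rare in rare_colors:
        closest = min(popular_colors, key=lambda pop: distance(rare, pop))
        if distance(rare, closest) <= threshold:
            mapping[rare] = closest
    return mapping
```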
- Current pixel block 700 represents a block of pixels of the video image contained in the current frame buffer (i.e., frame buffer 0 503 or frame buffer 1 505 (FIG. 5A)).
- Previous pixel block 701 contains the corresponding block of pixels of the video image contained in compare frame buffer 521 (FIG. 5A).
- Step 519 begins by extracting corresponding pixel values for one pixel from the current pixel block 700 and previous pixel block 701 (step 703). Then, the pixel color values are used to calculate a distance value, which indicates the magnitude of the similarity or difference between the colors of the two pixels (step 705).
- R1, G1, and B1 are the red, green, and blue values, respectively, of the frame buffer pixel.
- R2, G2, and B2 are the red, green, and blue values, respectively, of the compare frame buffer pixel.
- the computed distance value D is compared with a pixel threshold (step 707). If D is greater than the pixel threshold, it is added to an accumulating distance sum (step 709). If the value of D is less than the pixel threshold, the difference is considered to be insignificant (i.e., noise) and it is not added to the distance sum.
- This process of computing distance values and summing distance values that are greater than a predefined pixel threshold continues until it is determined that the last pixel of the block of pixels has been processed (step 711).
- the distance sum is compared with a second threshold, the block threshold (step 713). If the distance sum is greater than the block threshold, the current block of pixels is designated as changed as compared to the corresponding block of pixels from the previously captured frame. Otherwise, if the distance sum is less than the block threshold, the block of pixels is designated as unchanged. If the block of pixels is designated as changed, step 715 is executed. Step 715 sets a flag that indicates that the particular block of pixels has changed. Furthermore, the new block of pixels is written to compare frame buffer 521 (FIG. 5A) to replace the corresponding previous block of pixels.
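The two-level NRDT test can be sketched as follows (the pixel threshold of 100 is taken from the worked example in FIG. 8; the block threshold value here is illustrative, as the patent does not give one):

```python
PIXEL_THRESHOLD = 100   # per-pixel distance threshold (from the FIG. 8 example)
BLOCK_THRESHOLD = 200   # block-level threshold (illustrative value)

def block_changed(current_block, previous_block):
    """Sum the per-pixel distances that exceed the pixel threshold,
    then flag the block as changed only if the sum exceeds the block
    threshold. Blocks are sequences of (R, G, B) tuples."""
    distance_sum = 0
    for (r1, g1, b1), (r2, g2, b2) in zip(current_block, previous_block):
        d = (r1 - r2) ** 2 + (g1 - g2) ** 2 + (b1 - b2) ** 2
        if d > PIXEL_THRESHOLD:
            distance_sum += d   # sub-threshold differences are treated as noise
    return distance_sum > BLOCK_THRESHOLD
```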
- FIG. 8 further illustrates the two-level thresholding used by the NRDT sub-algorithm shown in FIG. 7. For illustrative purposes only, 4x4 blocks of pixels are shown.
- Previous pixel block 751 is a block of pixels grabbed from compare frame buffer 521 (FIG. 5A).
- Previous pixel 1 752 is the pixel in the upper, left corner of previous pixel block 751. Since every pixel of previous pixel block 751 has a value of 0, previous pixel block 751 represents a 4x4 pixel area that is completely black.
- Current pixel block 753 represents the same spatial area of the video frame as previous pixel block 751, but it is one frame later.
- current pixel 1 754 is the same pixel 1 as previous pixel 1 752, but is one frame later.
- suppose a small white object, such as a white cursor, is introduced into the video image.
- This change occurs in current pixel 1 754 of current pixel block 753.
- current pixel block 753 the majority of the pixels remained black, but current pixel 1 754 is now white, as represented by the RGB color values of 255, 255, and 255.
- noise has been introduced by the A/D conversion, such that previous pixel 755 has changed from black, as represented by its RGB values of 0, 0, and 0, to gray.
- the new gray color is represented by the RGB values of 2, 2, and 2 assigned to current pixel 756.
- the NRDT sub-algorithm calculates the distance value between each pixel of current pixel block 753 and previous pixel block 751.
- This distance value is added to the distance sum because 195,075 exceeds the pixel threshold of 100.
- the distance value between the black previous pixel 3 755 and the gray current pixel 756 is not added to the distance sum because the distance between the pixels, as calculated using the above distance formula, equals 12, which does not exceed the pixel threshold of 100.
- the distance value is computed for all of the remaining pixels in the two pixel blocks. Each of these distance values equals zero; therefore, since these distance values are less than the pixel threshold, they are not added to the distance sum. Consequently, after the distance values for all pixels have been processed, the distance sum equals 195,075, which exceeds the block threshold, and the block is designated as changed.
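The arithmetic in this example can be checked directly against the distance formula:

```python
def distance(p1, p2):
    # D = (R1 - R2)^2 + (G1 - G2)^2 + (B1 - B2)^2
    return sum((a - b) ** 2 for a, b in zip(p1, p2))

# Previous pixel 1 752 (black) vs. current pixel 1 754 (white):
d_white = distance((0, 0, 0), (255, 255, 255))   # 3 * 255**2 = 195075
# Previous pixel 3 755 (black) vs. noisy current pixel 756 (gray):
d_gray = distance((0, 0, 0), (2, 2, 2))          # 3 * 2**2 = 12
```

Only d_white exceeds the pixel threshold of 100, so the genuine change is kept while the A/D noise is discarded.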
- FIG. 9 shows a flowchart of the decompression algorithm executed by user workstation 101 (FIG. 1).
- the decompression algorithm begins by waiting for a message (step 801). This message is transmitted from server stack 417 of RMU 109 (FIG. 3).
- Client stack 803 may be a register or some other device capable of permanently or temporarily storing digital data. In one embodiment, client stack 803 is the local TCP/IP stack. Other embodiments may use a protocol other than TCP/IP. However, irrespective of the communication protocol, the present invention uses client stack 803 to store received messages for processing. Once a message is received in client stack 803, it is processed to determine whether the message is a new video mode message (step 805). A new video mode message may be sent for a variety of reasons including a bandwidth change, a change in screen resolution or color depth, a new client, etc. This list is not intended to limit the reasons for sending a new video mode message, but instead to give examples of when it may occur. If the message is a new video mode message, application layer 823 is notified of the new video mode (step 807).
- application layer 823 is software executed by user workstation 101 that interfaces with the input and output devices of user workstation 101 (i.e., keyboard 103, video monitor 105, and cursor control device 107). Any video updates must therefore be sent to application layer 823. Also, the old buffers are freed, including all memory devoted to storing previously transmitted frames, and new buffers are allocated (step 809). The decompression algorithm then returns to step 801. If the new message is not a video mode message, the message is further processed to determine if it is a cache hit message (step 811).
- the cache hit message is deciphered to determine which block of pixels, of the blocks of pixels stored in the three cache frame buffers 815, should be used to reconstruct the respective portion of the video image.
- three cache frame buffers 815 are used in the preferred embodiment of the present invention, any quantity of cache frame buffers may be used without departing from the spirit of the invention.
- Cache frame buffers 815 store the same blocks of pixels that are stored in the cache frame buffers located internal to RMU 109 (FIG. 3). Thus, the cache hit message does not include video data, but rather simply directs the remote participation equipment as to which block of pixels contained in cache frame buffer 815 should be sent to merge frame buffer 817.
- the block of pixels contained within the specified cache is then copied from cache frame buffer 815 to merge buffer 817 (step 813).
- application layer 823 is notified that an area of the video image has been updated (step 825).
- Merge buffer 817 contains the current representation of the entire frame of video in color code pixels.
- Application layer 823 copies the pixel data from merge buffer 817 and formats the data to match the pixel format of the connected video monitor 105 (step 819).
- the formatted pixel data is written to update frame buffer 821, which then transmits the data to video monitor 105.
- the formatted pixel data may be written to a video card, memory, and/or any other hardware or software commonly used with video display devices.
- if the new message is not a new video mode or cache hit message, it is tested to determine if it is a message containing compressed video data (step 827). If the message does not contain compressed video data, the decompression algorithm returns to step 801 and waits for a new message to be transmitted from server stack 417. Otherwise, if the message does contain compressed video data, the data is decompressed and transferred to bit plane frame buffer 833 (step 829).
- the preferred embodiment incorporates the JBIG lossless compression technique. Therefore, decompression of the video data must be performed for each individual bit plane. After each bit plane is decompressed, it is merged with previously decompressed bit planes, which are stored in bit plane frame buffer 833 (step 829).
- When a sufficient number of bit planes have been merged, the merged data contained in bit plane frame buffer 833 is transferred to merge frame buffer 817 (step 831). Alternatively, individual bit planes may be decompressed and stored directly in merge frame buffer 817, thereby eliminating step 831.
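Merging decompressed bit planes back into color code pixels is the inverse of the bit slicing performed before compression; a minimal sketch (planes modeled as lists of 0/1 values, as in the slicing sketch):

```python
def merge_bit_planes(planes):
    """Rebuild color code pixels from decompressed bit planes:
    the i-th plane contributes bit i of every pixel."""
    num_pixels = len(planes[0])
    pixels = [0] * num_pixels
    for bit, plane in enumerate(planes):
        for i, value in enumerate(plane):
            pixels[i] |= value << bit
    return pixels
```

Because each plane only adds one bit of precision, a partially merged buffer is already displayable, which is what allows the progressive updates described below.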
- application layer 823 copies the data in merge frame buffer 817 to update frame buffer 821 (step 819). Thereafter, the data is transferred to video monitor 105.
- the video displayed on video monitor 105 can be updated after each bit plane is received. In other words, a user does not have to wait until the whole updated frame of video is received to update portions of the displayed video.
- the decompression algorithm determines whether all of the color code data from one field of the current video frame has been received (step 835). If a full field has not been received, the decompression algorithm returns to step 801 and waits for the remainder of the message, which is transmitted from server stack 417 to client stack 803 in the form of a new message. Otherwise, if a full field has been received, the decompression method notifies application layer 823 (step 837).
- this notification directs application layer 823 to read the data in merge frame buffer 817 and convert it to the current screen pixel format (step 819). Thereafter, the formatted data is written to update frame buffer 821, which transmits the data to video monitor 105. After a full field has been received and application layer 823 has been notified, a second determination is made to determine if the full field is the last field included in the message. If it is, the newly decompressed block of pixels is written to one of the cache frame buffers 815 (step 841). Otherwise, the decompression algorithm returns to step 801 and continues to wait for a new message.
- the new block of pixels written to cache frame buffer 815 overwrites the oldest block of pixels contained therein.
- Step 841 ensures that the cache is up-to-date and synchronized with the cache of RMU 109.
- the decompression algorithm returns to step 801.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP04820003A EP1695230A4 (en) | 2003-11-26 | 2004-10-28 | Remote network management system |
CA002546952A CA2546952A1 (en) | 2003-11-26 | 2004-10-28 | Remote network management system |
JP2006541203A JP2007524284A (en) | 2003-11-26 | 2004-10-28 | Network remote management system |
AU2004295966A AU2004295966A1 (en) | 2003-11-26 | 2004-10-28 | Remote network management system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/723,992 US8176155B2 (en) | 2003-11-26 | 2003-11-26 | Remote network management system |
US10/723,992 | 2003-11-26 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2005054980A2 true WO2005054980A2 (en) | 2005-06-16 |
WO2005054980A3 WO2005054980A3 (en) | 2006-05-11 |
Family
ID=34633280
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2004/035943 WO2005054980A2 (en) | 2003-11-26 | 2004-10-28 | Remote network management system |
Country Status (6)
Country | Link |
---|---|
US (1) | US8176155B2 (en) |
EP (1) | EP1695230A4 (en) |
JP (1) | JP2007524284A (en) |
AU (1) | AU2004295966A1 (en) |
CA (1) | CA2546952A1 (en) |
WO (1) | WO2005054980A2 (en) |
US9842532B2 (en) | 2013-09-09 | 2017-12-12 | Nvidia Corporation | Remote display rendering for electronic devices |
EP3335325A4 (en) | 2015-08-14 | 2019-06-19 | Icron Technologies Corporation | Systems for enhancing boardroom tables to include usb type-c power and connectivity functionality |
US10824501B2 (en) * | 2019-01-07 | 2020-11-03 | Mellanox Technologies, Ltd. | Computer code integrity checking |
CN112099753A (en) * | 2020-09-14 | 2020-12-18 | 巨洋神州(苏州)数字技术有限公司 | KVM seat management monitoring system based on optical fiber communication |
CN112702391B (en) * | 2020-12-09 | 2022-12-30 | 湖南新九方科技有限公司 | Remote networking method and networking system for industrial control equipment |
US11741232B2 (en) | 2021-02-01 | 2023-08-29 | Mellanox Technologies, Ltd. | Secure in-service firmware update |
Family Cites Families (49)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4463342A (en) | 1979-06-14 | 1984-07-31 | International Business Machines Corporation | Method and means for carry-over control in the high order to low order pairwise combining of digits of a decodable set of relatively shifted finite number strings |
US4286256A (en) | 1979-11-28 | 1981-08-25 | International Business Machines Corporation | Method and means for arithmetic coding utilizing a reduced number of operations |
US4295125A (en) | 1980-04-28 | 1981-10-13 | International Business Machines Corporation | Method and means for pipeline decoding of the high to low order pairwise combined digits of a decodable set of relatively shifted finite number of strings |
US4467317A (en) | 1981-03-30 | 1984-08-21 | International Business Machines Corporation | High-speed arithmetic compression coding using concurrent value updating |
US4633490A (en) | 1984-03-15 | 1986-12-30 | International Business Machines Corporation | Symmetrical optimized adaptive data compression/transfer/decompression system |
JPS6229372A (en) | 1985-07-31 | 1987-02-07 | インタ−ナショナル ビジネス マシ−ンズ コ−ポレ−ション | Compression of binary data |
US5099440A (en) | 1985-12-04 | 1992-03-24 | International Business Machines Corporation | Probability adaptation for arithmetic coders |
US4652856A (en) | 1986-02-04 | 1987-03-24 | International Business Machines Corporation | Multiplication-free multi-alphabet arithmetic code |
US4935882A (en) | 1986-09-15 | 1990-06-19 | International Business Machines Corporation | Probability adaptation for arithmetic coders |
US4891643A (en) | 1986-09-15 | 1990-01-02 | International Business Machines Corporation | Arithmetic coding data compression/de-compression by selectively employed, diverse arithmetic coding encoders and decoders |
US4905297A (en) | 1986-09-15 | 1990-02-27 | International Business Machines Corporation | Arithmetic coding encoder and decoder system |
JP2534276B2 (en) | 1987-10-09 | 1996-09-11 | International Business Machines Corporation | Original image pel signal processing method |
US4873577A (en) | 1988-01-22 | 1989-10-10 | American Telephone And Telegraph Company | Edge decomposition for the transmission of high resolution facsimile images |
US4870497A (en) | 1988-01-22 | 1989-09-26 | American Telephone And Telegraph Company | Progressive transmission of high resolution two-tone facsimile images |
US5025258A (en) | 1989-06-01 | 1991-06-18 | At&T Bell Laboratories | Adaptive probability estimator for entropy encoding/decoding |
US5031053A (en) | 1989-06-01 | 1991-07-09 | At&T Bell Laboratories | Efficient encoding/decoding in the decomposition and recomposition of a high resolution image utilizing pixel clusters |
US4979049A (en) | 1989-06-01 | 1990-12-18 | At&T Bell Laboratories | Efficient encoding/decoding in the decomposition and recomposition of a high resolution image utilizing its low resolution replica |
US5023611A (en) | 1989-07-28 | 1991-06-11 | At&T Bell Laboratories | Entropy encoder/decoder including a context extractor |
US4973961A (en) | 1990-02-12 | 1990-11-27 | At&T Bell Laboratories | Method and apparatus for carry-over control in arithmetic entropy coding |
US5323420A (en) | 1991-07-26 | 1994-06-21 | Cybex Corporation | Circuitry for regenerating digital signals in extended distance communications systems |
US5732212A (en) | 1992-10-23 | 1998-03-24 | Fox Network Systems, Inc. | System and method for remote monitoring and operation of personal computers |
US5546502A (en) | 1993-03-19 | 1996-08-13 | Ricoh Company, Ltd. | Automatic invocation of computational resources without user intervention |
US5740246A (en) | 1994-12-13 | 1998-04-14 | Mitsubishi Corporation | Crypt key system |
TW292365B (en) | 1995-05-31 | 1996-12-01 | Hitachi Ltd | Computer management system |
US5721842A (en) | 1995-08-25 | 1998-02-24 | Apex Pc Solutions, Inc. | Interconnection system for viewing and controlling remotely connected computers with on-screen video overlay for controlling of the interconnection switch |
US5917552A (en) | 1996-03-29 | 1999-06-29 | Pixelvision Technology, Inc. | Video signal interface system utilizing deductive control |
US6070253A (en) | 1996-12-31 | 2000-05-30 | Compaq Computer Corporation | Computer diagnostic board that provides system monitoring and permits remote terminal access |
US6104414A (en) | 1997-03-12 | 2000-08-15 | Cybex Computer Products Corporation | Video distribution hub |
US6333750B1 (en) | 1997-03-12 | 2001-12-25 | Cybex Computer Products Corporation | Multi-sourced video distribution hub |
US6557170B1 (en) * | 1997-05-05 | 2003-04-29 | Cybex Computer Products Corp. | Keyboard, mouse, video and power switching apparatus and method |
US6389464B1 (en) | 1997-06-27 | 2002-05-14 | Cornet Technology, Inc. | Device management system for managing standards-compliant and non-compliant network elements using standard management protocols and a universal site server which is configurable from remote locations via internet browser technology |
US6304895B1 (en) | 1997-08-22 | 2001-10-16 | Apex Inc. | Method and system for intelligently controlling a remotely located computer |
WO1999010801A1 (en) * | 1997-08-22 | 1999-03-04 | Apex Inc. | Remote computer control system |
US6038616A (en) | 1997-12-15 | 2000-03-14 | Int Labs, Inc. | Computer system with remotely located interface where signals are encoded at the computer system, transferred through a 4-wire cable, and decoded at the interface |
US6947943B2 (en) * | 2001-10-26 | 2005-09-20 | Zeosoft Technology Group, Inc. | System for development, management and operation of distributed clients and servers |
US6378009B1 (en) | 1998-08-25 | 2002-04-23 | Avocent Corporation | KVM (keyboard, video, and mouse) switch having a network interface circuit coupled to an external network and communicating in accordance with a standard network protocol |
EP1116086B1 (en) | 1998-09-22 | 2007-02-21 | Avocent Huntsville Corporation | System for accessing personal computers remotely |
IES990431A2 (en) | 1999-05-26 | 2000-11-26 | Cybex Comp Products Internat L | High end KVM switching system |
US6172640B1 (en) | 1999-06-18 | 2001-01-09 | Jennifer Durst | Pet locator |
US6378014B1 (en) | 1999-08-25 | 2002-04-23 | Apex Inc. | Terminal emulator for interfacing between a communications port and a KVM switch |
US6704769B1 (en) * | 2000-04-24 | 2004-03-09 | Polycom, Inc. | Media role management in a video conferencing network |
US6681250B1 (en) | 2000-05-03 | 2004-01-20 | Avocent Corporation | Network based KVM switching system |
US6959380B2 (en) | 2001-03-13 | 2005-10-25 | International Business Machines Corporation | Seamless computer system remote control |
US7424551B2 (en) * | 2001-03-29 | 2008-09-09 | Avocent Corporation | Passive video multiplexing method and apparatus priority to prior provisional application |
US20020198978A1 (en) * | 2001-06-22 | 2002-12-26 | Watkins Gregg S. | System to remotely control and monitor devices and data |
KR20030024260A (en) * | 2001-09-17 | 2003-03-26 | 주식회사 플레넷 | Subnet of power line communication network, method for setting up the same, electronic appliance connected to the same and, communication module used in the same |
US7003563B2 (en) * | 2001-11-02 | 2006-02-21 | Hewlett-Packard Development Company, L.P. | Remote management system for multiple servers |
US7684483B2 (en) * | 2002-08-29 | 2010-03-23 | Raritan Americas, Inc. | Method and apparatus for digitizing and compressing remote video signals |
US7260624B2 (en) * | 2002-09-20 | 2007-08-21 | American Megatrends, Inc. | Systems and methods for establishing interaction between a local computer and a remote computer |
- 2003
  - 2003-11-26 US US10/723,992 patent/US8176155B2/en active Active
- 2004
  - 2004-10-28 WO PCT/US2004/035943 patent/WO2005054980A2/en active Application Filing
  - 2004-10-28 EP EP04820003A patent/EP1695230A4/en not_active Withdrawn
  - 2004-10-28 CA CA002546952A patent/CA2546952A1/en not_active Abandoned
  - 2004-10-28 AU AU2004295966A patent/AU2004295966A1/en not_active Abandoned
  - 2004-10-28 JP JP2006541203A patent/JP2007524284A/en active Pending
Non-Patent Citations (1)
Title |
---|
See references of EP1695230A4 * |
Also Published As
Publication number | Publication date |
---|---|
EP1695230A2 (en) | 2006-08-30 |
AU2004295966A1 (en) | 2005-06-16 |
JP2007524284A (en) | 2007-08-23 |
CA2546952A1 (en) | 2005-06-16 |
WO2005054980A3 (en) | 2006-05-11 |
US20050125519A1 (en) | 2005-06-09 |
US8176155B2 (en) | 2012-05-08 |
EP1695230A4 (en) | 2012-08-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8176155B2 (en) | Remote network management system | |
US8683024B2 (en) | System for video digitization and image correction for use with a computer management system | |
US7684483B2 (en) | Method and apparatus for digitizing and compressing remote video signals | |
US7606314B2 (en) | Method and apparatus for caching, compressing and transmitting video signals | |
US20120079522A1 (en) | Method And Apparatus For Transmitting Video Signals | |
US7818480B2 (en) | Wireless management of remote devices | |
US20050198245A1 (en) | Intelligent modular remote server management system | |
US7113978B2 (en) | Computer interconnection system | |
JP2006229952A (en) | Video compression system | |
US20080079757A1 (en) | Display resolution matching or scaling for remotely coupled systems | |
US20050044236A1 (en) | Method and apparatus for transmitting keyboard/video/mouse data to and from digital video appliances | |
US20040215742A1 (en) | Image perfection for virtual presence architecture (VPA) | |
CN101309259A (en) | Distributed image display method | |
CN108650216B (en) | Substation monitoring background information checking method based on wireless transmission | |
CN101098474A (en) | Wireless remote LED display screen control transmission system | |
EP1695749A1 (en) | Device and method for the provisioning of personal service hosting |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A2 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A2 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2546952 Country of ref document: CA |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2006541203 Country of ref document: JP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2004820003 Country of ref document: EP Ref document number: 2004295966 Country of ref document: AU |
|
ENP | Entry into the national phase |
Ref document number: 2004295966 Country of ref document: AU Date of ref document: 20041028 Kind code of ref document: A |
|
WWP | Wipo information: published in national office |
Ref document number: 2004295966 Country of ref document: AU |
|
WWP | Wipo information: published in national office |
Ref document number: 2004820003 Country of ref document: EP |