CA2483017A1 - Flexible streaming hardware - Google Patents
Flexible streaming hardware
- Publication number
- CA2483017A1
- Authority
- CA
- Canada
- Prior art keywords
- data
- packet
- digital media
- engine
- media asset
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L1/00—Arrangements for detecting or preventing errors in the information received
- H04L1/0078—Avoidance of errors by organising the transmitted data in a format specifically designed to deal with errors, e.g. location
- H04L1/0083—Formatting with frames or packets; Protocol or part of protocol for error control
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/1066—Session management
- H04L65/1101—Session protocols
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/70—Media network packetisation
Abstract
A hardware engine (400) that streams media asset data from a media buffer (330) to a network (340) under instructions provided by a host PC is disclosed. The PC preferably stores control blocks that provide packet header formatting instructions in a media buffer (330) along with the media asset data to be streamed. In a preferred embodiment, the hardware engine (400) comprises programmable logic devices so that the engine can be upgraded. The present invention further comprises methods for designing the hardware engine, methods for upgrading the hardware engine, and methods for streaming digital media asset data.
Description
FLEXIBLE STREAMING HARDWARE
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims benefit of U.S. provisional patent application serial No.
60/374,086, filed April 19, 2002, entitled "Flexible Streaming Hardware," U.S.
provisional patent application serial No. 60/374,090, filed April 19, 2002, entitled "Hybrid Streaming Platform," U.S. provisional patent application serial No. 60/374,037, filed April 19, 2002, entitled "Optimized Digital Media Delivery Engine," and U.S. patent application serial No.
60/373,991, filed April 19, 2002, entitled "Optimized Digital Media Delivery Engine," each of which is hereby incorporated by reference for each of its teachings and embodiments.
FIELD OF THE INVENTION
[0002] This invention relates to the field of digital media servers.
BACKGROUND OF THE INVENTION
[0003] A digital media server is a computing device that streams digital media content onto a digital data transmission network. In the past, digital media servers have been designed using a general-purpose personal computer (PC) based architecture in which PCs provide all significant processing relating to wire packet generation. But digital media are, by their very nature, bandwidth intensive and time sensitive, a particularly difficult combination for PC-based architectures whose stored-computing techniques require repeated data copying. This repeated data copying creates bottlenecks that diminish overall system performance, especially in high-bandwidth applications. And because digital media are time sensitive, any such compromise of server performance typically impacts directly on the end-user's experience when viewing the media.
[0004] Fig. 1 demonstrates the required steps for generating a single wire packet in a traditional media server comprising a general-purpose PC architecture. The figure makes no assumptions regarding hardware acceleration of any aspect of the PC architecture using add-on cards. Therefore, the flow and number of memory copies are representative of the prior art whether data blocks read from the storage device are reassembled in hardware or software.
[0005] Referring now to Fig. 1, in step 101, an application program running on a general-purpose PC requests data from a storage device. Using direct memory access (DMA), a storage controller transfers blocks of data to operating system (OS) random access memory (RAM). In step 102, the OS reassembles the data from the blocks in RAM. In step 103, the data is copied from the OS RAM to a memory location set aside by the OS for the user application (application RAM). These first three steps are performed in response to a user application's request for data from the memory storage device.
[0006] In step 104, the application copies the data from RAM into central processing unit (CPU) registers. In step 105, the CPU performs the necessary data manipulations to convert the data from file format to wire format. In step 106, the wire-format data is copied back into application RAM from the CPU registers.
[0007] In step 107, the application submits the wire-format data to the OS for transmission on the network and the OS allocates a new memory location for storing the packet format data. In step 108, the OS writes packet-header information to the allocated packet memory from the CPU registers. In step 109, the OS copies the media data from the application RAM to the allocated packet RAM, thus completing the process of generating a wire packet. In step 110, the completed packet is transferred .from the allocated packet RAM to OS RAM.
[0008] Finally, the OS sends the wire packet out to the network. In particular, in step 111, the OS reads the packet data from the OS RAM into CPU registers and, in step 112, computes a checksum for the packet. In step 113, the OS writes the checksum .to OS RAM.
In step 114, the OS writes network headers to the OS RAM. In step 115, the OS
copies the wire packet from OS RAM to the network interface device over the shared I/O
bus, using a DMA transfer. In step 116, the network interface sends the packet to the network.
[0009] As will be recognized, a general-purpose PC architecture accomplishes the packet-generation flow illustrated in Fig. 1 using a number of memory transfers. These memory transfers are described in more detail in connection with Fig. 2.
[0010] As shown in Fig. 2, the transfer from storage device 201 to file system cache 202 uses a fast Direct Memory Access (DMA) transfer. The transfer from file system cache 202 to file format data 203 requires each 32-bit word to be copied into a CPU register and back out into random access memory (RAM). This kind of copy is often referred to as a mem copy (or memcpy, from the C language procedure), and is a relatively slow process when compared to the wire speed at which hardware algorithms execute. The copy from file format data 203 to wire format data 204 and from wire format data 204 to OS Kernel RAM 205 are also mem copies. Network headers are added to the data while in the OS Kernel RAM 205, which requires a write of header information from the CPU to OS Kernel RAM. Determining the checksum requires a complete read of the entire data packet, and exhibits performance similar to a mem copy. The copy from the OS Kernel RAM 205 to Network Interface Card 206 is a DMA transfer across a shared peripheral component interconnect (PCI) bus. Thus, multiple copies and one complete iterative read into the CPU of the payload data are required to generate a single network wire packet.
SUMMARY OF THE INVENTION
[0011] In a preferred embodiment, the present system and method comprise a hardware engine adapted to transfer media asset data from a media buffer to a network.
The hardware engine receives media asset streaming instructions from a general-purpose PC
via control blocks stored in the buffer along with the media asset data. The hardware engine eliminates the redundant copying of data and the shared I/O bus, bottlenecks typically found in a general-purpose PC that delivers digital media. By eliminating these bottlenecks, the hardware engine improves overall delivery performance and significantly reduces the cost and size associated with delivering digital media to a large number of end users.
[0012] In a preferred embodiment, the hardware engine comprises a programmable logic device (PLD) to provide significantly higher data processing speeds than a general-purpose CPU. Advantageously, such PLDs can be reprogrammed without replacing hardware components such as read-only memories. Consequently, the present system provides flexibility and future-proofing not usually found in a dedicated hardware device, while maintaining hardware-level wire-speed performance.
[0013] In addition to extending the life cycle of the hardware solution by providing the ability to incorporate additional functional components in the future, the hardware engine's wire-speed performance increases the number of unique streams that can be processed and delivered by the digital media server. This increase in stream density in a smaller physical package (compared to servers that use a general-purpose PC architecture) leads to improved scalability which can be measured by reduced space requirements and lower environmental costs, such as air conditioning and electricity. Because each server unit has a higher stream density than previous media server units, fewer servers are required, which directly relates to a smaller capital investment for deployment of streaming video services.
Fewer servers also result in lower operating costs such as reducing the need for operations personnel to maintain and upgrade the servers.
[0014] In one aspect, the present invention is directed to a system under the control of a general-purpose computer for converting digital media assets into wire data packets for transmission to a client, the assets being stored on a digital media storage device, comprising an input interface for retrieving digital media asset data from the storage device, a media buffer for receiving the digital media asset data from the storage interface, a programmable logic device adapted to transfer the digital media asset data from the input interface to the media buffer, process the digital media asset data from the media buffer, and generate wire data packets, a network interface coupled to the device and adapted to transmit the wire data packets to the client, and a general-purpose interface coupled to the device and adapted to receive control information from the general-purpose computer for storage in the media buffer and to enable the device to communicate with the general-purpose computer.
[0015] In another aspect of the present invention, the media buffer is further adapted to store control blocks comprising packet header formatting instructions and digital media asset payload information, and the programmable logic device is further adapted to generate packet headers from the instructions.
[0016] In another aspect of the present invention, the digital media asset payload information comprises a pointer to the digital media asset data.
[0017] In another aspect of the present invention, the digital media asset payload information comprises the digital media asset data.
[0018] In another aspect of the present invention, the programmable logic device is a field programmable gate array.
[0019] In another aspect of the present invention, the network interface comprises a Gigabit Ethernet interface.
[0020] In another aspect of the present invention, the data generation rate is greater than or equal to the data transmission rate, the programmable logic device data reception rate is greater than or equal to the data generation rate, and the media buffer data reception rate is greater than or equal to the programmable logic device data reception rate.
[0021] In another aspect of the present invention, two or more programmable logic devices cooperatively increase the data transmission rate of the system.
[0022] In another aspect of the present invention, the programmable logic device comprises an MPEG-2 stitching engine for targeted ad insertion.
[0023] In another aspect of the present invention, the programmable logic device is further adapted to encrypt the data stream thereby increasing the quality of content security.
[0024] In another aspect, the present invention is directed to a secure method of providing an upgrade package for changing the logic in a field programmable gate array used as an engine for streaming digital media, comprising encrypting the upgrade package, compressing the upgrade package, distributing the upgrade package, decompressing the upgrade package, loading the package into the field programmable gate array, supplying a key to the field programmable gate array for decrypting the upgrade package, and rebooting the field programmable gate array, thereby installing the upgrade package.
[0025] In another aspect, the present invention is directed to a method of streaming a block of a digital media asset across a digital network using a hardware engine, comprising transferring the block of the asset into a media buffer, writing wire packet generation control instructions into the media buffer, fragmenting the block into one or more data packets, generating packet headers for a packet in accordance with the instructions, calculating a checksum for the packet, transmitting the packet onto the network, and repeating the generating, calculating, and transmitting steps until all the data packets have been transmitted.
[0026] In another aspect of the present invention, the method further comprises the steps of receiving a message to process the instructions and sending a message that the block has been sent.
[0027] In another aspect, the present invention is directed to a method for designing a streaming media hardware engine, comprising: (a) identifying one or more components that comprise the hardware engine, (b) designing a last component having a fully saturated output bandwidth greater than or equal to the required bandwidth of the hardware engine, (c) calculating the input bandwidth required to fully saturate the designed component, (d) designing an adjacent preceding component having a fully saturated output bandwidth greater than or equal to the input bandwidth calculated in step (c), and recursively repeating steps (c) and (d) for remaining components identified in step (a).
BRIEF DESCRIPTION OF THE DRAWINGS
[0028] Fig. 1 is a flow chart illustrating a process for generating wire data packets in a general-purpose personal computer;
[0029] Fig. 2 is a block diagram that illustrates hardware and software components in a general-purpose personal computer used to generate a wire packet;
[0030] Fig. 3 is a block diagram that illustrates components of a hardware engine in one embodiment;
[0031] Fig. 4 is a block diagram that illustrates an embodiment of the hardware engine that uses a field programmable gate array, and depicts the internal architecture of same;
[0032] Fig. 5 is a block diagram that illustrates an embodiment of the internal architecture of a format conversion and packet generation engine found in the field programmable gate array;
[0033] Fig. 6 is a flow chart that illustrates an embodiment of the design methodology for a media asset streaming hardware engine;
[0034] Fig. 7 is a flow chart that illustrates an embodiment of the installation of an upgrade package in an FPGA;
[0035] Fig. 8 is an example control block for a Quick Time media file streamed over RTP/UDP/IP;
[0036] Fig. 9 is an example control block for an MPEG-2 file streamed over UDP/IP;
[0037] Fig. 10 is a flow diagram illustrating the process of generating wire packets in a preferred embodiment;
[0038] Fig. 11 is a diagram of the Ethernet header or media access control layer (MAC) control block entry structure;
[0039] Fig. 12 is a diagram of the Internet protocol (IP) header control block entry structure;
[0040] Fig. 13 is a diagram of the user datagram protocol (UDP) header control block entry structure;
[0041] Fig. 14 is a diagram of the transport control protocol (TCP) header control block entry structure;
[0042] Fig. 15 is a diagram of the hypertext transport protocol (HTTP) header control block entry structure;
[0043] Fig. 16 is a diagram of the realtime transport protocol (RTP) header control block entry structure; and
[0044] Fig. 17 is a diagram of the payload data control block entry structure.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
Hardware Engine Components
[0045] One preferred embodiment of a hardware engine for streaming digital media assets is shown in Fig. 3. As shown in Fig. 3, hardware engine 300 preferably comprises several components including dedicated buses 310, an input interface 320, a media buffer 330, a network interface 340, a general-purpose interface 350, and one or more programmable logic devices (PLDs) 360. Dedicated buses 310 provide an exclusive data pathway between PLD 360 and other hardware engine components. Input interface 320 is preferably adapted to control data storage devices containing media assets to be streamed and transmits asset data through PLD 360 to media buffer 330, as described below.
Network interface 340 provides a controller for communicating with other devices across a data network. General-purpose interface 350 provides a controller for communicating with a general-purpose computing device. PLD 360 translates asset data that is held in media buffer 330 into wire data packets and sends the packets out to the network through network interface 340.
[0046] Fig. 4 is a block diagram depicting a preferred embodiment of PLD 360.
In the preferred embodiment of Fig. 4, PLD 360 comprises a Field Programmable Gate Array (FPGA) device. Those skilled in the art will recognize that other PLDs may alternatively be used.
[0047] As shown in Fig. 4, FPGA device 400 preferably comprises a plurality of objects created using Hardware Description Language (HDL). These HDL objects preferably comprise interface objects 420-460, a series of first-in, first-out (FIFO) queues 471-475, and a packet engine 480. Interface objects 420-460 provide the necessary control and addressing signals through dedicated buses 310 to communicate with interface devices 320-350. FIFO queues 471-475 provide internal data communication paths between interface objects and packet engine 480. Packet engine 480 converts asset data held in media buffer 330 into wire data packets that are sent out to the network.
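The FIFO-buffered path between the interface objects and the packet engine can be modeled in software as a bounded producer/consumer queue. The sketch below is purely behavioral; the queue depth, word size, and drain policy are illustrative assumptions and are not taken from the HDL design:

```python
from queue import Queue

def run_pipeline(asset_words, fifo_depth=16):
    """Behavioral model of one FIFO path in FPGA device 400: an interface
    object produces data words into a bounded FIFO, and the packet engine
    consumes them.  Depth and word granularity are hypothetical."""
    fifo = Queue(maxsize=fifo_depth)   # stands in for one of FIFO queues 471-475
    consumed = []
    for word in asset_words:
        if fifo.full():                # back-pressure: consumer drains the FIFO
            while not fifo.empty():
                consumed.append(fifo.get())
        fifo.put(word)                 # producer side: interface object
    while not fifo.empty():            # final drain by the packet engine
        consumed.append(fifo.get())
    return consumed

assert run_pipeline(list(range(100))) == list(range(100))
```

The bounded queue captures the essential property of the HDL FIFOs: the producer and consumer run at different rates, and ordering is preserved end to end.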
[0048] In more detail, interface objects 420-460 preferably comprise a storage peripheral component interface (PCI) interface 420, a media buffer interface 430, a gigabit Ethernet controller interface 440, a general-purpose PCI interface 450, and a security interface 460. Interface HDL objects 420-460 provide the signals required to send or receive data from the FPGA to components 320-360, respectively.
[0049] The series of FIFO queues preferably comprises five sets of FIFO queues 471-475. FIFO queue HDL objects 471-475 buffer the flow of data between the interface HDL objects and packet engine 480 in FPGA device 400.
[0050] Fig. 5 is a block diagram of a preferred embodiment of packet engine 480. As shown in Fig. 5, engine 480 preferably comprises a collection of state machines. There are three main groups of state machines: parser state machines 510, header formatter state machines 520, and packet assembler state machines 530. Parser state machines 510 read control blocks stored in media buffer 330 and retrieve the associated media asset data for processing. Header formatter state machines 520 generate the protocol headers for the communications protocols used in each data packet. Packet assembler state machines 530 create wire data packets by connecting the generated packet headers with the asset data and generating checksums for the data. Packet engine 480 further comprises memory writing state machine 540 for sending information back to media buffer 330. Memory writing state machine 540 updates control block entries for TCP and RTP packets, as described below in the Streaming Media Operation section.
[0051] Parser state machines 510 preferably comprise three components, a control block parser 519, a payload builder 517, and a facilitator 515. Control block parser 519 is adapted to read a control block stored in media buffer 330 and pass appropriate data from the control block to header formatting state machines 520. Under control of control block parser 519, payload builder 517 reads asset data from media buffer 330.
Facilitator 515 is adapted to schedule the output from packet header formatters 520.
[0052] Packet header formatter state machines 520 preferably comprise state machines that produce packet headers which adhere to the communication protocols necessary for streaming video across an Internet Protocol data network, including IP 521, UDP 522, TCP 523, RTP 524, and HTTP 525. Each packet header formatter is responsible for generating a packet header in the appropriate format for inclusion in the wire packet. The packet headers are preferably generated from control block data determined by control block parser 519.
[0053] Packet assembly state machines preferably comprise a multiplexer 531, a payload packer engine 532, a header packer 533, a checksum generator 534, and a packet writer 535. Multiplexer 531 multiplexes the output of the various header format state machines and the payload builder into packets. Payload packer engine 532 shifts and concatenates the data to eliminate empty bytes in the packet data stream.
Header packer 533 shifts and concatenates the packet headers to eliminate empty bytes in the packet data stream. Checksum generator 534 generates the checksum of the wire data packet. Packet writer 535 sends the wire data packet out to the gigabit Ethernet controller. It manages payload buffers included in gigabit Ethernet controller 440, inserts checksums into the packet data stream, and creates a data entry indicating that the asset has been sent.
[0054] In an alternative preferred embodiment, packet engine 480 may include additional packet generation and protocol engines that replace many of the algorithms traditionally executed on a general-purpose CPU. For example, packet engine 480 may comprise an MPEG-2 stitching engine for targeted ad insertion, or a unique stream-encryption engine for increasing the quality of content security.
Design Methodology for Hardware Engine
[0055] Each component in hardware engine 300 is designed specifically for the sustained delivery of digital media so that any given component will not restrict the flow of data and form a bottleneck in the device. Preferably, the criterion used to calculate how much input bandwidth is required for a component is determined from the full bandwidth saturation of the output interface of the component. By determining the amount of input bandwidth that will achieve a desired output bandwidth for a particular component, the output bandwidth of its upstream component can be selected so that the upstream component will supply at least the bandwidth required at the component's input to saturate its output.
[0056] This design principle is preferably applied to all components in hardware engine 300, including those that may have a higher input bandwidth than output bandwidth at full saturation. This situation may occur where some of the data supplied to a component is not transmitted by the component. Illustratively, a component that reads data storage blocks from a hard drive and processes the blocks into data packets may not use the entire contents of the block. The packet data required may be slightly larger than one block, requiring that two blocks be read into media buffer 330. Although two full blocks are read, only a small percentage of the second block is required for generating the packet. Thus, the output bandwidth for the component may be less than its input bandwidth.
[0057] This design process is illustrated in more detail in Fig. 6. In step 610, the components of the hardware engine are identified. Then, the components in the data stream generating chain are evaluated in reverse order. In step 620, the last component in the data stream generation chain is designed so that it has an output bandwidth greater than or equal to the required bandwidth that the hardware engine must supply. Next, the input necessary to saturate this output is calculated based on the selected component's functions and data it processes (step 630). If the selected component is not the first component in the data stream generation chain (step 640), the next upstream component is designed to have an output bandwidth greater than or equal to the calculated input bandwidth of the previously selected component (step 650). Once the first component has been evaluated (step 640), the design process is complete.
[0058] Because the throughput of each component and bus are selected or designed to fully saturate the next component, bottlenecks within the device are eliminated and the device operates with fully saturated output connections.
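The reverse-order sizing procedure of Fig. 6 can be sketched as a short calculation. The component names, chain, and efficiency figure below are illustrative assumptions (the patent does not give numeric ratios); efficiency here means output bandwidth divided by input bandwidth at full saturation, which is below 1.0 when some input data, such as a partially used storage block, is discarded:

```python
def size_chain(components, required_output):
    """Size each component last-to-first per the Fig. 6 methodology.

    components: ordered (name, efficiency) pairs from first to last in the
    data-stream generation chain.  Returns {name: (input_bw, output_bw)}.
    """
    sizes = {}
    needed_output = required_output
    for name, efficiency in reversed(components):
        output_bw = needed_output            # steps 620/650: output >= downstream need
        input_bw = output_bw / efficiency    # step 630: input that saturates the output
        sizes[name] = (input_bw, output_bw)
        needed_output = input_bw             # the upstream stage must supply this much
    return sizes

# Hypothetical three-stage chain; the packet engine discards ~5% of its input.
chain = [("storage_interface", 1.0),
         ("packet_engine", 0.95),
         ("network_interface", 1.0)]
sizes = size_chain(chain, 1000.0)  # required engine output: 1000 Mb/s
```

Because each stage's output is sized to at least the input its downstream neighbor needs, no stage can become the bottleneck, which is exactly the property paragraph [0058] claims for the finished device.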
Reprogramming the FPGA
[0059] In a preferred embodiment, upgrade packages may be used to reprogram the FPGA using the hardware description language (HDL). By replacing the FPGA's configuration, the HDL components included in the FPGA are changed. The process for installing an upgrade package is illustrated in Fig. 7.
[0060] As shown in Fig. 7, in step 710 upgrade packages are created to replace the configuration in the FPGA. In step 720, these packages are preferably encrypted to protect their contents from scrutiny, and in step 730, compressed for distribution.
The upgrade package may then be downloaded (step 740), decompressed (step 750), and decrypted (step 755) before it is copied into the FPGA (step 760). In step 770, after the upgrade package is loaded into the FPGA, the FPGA is stopped and rebooted. When the system restarts, the FPGA is reloaded with the upgraded logic.
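The package-and-install round trip of Fig. 7 (encrypt, compress, distribute, decompress, decrypt, load) can be sketched as follows. The XOR cipher is a deliberate placeholder so the sketch stays self-contained; a real upgrade package would use a proper cipher keyed inside the device, and the key-handling and reboot steps are only represented by comments:

```python
import zlib

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Placeholder cipher for illustration only; not a real security scheme.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def package_upgrade(bitstream: bytes, key: bytes) -> bytes:
    encrypted = xor_cipher(bitstream, key)   # step 720: encrypt
    return zlib.compress(encrypted)          # step 730: compress for distribution

def install_upgrade(package: bytes, key: bytes) -> bytes:
    encrypted = zlib.decompress(package)     # step 750: decompress
    bitstream = xor_cipher(encrypted, key)   # step 755: decrypt with supplied key
    return bitstream                         # step 760: copy into the FPGA,
                                             # then stop and reboot (step 770)

key = b"device-secret"                       # hypothetical device key
original = b"FPGA configuration bitstream (MPEG-4 support)"
assert install_upgrade(package_upgrade(original, key), key) == original
```

The round-trip assertion is the essential invariant: whatever transformations protect the package in transit must reproduce the original bitstream bit-for-bit before the FPGA is reloaded.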
[0061] In a preferred embodiment, security interface 460 protects the logic programmed into the FPGA from being copied. As known in the art, different security interfaces may be designed or purchased that provide varying degrees of security and implementation overhead. Those skilled in the art may balance competing desires to maximize security while minimizing implementation time and cost in selecting an appropriate security interface for the FPGA.
[0062] The flexibility achieved by reprogramming the hardware device is illustrated by the following example. Suppose that the initial hardware description language implemented in the FPGA includes packetization algorithms and protocols specific to MPEG-2 transport streams. In the future, users may require delivery of media content in other formats such as MPEG-4. Because hardware engine 300 comprises an FPGA, new algorithms for manipulating MPEG-4 formats can be added to the layout of the chip using HDL
in the form of an upgrade package.
FSH Streaming Media Operation
[0063] In operation, hardware engine 300 assembles wire packets in accordance with instructions specified in a control block found in media buffer 330. In a preferred embodiment the control block is a 128-byte data structure comprising a series of control block entries (CBE) of at least eight bytes in length. Each CBE either contains data that will be part of a media packet, or a pointer to that data. The media packet can be constructed by traversing the entire control block and concatenating the data contained in each entry or data pointed at by each entry.
[0064] Fig. 8 illustrates an exemplary control block for a Quick Time media file streamed over RTP/UDP/IP. The exemplary control block comprises a cookie control block entry 810 that uniquely identifies a data stream. The exemplary control block further comprises a series of format CBEs 820-850, along with a series of one or more media packet payload CBEs 860-890. Media packet payload CBEs 860-890 identify the address of the associated media packets in media buffer 330. Hardware engine 300 processes control blocks and associated media packet payload data to generate wire data packets, as described below.
[0065] Fig. 9 is an example control block for an MPEG-2 file streamed over UDP/IP.
Analogous control blocks may be created for use with other public domain or proprietary streaming formats. Each such control block also comprises a cookie control block entry, one or more format CBEs, and one or more media packet payload CBEs.
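The traversal described in paragraph [0063], concatenating each entry's literal data or the data it points at, can be sketched as follows. The CBE layout here is a simplified stand-in for the binary structures of Figs. 11-17, and the field names are illustrative:

```python
def assemble_media_packet(control_block, media_buffer):
    """Build a media packet by walking a control block in order.

    Each CBE either carries literal bytes (e.g. a header template) or a
    pointer (offset, length) into the media buffer, per paragraph [0063].
    """
    packet = bytearray()
    for cbe in control_block:
        if cbe["kind"] == "data":                 # entry contains packet data
            packet += cbe["bytes"]
        else:                                     # entry points at payload data
            offset, length = cbe["offset"], cbe["length"]
            packet += media_buffer[offset:offset + length]
    return bytes(packet)

# Toy buffer: 64 bytes of unrelated data, then a 13-byte payload.
buf = b"\x00" * 64 + b"MEDIA-PAYLOAD"
cb = [{"kind": "data", "bytes": b"HDR"},
      {"kind": "pointer", "offset": 64, "length": 13}]
assert assemble_media_packet(cb, buf) == b"HDRMEDIA-PAYLOAD"
```

Keeping the payload as a pointer rather than a copy is what lets the engine build a wire packet without the repeated mem copies of the Fig. 1 flow.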
[0066] Fig. 10 illustrates a preferred embodiment of a process for streaming media. As shown in Fig. 10, in step 1010, a block of media asset data is moved from data storage through the hardware engine's input interface 320 and placed into media buffer 330 under control of a general-purpose PC as described in copending U.S. patent application No. 10/ , , entitled "Hybrid Streaming Platform," filed on even date herewith (and identified by Pennie & Edmonds attorney docket no. 11055-005-999), which is hereby incorporated by reference in its entirety for each of its teachings and embodiments. In step 1020, a control block is written to media buffer 330. The control block preferably identifies the location of the media asset in the media buffer and includes instructions for processing the media asset data. In step 1030, hardware engine 300 receives a data message to commence streaming the media asset data. The message preferably contains a pointer to the control block and a stream identifier corresponding to the control block.
[0067] Engine 300 then converts the media packet payload from file format to wire format. If the media packet is larger than the maximum transmission unit (MTU), this conversion process preferably comprises fragmentation of the media packet into several wire format data packets (step 1040). In step 1050, engine 300 generates protocol-format headers specified in the CBEs for insertion into the wire packet. Next, in step 1060, engine 300 assembles the packet and calculates a checksum for the wire packet. In step 1070, engine 300 sends a wire packet out through gigabit Ethernet interface 340. If the last wire packet has not been sent (step 1080), engine 300 updates packet headers and checksum and sends the next wire packet. After the last packet has been transmitted, engine 300 generates a message that indicates the control block has been processed.
[0068] A preferred header-formatting process is now described in more detail.
In a preferred embodiment, engine 300 adds an Ethernet header to every packet unless the control block has a "pass thru" identifier. The Ethernet header control block contains a source address, destination address, and a packet type field. In a preferred embodiment, header information for the Ethernet header is included in a CBE, as shown, for example, in Fig. 11. When transmitting packets, engine 300 preferably uses the same Ethernet header information from the control block for every packet in a particular stream. If necessary, the destination address can be changed as directed by a separate CBE. Each packet is also preferably provided with any additional headers required by its associated CBE.
[0069] In a preferred embodiment, when the packet includes an IP header, the CBE preferably includes the following fields, illustrated in Fig. 12: a version, a header length, a type-of-service field, a total length, an identification field, flags, a fragment offset, a time-to-live field, a protocol byte field, a header checksum, a source IP address, and a destination IP address. Before sending the wire packet, engine 300 preferably performs the following functions. First, engine 300 computes the total length in bytes by adding up the length fields from all CBEs. Next, engine 300 computes the header checksum by setting the field to zero, then computing the 16-bit sum over the IP header only. Finally, engine 300 stores the 16-bit ones-complement of the sum in the header checksum field, and copies the other fields to generate the IP packet header.
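The header checksum procedure of paragraph [0069] (zero the field, take the 16-bit sum, store the ones-complement) can be sketched as follows. The sketch assumes a standard 20-byte IPv4 header with the checksum at byte offsets 10-11, and uses a widely published example header to check the result:

```python
import struct

def ones_complement_sum16(data: bytes) -> int:
    """16-bit ones-complement sum with end-around carry."""
    if len(data) % 2:
        data += b"\x00"                       # pad odd-length input
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                        # fold carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return total

def ip_header_checksum(header: bytes) -> int:
    """Set the checksum field to zero, sum over the IP header only, and
    return the ones-complement of the result."""
    zeroed = header[:10] + b"\x00\x00" + header[12:]   # bytes 10-11 hold the checksum
    return (~ones_complement_sum16(zeroed)) & 0xFFFF

# Well-known example header; its correct checksum is 0xB1E6, and a valid
# header sums to 0xFFFF when the stored checksum is included.
hdr = bytes.fromhex("4500003c1c4640004006b1e6ac100a63ac100a0c")
```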
[0070] In a preferred embodiment, when the packet includes a UDP header, the CBE preferably includes fields for a source port number, destination port number, UDP length, and UDP checksum, as shown in Fig. 13. Before sending the wire packet, engine 300 preferably performs the following functions. First, engine 300 computes the UDP length by adding up the length fields from all CBEs including and after the one pointing to the UDP header. Then, engine 300 computes the UDP checksum by performing a 16-bit add of the source IP address field from the IP packet header, the destination IP address field from the IP header, the protocol field (as the lower 8 bits) from the IP header, the UDP length as calculated above, and the entire UDP header, plus the remaining wire packet headers and media packet payload. Then the ones-complement of the sum is stored in the UDP checksum field, and the remaining fields are copied into the UDP header from the UDP control block. For more details of the generation of IP/UDP packets by hardware engine 300, see copending U.S. patent application No. 10/ ,-, entitled "Optimized Digital Media Delivery Engine," filed on even date herewith (and identified by Pennie & Edmonds attorney docket no. 11055-011-999), which is hereby incorporated by reference in its entirety for each of its teachings and embodiments.
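The UDP checksum computation described above is the standard pseudo-header sum of RFC 768. The following Python sketch (with hypothetical addresses and ports) performs the same additions: source IP, destination IP, the protocol value 17 as the lower 8 bits of a 16-bit word, the UDP length, the UDP header, and the payload:

```python
import struct

def ones_sum16(data: bytes) -> int:
    """16-bit ones-complement sum with end-around carry."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return total

def udp_checksum(src_ip: bytes, dst_ip: bytes, udp_header: bytes, payload: bytes) -> int:
    """Checksum over the IPv4 pseudo-header plus the UDP header and payload.
    The checksum field inside udp_header must already be zero."""
    udp_len = len(udp_header) + len(payload)
    pseudo = src_ip + dst_ip + struct.pack("!HH", 17, udp_len)  # zero byte, protocol 17, length
    csum = (~ones_sum16(pseudo + udp_header + payload)) & 0xFFFF
    return csum or 0xFFFF        # a computed zero is transmitted as 0xFFFF

src = bytes([192, 168, 0, 1])                          # hypothetical addresses
dst = bytes([192, 168, 0, 2])
udp_hdr = struct.pack("!HHHH", 5004, 5004, 12, 0)      # ports, length, zeroed checksum
csum = udp_checksum(src, dst, udp_hdr, b"\x01\x02\x03\x04")
```

Inserting the result into the header makes the full pseudo-header sum come out to 0xFFFF, which is how a receiver verifies the datagram.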
[0071] In a preferred embodiment, when the packet includes a TCP header, the CBE preferably includes fields for a source port number, destination port number, a sequence number, an acknowledgment number, a header length, a reserved field, flags, a window size, a TCP checksum, and an urgent pointer, as shown in Fig. 14. Before sending the wire packet, engine 300 preferably performs the following functions. First, the TCP checksum is calculated by performing a 16-bit add of the source IP address from the IP header, the destination IP address from the IP header, the protocol field (as the lower 8 bits) from the IP header, the total-length field from the IP header, the entire TCP header, plus the remaining wire packet headers and media packet payload. Then the ones-complement of the sum is stored in the TCP checksum field, and the remaining fields are copied from the TCP CBE to generate the TCP packet header.
[0072] After sending the wire packet, engine 300 preferably increments the sequence number in the TCP control block entry. If the TCP packet is segmented, the sequence number is preferably updated in every wire data packet sent, but the sequence number in the control block is incremented after the entire media packet has been processed.
[0073] In a preferred embodiment, when the packet includes an HTTP header, the CBE preferably contains a "$" character, an HDCE byte field, and a total length field, as shown in Fig. 15. Before sending the wire packet, engine 300 preferably fills in the total length field based on the payload length field from the IP CBE and any headers that follow the HTTP header, such as RTP, to generate the HTTP packet header.
[0074] In a preferred embodiment, when the packet includes an RTP header, the CBE preferably includes flags, a CSRC count field, a payload type field, a sequence number, a timestamp, and an SSRC identifier, as shown in Fig. 16. Engine 300 copies the RTP CBE in order to generate the RTP packet header before sending out the wire packet.
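The fixed 12-byte RTP header that results from copying these CBE fields follows the standard RFC 3550 layout. This Python sketch (with hypothetical field values) shows the packing, assuming version 2 and no padding, extension, or CSRC entries:

```python
import struct

def build_rtp_header(payload_type: int, seq: int, timestamp: int, ssrc: int) -> bytes:
    """Pack the fixed RTP header: flags byte (version 2, CSRC count 0),
    payload type, 16-bit sequence number, 32-bit timestamp, 32-bit SSRC."""
    flags = 2 << 6                            # version 2; P, X, CC all zero
    return struct.pack("!BBHII", flags, payload_type & 0x7F,
                       seq & 0xFFFF, timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)

hdr = build_rtp_header(payload_type=96, seq=1000, timestamp=90000, ssrc=0x12345678)
```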
[0075] After sending the wire packet, engine 300 preferably increments the sequence number field in the RTP CBE by 1.

[0076] In a preferred embodiment, the control block contains a payload data CBE, as shown in Fig. 17. The payload CBE contains a flag field, ID field, payload length field, and either an address to the payload data or a null value if the ID field indicates that the payload data is appended to the end of the CBE. The length field is used by engine 300 to determine whether to fragment the payload and for inclusion in the packet header fields. The address field is used by engine 300 to locate the payload data in media buffer 330.
[0077] In an alternative preferred embodiment, multiple PLDs may be pipelined together to execute additional algorithms, or more complex algorithms, in tandem.
Embodiments comprising multiple PLDs preferably comprise additional communications structures in the PLD for inter-process communications between the PLDs in order to execute parallel algorithms.
[0078] While the invention has been described in conjunction with specific embodiments, it is evident that numerous alternatives, modifications, and variations will be apparent to those skilled in the art in light of the foregoing description.
architecture using add-on cards. Therefore, the flow and number of memory copies are representative of the prior art whether data blocks read from the storage device are reassembled in hardware or software.

[0005] Referring now to Fig. 1, in step 101, an application program running on a general-purpose PC requests data from a storage device. Using direct memory access (DMA), a storage controller transfers blocks of data to operating system (OS) random access memory (RAM). In step 102, the OS reassembles the data from the blocks in RAM.
In step 103, the data is copied from the OS RAM to a memory location set aside by the OS for the user application (application RAM). These first three steps are performed in response to a user application's request for data from the memory storage device.
[0006] In step 104, the application copies the data from RAM into central processing unit (CPU) registers. In step 105, the CPU performs the necessary data manipulations to convert the data from file format to wire format. In step 106, the wire-format data is copied back into application RAM from the CPU registers.
[0007] In step 107, the application submits the wire-format data to the OS for transmission on the network and the OS allocates a new memory location for storing the packet format data. In step 108, the OS writes packet-header information to the allocated packet memory from the CPU registers. In step 109, the OS copies the media data from the application RAM to the allocated packet RAM, thus completing the process of generating a wire packet. In step 110, the completed packet is transferred from the allocated packet RAM to OS RAM.
[0008] Finally, the OS sends the wire packet out to the network. In particular, in step 111, the OS reads the packet data from the OS RAM into CPU registers and, in step 112, computes a checksum for the packet. In step 113, the OS writes the checksum to OS RAM.
In step 114, the OS writes network headers to the OS RAM. In step 115, the OS
copies the wire packet from OS RAM to the network interface device over the shared I/O
bus, using a DMA transfer. In step 116, the network interface sends the packet to the network.
[0009] As will be recognized, a general-purpose PC architecture accomplishes the packet-generation flow illustrated in Fig. 1 using a number of memory transfers. These memory transfers are described in more detail in connection with Fig. 2.
[0010] As shown in Fig. 2, the transfer from storage device 201 to file system cache 202 uses a fast Direct Memory Access (DMA) transfer. The transfer from file system cache 202 to file format data 203 requires each 32-bit word to be copied into a CPU register and back out into random access memory (RAM). This kind of copy is often referred to as a mem copy (or memcpy, from the C language procedure), and is a relatively slow process when compared to the wire speed at which hardware algorithms execute. The copy from file format data 203 to wire format data 204 and from wire format data 204 to OS Kernel RAM 205 are also mem copies. Network headers are added to the data while in the OS Kernel RAM 205, which requires a write of header information from the CPU to OS Kernel RAM. Determining the checksum requires a complete read of the entire data packet, and exhibits performance similar to a mem copy. The copy from the OS Kernel RAM 205 to Network Interface Card 206 is a DMA transfer across a shared peripheral component interconnect (PCI) bus. Thus, a total of several copies and one complete iterative read into the CPU of the payload data are required to generate a single network wire packet.
SUMMARY OF THE INVENTION
[0011] In a preferred embodiment, the present system and method comprise a hardware engine adapted to transfer media asset data from a media buffer to a network.
The hardware engine receives media asset streaming instructions from a general-purpose PC
via control blocks stored in the buffer along with the media asset data. The hardware engine eliminates the redundant copying of data and the shared I/O bus, bottlenecks typically found in a general-purpose PC that delivers digital media. By eliminating these bottlenecks, the hardware engine improves overall delivery performance and significantly reduces the cost and size associated with delivering digital media to a large number of end users.
[0012] In a preferred embodiment, the hardware engine comprises a programmable logic device (PLD) to provide significantly higher data processing speeds than a general-purpose CPU. Advantageously, such PLDs can be reprogrammed without replacing hardware components such as read-only memories. Consequently, the present system provides flexibility and future-proofing not usually found in a dedicated hardware device, while maintaining hardware-level wire-speed performance.
[0013] In addition to extending the life cycle of the hardware solution by providing the ability to incorporate additional functional components in the future, the hardware engine's wire-speed performance increases the number of unique streams that can be processed and delivered by the digital media server. This increase in stream density in a smaller physical package (compared to servers that use a general-purpose PC architecture) leads to improved scalability, which can be measured by reduced space requirements and lower environmental costs, such as air conditioning and electricity. Because each server unit has a higher stream density than previous media server units, fewer servers are required, which directly relates to a smaller capital investment for deployment of streaming video services.
Fewer servers also result in lower operating costs such as reducing the need for operations personnel to maintain and upgrade the servers.
[0014] In one aspect, the present invention is directed to a system under the control of a general-purpose computer for converting digital media assets into wire data packets for transmission to a client, the assets being stored on a digital media storage device, comprising an input interface for retrieving digital media asset data from the storage device, a media buffer for receiving the digital media asset data from the storage interface, a programmable logic device adapted to transfer the digital media asset data from the input interface to the media buffer, process the digital media asset data from the media buffer, and generate wire data packets, a network interface coupled to the device and adapted to transmit the wire data packets to the client, and a general-purpose interface coupled to the device and adapted to receive control information from the general-purpose computer for storage in the media buffer and to enable the device to communicate with the general-purpose computer.
[0015] In another aspect of the present invention, the media buffer is further adapted to store control blocks comprising packet header formatting instructions and digital media asset payload information, and the programmable logic device is further adapted to generate packet headers from the instructions.
[0016] In another aspect of the present invention, the digital media asset payload information comprises a pointer to the digital media asset data.
[0017] In another aspect of the present invention, the digital media asset payload information comprises the digital media asset data.
[0018] In another aspect of the present invention, the programmable logic device is a field programmable gate array.
[0019] In another aspect of the present invention, the network interface comprises a Gigabit Ethernet interface.
[0020] In another aspect of the present invention, the data generation rate is greater than or equal to the data transmission rate, the programmable logic device data reception rate is greater than or equal to the data generation rate, and the media buffer data reception rate is greater than or equal to the programmable logic device data reception rate.
[0021] In another aspect of the present invention, two or more programmable logic devices cooperatively increase the data transmission rate of the system.
[0022] In another aspect of the present invention, the programmable logic device comprises an MPEG-2 stitching engine for targeted ad insertion.
[0023] In another aspect of the present invention, the programmable logic device is further adapted to encrypt the data stream thereby increasing the quality of content security.
[0024] In another aspect, the present invention is directed to a secure method of providing an upgrade package for changing the logic in a field programmable gate array used as an engine for streaming digital media, comprising encrypting the upgrade package, compressing the upgrade package, distributing the upgrade package, decompressing the upgrade package, loading the package into the field programmable gate array, supplying a key to the field programmable gate array for decrypting the upgrade package, and rebooting the field programmable gate array, thereby installing the upgrade package.
[0025] In another aspect, the present invention is directed to a method of streaming a block of a digital media asset across a digital network using a hardware engine, comprising transferring the block of the asset into a media buffer, writing wire packet generation control instructions into the media buffer, fragmenting the block into one or more data packets, generating packet headers for a packet in accordance with the instructions, calculating a checksum for the packet, transmitting the packet onto the network, and repeating the generating, calculating, and transmitting steps until all the data packets have been transmitted.
[0026] In another aspect of the present invention, the method further comprises the steps of receiving a message to process the instructions and sending a message that the block has been sent.
[0027] In another aspect, the present invention is directed to a method for designing a streaming media hardware engine, comprising: (a) identifying one or more components that comprise the hardware engine, (b) designing a last component having a fully saturated output bandwidth greater than or equal to the required bandwidth of the hardware engine, (c) calculating the input bandwidth required to fully saturate the designed component, (d) designing an adjacent preceding component having a fully saturated output bandwidth greater than or equal to the input bandwidth calculated in step (c), and recursively repeating steps (c) and (d) for remaining components identified in step (a).
BRIEF DESCRIPTION OF THE DRAWINGS
[0028] Fig. 1 is a flow chart illustrating a process for generating wire data packets in a general-purpose personal computer;
[0029] Fig. 2 is a block diagram that illustrates hardware and software components in a general-purpose personal computer used to generate a wire packet;
[0030] Fig. 3 is a block diagram that illustrates components of a hardware engine in one embodiment;
[0031] Fig. 4 is a block diagram that illustrates an embodiment of the hardware engine that uses a field programmable gate array, and depicts the internal architecture of same;
[0032] Fig. 5 is a block diagram that illustrates an embodiment of the internal architecture of a format conversion and packet generation engine found in the field programmable gate array;
[0033] Fig. 6 is a flow chart that illustrates an embodiment of the design methodology for a media asset streaming hardware engine;
[0034] Fig. 7 is a flow chart that illustrates an embodiment of the installation of an upgrade package in an FPGA;
[0035] Fig. 8 is an example control block for a Quick Time media file streamed over RTP/UDP/IP;
[0036] Fig. 9 is an example control block for an MPEG-2 file streamed over UDP/IP;
[0037] Fig. 10 is a flow diagram illustrating the process of generating wire packets in a preferred embodiment;
[0038] Fig. 11 is a diagram of the Ethernet header or media access control layer (MAC) control block entry structure;
[0039] Fig. 12 is a diagram of the Internet protocol (IP) header control block entry structure;
[0040] Fig. 13 is a diagram of the user datagram protocol (UDP) header control block entry structure;
[0041] Fig. 14 is a diagram of the transport control protocol (TCP) header control block entry structure;
[0042] Fig. 15 is a diagram of the hypertext transport protocol (HTTP) header control block entry structure;
[0043] Fig. 16 is a diagram of the realtime transport protocol (RTP) header control block entry structure; and

[0044] Fig. 17 is a diagram of the payload data control block entry structure.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
Hardware Engine Components

[0045] One preferred embodiment of a hardware engine for streaming digital media assets is shown in Fig. 3. As shown in Fig. 3, hardware engine 300 preferably comprises several components including dedicated buses 310, an input interface 320, a media buffer 330, a network interface 340, a general-purpose interface 350, and one or more programmable logic devices (PLDs) 360. Dedicated buses 310 provide an exclusive data pathway between PLD 360 and other hardware engine components. Input interface 320 is preferably adapted to control data storage devices containing media assets to be streamed and transmits asset data through PLD 360 to media buffer 330, as described below.
Network interface 340 provides a controller for communicating with other devices across a data network. General-purpose interface 350 provides a controller for communicating with a general-purpose computing device. PLD 360 translates asset data that is held in media buffer 330 into wire data packets and sends the packets out to the network through network interface 340.
[0046] Fig. 4 is a block diagram depicting a preferred embodiment of PLD 360.
In the preferred embodiment of Fig. 4, PLD 360 comprises a Field Programmable Gate Array (FPGA) device. Those skilled in the art will recognize that other PLDs may alternatively be used.
[0047] As shown in Fig. 4, FPGA device 400 preferably comprises a plurality of objects created using Hardware Description Language (HDL). These HDL objects preferably comprise interface objects 420-460, a series of first-in, first-out (FIFO) queues 471-475, and a packet engine 480. Interface objects 420-460 provide the necessary control and addressing signals through dedicated buses 310 to communicate with interface devices 320-350. FIFO queues 471-475 provide internal data communication paths between interface objects and packet engine 480. Packet engine 480 converts asset data held in media buffer 330 into wire data packets that are sent out to the network.
[0048] In more detail, interface objects 420-460 preferably comprise a storage peripheral component interface (PCI) interface 420, a media buffer interface 430, a gigabit Ethernet controller interface 440, a general-purpose PCI interface 450, and a security interface 460. Interface HDL objects 420-460 provide the signals required to send or receive data from the FPGA to components 320-360, respectively.
[0049] The series of FIFO queues preferably comprises five sets of FIFO queues 471-475. FIFO queue HDL objects 471-475 buffer the flow of data between the interface HDL objects and packet engine 480 in FPGA device 400.
[0050] Fig. 5 is a block diagram of a preferred embodiment of packet engine 480. As shown in Fig. 5, engine 480 preferably comprises a collection of state machines. There are three main groups of state machines: parser state machines 510, header formatter state machines 520, and packet assembler state machines 530. Parser state machines 510 read control blocks stored in media buffer 330 and retrieve the associated media asset data for processing. Header formatter state machines 520 generate the protocol headers for the communications protocols used in each data packet. Packet assembler state machines 530 create wire data packets by connecting the generated packet headers with the asset data and generating checksums for the data. Packet engine 480 further comprises memory writing state machine 540 for sending information back to media buffer 330. Memory writing state machine 540 updates control block entries for TCP and RTP packets, as described below in the Streaming Media Operation section.
[0051] Parser state machines 510 preferably comprise three components, a control block parser 519, a payload builder 517, and a facilitator 515. Control block parser 519 is adapted to read a control block stored in media buffer 330 and pass appropriate data from the control block to header formatting state machines 520. Under control of control block parser 519, payload builder 517 reads asset data from media buffer 330.
Facilitator 515 is adapted to schedule the output from packet header formatters 520.
[0052] Packet header formatter state machines 520 preferably comprise state machines that produce packet headers which adhere to the communication protocols necessary for streaming video across an Internet Protocol data network, including IP 521, UDP 522, TCP 523, RTP 524, and HTTP 525. Each packet header formatter is responsible for generating a packet header in the appropriate format for inclusion in the wire packet. The packet headers are preferably generated from control block data determined by control block parser 519.
[0053] Packet assembly state machines preferably comprise a multiplexer 531, a payload packer engine 532, a header packer 533, a checksum generator 534, and a packet writer 535. Multiplexer 531 multiplexes the output of the various header format state machines and the payload builder into packets. Payload packer engine 532 shifts and concatenates the data to eliminate empty bytes in the packet data stream.
Header packer 533 shifts and concatenates the packet headers to eliminate empty bytes in the packet data stream. Checksum generator 534 generates the checksum of the wire data packet. Packet writer 535 sends the wire data packet out to the gigabit Ethernet controller. It manages payload buffers included in gigabit Ethernet controller 440, inserts checksums into the packet data stream, and creates a data entry indicating that the asset has been sent.
[0054] In an alternative preferred embodiment, packet engine 480 may include additional packet generation and protocol engines that replace many of the algorithms traditionally executed on a general-purpose CPU. For example, packet engine 480 may comprise an MPEG-2 stitching engine for targeted ad insertion, or a unique stream-encryption engine for increasing the quality of content security.
Design Methodology for Hardware Engine

[0055] Each component in hardware engine 300 is designed specifically for the sustained delivery of digital media so that any given component will not restrict the flow of data and form a bottleneck in the device. Preferably, the criterion used to calculate how much input bandwidth is required for a component is determined from the full bandwidth saturation of the output interface of the component. By determining the amount of input bandwidth that will achieve a desired output bandwidth for a particular component, the output bandwidth of its upstream component can be selected so that the upstream component will supply at least the bandwidth required at the component's input to saturate its output.
[0056] This design principle is preferably applied to all components in hardware engine 300, including those that may have a higher input bandwidth than output bandwidth at full saturation. This situation may occur where some of the data supplied to a component is not transmitted by the component. Illustratively, a component that reads data storage blocks from a hard drive and processes the blocks into data packets may not use the entire contents of the block. The packet data required may be slightly larger than one block, requiring that two blocks be read into media buffer 330. Although two full blocks are read, only a small percentage of the second block is required for generating the packet. Thus, the output bandwidth for the component may be less than its input bandwidth.
[0057] This design process is illustrated in more detail in Fig. 6. In step 610, the components of the hardware engine are identified. Then, the components in the data stream generating chain are evaluated in reverse order. In step 620, the last component in the data stream generation chain is designed so that it has an output bandwidth greater than or equal to the required bandwidth that the hardware engine must supply. Next, the input necessary to saturate this output is calculated based on the selected component's functions and the data it processes (step 630). If the selected component is not the first component in the data stream generation chain (step 640), the next upstream component is designed to have an output bandwidth greater than or equal to the calculated input bandwidth of the previously selected component (step 650). Once the first component has been evaluated (step 640), the design process is complete.
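The reverse-order walk of steps 620-650 amounts to a simple recurrence: each component's required output bandwidth equals its downstream neighbor's required input bandwidth. This Python sketch illustrates the calculation with a hypothetical per-component ratio of input to output bandwidth (greater than 1 for components, like the block reader above, that consume more data than they emit):

```python
def design_chain(target_output_bw: float, overhead_ratios: list) -> list:
    """Return the minimum output bandwidth each component must sustain,
    first component to last. overhead_ratios[i] is component i's ratio of
    input bandwidth to output bandwidth at full saturation."""
    required = []
    bw = target_output_bw                 # step 620: last component's output
    for ratio in reversed(overhead_ratios):
        required.append(bw)               # this component's required output
        bw *= ratio                       # step 630: input needed to saturate it,
                                          # which sets the upstream output (step 650)
    return list(reversed(required))

# Hypothetical three-component chain (storage reader, packet engine, NIC)
# where the packet engine reads 25% more data than it forwards:
outputs = design_chain(1000.0, [1.0, 1.25, 1.0])    # Mbit/s
```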
[0058] Because the throughput of each component and bus is selected or designed to fully saturate the next component, bottlenecks within the device are eliminated and the device operates with fully saturated output connections.
Reprogramming the FPGA
[0059] In a preferred embodiment, upgrade packages may be used to reprogram the FPGA using the hardware description language (HDL). By replacing the FPGA's configuration, the HDL components included in the FPGA are changed. The process for installing an upgrade package is illustrated in Fig. 7.
[0060] As shown in Fig. 7, in step 710 upgrade packages are created to replace the configuration in the FPGA. In step 720, these packages are preferably encrypted to protect their contents from scrutiny, and in step 730, compressed for distribution.
The upgrade package may then be downloaded (step 740), decompressed (step 750), and decrypted (step 755) before it is copied into the FPGA (step 760). In step 770, after the upgrade package is loaded into the FPGA, the FPGA is stopped and rebooted. When the system restarts, the FPGA is reloaded with the upgraded logic.
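The package-handling steps (710-760) can be sketched as a round trip. The sketch below follows the order described (encrypt, then compress, with the inverse on the device); the XOR cipher is a deliberately trivial stand-in, since the patent does not specify a cipher, and a real deployment would use an authenticated cipher:

```python
import zlib

def xor_stream(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for a real cipher; NOT secure, illustration only."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def package_upgrade(bitstream: bytes, key: bytes) -> bytes:
    """Steps 710-730: encrypt the new FPGA configuration, then compress it
    for distribution."""
    return zlib.compress(xor_stream(bitstream, key))

def install_upgrade(package: bytes, key: bytes) -> bytes:
    """Steps 740-760: decompress and decrypt, yielding the bitstream that
    is copied into the FPGA before the reboot of step 770."""
    return xor_stream(zlib.decompress(package), key)

bitstream = b"hypothetical FPGA configuration data" * 10
key = b"device-key"
restored = install_upgrade(package_upgrade(bitstream, key), key)
```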
[0061] In a preferred embodiment, security interface 460 protects the logic programmed into the FPGA from being copied. As known in the art, different security interfaces may be designed or purchased that provide varying degrees of security and implementation overhead. Those skilled in the art may balance competing desires to maximize security while minimizing implementation time and cost in selecting an appropriate security interface for the FPGA.
[0062] The flexibility achieved by reprogramming the hardware device is illustrated by the following example. Suppose that the initial hardware description language implemented in the FPGA includes packetization algorithms and protocols specific to MPEG-2 transport streams. In the future, users may require delivery of media content in other formats such as MPEG-4. Because hardware engine 300 comprises an FPGA, new algorithms for manipulating MPEG-4 formats can be added to the layout of the chip using HDL
in the form of an upgrade package.
FSH Streaming Media Operation

[0063] In operation, hardware engine 300 assembles wire packets in accordance with instructions specified in a control block found in media buffer 330. In a preferred embodiment the control block is a 128-byte data structure comprising a series of control block entries (CBE) of at least eight bytes in length. Each CBE either contains data that will be part of a media packet, or a pointer to that data. The media packet can be constructed by traversing the entire control block and concatenating the data contained in each entry or data pointed at by each entry.
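The traversal just described can be sketched in a few lines. The dictionary layout below is an illustrative stand-in for the binary 8-byte-aligned CBE format (the real entries are fixed-width fields, not Python dicts); each entry supplies either inline bytes or an (address, length) reference into the media buffer:

```python
def assemble_media_packet(control_block, media_buffer: bytes) -> bytes:
    """Walk the control block entries in order, concatenating the data each
    CBE carries inline or the bytes it points at in the media buffer."""
    parts = []
    for cbe in control_block:
        if cbe.get("data") is not None:                 # inline bytes
            parts.append(cbe["data"])
        else:                                           # pointer into the buffer
            parts.append(media_buffer[cbe["addr"]:cbe["addr"] + cbe["len"]])
    return b"".join(parts)

# Hypothetical buffer: payload stored at offset 64.
buffer = b"\x00" * 64 + b"MEDIA PAYLOAD BYTES"
packet = assemble_media_packet(
    [{"data": b"\x45\x00"},                             # inline header bytes
     {"data": None, "addr": 64, "len": 19}],            # pointer CBE
    buffer)
```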
[0064] Fig. 8 illustrates an exemplary control block for a Quick Time media file streamed over RTP/UDP/IP. The exemplary control block comprises a cookie control block entry 810 that uniquely identifies a data stream. The exemplary control block further comprises a series of format CBEs 820-850, along with a series of one or more media packet payload CBEs 860-890. Media packet payload CBEs 860-890 identify the address of the associated media packets in media buffer 330. Hardware engine 300 processes control blocks and associated media packet payload data to generate wire data packets, as described below.
[0065] Fig. 9 is an example control block for an MPEG-2 file streamed over UDP/IP.
Analogous control blocks may be created for use with other public domain or proprietary streaming formats. Each such control block also comprises a cookie control block entry, one or more format CBEs, and one or more media packet payload CBEs.
[0066] Fig. 10 illustrates a preferred embodiment of a process for streaming media. As shown in Fig. 10, in step 1010, a block of media asset data is moved from data storage through the hardware engine's input interface 320 and placed into media buffer 330 under control of a general-purpose PC as described in copending U.S. patent application No.
header. ' Theri, engine 300 computes the UDP checksum by performing a 16-bit add of the source IP address field from the IP packet header, the destination IP address field from the IP header, the protocol field (as the lower 8 bits) from the IP header, the UDP length as calculated above, and the entire UDP header, plus the remaining wire packet headers and media packet payload. Then the~anes-corriplerrierit of the sum is stored-in the UDP
checksum field, and the remaining fields are copied into the UDP header from the UDP
control block. For more details of the generation of IP/LTDP packets by hardware engine 300, see copending U.S. patent application No. 10/ ,-, entitled "Optimized Digital Media Delivery Engine," filed.on even date herewith (and identified by Pennie & Edmonds attorney docket no. 11055-011-999), which is hereby incorporated by reference in its entirety for each of its teachings and embodiments.
[0071) In a preferred embodiment, when the packet includes an TCP header, the CBE
preferably includes fields for a source port number, destination port number, a sequence number, an acknowledgment number, a header length, a reserved field, flags, a window size, a TCP checksum, and an urgent pointer, as shown in Fig. 14. Before sending the wire packet, engine 300 preferably performs the following functions. First, the TCP
checksum is calculated by performing a 16-bit add of the source IP address from the IP
header, the destination IP address from the IP header, the protocol field (as the lower 8 bits) from the IP
header, the total-length field from the IP header, the entire TCP header, plus the remaining wire packet headers and media packet payload. Then the ones-complement of the sum is stored in the.TCP checksum field, and the remaining fields are copied from the TCP CBE to generate the TCP packet header.
[0072] After sending the wire packet, engine 300 preferably increments the sequence number in the TCP control block entry. If the TCP packet is segmented, the sequence number is preferably updated in every wire data packet sent, but the sequence number in the control block is incremented after the entire media packet has been processed.
(0073] In a preferred embodiment, when the packet includes an HTTP header, the CBE
preferably contains a "$" character, an HDCE byte field, and a total length field, as shown in Fig. 15. Before sending the wire packet, engine 300 preferably fills in the total length field based on the payload length field from the IP CBE and any headers that follow the HTTP header, such as RTP, to generate the HTTP packet header.
[0074] In a preferred embodiment, when the packet includes an RTP header, the CBE
preferably includes flags, a CSRC count field, a payload type field, a sequence number, a timestamp, and a SSRC identifier, as shown in Fig. 16. . Engine 300 copies the RTP CBE in order to generate the RTP packet header before sending out the wire packet.
[0075] After sending the wire packet, engine 300 preferably increments the sequence number field in the RTP CBE by 1., [0076] In a preferred embodiment, the control block contains a payload data CBE, as shown in Fig. '17. The payload CBE contains a'flag field, ID field, payload length field, and either an address to the payload data or a null value if'the ID field indicates that the payload data is appended to the end of the CBE. The length field is used by engine 300 to determine whether to fragment the payload and for inclusion in the packet header fields.
The address field is used by engine 300 to locate the payload data in media buffer 330.
[0077] In an alternative preferred embodiment, multiple PLDs may be pipelined together to execute additional algorithms, or more complex algorithms, in tandem.
Embodiments comprising multiple PLDs preferably comprise additional communications structures in the PLD for inter-process communications between the PLDs in order to execute parallel algorithms.
[0078] While the invention has been described in conjunction with specific embodiments, it is evident that numerous alternatives, modifications, and variations will be apparent to those skilled .in the art in light of the foregoing description.
Claims (14)
1. A system under the control of a general-purpose computer for converting digital media assets into wire data packets for transmission to a client, the assets being stored on a digital media storage device comprising:
an input interface for retrieving digital media asset data from the storage device;
a media buffer for receiving the digital media asset data from the storage interface, a programmable logic device adapted to transfer the digital media asset data from the input interface to the media buffer, to process the digital media asset data from the media buffer, and to generate wire data packets, a network interface coupled to the device and adapted to transmit the wire data packets to the client, and a general-purpose interface coupled to the device and adapted to receive control information from the general-purpose computer for storage in the media buffer and to enable the device to communicate with the general-purpose computer.
an input interface for retrieving digital media asset data from the storage device;
a media buffer for receiving the digital media asset data from the storage interface, a programmable logic device adapted to transfer the digital media asset data from the input interface to the media buffer, to process the digital media asset data from the media buffer, and to generate wire data packets, a network interface coupled to the device and adapted to transmit the wire data packets to the client, and a general-purpose interface coupled to the device and adapted to receive control information from the general-purpose computer for storage in the media buffer and to enable the device to communicate with the general-purpose computer.
2. The system of claim 1, wherein the media buffer is further adapted to store control blocks comprising packet header formatting instructions and digital media asset payload information, and the programmable logic device is further adapted to generate packet headers from the instructions.
3. The system of claim 2, wherein the digital media asset payload information comprises a pointer to the digital media asset data.
4. The system of claim 2, wherein the digital media asset payload information comprises the digital media asset data.
5. The system of claim 1, wherein the programmable logic device is a field programmable gate array.
6. The system of claim 1, wherein the network interface comprises a Gigabit Ethernet interface.
7. The system of claim 1, wherein the data generation rate is greater than or equal to the data transmission rate, the programmable logic device data reception rate is greater than or equal to the programmable logic device data reception rate.
8. The system of claim 1, wherein two or more programmable logic devices cooperatively increase the data transmission rate of the system.
9. The system of claim 1, wherein the programmable logic device comprises an stitching engine for targeted ad insertion.
10. The system of claim 1, wherein the programmable logic device is further adapted to encrypt the data stream thereby increasing the quality of content security.
11. A secure method of providing an upgrade package for changing the logic in a field programmable gate array used as an engine for streaming digital media, comprising:
encrypting the upgrade package, compressing the upgrade package, distributing the upgrade package, decompressing the upgrade package, decrypting the upgrade package, loading the package into the field programmable gate array, supplying a key to the field programmable gate array for decrypting the upgrade package, and rebooting the field programmable gate array, thereby installing the upgrade package.
encrypting the upgrade package, compressing the upgrade package, distributing the upgrade package, decompressing the upgrade package, decrypting the upgrade package, loading the package into the field programmable gate array, supplying a key to the field programmable gate array for decrypting the upgrade package, and rebooting the field programmable gate array, thereby installing the upgrade package.
12. A method of streaming a block of a digital media asset across a digital network using a hardware engine, comprising:
transferring the block of the asset into a media buffer;
writing wire packet generation control instructions into the media buffer;
fragmenting the block into one or more data packets;
generating packet headers for a packet in accordance with the instructions;
transmitting the packet onto the network; and repeating the generating, calculating, and transmitting steps until all the data packets have been transmitted.
transferring the block of the asset into a media buffer;
writing wire packet generation control instructions into the media buffer;
fragmenting the block into one or more data packets;
generating packet headers for a packet in accordance with the instructions;
transmitting the packet onto the network; and repeating the generating, calculating, and transmitting steps until all the data packets have been transmitted.
13. The method of claim 12, further comprising the steps of:
receiving a message to process the instructions; and sending a message that the block has been sent.
receiving a message to process the instructions; and sending a message that the block has been sent.
14. A method for designing a streaming media hardware engine, comprising:
(a) identifying one or more components that comprise the hardware engine;
(b) designing a last component having a fully saturated output bandwidth greater than or equal to the required bandwidth of the hardware engine;
(c) calculating the input bandwidth required to fully saturate the designed component;
(d) designing an adjacent preceding component having a fully saturated output bandwidth greater than or equal to the input bandwidth calculated in step (c);
and recursively repeating steps (c) and (d) for remaining components identified in step (a).
(a) identifying one or more components that comprise the hardware engine;
(b) designing a last component having a fully saturated output bandwidth greater than or equal to the required bandwidth of the hardware engine;
(c) calculating the input bandwidth required to fully saturate the designed component;
(d) designing an adjacent preceding component having a fully saturated output bandwidth greater than or equal to the input bandwidth calculated in step (c);
and recursively repeating steps (c) and (d) for remaining components identified in step (a).
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US37408602P | 2002-04-19 | 2002-04-19 | |
US60/374,086 | 2002-04-19 | ||
US10/369,306 US7899924B2 (en) | 2002-04-19 | 2003-02-19 | Flexible streaming hardware |
US10/369,306 | 2003-02-19 | ||
PCT/US2003/011575 WO2003089944A1 (en) | 2002-04-19 | 2003-04-14 | Flexible streaming hardware |
Publications (1)
Publication Number | Publication Date |
---|---|
CA2483017A1 true CA2483017A1 (en) | 2003-10-30 |
Family
ID=29254388
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA002483017A Abandoned CA2483017A1 (en) | 2002-04-19 | 2003-04-14 | Flexible streaming hardware |
Country Status (6)
Country | Link |
---|---|
US (1) | US7899924B2 (en) |
EP (1) | EP1497666A4 (en) |
AU (1) | AU2003221941A1 (en) |
CA (1) | CA2483017A1 (en) |
TW (1) | TWI316341B (en) |
WO (1) | WO2003089944A1 (en) |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040006636A1 (en) * | 2002-04-19 | 2004-01-08 | Oesterreicher Richard T. | Optimized digital media delivery engine |
US20040006635A1 (en) * | 2002-04-19 | 2004-01-08 | Oesterreicher Richard T. | Hybrid streaming platform |
US7937595B1 (en) * | 2003-06-27 | 2011-05-03 | Zoran Corporation | Integrated encryption/decryption functionality in a digital TV/PVR system-on-chip |
JP2005204001A (en) * | 2004-01-15 | 2005-07-28 | Hitachi Ltd | Data distribution server, software, and system |
US7773630B2 (en) * | 2005-11-12 | 2010-08-10 | Liquid Computing Corportation | High performance memory based communications interface |
US20070162972A1 (en) * | 2006-01-11 | 2007-07-12 | Sensory Networks, Inc. | Apparatus and method for processing of security capabilities through in-field upgrades |
US20080212942A1 (en) * | 2007-01-12 | 2008-09-04 | Ictv, Inc. | Automatic video program recording in an interactive television environment |
US9826197B2 (en) | 2007-01-12 | 2017-11-21 | Activevideo Networks, Inc. | Providing television broadcasts over a managed network and interactive content over an unmanaged network to a client device |
CN102301652A (en) | 2009-04-27 | 2011-12-28 | 国际商业机器公司 | Message switching |
US8527677B1 (en) * | 2010-06-25 | 2013-09-03 | Altera Corporation | Serial communications links with bonded first-in-first-out buffer circuitry |
EP2798843A4 (en) * | 2011-12-28 | 2015-07-29 | Intel Corp | Systems and methods for integrated metadata insertion in a video encoding system |
KR101394884B1 (en) * | 2012-06-18 | 2014-05-13 | 현대모비스 주식회사 | Congestion Control Device and Method for Inter-Vehicle Communication |
US10673723B2 (en) * | 2017-01-13 | 2020-06-02 | A.T.E. Solutions, Inc. | Systems and methods for dynamically reconfiguring automatic test equipment |
CN113038274B (en) * | 2019-12-24 | 2023-08-29 | 瑞昱半导体股份有限公司 | Video interface conversion device and method |
US11496382B2 (en) * | 2020-09-30 | 2022-11-08 | Charter Communications Operating, Llc | System and method for recording a routing path within a network packet |
Family Cites Families (132)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4800431A (en) * | 1984-03-19 | 1989-01-24 | Schlumberger Systems And Services, Inc. | Video stream processing frame buffer controller |
FR2582175A1 (en) | 1985-05-20 | 1986-11-21 | Alcatel Espace | TIME DIVISION MULTIPLE ACCESS SATELLITE TELECOMMUNICATIONS METHOD AND DEVICE |
GB8829919D0 (en) | 1988-12-22 | 1989-02-15 | Int Computer Limited | File system |
DE69114788T2 (en) | 1990-08-29 | 1996-07-11 | Honeywell Inc | Data transmission system with checksum computing means. |
US5367636A (en) | 1990-09-24 | 1994-11-22 | Ncube Corporation | Hypercube processor network in which the processor indentification numbers of two processors connected to each other through port number n, vary only in the nth bit |
US6400996B1 (en) * | 1999-02-01 | 2002-06-04 | Steven M. Hoffberg | Adaptive pattern recognition based control system and method |
US5333299A (en) | 1991-12-31 | 1994-07-26 | International Business Machines Corporation | Synchronization techniques for multimedia data streams |
US5742760A (en) | 1992-05-12 | 1998-04-21 | Compaq Computer Corporation | Network packet switch using shared memory for repeating and bridging packets at media rate |
US5430842A (en) | 1992-05-29 | 1995-07-04 | Hewlett-Packard Company | Insertion of network data checksums by a network adapter |
US5857109A (en) * | 1992-11-05 | 1999-01-05 | Giga Operations Corporation | Programmable logic device for real time video processing |
US5515536A (en) | 1992-11-13 | 1996-05-07 | Microsoft Corporation | Method and system for invoking methods of an object through a dispatching interface |
US5719786A (en) | 1993-02-03 | 1998-02-17 | Novell, Inc. | Digital media data stream network management system |
US5768598A (en) | 1993-09-13 | 1998-06-16 | Intel Corporation | Method and apparatus for sharing hardward resources in a computer system |
EP0790743B1 (en) | 1993-09-16 | 1998-10-28 | Kabushiki Kaisha Toshiba | Apparatus for synchronizing compressed video and audio signals |
US5515379A (en) | 1993-10-18 | 1996-05-07 | Motorola, Inc. | Time slot allocation method |
US5566174A (en) | 1994-04-08 | 1996-10-15 | Philips Electronics North America Corporation | MPEG information signal conversion system |
US5638516A (en) | 1994-08-01 | 1997-06-10 | Ncube Corporation | Parallel processor that routes messages around blocked or faulty nodes by selecting an output port to a subsequent node from a port vector and transmitting a route ready signal back to a previous node |
US5848192A (en) | 1994-08-24 | 1998-12-08 | Unisys Corporation | Method and apparatus for digital data compression |
WO1996017306A2 (en) | 1994-11-21 | 1996-06-06 | Oracle Corporation | Media server |
EP0716370A3 (en) * | 1994-12-06 | 2005-02-16 | International Business Machines Corporation | A disk access method for delivering multimedia and video information on demand over wide area networks |
US5794062A (en) | 1995-04-17 | 1998-08-11 | Ricoh Company Ltd. | System and method for dynamically reconfigurable computing using a processing unit having changeable internal hardware organization |
US5925099A (en) * | 1995-06-15 | 1999-07-20 | Intel Corporation | Method and apparatus for transporting messages between processors in a multiple processor system |
US5710908A (en) | 1995-06-27 | 1998-01-20 | Canon Kabushiki Kaisha | Adaptive network protocol independent interface |
US6119154A (en) | 1995-07-14 | 2000-09-12 | Oracle Corporation | Method and apparatus for non-sequential access to an in-progress video feed |
US6112226A (en) | 1995-07-14 | 2000-08-29 | Oracle Corporation | Method and apparatus for concurrently encoding and tagging digital information for allowing non-sequential access during playback |
US6138147A (en) | 1995-07-14 | 2000-10-24 | Oracle Corporation | Method and apparatus for implementing seamless playback of continuous media feeds |
US6047323A (en) | 1995-10-19 | 2000-04-04 | Hewlett-Packard Company | Creation and migration of distributed streams in clusters of networked computers |
US5751951A (en) | 1995-10-30 | 1998-05-12 | Mitsubishi Electric Information Technology Center America, Inc. | Network interface |
US5729292A (en) | 1995-12-21 | 1998-03-17 | Thomson Multimedia, S.A. | Optimizing performance in a packet slot priority packet transport system |
US6842785B1 (en) | 1996-01-22 | 2005-01-11 | Svi Systems, Inc. | Entertainment and information systems and related management networks for a remote video delivery system |
JP3183155B2 (en) | 1996-03-18 | 2001-07-03 | 株式会社日立製作所 | Image decoding apparatus and image decoding method |
US5815516A (en) * | 1996-04-05 | 1998-09-29 | International Business Machines Corporation | Method and apparatus for producing transmission control protocol checksums using internet protocol fragmentation |
US5892535A (en) * | 1996-05-08 | 1999-04-06 | Digital Video Systems, Inc. | Flexible, configurable, hierarchical system for distributing programming |
US6088360A (en) * | 1996-05-31 | 2000-07-11 | Broadband Networks Corporation | Dynamic rate control technique for video multiplexer |
US5781227A (en) | 1996-10-25 | 1998-07-14 | Diva Systems Corporation | Method and apparatus for masking the effects of latency in an interactive information distribution system |
US6253375B1 (en) | 1997-01-13 | 2001-06-26 | Diva Systems Corporation | System for interactively distributing information services |
US6166730A (en) | 1997-12-03 | 2000-12-26 | Diva Systems Corporation | System for interactively distributing information services |
US6208335B1 (en) | 1997-01-13 | 2001-03-27 | Diva Systems Corporation | Method and apparatus for providing a menu structure for an interactive information distribution system |
AU6656998A (en) | 1997-02-11 | 1998-08-26 | Xaqti Corporation | Media access control architectures and network management systems |
US5953335A (en) | 1997-02-14 | 1999-09-14 | Advanced Micro Devices, Inc. | Method and apparatus for selectively discarding packets for blocked output queues in the network switch |
US5819049A (en) | 1997-02-28 | 1998-10-06 | Rietmann; Sandra D. | Multi-media recording system and method |
US5948065A (en) | 1997-03-28 | 1999-09-07 | International Business Machines Corporation | System for managing processor resources in a multisystem environment in order to provide smooth real-time data streams while enabling other types of applications to be processed concurrently |
US6101255A (en) * | 1997-04-30 | 2000-08-08 | Motorola, Inc. | Programmable cryptographic processing system and method |
US6021440A (en) * | 1997-05-08 | 2000-02-01 | International Business Machines Corporation | Method and apparatus for coalescing and packetizing data |
US6108695A (en) | 1997-06-24 | 2000-08-22 | Sun Microsystems, Inc. | Method and apparatus for providing analog output and managing channels on a multiple channel digital media server |
US6023731A (en) | 1997-07-30 | 2000-02-08 | Sun Microsystems, Inc. | Method and apparatus for communicating program selections on a multiple channel digital media server having analog output |
US6879266B1 (en) | 1997-08-08 | 2005-04-12 | Quickshift, Inc. | Memory module including scalable embedded parallel data compression and decompression engines |
US5995974A (en) | 1997-08-27 | 1999-11-30 | Informix Software, Inc. | Database server for handling a plurality of user defined routines (UDRs) expressed in a plurality of computer languages |
US6442168B1 (en) | 1997-09-17 | 2002-08-27 | Sony Corporation | High speed bus structure in a multi-port bridge for a local area network |
US6122670A (en) | 1997-10-30 | 2000-09-19 | Tsi Telsys, Inc. | Apparatus and method for constructing data for transmission within a reliable communication protocol by performing portions of the protocol suite concurrently |
US5996015A (en) | 1997-10-31 | 1999-11-30 | International Business Machines Corporation | Method of delivering seamless and continuous presentation of multimedia data files to a target device by assembling and concatenating multimedia segments in memory |
US6222838B1 (en) | 1997-11-26 | 2001-04-24 | Qwest Communications International Inc. | Method and system for delivering audio and data files |
US7152027B2 (en) | 1998-02-17 | 2006-12-19 | National Instruments Corporation | Reconfigurable test system |
US6697846B1 (en) | 1998-03-20 | 2004-02-24 | Dataplow, Inc. | Shared file system |
US6138219A (en) | 1998-03-27 | 2000-10-24 | Nexabit Networks Llc | Method of and operating architectural enhancement for multi-port internally cached dynamic random access memory (AMPIC DRAM) systems, eliminating external control paths and random memory addressing, while providing zero bus contention for DRAM access |
US6260155B1 (en) | 1998-05-01 | 2001-07-10 | Quad Research | Network information server |
US6246683B1 (en) * | 1998-05-01 | 2001-06-12 | 3Com Corporation | Receive processing with network protocol bypass |
GB9809685D0 (en) | 1998-05-06 | 1998-07-01 | Sony Uk Ltd | Ncam AV/C CTS subunit proposal |
US6498897B1 (en) * | 1998-05-27 | 2002-12-24 | Kasenna, Inc. | Media server system and method having improved asset types for playback of digital media |
US6314573B1 (en) | 1998-05-29 | 2001-11-06 | Diva Systems Corporation | Method and apparatus for providing subscription-on-demand services for an interactive information distribution system |
WO1999062261A1 (en) | 1998-05-29 | 1999-12-02 | Diva Systems Corporation | Interactive information distribution system and method |
US6157955A (en) | 1998-06-15 | 2000-12-05 | Intel Corporation | Packet processing system including a policy engine having a classification unit |
US6876653B2 (en) * | 1998-07-08 | 2005-04-05 | Broadcom Corporation | Fast flexible filter processor based architecture for a network device |
US6157051A (en) | 1998-07-10 | 2000-12-05 | Hilevel Technology, Inc. | Multiple function array based application specific integrated circuit |
US7035278B2 (en) | 1998-07-31 | 2006-04-25 | Sedna Patent Services, Llc | Method and apparatus for forming and utilizing a slotted MPEG transport stream |
US6959288B1 (en) | 1998-08-13 | 2005-10-25 | International Business Machines Corporation | Digital content preparation system |
US6192027B1 (en) | 1998-09-04 | 2001-02-20 | International Business Machines Corporation | Apparatus, system, and method for dual-active fibre channel loop resiliency during controller failure |
TW447201B (en) | 1998-09-24 | 2001-07-21 | Alteon Web Systems Inc | Distributed load-balancing internet servers |
US6148414A (en) | 1998-09-24 | 2000-11-14 | Seek Systems, Inc. | Methods and systems for implementing shared disk array management functions |
US6618363B1 (en) | 1998-10-09 | 2003-09-09 | Microsoft Corporation | Method for adapting video packet generation and transmission rates to available resources in a communications network |
US6560674B1 (en) | 1998-10-14 | 2003-05-06 | Hitachi, Ltd. | Data cache system |
TW465211B (en) | 1998-10-27 | 2001-11-21 | Port Corp C | Digital communications processor |
US6275907B1 (en) | 1998-11-02 | 2001-08-14 | International Business Machines Corporation | Reservation management in a non-uniform memory access (NUMA) data processing system |
TW475111B (en) | 1998-11-04 | 2002-02-01 | Inventec Corp | Method for testing image data transmission between memories |
JP2000175189A (en) | 1998-12-07 | 2000-06-23 | Univ Tokyo | Moving picture encoding method and moving picture encoding device used for the same |
US6510470B1 (en) | 1998-12-18 | 2003-01-21 | International Business Machines Corporation | Mechanism allowing asynchronous access to graphics adapter frame buffer physical memory linear aperture in a multi-tasking environment |
TW465209B (en) | 1999-03-25 | 2001-11-21 | Telephony & Amp Networking Com | Method and system for real-time voice broadcast and transmission on Internet |
US6289376B1 (en) | 1999-03-31 | 2001-09-11 | Diva Systems Corp. | Tightly-coupled disk-to-CPU storage server |
US6240553B1 (en) | 1999-03-31 | 2001-05-29 | Diva Systems Corporation | Method for providing scalable in-band and out-of-band access within a video-on-demand environment |
US6721794B2 (en) | 1999-04-01 | 2004-04-13 | Diva Systems Corp. | Method of data management for efficiently storing and retrieving data to respond to user access requests |
US6233607B1 (en) | 1999-04-01 | 2001-05-15 | Diva Systems Corp. | Modular storage server architecture with dynamic data management |
US6820144B2 (en) * | 1999-04-06 | 2004-11-16 | Microsoft Corporation | Data format for a streaming information appliance |
US6502194B1 (en) | 1999-04-16 | 2002-12-31 | Synetix Technologies | System for playback of network audio material on demand |
US6651103B1 (en) * | 1999-04-20 | 2003-11-18 | At&T Corp. | Proxy apparatus and method for streaming media information and for increasing the quality of stored media information |
IL130796A (en) * | 1999-07-05 | 2003-07-06 | Brightcom Technologies Ltd | Packet processor |
US6496692B1 (en) | 1999-12-06 | 2002-12-17 | Michael E. Shanahan | Methods and apparatuses for programming user-defined information into electronic devices |
US7327761B2 (en) * | 2000-02-03 | 2008-02-05 | Bandwiz Inc. | Data streaming |
US6757291B1 (en) | 2000-02-10 | 2004-06-29 | Simpletech, Inc. | System for bypassing a server to achieve higher throughput between data network and data storage system |
US7200138B2 (en) | 2000-03-01 | 2007-04-03 | Realtek Semiconductor Corporation | Physical medium dependent sub-system with shared resources for multiport xDSL system |
US20020174227A1 (en) | 2000-03-03 | 2002-11-21 | Hartsell Neal D. | Systems and methods for prioritization in information management environments |
US20020107989A1 (en) | 2000-03-03 | 2002-08-08 | Johnson Scott C. | Network endpoint system with accelerated data path |
US6947430B2 (en) * | 2000-03-24 | 2005-09-20 | International Business Machines Corporation | Network adapter with embedded deep packet processing |
US6594775B1 (en) | 2000-05-26 | 2003-07-15 | Robert Lawrence Fair | Fault handling monitor transparently using multiple technologies for fault handling in a multiple hierarchal/peer domain file server with domain centered, cross domain cooperative fault handling mechanisms |
GB2366709A (en) * | 2000-06-30 | 2002-03-13 | Graeme Roy Smith | Modular software definable pre-amplifier |
US7200670B1 (en) * | 2000-06-30 | 2007-04-03 | Lucent Technologies Inc. | MPEG flow identification for IP networks |
US7228358B1 (en) | 2000-07-25 | 2007-06-05 | Verizon Services Corp. | Methods, apparatus and data structures for imposing a policy or policies on the selection of a line by a number of terminals in a network |
US6944152B1 (en) | 2000-08-22 | 2005-09-13 | Lsi Logic Corporation | Data storage access through switched fabric |
US6944585B1 (en) | 2000-09-01 | 2005-09-13 | Oracle International Corporation | Dynamic personalized content resolution for a media server |
US20020107971A1 (en) | 2000-11-07 | 2002-08-08 | Bailey Brian W. | Network transport accelerator |
US7031343B1 (en) | 2000-11-17 | 2006-04-18 | Alloptic, Inc. | Point-to-multipoint passive optical network that utilizes variable-length packets |
US6831931B2 (en) * | 2000-12-06 | 2004-12-14 | International Business Machines Corporation | System and method for remultiplexing of a filtered transport stream |
US6944154B2 (en) * | 2000-12-06 | 2005-09-13 | International Business Machines Corporation | System and method for remultiplexing of a filtered transport stream with new content in real-time |
US6963561B1 (en) | 2000-12-15 | 2005-11-08 | Atrica Israel Ltd. | Facility for transporting TDM streams over an asynchronous ethernet network using internet protocol |
US6940873B2 (en) * | 2000-12-27 | 2005-09-06 | Keen Personal Technologies, Inc. | Data stream control system for associating counter values with stored selected data packets from an incoming data transport stream to preserve interpacket time interval information |
US20030223735A1 (en) * | 2001-02-28 | 2003-12-04 | Boyle William B. | System and a method for receiving and storing a transport stream for deferred presentation of a program to a user |
US20030097481A1 (en) | 2001-03-01 | 2003-05-22 | Richter Roger K. | Method and system for performing packet integrity operations using a data movement engine |
EP1374080A2 (en) | 2001-03-02 | 2004-01-02 | Kasenna, Inc. | Metadata enabled push-pull model for efficient low-latency video-content distribution over a network |
WO2002085016A1 (en) * | 2001-04-11 | 2002-10-24 | Cyber Operations, Llc | System and method for network delivery of low bit rate multimedia content |
US6971043B2 (en) | 2001-04-11 | 2005-11-29 | Stratus Technologies Bermuda Ltd | Apparatus and method for accessing a mass storage device in a fault-tolerant server |
US7266609B2 (en) | 2001-04-30 | 2007-09-04 | Aol Llc | Generating multiple data streams from a single data source |
US6904057B2 (en) * | 2001-05-04 | 2005-06-07 | Slt Logic Llc | Method and apparatus for providing multi-protocol, multi-stage, real-time frame classification |
US7042899B1 (en) * | 2001-05-08 | 2006-05-09 | Lsi Logic Corporation | Application specific integrated circuit having a programmable logic core and a method of operation thereof |
US6732104B1 (en) | 2001-06-06 | 2004-05-04 | Lsi Logic Corporation | Uniform routing of storage access requests through redundant array controllers |
US6981167B2 (en) | 2001-06-13 | 2005-12-27 | Siemens Energy & Automation, Inc. | Programmable controller with sub-phase clocking scheme |
US6996618B2 (en) | 2001-07-03 | 2006-02-07 | Hewlett-Packard Development Company, L.P. | Method for handling off multiple description streaming media sessions between servers in fixed and mobile streaming media systems |
JP2003037623A (en) | 2001-07-23 | 2003-02-07 | Philips Japan Ltd | Direct rtp delivery method and system over mpeg network |
US20030079018A1 (en) | 2001-09-28 | 2003-04-24 | Lolayekar Santosh C. | Load balancing in a storage network |
US7174086B2 (en) | 2001-10-23 | 2007-02-06 | Thomson Licensing | Trick mode using dummy predictive pictures |
US6732243B2 (en) | 2001-11-08 | 2004-05-04 | Chaparral Network Storage, Inc. | Data mirroring using shared buses |
US7043663B1 (en) | 2001-11-15 | 2006-05-09 | Xiotech Corporation | System and method to monitor and isolate faults in a storage area network |
US20030095783A1 (en) | 2001-11-21 | 2003-05-22 | Broadbus Technologies, Inc. | Methods and apparatus for generating multiple network streams from a large scale memory buffer |
US20030135577A1 (en) | 2001-12-19 | 2003-07-17 | Weber Bret S. | Dual porting serial ATA disk drives for fault tolerant applications |
US7245620B2 (en) * | 2002-03-15 | 2007-07-17 | Broadcom Corporation | Method and apparatus for filtering packet data in a network device |
US20040006636A1 (en) * | 2002-04-19 | 2004-01-08 | Oesterreicher Richard T. | Optimized digital media delivery engine |
US20040006635A1 (en) * | 2002-04-19 | 2004-01-08 | Oesterreicher Richard T. | Hybrid streaming platform |
US7657917B2 (en) | 2002-05-23 | 2010-02-02 | Microsoft Corporation | Interactivity emulator for broadcast communication |
US20030227913A1 (en) * | 2002-06-05 | 2003-12-11 | Litchfield Communications, Inc. | Adaptive timing recovery of synchronous transport signals |
US7260576B2 (en) | 2002-11-05 | 2007-08-21 | Sun Microsystems, Inc. | Implementing a distributed file system that can use direct connections from client to disk |
US20030108030A1 (en) * | 2003-01-21 | 2003-06-12 | Henry Gao | System, method, and data structure for multimedia communications |
US6879598B2 (en) * | 2003-06-11 | 2005-04-12 | Lattice Semiconductor Corporation | Flexible media access control architecture |
US7460531B2 (en) * | 2003-10-27 | 2008-12-02 | Intel Corporation | Method, system, and program for constructing a packet |
JP4729570B2 (en) | 2004-07-23 | 2011-07-20 | ビーチ・アンリミテッド・エルエルシー | Trick mode and speed transition |
- 2003-02-19 US US10/369,306 patent/US7899924B2/en active Active
- 2003-04-14 EP EP03718402A patent/EP1497666A4/en not_active Withdrawn
- 2003-04-14 CA CA002483017A patent/CA2483017A1/en not_active Abandoned
- 2003-04-14 WO PCT/US2003/011575 patent/WO2003089944A1/en not_active Application Discontinuation
- 2003-04-14 AU AU2003221941A patent/AU2003221941A1/en not_active Abandoned
- 2003-04-18 TW TW092109078A patent/TWI316341B/en not_active IP Right Cessation
Also Published As
Publication number | Publication date |
---|---|
TWI316341B (en) | 2009-10-21 |
WO2003089944A1 (en) | 2003-10-30 |
AU2003221941A1 (en) | 2003-11-03 |
EP1497666A1 (en) | 2005-01-19 |
US7899924B2 (en) | 2011-03-01 |
EP1497666A4 (en) | 2008-01-23 |
TW200308155A (en) | 2003-12-16 |
US20030229778A1 (en) | 2003-12-11 |
Similar Documents
Publication | Title |
---|---|
US7899924B2 (en) | Flexible streaming hardware |
US8032670B2 (en) | Method and apparatus for generating DMA transfers to memory |
EP1791060B1 (en) | Apparatus performing network processing functions |
US7561573B2 (en) | Network adaptor, communication system and communication method |
US6629141B2 (en) | Storing a frame header |
US7620746B2 (en) | Functional DMA performing operation on DMA data and writing result of operation |
US7814218B1 (en) | Multi-protocol and multi-format stateful processing |
US6728265B1 (en) | Controlling frame transmission |
US7813339B2 (en) | Direct assembly of a data payload in an application memory |
US20050060538A1 (en) | Method, system, and program for processing of fragmented datagrams |
US20040013117A1 (en) | Method and apparatus for zero-copy receive buffer management |
EP1435717A2 (en) | Encapsulation mechanism for packet processing |
US20060174058A1 (en) | Recirculation buffer for semantic processor |
US7539204B2 (en) | Data and context memory sharing |
US20040006636A1 (en) | Optimized digital media delivery engine |
US7532644B1 (en) | Method and system for associating multiple payload buffers with multidata message |
Braun et al. | Performance evaluation and cache analysis of an ILP protocol implementation |
EP1049292A2 (en) | System and method for network monitoring |
Legal Events
Code | Title |
---|---|
EEER | Examination request |
FZDE | Discontinued |