CN103207775A - Processing method for adopting graphic processing unit (GPU) to accelerate real-time network flow application program - Google Patents

Processing method for adopting graphic processing unit (GPU) to accelerate real-time network flow application program

Info

Publication number: CN103207775A
Authority: CN (China)
Prior art keywords: gpu, time, real, cpu, data
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN201310075661.6A
Other languages: Chinese (zh)
Other versions: CN103207775B (en)
Inventors: 张凯, 华蓓, 祖渊, 江盛杰
Current Assignee: Suzhou Institute for Advanced Study USTC (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Original Assignee: Suzhou Institute for Advanced Study USTC
Priority date: 2013-03-11; Filing date: 2013-03-11; Publication date: 2013-07-17
Application filed by Suzhou Institute for Advanced Study USTC
Priority to CN201310075661.6A
Publication of CN103207775A (2013-07-17); application granted; publication of CN103207775B (2016-03-09)
Legal status: Expired - Fee Related (patent right terminated 2020-03-11 for non-payment of annual fee)

Abstract

The invention discloses a scheduling method that uses a graphics processing unit (GPU) to accelerate real-time network flow processing. A central processing unit (CPU) sets a GPU execution period I according to the application program to be accelerated, where I is greater than or equal to the worst-case GPU processing time. The method comprises: a first step in which the CPU receives the GPU's computation results for the previous GPU execution period I and, when the next processing period begins, starts accumulating the data or execution sequences arriving within that period; and a second step in which, when that processing period ends, the CPU transfers the accumulated data or execution sequences to the GPU for processing. The two steps repeat in a cycle until the application program finishes. The method guarantees real-time processing of all network flows while obtaining high throughput from the GPU.

Description

Processing method for using a GPU to accelerate real-time network streaming applications
Technical field
The present invention relates to the fields of hardware GPU acceleration, high-speed network processing, and real-time scheduling, and in particular to a processing method that uses a GPU to accelerate real-time network streaming applications.
Background technology
Today's networks carry many real-time applications; these applications directly affect the user experience and carry substantial business demand. Typical examples such as Web TV, live video streaming, video conferencing, online gaming, and VoIP all involve user privacy and generally require processing such as compression and encryption. Such processing challenges the computational capability of network servers and intermediate devices and incurs heavy processing overhead. Because of this cost, service providers often skip operations such as encryption that should protect user privacy, with the result that copyrighted content such as Internet video can be stolen by network sniffing and user privacy can be compromised.
Besides the heavy processing overhead, the real-time requirements of these network applications must also be satisfied. For VoIP, a packet is generated every 20-30 ms, and the tolerable delay from sender to receiver is bounded at 150 ms (beyond a 150 ms voice delay, people can perceive the lag). Likewise, applications such as video conferencing, IPTV, and online gaming generally have a real-time delay requirement of about 150 ms.
Existing network encryption and compression accelerators are usually special-purpose hardware: they scale poorly, have long development cycles and high prices, and because of their unfamiliar programming models and instruction sets they force programmers to learn new languages or paradigms, making development slow and maintenance difficult. With the appearance of general-purpose CPUs and general-purpose computing on graphics cards (GPGPU), implementing high-performance processing in software has become a new way to realize high-speed network devices. However, for operations such as encryption, compression, and forwarding, the limited performance of the host CPU often cannot keep up when data volumes are large. In recent years, the GPU has made it possible to accelerate many data-intensive and compute-intensive applications on commodity hardware. A large body of work at home and abroad has introduced methods and algorithms that use GPUs to accelerate data-intensive network processing, reaching throughputs of tens of Gbps and demonstrating the feasibility of GPUs for accelerating high-speed network processing.
Programs suited to GPU processing fall into two categories: data-intensive programs, where the GPU's fast thread switching effectively hides memory-access latency; and compute-intensive programs, where the computing power of hundreds or thousands of cores is tens to hundreds of times that of a CPU, suiting highly parallel applications.
However, the GPU has the following characteristics, which have so far kept it from being used for real-time applications:
1) Batch processing. To fully exploit the GPU's high throughput and massive parallelism, tasks must be accumulated into a batch before being handed to the GPU, yet this accumulation introduces substantial delay.
2) Unlike the CPU, the GPU cannot directly access data in host memory. For data to be processed by the GPU, it must first be transferred into GPU memory, and this transfer delay is often large.
3) Uncertain execution time. In GPU processing, the completion time of a batch of tasks is determined by its slowest thread: the GPU returns results only when all threads have finished. Moreover, the GPU's execution time depends on its internal dispatching and scheduling policies and is not a simple function of the workload, so the execution time cannot be predicted.
4) Multithreading. The many-core structure of the GPU allows hundreds of threads to execute simultaneously. Threads execute in groups (typically of 32) that are switched rapidly with no switching delay, effectively hiding the GPU's memory-access overhead.
The high latency and execution-time uncertainty caused by characteristics 1-3 have so far prevented the GPU from being used for real-time applications. Batch processing in particular means that tasks must wait to be executed together as a batch in order to fully utilize the GPU, and transferring them to the GPU also takes time, so real-time behavior is hard to guarantee. Reducing the batch size reduces delay, but also reduces GPU throughput; increasing the batch size raises throughput, but also increases delay. Hence the present invention.
Summary of the invention
The object of the invention is to provide a scheduling method that uses a GPU to accelerate real-time network flow processing, guaranteeing that the real-time processing requirement of every network flow is satisfied while exploiting the GPU's high-throughput processing capability.
To solve these problems of the prior art, the technical scheme provided by the invention is:
A processing method that uses a GPU to accelerate real-time network streaming applications, characterized in that, before the real-time network streaming application executes, the CPU sets a timer with a predetermined GPU execution period I according to the application; once the timer starts, whenever the accumulated time reaches the GPU execution period I, the timer resets to zero and starts counting again. The method comprises the following steps:
(1) The CPU starts executing the real-time network streaming application and starts the timer;
(2) The CPU accumulates the application's data or execution sequences and, when the timer reaches the GPU execution period I, transfers the accumulated data or execution sequences to the GPU for processing;
(3) The GPU receives the data or execution sequences that the CPU accumulated during the previous GPU execution period I, executes them, and transfers the execution results back to the CPU;
(4) The CPU receives the GPU's execution results for the previous GPU execution period I and, according to the application's progress, decides whether to continue accumulating data or execution sequences at the start of the next GPU execution period I. When the application's data or execution sequences have all been executed, the CPU writes the execution results to memory; otherwise the CPU continues accumulating at the start of each next period, until the application finishes. The GPU execution period I is greater than or equal to the worst-case GPU processing time. (A minimal code sketch of this loop follows.)
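As a concrete illustration of steps (1)-(4), the following is a minimal sketch of the fixed-period dispatch loop. Python is used for illustration only; gpu_process, deliver, input_queue and stop are hypothetical stand-ins that the patent does not specify, and the sketch serializes accumulation and GPU execution for clarity, whereas the method overlaps accumulation of batch k with GPU execution of batch k-1.

```python
import queue
import time

def gpu_process(batch):
    """Hypothetical stand-in for the GPU stage: copy the batch to GPU
    memory, launch the kernel(s), wait for completion, copy results back."""
    return list(batch)

def run_fixed_period_scheduler(input_queue, period_i, deliver, stop):
    """Accumulate work for one period I, then dispatch it as one batch."""
    while not stop():
        deadline = time.monotonic() + period_i
        batch = []
        # Step (2): accumulate data / execution sequences until the timer
        # for the current period I expires.
        while True:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                batch.append(input_queue.get(timeout=remaining))
            except queue.Empty:
                pass  # nothing arrived; keep waiting out the period
        # Steps (3)-(4): hand the whole batch to the GPU; because the
        # worst-case batch time is bounded by I, results return within
        # one period and are written out to memory.
        if batch:
            deliver(gpu_process(batch))
```

Because the timer, not the batch size, triggers dispatch, the interval between two consecutive GPU invocations is constant regardless of load.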
Preferably, the GPU execution period I is preset in advance rather than computed dynamically; once the system is running, I does not change. If T denotes the smallest per-task processing deadline in a batch of tasks, I should be less than or equal to T/2, i.e., I lies within the interval (0, T/2].
Preferably, the worst-case GPU processing time comprises the GPU input data transfer time, the GPU batch execution time, and the GPU result transfer-out time. The worst-case time is obtained by estimation, and by measuring the actual transfer time and GPU processing time of each batch in the running system, the total time of each batch is kept from exceeding the GPU execution period I.
Because the worst-case GPU processing time cannot be estimated with perfect accuracy, the invention adopts a resource-reservation scheme: when the system's processing capacity utilization exceeds 90%, no new network connections are accepted. Only when existing connections terminate or are released, so that the system again has capacity to handle more data, are further new connections accepted.
Concretely: the system times each GPU batch, obtaining T_B; whenever T_B >= 0.9*I, no new connections are accepted, which guarantees that the tasks of all existing connections in the batch are processed in real time and that the processing time of each batch does not exceed I.
The minimum processing delay of the application is specified by the application or the user, and depends on the application being accelerated and on the user's requirements for the network device. For example, voice communication tolerates a maximum delay of 150 ms; after removing network jitter and the processing delays at the server and client, the time left for the encryption device is less than 100 ms. This information should be specified in advance by the user, or the system can obtain each flow's processing deadline from per-connection information when each connection is initially established.
The CPU-GPU interaction proceeds as follows: the CPU receives network data, prepares it for processing, and after certain preprocessing places the data that requires GPU acceleration into a buffer. After a certain amount of data has accumulated (hundreds to thousands of items), the data of all tasks is transferred to the GPU, the GPU is invoked to execute on the buffer, and once the GPU finishes, the results are copied from GPU memory back into host memory, completing the GPU-accelerated stage. The CPU then performs subsequent processing on the GPU-accelerated data.
In the technical scheme of the invention, a period variable I is set at initialization according to the application to be accelerated. During execution, the GPU is invoked after every fixed interval I; before each invocation, the data accumulated within the interval I is assembled into one batch and handed to the GPU together. In other words, the GPU's actual executions form a sequence spaced at the fixed interval I: the interval between two consecutive executions never changes.
It follows that a task arriving within a time interval (P_{k-1}, P_k] is guaranteed to have been executed by the time the next interval boundary P_{k+1} arrives. Therefore, for any real-time network flow, if the processing deadline of each of its messages is greater than 2I, its real-time requirement is guaranteed. Consequently, for a set of real-time task flows, if the minimum deadline over the set is greater than 2I, then every message of every flow in the system is processed in real time.
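The latency bound can be written out compactly (a sketch of the argument, in the notation of Fig. 1, where consecutive dispatch points satisfy P_k - P_{k-1} = I):

```latex
% A message arrives at time t in (P_{k-1}, P_k], is dispatched at P_k,
% and completes within one batch time (<= I), i.e. by P_{k+1} = P_k + I.
\[
  \mathrm{latency}(t) \;\le\; P_{k+1} - t \;<\; P_{k+1} - P_{k-1} \;=\; 2I ,
\]
% hence every flow whose per-message deadline d_i satisfies d_i >= 2I
% is guaranteed to meet its deadline.
```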
To guarantee that all tasks are processed in real time, the method limits the total volume of data to be processed during execution, so that for each batch of tasks the sum of the GPU processing time and the input/output transfer times (i.e., the worst-case processing time) is less than or equal to I.
A preferred scheme is to use a GPU integrated into the CPU chip, which accelerates processing by accessing system memory directly or by copying within host memory.
A preferred scheme is, for a network with a known maximum flow rate, to set T = I, obtaining the minimum delay while guaranteeing processing capacity.
A preferred scheme is, for a network with a known minimum delay guarantee, to set I as close as possible to half of the minimum delay, achieving high throughput while ensuring that the minimum-delay requirements of all flows are satisfied.
The invention provides a real-time scheduling method for GPU-accelerated processing of real-time network flows. By executing at fixed times, it balances processing delay against throughput: it adopts a single maximum delay suitable for all task flows and obtains the maximum throughput under that constraint. Thus high throughput is obtained from the GPU while the real-time processing of all network flows is guaranteed.
Compared with schemes in the prior art, the advantages of the invention are:
The technical scheme of the invention uses the GPU's processing power to accelerate data-intensive and compute-intensive network applications. To balance throughput against delay, the invention adopts the following approach: by executing at precise fixed times, it exploits the GPU's high performance while guaranteeing an accurate worst-case processing deadline for every arriving task, thereby guaranteeing real-time processing of all network flows.
The invention provides a scheduling method that uses a GPU to accelerate real-time network flow processing, characterized in that the CPU sets a GPU execution period I according to the application program to be accelerated, where I is greater than or equal to the worst-case GPU processing time. The method comprises the steps: (1) the CPU receives the GPU's computation results for the previous GPU execution period I and, when the next batch-processing period begins, starts accumulating the data or execution sequences of that period; (2) when that period ends, the CPU transfers the accumulated data or execution sequences to the GPU for processing; steps (1)-(2) repeat in a cycle until the application finishes. The method guarantees real-time processing of all network flows while obtaining high throughput from the GPU. It is intended for accelerating real-time network stream processing: where the CPU's computing power is insufficient, it exploits the GPU's high performance to accelerate data-intensive and compute-intensive applications such as compression, encryption, and decryption.
Description of drawings
The invention is further described below with reference to the drawings and embodiments:
Fig. 1 is a schematic diagram of the scheduling algorithm of the method according to an embodiment of the invention.
Fig. 2 is a flowchart of the processing method that uses a GPU to accelerate real-time network streaming applications.
Embodiment
The above scheme is further described below with reference to specific embodiments. It should be understood that these embodiments are for explaining the invention, not for limiting its scope. The implementation conditions used in the embodiments may be further adjusted according to particular circumstances; unspecified implementation conditions are typically those of routine experiments.
Embodiment
As shown in Fig. 1, in the processing method that uses a GPU to accelerate real-time network streaming applications, before the application executes the CPU sets a timer with a predetermined GPU execution period I according to the application; once the timer starts, whenever the accumulated time reaches the GPU execution period I, the timer resets to zero and starts counting again. The method comprises the following steps:
(1) The CPU starts executing the real-time network streaming application and starts the timer;
(2) The CPU accumulates the application's data or execution sequences and, when the timer reaches the GPU execution period I, transfers the accumulated data or execution sequences to the GPU for processing;
(3) The GPU receives the data or execution sequences that the CPU accumulated during the previous GPU execution period I, executes them, and transfers the execution results back to the CPU;
(4) The CPU receives the GPU's execution results for the previous GPU execution period I and, according to the application's progress, decides whether to continue accumulating data or execution sequences at the start of the next GPU execution period I. When the application's data or execution sequences have all been executed, the CPU writes the execution results to memory; otherwise the CPU continues accumulating at the start of each next period, until the application finishes. The GPU execution period I is greater than or equal to the worst-case GPU processing time.
Concretely, Fig. 1 shows the algorithm's execution sequence 101, and the real-time network streaming application's execution sequences and data 102 in memory. P_{k-1}, P_k, P_{k+1}, P_{k+2} are four time points in the GPU execution sequence; the difference between every two adjacent points is I, the fixed execution interval, i.e., the GPU execution period.
When the currently processed execution sequences of the application, i.e., the work accumulated within one interval I, are handed to the GPU, the longest processing time under the maximal workload is denoted T (T <= I). For each batch of execution sequences, the execution time consists of the input data transfer time 105, the GPU execution time 106, and the result transfer-out time 107; the sum of these times is at most T, and hence at most I.
Suppose an execution sequence of the real-time network streaming application 102 arrives at time point 103, i.e., within the interval (P_{k-1}, P_k]. The CPU accumulates it and transfers it to the GPU for execution at time point P_k; in the worst case, it will have been executed before time point 104.
Because the GPU is an auxiliary computing device, data transfer and control are generally driven by the CPU. Fig. 2 shows the flowchart of controlling the GPU with a CPU thread/process. After the GPU execution interval is configured at system initialization, the CPU thread operates with that interval as its period: it accumulates data, transfers it to the GPU for execution, copies the data back out, and obtains the results. The output is then post-processed, if necessary by handing it to another CPU thread/process for subsequent handling. This procedure repeats strictly at the preset interval, in a cycle, until the application finishes.
Concrete implementation:
In this embodiment of the invention, an SRTP reverse proxy application was built in which the GPU accelerates AES encryption and message authentication of network data streams. The reverse proxy receives RTP messages from the server side, encrypts them, and forwards them to clients as SRTP messages. The arrival times and data sizes of the network flows were simulated with normal distributions.
This embodiment sets the permissible delay bound of each packet during processing to 80 ms, and accordingly sets the execution interval to 40 ms.
The GPU is controlled by one CPU process, whose flow is as follows:
1) The CPU process receives the RTP packets sent by the server side, stores them in a buffer, and records per-packet information such as the packet length, AES initial vector, and AES key.
2) At each interval of I = 40 ms, the CPU process transfers the data received during the past 40 ms (here, the RTP packets), together with the information required for encryption (packet lengths, AES initial vectors, AES keys, etc.), to the GPU.
3) The CPU process then launches the AES encryption GPU kernel and the message-authentication (HMAC-SHA1) GPU kernel, letting the GPU perform AES encryption and message authentication.
4) While continuing to receive RTP packets, the CPU process waits for the GPU kernels to finish. When execution completes, the encryption and authentication of the previous batch of RTP packets is done: they have been encrypted into SRTP packets. The CPU process copies the encrypted packets from the GPU back to main memory and forwards them to the clients, completing the reverse-proxy function.
5) Steps 1)-4) repeat with a time interval of 40 ms. (A condensed sketch of this loop is given below.)
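The following is a condensed sketch of this CPU control loop, under stated assumptions: Python is used for illustration only; encrypt_and_auth_on_gpu stands in for the AES and HMAC-SHA1 GPU kernels, receive_rtp and forward_srtp are hypothetical I/O callables, and the packet fields (payload, aes_iv, aes_key) mirror the per-packet records of step 1).

```python
import time

PERIOD_I = 0.040  # 40 ms execution interval (half the 80 ms per-packet budget)

def encrypt_and_auth_on_gpu(packets, metadata):
    """Hypothetical stand-in for steps 2)-3): transfer the packets plus
    their lengths, AES IVs and AES keys to the GPU, then launch the AES
    encryption kernel and the HMAC-SHA1 authentication kernel."""
    return [bytes(p) for p in packets]  # placeholder SRTP output

def reverse_proxy_loop(receive_rtp, forward_srtp, stopped):
    buffered, metadata = [], []
    next_dispatch = time.monotonic() + PERIOD_I
    while not stopped():
        # Step 1): keep receiving RTP packets, buffering each with its
        # per-packet record (length, AES IV, AES key).
        pkt = receive_rtp(timeout=max(0.0, next_dispatch - time.monotonic()))
        if pkt is not None:
            buffered.append(pkt.payload)
            metadata.append((len(pkt.payload), pkt.aes_iv, pkt.aes_key))
        if time.monotonic() >= next_dispatch:
            # Steps 2)-4): ship the last 40 ms of packets to the GPU, wait
            # for the kernels, and forward the resulting SRTP packets.
            if buffered:
                for srtp_pkt in encrypt_and_auth_on_gpu(buffered, metadata):
                    forward_srtp(srtp_pkt)
                buffered, metadata = [], []
            next_dispatch += PERIOD_I  # step 5): repeat every 40 ms
```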
Schedulability of the system and the choice of I:
In a real network-flow real-time system, many real-time network flows arrive concurrently and periodically. Suppose a flow is described by S_i = <p_i, w_i, d_i>, where p_i is the inter-arrival time between two adjacent messages of the flow, w_i is the size of each message, and d_i is the deadline: the maximum time within which each message of the flow must be processed.
For the set of all flows handled by the current system, S = {S_1, S_2, ..., S_i, ...}, we can obtain the minimum processing deadline in the current system:
D = min{d_i | S_i ∈ S}.
Given the system-wide minimum deadline D, the relation T <= I <= D/2 can be determined.
In addition, from the server's processing requirements and connection limits, a service provider's network processing device knows these parameters. Accordingly, the number of network connections N of the current system, the maximum workload M of each message (e.g., for an MPEG-4 video stream, the maximum packet length of each packet can be determined), and the minimum processing deadline D specified over all of the system's flows can all be obtained.
Given N and M, the worst-case batch processing time T of the system can be determined: transfer N data items, each with workload M, to the GPU, process them, and measure the time, which yields the system's worst-case processing time T. If T <= D/2, the system is schedulable, because we can always choose I such that:
T <= I <= D/2.
Therefore, whether the system is schedulable can be decided before the system is deployed. To obtain lower delay, I can be set to T; to obtain maximum throughput, I can be set to D/2. In any case, any value of I in the interval [T, D/2] is valid. (A short code sketch of this offline test follows.)
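Under these definitions, the offline schedulability test and the choice of I reduce to a few lines (a sketch; t_worst is the measured worst-case batch time T obtained by timing N items of workload M as described above):

```python
def choose_period(deadlines, t_worst, prefer="throughput"):
    """Pick a feasible GPU execution period I with T <= I <= D/2.

    deadlines: the d_i of all admitted flows (seconds);
    t_worst:   the measured worst-case batch time T (seconds).
    Returns a period I, or None if the system is not schedulable.
    """
    d_min = min(deadlines)            # D = min{d_i | S_i in S}
    if t_worst > d_min / 2:           # schedulable iff T <= D/2
        return None
    # Any I in [T, D/2] is valid: I = T minimizes delay,
    # I = D/2 maximizes throughput.
    return d_min / 2 if prefer == "throughput" else t_worst

# Illustrative values: flow deadlines of 80, 120 and 150 ms and a
# measured T of 25 ms give I = D/2 = 40 ms for maximum throughput.
print(choose_period([0.080, 0.120, 0.150], 0.025))  # -> 0.04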
Guaranteeing the system's real-time behavior:
During operation of the reverse proxy, to guarantee that the real-time requirements of all established connections are met, the system uses several thresholds to limit the current system load.
Two variable n of system maintenance and m represent the linking number of current system respectively, and the maximum bag of RTP bag is long in the all-links, newly connect and when discharging existing connection, can dynamically update this two values when system receives.
When a new connection requests admission, a candidate pair (n', m') is computed: n' = n + 1, and with m_new denoting the RTP packet length of the new connection, m' = m if m > m_new (unchanged), otherwise m' = m_new.
If n' < N and m' < M, the system can still sustain real-time processing after adding the new connection, so the connection is accepted and n = n', m = m' are committed; otherwise the new connection is rejected. (These checks are sketched in code below.)
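These threshold checks amount to the following sketch (N and M are the precomputed capacity limits from the schedulability analysis; the class and method names are illustrative, not from the patent):

```python
class AdmissionController:
    """Tracks the current connection count n and max RTP packet length m."""

    def __init__(self, max_connections, max_packet_len):
        self.N, self.M = max_connections, max_packet_len  # capacity limits
        self.n, self.m = 0, 0                             # current load

    def try_accept(self, new_packet_len):
        # Candidate state (n', m') if the new connection were admitted.
        n_next = self.n + 1
        m_next = max(self.m, new_packet_len)
        # Admit only if the worst-case batch still fits within period I.
        if n_next < self.N and m_next < self.M:
            self.n, self.m = n_next, m_next
            return True
        return False

    def release(self):
        # On release n decreases; recomputing m exactly would need the full
        # set of live connections, so this sketch keeps m conservative.
        self.n -= 1

# Usage: ac = AdmissionController(1000, 1500); ac.try_accept(1200)
```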
The experiment was tested offline on an AMD A6-3650 APU; with I set to 40 ms, an AES encryption throughput of 3.8 Gbps was reached. Existing methods had still not managed to use the GPU for such real-time processing.
The present invention can be realized in software, firmware, hardware, or a combination thereof. The invention can be included in an article of manufacture with a computer-usable medium, which carries therein, for example, computer-readable program code means or logic (e.g., instructions, code, commands) providing the capability of using the invention. The article of manufacture can be sold as part of a computer system or separately. All such variations are considered part of the claimed invention.
The above examples only illustrate the technical concept and features of the invention; their purpose is to allow those familiar with the art to understand and implement the invention accordingly, and they do not limit the scope of protection of the invention. All equivalent transformations or modifications made according to the spirit of the invention shall be covered by the scope of protection of the invention.

Claims (5)

1. A processing method that uses a GPU to accelerate real-time network streaming applications, characterized in that, before the real-time network streaming application executes, the CPU sets a timer with a predetermined GPU execution period I according to the application; once the timer starts, whenever the accumulated time reaches the GPU execution period I, the timer resets to zero and starts counting again; the method comprising the following steps:
(1) the CPU starts executing the real-time network streaming application and starts the timer;
(2) the CPU accumulates the application's data or execution sequences and, when the timer reaches the GPU execution period I, transfers the accumulated data or execution sequences to the GPU for processing;
(3) the GPU receives the data or execution sequences that the CPU accumulated during the previous GPU execution period I, executes them, and transfers the execution results to the CPU;
(4) the CPU receives the GPU's execution results for the previous GPU execution period I and, according to the application's progress, decides whether to continue accumulating data or execution sequences at the start of the next GPU execution period I; when the application's data or execution sequences have all been executed, the CPU outputs the execution results to memory; otherwise the CPU continues accumulating at the start of each next period, until the application finishes; wherein the GPU execution period I is greater than or equal to the worst-case GPU processing time.
2. The processing method according to claim 1, characterized in that in the method the GPU execution period I is determined directly from the parameters of the real-time network streaming application, the GPU, and the CPU, and that I is less than or equal to T/2, where T is the worst-case (i.e., tightest) processing deadline in the execution sequence of the real-time network streaming application.
3. The processing method according to claim 1, characterized in that in the method the worst-case GPU processing time comprises the GPU input data transfer time, the GPU batch execution time, and the GPU result transfer-out time, and that the worst-case GPU processing time is obtained by estimation.
4. The processing method according to claim 1, characterized in that in the method, when the GPU is integrated into the CPU chip, the data need not be transferred or copied over PCIe: the data given to the GPU can be read directly from memory, or copied within memory.
5. The processing method according to claim 1, characterized in that in the method, when the maximum flow rate of the application is determined, an estimation method of incrementally accepting new connections is used to judge whether the application's maximum flow rate has been reached, such that, over all currently handled network connections, the GPU processing time of each batch in each time interval I does not exceed I.
CN201310075661.6A 2013-03-11 2013-03-11 Processing method for using a GPU to accelerate real-time network streaming applications Expired - Fee Related CN103207775B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310075661.6A CN103207775B (en) 2013-03-11 2013-03-11 Processing method for using a GPU to accelerate real-time network streaming applications

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310075661.6A CN103207775B (en) 2013-03-11 2013-03-11 Processing method for using a GPU to accelerate real-time network streaming applications

Publications (2)

Publication Number Publication Date
CN103207775A true CN103207775A (en) 2013-07-17
CN103207775B CN103207775B (en) 2016-03-09

Family

ID=48755008

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310075661.6A Expired - Fee Related CN103207775B (en) 2013-03-11 2013-03-11 Adopt the disposal route of GPU acceleration real-time network streaming application

Country Status (1)

Country Link
CN (1) CN103207775B (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010023445A1 (en) * 2000-03-15 2001-09-20 Telefonaktiebolaget Lm Ericsson (publ) Method and arrangement for control of non real-time application flows in a network communications system
CN1636363A (en) * 2001-06-07 2005-07-06 马科尼英国知识产权有限公司 Real time processing
CN102930474A (en) * 2012-10-24 2013-02-13 李化常 Security real-time analysis system based on acceleration of graphics processing unit (GPU)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103853938A (en) * 2013-11-27 2014-06-11 上海丰核信息科技有限公司 High-throughput sequencing data processing and analysis flow control method
CN103853938B (en) * 2013-11-27 2017-09-15 上海尔云信息科技有限公司 A kind of high-flux sequence data processing and inversion flow control method
CN106454413A (en) * 2016-09-20 2017-02-22 北京小米移动软件有限公司 Live broadcast coding switching method and device, and live broadcast terminal equipment
CN106454413B (en) * 2016-09-20 2019-10-08 北京小米移动软件有限公司 Code switching method, device and equipment is broadcast live
CN106649140A (en) * 2016-12-29 2017-05-10 深圳前海弘稼科技有限公司 Data processing method, apparatus and system
CN108063758A (en) * 2017-11-27 2018-05-22 众安信息技术服务有限公司 For the node in the signature verification method of block chain network and block chain network

Also Published As

Publication number Publication date
CN103207775B (en) 2016-03-09

Similar Documents

Publication Publication Date Title
Meng et al. Online deadline-aware task dispatching and scheduling in edge computing
Kliazovich et al. CA-DAG: Modeling communication-aware applications for scheduling in cloud computing
US11188380B2 (en) Method and apparatus for processing task in smart device
Dürr et al. No-wait packet scheduling for IEEE time-sensitive networks (TSN)
CN112148455B (en) Task processing method, device and medium
CN102780625B (en) Method and device for realizing internet protocol security (IPSEC) virtual private network (VPN) encryption and decryption processing
CN104219288B (en) Distributed Data Synchronization method and its system based on multithreading
CN105511954A (en) Method and device for message processing
WO2014187412A1 (en) Method and apparatus for controlling message processing thread
CN103207775B (en) Adopt the disposal route of GPU acceleration real-time network streaming application
CN109379303A (en) Parallelization processing framework system and method based on improving performance of gigabit Ethernet
Kliazovich et al. CA-DAG: Communication-aware directed acyclic graphs for modeling cloud computing applications
CN107623731A (en) A kind of method for scheduling task, client, service cluster and system
Zhang et al. A holistic approach to build real-time stream processing system with GPU
Chakraborty et al. A new task model for streaming applications and its schedulability analysis
Zhao et al. Joint reducer placement and coflow bandwidth scheduling for computing clusters
TWI433517B (en) Method and apparatus of processing sip messages based on multiple cores
CN104580209A (en) Device and method for implementing multi-platform message processing
WO2016197858A1 (en) Method and device for message notification
WO2016188032A1 (en) Data forwarding method and system using flow table
Hengrong Design on IPNCS of electrical propulsion ship based on real-time Ethernet
CN101976206A (en) Interrupt handling method and device
WO2021135699A1 (en) Decision scheduling customization method and device based on information flow
CN107911484A (en) A kind of method and device of Message Processing
Aci et al. A new congestion control algorithm for improving the performance of a broadcast-based multiprocessor architecture

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160309

Termination date: 20200311

CF01 Termination of patent right due to non-payment of annual fee