US20020032753A1 - Mixing and splitting multiple independent audio data streams in kernel space - Google Patents

Mixing and splitting multiple independent audio data streams in kernel space

Info

Publication number
US20020032753A1
Authority
US
United States
Prior art keywords
audio
data
application
server
audio server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US08/674,353
Other versions
US6405255B1 (en)
Inventor
Benjamin H. Stoltz
Michael J. Bundschun
Yan J. Yu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oracle America Inc
Original Assignee
Sun Microsystems Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Microsystems Inc filed Critical Sun Microsystems Inc
Priority to US08/674,353 priority Critical patent/US6405255B1/en
Assigned to SUN MICROSYSTEMS, INC. reassignment SUN MICROSYSTEMS, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BUNDSCHUH, MICHAEL J., STOLTZ, BENJAMIN H., YU, YAN J.
Priority to JP9186074A priority patent/JPH113302A/en
Priority to EP97304702A priority patent/EP0817045A3/en
Priority to DE0817045T priority patent/DE817045T1/en
Publication of US20020032753A1 publication Critical patent/US20020032753A1/en
Application granted granted Critical
Publication of US6405255B1 publication Critical patent/US6405255B1/en
Assigned to Oracle America, Inc. reassignment Oracle America, Inc. MERGER AND CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: Oracle America, Inc., ORACLE USA, INC., SUN MICROSYSTEMS, INC.
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04H: BROADCAST COMMUNICATION
    • H04H 60/00: Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H 60/02: Arrangements for generating broadcast information; Arrangements for generating broadcast-related information with a direct linking to broadcast information or to broadcast space-time; Arrangements for simultaneous generation of broadcast information and broadcast-related information
    • H04H 60/04: Studio equipment; Interconnection of studios
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/54: Interprogram communication
    • G06F 9/544: Buffers; Shared memory; Pipes
    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11B: INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00: Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/02: Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B 27/031: Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B 27/034: Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Computer And Data Communications (AREA)
  • Telephonic Communication Services (AREA)

Abstract

A method and an apparatus for securely mixing and splitting multiple audio data streams and for determining the order in which the audio streams are processed. An audio server and an audio device driver reside in kernel space of a given computer system. In one embodiment, the computer system has a data flow checker and adjuster for checking the flow of data into data queues and a setup application for connecting the audio server and the audio device driver. The data flow checker and adjuster adjusts the flow of data by sending a message upstream or downstream instructing the upstream or downstream processes/devices to send more data or to stop sending data, depending on how full the data queues are.

Description

    BACKGROUND OF THE INVENTION
  • (1) Field of the Invention [0001]
  • The present invention relates to the field of providing operating system services for use by audio and video applications. More specifically, the present invention is a method and apparatus for mixing and splitting multiple independent audio data streams in kernel space, i.e., the part of an operating system that performs basic functions such as allocating hardware resources. [0002]
  • (2) Art Related to the Invention [0003]
  • Audio and video applications running on a computer (e.g., a workstation, a personal computer “PC”, a mainframe, etc.) often require mixing and splitting of data (e.g., audio and/or video) as the data is being input from or output to some type of network device such as an Integrated Services Digital Network (ISDN) device. ISDN is a digital telephone network that defines B-channels carrying up to 64 Kbps each. [0004]
  • Many mixer and splitter devices are implemented in firmware or hardware on a card for a PC. A mixer may mix outputs from two specific audio peripheral devices and mix inputs from a microphone or some other set of peripheral devices. There are also software-based mixers and splitters, which are available on Apple Macintosh computers, IBM PCs and IBM PC compatibles, Sun Microsystems, Inc.'s workstations and other UNIX-based machines. More particularly, there are the Audio File (AF) system from Digital Equipment Corporation (DEC) of Maynard, Mass., and the Network Audio System (NAS) from Network Computing Devices (NCD), Inc. of Mountain View, Calif., which are both audio servers that have mixing capabilities. [0005]
  • FIG. 1 illustrates an exemplary embodiment of a computer employing a conventional mixer/splitter device. Computer 101 includes a storage device 103, a processor (CPU) 105 and an audio device 110 coupled through bus 107. Audio applications 100, 102 and 104 are coupled to a software-based mixer and splitter, audio server 106. Audio server 106 is contained in user space 108, which is the area in the storage device used for execution of user programs, and is coupled to audio device 110. Audio server 106 takes incoming audio data streams from one or more audio applications 100, 102 and 104, mixes them together and transmits them to audio device 110. Audio server 106 also takes an audio data stream coming from audio device 110, clones the data and transmits it to one or more of the audio applications 100, 102 and 104 requesting the data. [0006]
  • As illustrated in FIG. 1, software-based mixing and splitting functions in the prior art are performed in user space 108, and audio device 110 can only handle one audio application at a time. Consequently, an audio application has to provide its own mixing and splitting functions or depend on audio servers such as NAS and AF to provide that functionality via a proprietary API (Application Programming Interface). Further, an audio server retains exclusive use of the audio device, forcing audio applications to use a given audio server or not have access to the audio device at all. This is a poor programming practice since it cannot be assumed that a given audio server will be available on another machine. [0007]
  • It is more desirable to have the capability to process multiple simultaneous audio streams in kernel space rather than in user space since this allows multiple audio applications to access an audio device and allows backward compatibility with existing audio applications because, among other things, the current Application Program Interface (API) does not change. The ability to write to an existing API allows the splitting and mixing operations to be transparent regardless of the audio applications from which audio data is being received and transmitted for the mixing and splitting operations. [0008]
  • Therefore it is desirable to have a method and an apparatus for processing multiple simultaneous audio streams in kernel space. [0009]
  • BRIEF SUMMARY OF THE INVENTION
  • The present invention provides for processing of multiple simultaneous audio streams. Additionally, the present invention provides mixing and splitting capabilities in kernel space rather than in user space of a system's software environment. This allows for some backward compatibility with existing audio applications and allows for writing to an existing interface. The ability to write to an existing interface allows the splitting and mixing operations to be transparent regardless of the audio applications from which audio data is being received and transmitted for the splitting and mixing operations. [0010]
  • In one embodiment of the invention, a computer has a central processing unit (CPU) coupled to a storage device. The storage device has several audio applications contained in user space. An audio server (mixer and/or splitter) and an audio device driver are in kernel space. The present invention also has a data flow checker and adjuster for checking the flow of data into data queues and a setup application for connecting the audio server and the audio device driver. The data flow checker and adjuster adjusts the flow of data by sending a message upstream or downstream instructing the upstream or downstream processes/devices to send more data or stop sending data depending on how full the data queues are. [0011]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates an exemplary embodiment of a conventional mixer/splitter employed in a conventional computer. [0012]
  • FIG. 2 illustrates a computer system with an exemplary implementation of the present invention. [0013]
  • FIG. 3 is a flow diagram illustrating the general steps followed in the setup application of the present invention. [0014]
  • FIGS. 4a, 4b and 4c illustrate the mixing function of the audio device driver through the audio server of the present invention. [0015]
  • FIGS. 5a, 5b and 5c illustrate the splitting function of the audio server of the present invention. [0016]
  • FIG. 6 is a flow diagram illustrating how the present invention deals with scheduling and when the mixer reads additional data. [0017]
  • FIG. 7 is a flow diagram illustrating the general steps followed to turn on the present invention's optional secure mode to add audio applications. [0018]
  • FIG. 8 illustrates an exemplary embodiment of the present invention with telephony applications transmitting and reading data to and from an ISDN device driver. [0019]
  • FIG. 9 illustrates an exemplary embodiment of the present invention for a general desktop use. [0020]
  • DETAILED DESCRIPTION OF THE INVENTION
  • A method and an apparatus for allowing multiple audio-video applications and/or audio-video devices to access and utilize an audio server are disclosed. [0021]
  • FIG. 2 illustrates a computer 200 with an exemplary implementation of the present invention. Computer 200 has CPU 202 coupled to memory device 204 through bus 203. Memory device 204 has a plurality of audio applications 215-1, 215-2, . . . , 215-N contained in user space 208. Audio server (e.g. mixer and/or splitter) 212 and audio device driver 214 are contained in kernel space 210. Audio server 212 includes a data flow checker and adjuster 213, which monitors the flow of data to and from data queues 217; a setup application 219 for adding audio applications 215-1, 215-2, . . . , 215-N to the mixer/splitter operations of audio server 212; and a security application 221 for providing an optional secure mode preventing unauthorized audio applications from being added to the mixer/splitter operations of audio server 212. Data flow checker and adjuster 213 adjusts the flow of data by sending a message upstream or downstream instructing the upstream or downstream processes/devices to send more data or stop sending data, depending on the current capacity of the data queues 217. The use of data streams in the processing of audio data is well known in the art. [0022]
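  • The following C sketch illustrates one possible layout of the components just described (audio server 212, data queues 217, and the flow checker 213). All structure names, field names and sizes are assumptions added here for exposition; the text above does not define any of them.

        /*
         * Illustrative sketch only: hypothetical data structures for a
         * kernel-space audio server of the kind FIG. 2 describes.
         */
        #include <stddef.h>
        #include <stdint.h>

        #define MAX_CHANNELS   16     /* assumed limit on mixed applications   */
        #define QUEUE_CAPACITY 8192   /* bytes of audio data per channel queue */

        struct audio_queue {          /* one data queue 217 per application    */
            uint8_t buf[QUEUE_CAPACITY];
            size_t  head, tail;       /* ring-buffer indices                   */
            size_t  bytes_queued;     /* read by the flow checker/adjuster 213 */
        };

        struct audio_server {         /* audio server (mixer/splitter) 212     */
            struct audio_queue upper[MAX_CHANNELS]; /* data from applications  */
            struct audio_queue lower;               /* data from audio device  */
            int    nchannels;         /* ports added by setup application 219  */
            int    secure_mode;       /* toggled by security application 221   */
            size_t low_water;         /* thresholds used by flow checker 213   */
            size_t high_water;
        };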
  • A more detailed description of setup application 219 is presented in the text accompanying FIG. 3. Additionally, a more detailed description of data flow checker and adjuster 213 is presented in the text accompanying FIG. 6. Audio applications 215-1, 215-2, . . . , 215-N transmit audio data to audio server 212, which in turn mixes the audio data and transmits it through audio device driver 214 to audio device 216, coupled to memory 204 via bus 218. Audio device driver 214 is a software program which enables computer 200 to communicate with audio device 216, which typically is a microphone, a speaker or any other device capable of accepting/outputting audio data. Audio server 212 may also be utilized when audio device 216 transmits information to audio applications 215-1, 215-2, . . . , 215-N through audio device driver 214. Examples of audio applications 215-1, 215-2, . . . , 215-N include, but are not limited to, Sun Microsystems, Inc.'s AUDIOTOOL™, AUDIOPLAY™ and AUDIORECORD™. Examples of audio device driver 214 include, but are not limited to, Sun Microsystems, Inc.'s combination audio/dual basic rate interface (DBRI) device driver, which allows applications to access audio and integrated services digital network (ISDN) functionality on SBus-equipped machines. [0023]
  • CPU 202 includes circuits that control the interpretation and the execution of instructions which carry out the mixing and splitting operations performed by audio server 212 and the transmission of data between audio applications 215-1, 215-2, . . . , 215-N and audio device 216 through audio device driver 214. Although not shown, computer 200 may also include a number of peripheral devices, including a display device. [0024]
  • FIG. 3 is a flow diagram illustrating the general steps followed by the setup application of the present invention. In step 301, the setup application opens the audio device driver and acquires a file descriptor associated with the audio device driver. In step 302, the setup application opens audio server 212 and acquires another file descriptor to identify the audio server. At this point, setup application 219 has two open file descriptors and the audio server is not yet involved with the audio device driver. In step 303, the setup application executes an input/output control command to add audio applications to the operations performed by the audio server. [0025]
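  • A minimal user-space sketch of steps 301-303 follows. The device paths and the AUDIO_MIXER_ADD ioctl request code are hypothetical placeholders; the text above states only that a file descriptor is acquired for the audio device driver, another for the audio server, and that an input/output control command adds an application to the mix.

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/ioctl.h>
        #include <unistd.h>

        #define AUDIO_MIXER_ADD 0x4101   /* assumed ioctl code, not from the patent */

        int setup_audio_mix(void)
        {
            int drv_fd = open("/dev/audio", O_RDWR);        /* step 301: device driver */
            if (drv_fd < 0) { perror("open driver"); return -1; }

            int srv_fd = open("/dev/audioserver", O_RDWR);  /* step 302: audio server  */
            if (srv_fd < 0) { perror("open server"); close(drv_fd); return -1; }

            /* step 303: input/output control command adding this port to the mix */
            if (ioctl(srv_fd, AUDIO_MIXER_ADD, &drv_fd) < 0) {
                perror("ioctl add");
                close(srv_fd);
                close(drv_fd);
                return -1;
            }
            return srv_fd;   /* repeating the open and ioctl adds further ports */
        }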
  • Audio servers support multiple I/O ports. In initiating processing of a second application, the audio server is opened as was done in step 302. A separate, unique file descriptor for the second application is obtained by the setup application from an operating system which controls execution of the audio applications. The second audio server port which has been opened is then added to the mix of the audio server, providing for two ports. This process can be repeated to support N audio server ports. [0026]
  • Additionally, a security application provides for an optional secure mode when adding additional audio applications to an audio mix. More specifically, when the secure mode is turned on, the security application turns away unauthorized audio applications requesting to be added to the audio mix. A more detailed description of the secure mode is illustrated in FIG. 7 and the accompanying text. [0027]
  • FIGS. 4a-4c illustrate how packets of information are transmitted from the plurality of audio applications 215-1, 215-2, . . . , 215-N to audio server 212, where they are mixed and, in turn, transmitted to audio device driver 214. FIG. 4a illustrates the process at time 0. Audio data is typically transmitted in packets 500 from the plurality of audio applications 215-1, 215-2, . . . , 215-N to audio device driver 214. In the figures, audio packets are shown as boxes labeled P1, . . . , Pn. An arbitrary number of packets are transmitted down data streams on data queues 217. The packets may be of arbitrary size. [0028]
  • Audio server 212 schedules a timer which activates audio server 212 to begin processing data and to monitor data queues 217 to determine if there are any data packets in data queues 217. If there are data packets in data queues 217, then audio server 212 takes a certain portion of the data packets and sends that portion “downstream” to audio device driver 214 after the data has been mixed. [0029]
  • If audio server 212 is activated too often (e.g. to begin processing data), then the queues may not fill fast enough with data packets 500, resulting in inefficient use of central processing unit (CPU) resources. If audio server 212 is not activated often enough, the data packets pile up in the data queues and the packets are not transmitted “downstream” at a fast enough rate to produce smooth audio output, introducing latency and possible break-up in the transmitted audio. The present invention utilizes a unique method of determining when to read data off the queues, mix it and send it “downstream” to audio device driver 214. The present invention's method assures that data is transmitted in a continuous stream. This method is described further in FIG. 6. [0030]
  • At time 1 in FIG. 4b, audio server 212 strips the packets of information off data queues 217. The packets retrieved are mixed to produce mixed packet M1. More specifically, M1 represents the sound that would be obtained if the three packets of data P1, P2 and P3 were mixed together. At time 2 in FIG. 4c, audio server 212 transmits packet M1 down to audio device driver 214. At the same time, audio server 212 has already read the next set of packets P4, P5 and P6 off of data queues 217 (Q1, Q2 and Q3), and the process is repeated. [0031]
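  • A sketch of a single mixing round (producing M1 from P1, P2 and P3) is shown below. The text does not specify the mixing arithmetic; sample-wise summation with clipping on 16-bit linear PCM is one conventional choice and is only an assumption here.

        #include <stddef.h>
        #include <stdint.h>

        /* Mix nchannels packets of nsamples samples each into one output packet. */
        void mix_packets(const int16_t *const packets[], int nchannels,
                         size_t nsamples, int16_t *mixed)
        {
            for (size_t i = 0; i < nsamples; i++) {
                int32_t sum = 0;                       /* widen to avoid overflow */
                for (int c = 0; c < nchannels; c++)
                    sum += packets[c][i];
                if (sum > INT16_MAX) sum = INT16_MAX;  /* clip rather than wrap   */
                if (sum < INT16_MIN) sum = INT16_MIN;
                mixed[i] = (int16_t)sum;
            }
        }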
  • FIGS. 5a, 5b and 5c illustrate the splitting (e.g. cloning) function of audio server 212 of the present invention. In FIG. 5a, speech recognition process 502 and Dual Tone Multi Frequency (DTMF, e.g. touch tone) detection process application 504 read data transmitted from ISDN (Integrated Services Digital Network) device driver 506. Packets P1, P2 and P3 are transmitted upstream from ISDN device driver 506 at time 0. [0032]
  • As illustrated in FIG. 5b, audio server 212 reads packet P1 as it is sent upstream by ISDN device driver 506 and duplicates, or clones, the packet into packets S1 and S2. At time 2, as illustrated in FIG. 5c, packets S1 and S2 are transmitted “upstream” to speech recognition process application 502 and DTMF detection process application 504, respectively. Audio server 212 continues to read packets being sent upstream from ISDN device driver 506 and continues cloning the packets for delivery to the telephony applications. [0033]
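  • The splitting step can be sketched as below: one upstream packet is duplicated once per reading application (S1 and S2 in FIG. 5b). The buffer handling shown is a simplification, and the function name is illustrative; a STREAMS-style implementation would typically clone message blocks rather than copy payloads.

        #include <stdlib.h>
        #include <string.h>

        /* Return an array of nreaders independent copies of the incoming packet. */
        unsigned char **split_packet(const unsigned char *pkt, size_t len, int nreaders)
        {
            unsigned char **clones = malloc(nreaders * sizeof *clones);
            if (clones == NULL)
                return NULL;
            for (int r = 0; r < nreaders; r++) {
                clones[r] = malloc(len);
                if (clones[r] == NULL) {               /* undo partial allocation */
                    while (r-- > 0)
                        free(clones[r]);
                    free(clones);
                    return NULL;
                }
                memcpy(clones[r], pkt, len);           /* e.g. S1 and S2 in FIG. 5b */
            }
            return clones;                             /* caller frees each clone   */
        }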
  • As with the mixing function of audio server 212, data must not be read too quickly or too slowly. For example, if audio server 212 is not reading packets coming upstream from ISDN device driver 506 fast enough, then ISDN device driver 506 may shut itself down and stop I/O due to flow control. In a worst-case scenario, if ISDN device driver 506 does not shut itself down, then it will continue to output packets upstream. Consequently, telephony applications 502 and 504 will not be able to read all the packets in a timely fashion, causing additional latency, e.g. delay, between when the packets are sent by ISDN device driver 506 and when they are received by telephony applications 502 and 504. On the other hand, if audio server 212 attempts to read packets too quickly, then the packets will not arrive in time to be read, causing inefficient use of CPU resources. [0034]
  • For example, if telephony applications 502 and 504 stop accepting data, audio server 212 will have no indication that telephony applications 502 and 504 have stopped and will continue to send data upstream. Eventually, the upstream data queues 217 will fill up and audio server 212 recognizes that data queues 217 are full. At this point, the present invention causes a message to be sent downstream to the audio device driver or ISDN device driver 506 asking it to stop sending additional packets. In an alternate embodiment, any additional data transmitted from the audio or ISDN device driver 506 is discarded. [0035]
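  • The overflow handling just described can be sketched as a simple policy decision. The enum and function names are illustrative; the text describes only the two behaviours (asking the producer to stop, or discarding the excess data).

        #include <stdbool.h>
        #include <stddef.h>

        enum flow_action { FLOW_ACCEPT, FLOW_PAUSE_PRODUCER, FLOW_DISCARD };

        /* Decide what to do with len new bytes for a queue already holding
         * queued of capacity bytes. */
        enum flow_action on_upstream_data(size_t queued, size_t capacity,
                                          size_t len, bool discard_on_overflow)
        {
            if (queued + len <= capacity)
                return FLOW_ACCEPT;          /* room left: enqueue the data      */
            if (discard_on_overflow)
                return FLOW_DISCARD;         /* alternate embodiment: drop data  */
            return FLOW_PAUSE_PRODUCER;      /* send a "stop sending" message    */
        }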
  • The present invention also converts “A-law”, “μ-law” and “linear Pulse Code Modulation” (PCM) audio data as it is transmitted back and forth between the audio applications and the audio device driver. A-law, μ-law and linear PCM are digital representations of an analog audio signal, typically sampled at approximately 8 kHz. For every sample, the amplitude of the audio signal is assigned (quantized) a digital value. For these three encodings, the value used to represent the amplitude is a number between zero and +/−127. More detail may be found in the ITU (International Telecommunication Union) standard G.711, published in 1988. [0036]
  • The present invention, in order to minimize the time necessary to convert between the digitized value and its amplitude, uses a pre-computed table of amplitude values for each encoding which can be indexed with the digitized value. Since audio applications identify what encoding they will use when they invoke audio server 212, audio server 212 can transparently convert the A-law, μ-law and linear PCM data at the time the mix function is called within audio server 212. [0037]
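  • For the μ-law case, the pre-computed table can be sketched as follows. The expansion formula follows the published G.711 μ-law decoding; the table layout and the function names are assumptions, since the text describes the table only abstractly.

        #include <stdint.h>

        static int16_t ulaw_table[256];     /* indexed directly by the 8-bit code */

        static int16_t ulaw_expand(uint8_t code)
        {
            code = (uint8_t)~code;                   /* mu-law codes are stored inverted */
            int sign     = code & 0x80;
            int exponent = (code >> 4) & 0x07;
            int mantissa = code & 0x0F;
            int sample   = (((mantissa << 3) + 0x84) << exponent) - 0x84;
            return (int16_t)(sign ? -sample : sample);
        }

        void build_ulaw_table(void)                  /* run once at initialization */
        {
            for (int code = 0; code < 256; code++)
                ulaw_table[code] = ulaw_expand((uint8_t)code);
        }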
  • FIG. 6 is a flow diagram illustrating the general steps followed by an embodiment of the data flow checker and adjuster of the present invention in scheduling the data flow for the mixing operation of the audio server. Since the transmission of audio data between the applications in user space and the audio device driver in kernel space is asynchronous, a mechanism must be put into place which determines when audio server 212 is to check the lower queues for upstream data coming from the audio device and the upper queues for downstream data coming from the audio applications, and how often it must do so. As mentioned earlier, if audio server 212 checks the queues for data packets too frequently, then fewer than the maximum number of data packets will be processed by audio server 212 per pass, leading to inefficient use of CPU resources. If audio server 212 checks for data packets in the queues too infrequently, then data packets are not processed fast enough and the queues may fill up. Once the queues are full, some data packets may be discarded for lack of being read. In the alternative, the audio device may shut itself down if none of the data packets are being read. [0038]
  • In step 601, if it is time for the data flow checker to check the data flow between the audio applications and the audio devices, then the process proceeds to step 602. In step 602, if the size of the audio data in the output queue is less than the maximum size of data to be mixed in each round of mixing operations, then the process proceeds to step 603. In step 603, if the upper channels do not have enough data in the queue for the audio server operation, and, in step 604, if the size of the data in the audio device's output queue is greater than or equal to a predetermined minimum audio data size to be inserted in the audio device output queue (low water mark) 610, then in step 605 the audio server operation is performed and, in step 611, data is sent to the output queue. Otherwise, in step 606, if there is enough audio data in the upper channels to fill the output queue up to the predetermined minimum, then in step 607 audio data is acquired from the upper channels to fill the output queues to the minimum audio data size. If there is not enough audio data in the upper channels to fill the output queues to the predetermined minimum audio data size, then in step 608 the present invention fills up the queue with a silent audio signal and, in step 609, sends out a message to the appropriate applications to notify them of the underflow condition (allowing the process to start sending more data downstream). [0039]
  • Returning to step 603, if the size of the audio data in the output queue is less than the maximum and the channel has enough audio data in the queue for the audio server operation, then in step 610 audio server 212 inserts additional audio data in the output queue such that the output queue contains the maximum size of audio data to be mixed in each round of a mixing/splitting operation of audio server 212. [0040]
  • Returning to step 602, if the size of the data in the audio device's output queue is greater than or equal to the maximum size of audio data (in terms of microseconds) to be mixed in each round of mixing operations (high water mark), then the process proceeds to step 612. [0041]
  • In step 612, it is determined whether there are audio applications in the mixing operation. If there are none, then the process is completed. Otherwise, in step 613, the next mixer operation is scheduled and the process repeats itself. [0042]
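  • A simplified rendering of this decision logic appears below, with step numbers from the flow diagram as comments. Queue sizes are treated as byte counts, and the structure fields and helper name are assumptions; the text expresses the thresholds only abstractly (a maximum per mixing round and a low water mark).

        #include <stdbool.h>
        #include <stddef.h>

        struct mix_state {
            size_t out_queued;    /* data currently in the device output queue  */
            size_t upper_queued;  /* data available across the upper channels   */
            size_t low_water;     /* minimum to keep in the output queue        */
            size_t high_water;    /* maximum to be mixed in each round          */
        };

        /* One pass of the data flow checker; returns true if mixing occurred. */
        bool flow_check(struct mix_state *s)
        {
            if (s->out_queued >= s->high_water)       /* step 602: queue full,   */
                return false;                         /* skip; reschedule (613)  */

            if (s->upper_queued >= s->low_water) {    /* enough application data */
                size_t take = s->high_water - s->out_queued;
                if (take > s->upper_queued)
                    take = s->upper_queued;
                s->upper_queued -= take;              /* steps 605/607/610/611:  */
                s->out_queued   += take;              /* mix and send downstream */
            } else {
                s->out_queued = s->low_water;         /* step 608: pad w/silence */
                /* step 609: notify applications of the underflow condition     */
            }
            return true;
        }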
  • The asynchronicity of data flow is handled similarly for the splitting operation of the present invention. [0043]
  • FIG. 7 is a flow diagram illustrating the general steps followed to turn on the optional secure mode of the present invention and to add audio applications. In step 701, a security application, defined as an application written by the user which runs in user space and implements some form of security policy, opens the audio server and acquires a corresponding file descriptor. In step 702, the security application issues an input/output command to turn on the secure mode in audio server 212. [0044]
  • In step 703, the security application waits for other audio applications, defined as audio applications written by the user which run in user space and are capable of communicating with the security application, to request permission to be added to the mix, defined as the current mixer/splitter session allowing audio data sent to and from the audio device to be mixed and distributed to the audio applications. [0045]
  • In step 704, a requesting audio application makes a request to the security application to join the mix. If the request is not allowed, then the request is ignored and the security application returns to step 703. If the request is allowed, then the security application continues to step 705. [0046]
  • In step 705, the security application issues an input/output command via the file descriptor to allow the requesting audio application to be added to the mix. The security application then returns to step 703 to await future requests. [0047]
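  • A sketch of this secure-mode loop follows. The ioctl request codes, the pipe used to receive join requests and the is_authorized() policy check are all placeholders; the text leaves the security policy and the communication mechanism to the user-written security application.

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/ioctl.h>
        #include <unistd.h>

        #define AUDIO_SECURE_ON 0x4201   /* assumed ioctl codes, not from the patent */
        #define AUDIO_ALLOW_APP 0x4202

        extern int is_authorized(int app_id);    /* user-supplied security policy */

        void security_loop(int request_pipe)
        {
            int srv_fd = open("/dev/audioserver", O_RDWR);  /* step 701            */
            if (srv_fd < 0) { perror("open server"); return; }

            ioctl(srv_fd, AUDIO_SECURE_ON, 0);              /* step 702            */

            int app_id;
            while (read(request_pipe, &app_id, sizeof app_id) == sizeof app_id) {
                /* steps 703-704: wait for a join request and decide on it         */
                if (!is_authorized(app_id))
                    continue;                               /* ignore the request  */
                ioctl(srv_fd, AUDIO_ALLOW_APP, &app_id);    /* step 705: add to mix*/
            }
            close(srv_fd);
        }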
  • The present invention may be implemented for various applications. For example, in FIG. 8, multiple telephony applications, including telephone record process 800, telephone speech recognition process 502 and telephone DTMF (Dual Tone Multi Frequency, such as “touch tone”) detection process 504, transmit data to ISDN device driver 506 through audio server 212 and read data from ISDN device driver 506 through audio server 212. [0048]
  • Telephone record process application 800 may be utilized to record a phone call on the computer. Telephone speech recognition process 502 may be used to recognize vocal commands by a user of a computer and to respond in accordance with those vocal commands, for example, through an Interactive Voice Response (IVR) system. Telephone DTMF detection process application 504 may be utilized to scan for DTMF tones, for example by determining whether a “0” or a “1” has been pressed. One or more of these telephony applications may have access to the same telephony data through ISDN device driver 506 and audio server 212. [0049]
  • In another exemplary embodiment, the present invention may be utilized for general desktop use. For example, in FIG. 9, desktop 900 may include software applications including, but not limited to, the ShowMe™ TV software program 902 and Contool 904 by Sun Microsystems, Inc. of Mountain View, Calif. ShowMe™ TV 902 is a computer program which displays television (TV) programs on a desktop. ShowMe™ TV 902 receives broadcast audio and video data over a network, displays it on desktop 900, and sends audio data to audio device driver 214 through audio server 212. [0050]
  • Contool 904, from Sun Microsystems, Inc. of Mountain View, Calif., is a software application which is designed to monitor various system events on a given computer. For example, Contool 904 may alert a user upon the occurrence of a certain predetermined event on a computer. To alert the user of an event, Contool 904 may enable a flashing message on the display screen of the computer or may display some other type of prompt to the user. For example, if a user in a network is attempting to communicate with another user in the network, Contool 904 may enable a prompt to be displayed to the latter user through his/her computer display device stating that another user wants to access the user. Instead of a visual prompt, Contool 904 may also enable an audio prompt to alert the user that another user in the network would like to access the user. [0051]
  • Another example use of Contool 904 requiring the use of audio device driver 214 is when someone is trying to log onto another user's workstation, causing a security breach. Contool 904 may then alert the user of the security breach through, for example, some type of audio prompt. [0052]
  • In prior art devices, exemplary general desktop applications such as ShowMe™ TV 902 may run a continuous loop of an audio stream without the desktop being able to stop the audio stream. In such a case, if there is some type of security breach or if a user desires to contact the user of desktop 900 through Contool 904, that information would be put on hold until ShowMe™ TV 902 has terminated its program. [0053]
  • The present invention allows processing of multiple audio data streams. Hence, with the present invention, while ShowMe™ TV 902 is running an application playing audio data and Contool 904 is producing occasional audio prompts to the user at the same time, audio server 212 accepts audio data from both ShowMe™ TV 902 and Contool 904 and transmits it to audio device driver 214. [0054]
  • What has been described is a method and an apparatus for securely processing multiple simultaneous audio streams in kernel space. [0055]
  • While certain exemplary embodiments have been described in detail and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not restrictive on the broad invention, and that this invention is not to be limited to the specific arrangements and constructions shown and described, since various other modifications may occur to those with ordinary skill in the art. [0056]

Claims (16)

What is claimed:
1. A method for processing a plurality of data streams being transmitted between a memory device and at least one audio device coupled to a computer system, the method comprising the steps of:
processing the plurality of data streams being transmitted between at least one application running in user space of the memory device and the at least one audio device, said processing being performed by an audio server in kernel space of the memory device; and
adjusting flow of data of the plurality of data streams between said at least one application running in user space and said at least one audio device to prevent data overflow and underflow conditions.
2. The method of claim 1 wherein said processing step further comprises the step of interfacing said at least one application with said at least one audio device through a device driver.
3. The method of claim 2 further comprising the step of acquiring at least one file descriptor for said device driver.
4. The method of claim 1 further comprising the step of processing audio data from said at least one application by said audio server.
5. The method of claim 1 wherein said step of adjusting further comprises the step of determining whether to check data flow between said at least one application and said at least one audio device.
6. The method of claim 5 further comprising the step of determining size of audio data in a queue between said at least one application and said at least one audio device.
7. The method of claim 6 further comprising the step of performing an operation of said audio server if said predetermined size of audio data in said queue is greater than or equal to a predetermined minimum and less than a predetermined maximum.
8. The method of claim 5 further comprising the step of skipping an operation of said audio server if said size of said audio data is greater than or equal to a predetermined maximum.
9. The method of claim 6 further comprising the step of adding additional audio data if said predetermined size of said audio data is less than a predetermined minimum.
10. The method of claim 1 further comprising the step of preventing an unauthorized application from being processed by said audio server.
11. An apparatus for processing multiple data streams between at least one application and at least one audio device, including code configured for storage on a computer-readable medium and executable by a computer, the code including a plurality of modules each configured to carry out at least one function to be executed by the computer, said apparatus comprising:
an audio server module configured to process the multiple data streams in kernel space, said audio server module utilizing a data flow checker and adjuster module configured to adjust data flow between the at least one application and the at least one audio device.
12. The apparatus of claim 11 further comprising a setup module configured to allow data from more than one of said at least one application to be approximately simultaneously processed by said audio server.
13. The apparatus of claim 11 further comprising a security module configured to prevent data from unauthorized applications from being processed by said audio server.
14. A system for processing multiple data streams between at least one application and at least one audio device, including code configured for storage on a computer-readable medium and executable by a computer, the code including a plurality of modules each configured to carry out at least one function to be executed by the computer, said system comprising:
an audio server module configured to process the multiple data streams in kernel space, said audio server module utilizing a data flow checker and adjuster module configured to adjust data flow between the at least one application and the at least one audio device.
15. The system of claim 14 further comprising a setup module configured to allow data from more than one of said at least one application to be approximately simultaneously processed by said audio server.
16. The system of claim 14 further comprising a security module configured to prevent data from unauthorized applications from being processed by said audio server.
US08/674,353 1996-07-01 1996-07-01 Mixing and splitting multiple independent audio data streams in kernel space Expired - Lifetime US6405255B1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US08/674,353 US6405255B1 (en) 1996-07-01 1996-07-01 Mixing and splitting multiple independent audio data streams in kernel space
JP9186074A JPH113302A (en) 1996-07-01 1997-06-27 Mixing and division of plural independent audio data streams in kernel space
EP97304702A EP0817045A3 (en) 1996-07-01 1997-06-30 Mixing and splitting multiple independent audio data streams in kernel space
DE0817045T DE817045T1 (en) 1996-07-01 1997-06-30 Mixing and separation of several independent audio data streams in one operating system core

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US08/674,353 US6405255B1 (en) 1996-07-01 1996-07-01 Mixing and splitting multiple independent audio data streams in kernel space

Publications (2)

Publication Number Publication Date
US20020032753A1 true US20020032753A1 (en) 2002-03-14
US6405255B1 US6405255B1 (en) 2002-06-11

Family

ID=24706259

Family Applications (1)

Application Number Title Priority Date Filing Date
US08/674,353 Expired - Lifetime US6405255B1 (en) 1996-07-01 1996-07-01 Mixing and splitting multiple independent audio data streams in kernel space

Country Status (4)

Country Link
US (1) US6405255B1 (en)
EP (1) EP0817045A3 (en)
JP (1) JPH113302A (en)
DE (1) DE817045T1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030236864A1 (en) * 2002-06-24 2003-12-25 Culture.Com Technology (Macau) Ltd. File downloading system and method
US20070244586A1 (en) * 2006-04-13 2007-10-18 International Business Machines Corporation Selective muting of applications
FR3034220A1 (en) * 2015-03-27 2016-09-30 Damien Plisson IMPROVED MULTIMEDIA FLOW TRANSMISSION
US20170286048A1 (en) * 2016-03-29 2017-10-05 Shoumeng Yan Technologies for framework-level audio device virtualization
CN112423076A (en) * 2020-11-18 2021-02-26 努比亚技术有限公司 Audio screen projection synchronous control method and device and computer readable storage medium

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6311161B1 (en) * 1999-03-22 2001-10-30 International Business Machines Corporation System and method for merging multiple audio streams
AU2628401A (en) * 2000-01-05 2001-07-16 True Dimensional Sound, Inc. Audio device driver
EP1244960A2 (en) * 2000-01-05 2002-10-02 Siemens Aktiengesellschaft Flow control for i/o reception overload
US6714826B1 (en) 2000-03-13 2004-03-30 International Business Machines Corporation Facility for simultaneously outputting both a mixed digital audio signal and an unmixed digital audio signal multiple concurrently received streams of digital audio data
US6961631B1 (en) * 2000-04-12 2005-11-01 Microsoft Corporation Extensible kernel-mode audio processing architecture
US6646195B1 (en) * 2000-04-12 2003-11-11 Microsoft Corporation Kernel-mode audio processing modules
US7584291B2 (en) * 2000-05-12 2009-09-01 Mosi Media, Llc System and method for limiting dead air time in internet streaming media delivery
US20020120747A1 (en) * 2001-02-23 2002-08-29 Frerichs David J. System and method for maintaining constant buffering time in internet streaming media delivery
US7631088B2 (en) * 2001-02-27 2009-12-08 Jonathan Logan System and method for minimizing perceived dead air time in internet streaming media delivery
JP4254071B2 (en) * 2001-03-22 2009-04-15 コニカミノルタビジネステクノロジーズ株式会社 Printer, server, monitoring device, printing system, and monitoring program
US20050027838A1 (en) * 2003-07-29 2005-02-03 Magid Robert Mark System and method for intercepting user exit interfaces in IMS programs
JP2006033356A (en) * 2004-07-15 2006-02-02 Renesas Technology Corp Audio data processing apparatus
US7711952B2 (en) * 2004-09-13 2010-05-04 Coretrace Corporation Method and system for license management
US20060285701A1 (en) * 2005-06-16 2006-12-21 Chumbley Robert B System and method for OS control of application access to audio hardware
US7813823B2 (en) * 2006-01-17 2010-10-12 Sigmatel, Inc. Computer audio system and method
US8272048B2 (en) * 2006-08-04 2012-09-18 Apple Inc. Restriction of program process capabilities
US8561199B2 (en) * 2007-01-11 2013-10-15 International Business Machines Corporation Method and system for secure lightweight stream processing
US8243119B2 (en) * 2007-09-30 2012-08-14 Optical Fusion Inc. Recording and videomail for video conferencing call systems
US8954178B2 (en) * 2007-09-30 2015-02-10 Optical Fusion, Inc. Synchronization and mixing of audio and video streams in network-based video conferencing call systems
US8732236B2 (en) * 2008-12-05 2014-05-20 Social Communications Company Managing network communications between network nodes and stream transport protocol
EP2311036A1 (en) 2008-07-09 2011-04-20 Nxp B.V. Method and device for digitally processing an audio signal and computer program product
CN102362269B (en) * 2008-12-05 2016-08-17 社会传播公司 real-time kernel
US20110119102A1 (en) * 2009-11-17 2011-05-19 Sunstein Kann Murphy & Timbers LLP Paperless Docketing Workflow System
WO2012118917A2 (en) 2011-03-03 2012-09-07 Social Communications Company Realtime communications and network browsing client
US8781613B1 (en) * 2013-06-26 2014-07-15 Applifier Oy Audio apparatus for portable devices
US9152374B2 (en) * 2013-06-17 2015-10-06 Nvidia Corporation Control and capture of audio data intended for an audio endpoint device of an application executing on a data processing device

Family Cites Families (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2084575C (en) * 1991-12-31 1996-12-03 Chris A. Dinallo Personal computer with generalized data streaming apparatus for multimedia devices
US5485409A (en) * 1992-04-30 1996-01-16 International Business Machines Corporation Automated penetration analysis system and method
US5519833A (en) * 1992-08-13 1996-05-21 Computervision Corporation Distributed data processing system providing a distributed stream software environment to enable application on a first system to use driver on a second system
JP3255308B2 (en) * 1992-12-18 2002-02-12 ソニー株式会社 Data playback device
EP0640909B1 (en) * 1993-07-30 2001-05-16 Texas Instruments Incorporated Modular audio data processing architecture
US5847765A (en) * 1993-11-12 1998-12-08 Nec Corporation Moving picture decoding control system
JPH07220452A (en) * 1993-12-17 1995-08-18 Imix Inc Method and device for video editing and real-time processing
JPH07219970A (en) * 1993-12-20 1995-08-18 Xerox Corp Method and apparatus for reproduction in acceleration format
US5584023A (en) * 1993-12-27 1996-12-10 Hsu; Mike S. C. Computer system including a transparent and secure file transform mechanism
KR960008470B1 (en) * 1994-01-18 1996-06-26 Daewoo Electronics Co Ltd Apparatus for transferring bit stream data adaptively in the moving picture
JP3488500B2 (en) * 1994-02-07 2004-01-19 富士通株式会社 Distributed file system
US6181712B1 (en) * 1994-02-25 2001-01-30 U.S. Philips Corporation Method and device for transmitting data packets
US5583652A (en) * 1994-04-28 1996-12-10 International Business Machines Corporation Synchronized, variable-speed playback of digitally recorded audio and video
US5481719A (en) * 1994-09-09 1996-01-02 International Business Machines Corporation Exception handling method and apparatus for a microkernel data processing system
US5815634A (en) * 1994-09-30 1998-09-29 Cirrus Logic, Inc. Stream synchronization method and apparatus for MPEG playback system
US5734731A (en) * 1994-11-29 1998-03-31 Marx; Elliot S. Real time audio mixer
US5768126A (en) * 1995-05-19 1998-06-16 Xerox Corporation Kernel-based digital audio mixer
US5703794A (en) * 1995-06-20 1997-12-30 Microsoft Corporation Method and system for mixing audio streams in a computing system
US5920572A (en) * 1995-06-30 1999-07-06 Divicom Inc. Transport stream decoder/demultiplexer for hierarchically organized audio-video streams
US5899987A (en) * 1995-10-03 1999-05-04 Memco Software Ltd. Apparatus for and method of providing user exits on an operating system platform
US6098112A (en) * 1995-10-19 2000-08-01 Hewlett-Packard Company Streams function registering
US5815707A (en) * 1995-10-19 1998-09-29 Hewlett-Packard Company Dynamic function replacement for streams framework
US6070198A (en) * 1995-10-19 2000-05-30 Hewlett-Packard Company Encryption with a streams-based protocol stack
US6047323A (en) * 1995-10-19 2000-04-04 Hewlett-Packard Company Creation and migration of distributed streams in clusters of networked computers
US6122668A (en) * 1995-11-02 2000-09-19 Starlight Networks Synchronization of audio and video signals in a live multicast in a LAN
US5726989A (en) * 1995-11-06 1998-03-10 Stellar One Corporation Method for ensuring synchronization of MPEG-1 data carried in an MPEG-2 transport stream
US5956088A (en) * 1995-11-21 1999-09-21 Imedia Corporation Method and apparatus for modifying encoded digital video for improved channel utilization
US5892506A (en) * 1996-03-18 1999-04-06 Discreet Logic, Inc. Multitrack architecture for computer-based editing of multimedia sequences
US5894557A (en) * 1996-03-29 1999-04-13 International Business Machines Corporation Flexible point-to-point protocol framework
US5801685A (en) * 1996-04-08 1998-09-01 Tektronix, Inc. Automatic editing of recorded video elements sychronized with a script text read or displayed
US5768527A (en) * 1996-04-23 1998-06-16 Motorola, Inc. Device, system and method of real-time multimedia streaming
US6137834A (en) * 1996-05-29 2000-10-24 Sarnoff Corporation Method and apparatus for splicing compressed information streams
US5946487A (en) * 1996-06-10 1999-08-31 Lsi Logic Corporation Object-oriented multi-media architecture
US5918228A (en) * 1997-01-28 1999-06-29 International Business Machines Corporation Method and apparatus for enabling a web server to impersonate a user of a distributed file system to obtain secure access to supported web documents

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030236864A1 (en) * 2002-06-24 2003-12-25 Culture.Com Technology (Macau) Ltd. File downloading system and method
US20070244586A1 (en) * 2006-04-13 2007-10-18 International Business Machines Corporation Selective muting of applications
US7706903B2 (en) * 2006-04-13 2010-04-27 International Business Machines Corporation Selective muting of applications
FR3034220A1 (en) * 2015-03-27 2016-09-30 Damien Plisson IMPROVED MULTIMEDIA FLOW TRANSMISSION
WO2016156702A1 (en) * 2015-03-27 2016-10-06 Damien Plisson Improvement in sending of multimedia streams
US20170286048A1 (en) * 2016-03-29 2017-10-05 Shoumeng Yan Technologies for framework-level audio device virtualization
US10776072B2 (en) * 2016-03-29 2020-09-15 Intel Corporation Technologies for framework-level audio device virtualization
CN112423076A (en) * 2020-11-18 2021-02-26 努比亚技术有限公司 Audio screen projection synchronous control method and device and computer readable storage medium

Also Published As

Publication number Publication date
US6405255B1 (en) 2002-06-11
EP0817045A2 (en) 1998-01-07
DE817045T1 (en) 1998-07-16
JPH113302A (en) 1999-01-06
EP0817045A3 (en) 2002-11-06

Similar Documents

Publication Publication Date Title
US6405255B1 (en) Mixing and splitting multiple independent audio data streams in kernel space
US5913062A (en) Conference system having an audio manager using local and remote audio stream state machines for providing audio control functions during a conference session
US8543704B2 (en) Method and apparatus for multimodal voice and web services
US5652866A (en) Collaborative working method and system for a telephone to interface with a collaborative working application
US7853647B2 (en) Network agnostic media server control enabler
EP0620935B1 (en) Call management in a collaborative working network
US6209021B1 (en) System for computer supported collaboration
US7257203B2 (en) Unified message system for accessing voice mail via email
US8149261B2 (en) Integration of audio conference bridge with video multipoint control unit
US20050246468A1 (en) Pluggable terminal architecture for TAPI
US6084911A (en) Transmission of coded and compressed voice and image data in fixed bit length data packets
Levergood et al. AudioFile: A network-transparent system for distributed audio applications
MXPA97000681A (en) System and method for telecommunication
US10432543B2 (en) Dual jitter buffers
WO2010096815A1 (en) Video voicemail and menu system
US5740384A (en) Interactive multimedia system using active backplane having programmable interface to reconfigure the media stream produced by each component
JPH10509564A (en) Telecommunications system and method
Arons Tools for building asynchronous servers to support speech and audio applications
WO1998009213A1 (en) Virtualized multimedia connection system
US7346513B2 (en) Audio signal saving operation controlling method, program thereof, record medium thereof, audio signal reproducing operation controlling method, program thereof, record medium thereof, audio signal inputting operation controlling method, program thereof, and record medium thereof
US20230421621A1 (en) Mixing and Transmitting Multiplex Audiovisual Information
US20100217873A1 (en) Method and system for sip access to media and conferences
Baurens Groupware
Daswani et al. The Ethernet-to-Phone Telephony System
Bennett et al. Planning Considerations for the Deployment of LAN-based Videoconferencing and Related Applications

Legal Events

Date Code Title Description
AS Assignment

Owner name: SUN MICROSYSTEMS, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:STOLTZ, BENJAMIN H.;BUNDSCHUH, MICHAEL J.;YU, YAN J.;REEL/FRAME:008079/0625

Effective date: 19960626

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12

AS Assignment

Owner name: ORACLE AMERICA, INC., CALIFORNIA

Free format text: MERGER AND CHANGE OF NAME;ASSIGNORS:ORACLE USA, INC.;SUN MICROSYSTEMS, INC.;ORACLE AMERICA, INC.;REEL/FRAME:037278/0625

Effective date: 20100212