
WDM vs ASIO


sbgk


The most common WDM KS implementation is WaveCyclic, which involves a call from user mode to kernel mode, with associated memory checking, for each buffer submitted; this increases latency, if not reducing sound quality. MS provides WaveRT and hardware offloading as alternatives, but these are not widely available.

 

MS themselves recommend ASIO for low latency. The question I have is: does ASIO also cross from user mode to kernel mode, or is it all in user mode? How do the interrupts reach the ASIO driver to tell it to fill a buffer? Are these handled in hardware, outside the kernel?

There is no harm in doubt and skepticism, for it is through these that new discoveries are made. Richard P Feynman

 

http://mqnplayer.blogspot.co.uk/


 

It is all up to the ASIO driver implementer how best to implement the driver. ASIO only defines the API between driver and application; it doesn't go into implementation details on either side.

 

Typically you have a minimal kernel-space driver that handles interrupts and DMA transfers and wakes up user space when a buffer switch is supposed to happen (ASIO is a double-buffer design). The interface between kernel space and user space is entirely the private business of the particular ASIO driver implementation, which is one of the strengths of the design.
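The double-buffer ("ping-pong") scheme can be sketched as a toy simulation: while the hardware plays one half-buffer, the application fills the other, and each interrupt flips the two. Names like `buffer_switch` are illustrative only, not the actual ASIO SDK API.

```python
# Toy model of a double-buffer (ping-pong) audio scheme.
# Illustrative names; this is not the real ASIO SDK interface.

PERIOD = 4  # frames per half-buffer; real drivers use e.g. 64..2048

halves = [[0.0] * PERIOD, [0.0] * PERIOD]

def buffer_switch(fill_index, frame_base):
    """App callback: fill the half the hardware is NOT playing."""
    for i in range(PERIOD):
        halves[fill_index][i] = float(frame_base + i)  # dummy samples

def run(periods):
    """Simulate `periods` hardware interrupts; return frames queued."""
    buffer_switch(0, 0)           # pre-fill both halves before starting
    buffer_switch(1, PERIOD)
    playing, next_base = 0, 2 * PERIOD
    for _ in range(periods):      # each interrupt frees one half
        freed, playing = playing, playing ^ 1
        buffer_switch(freed, next_base)   # app refills the freed half
        next_base += PERIOD
    return next_base

print(run(4))  # 24 frames queued after 4 interrupts
```

The point of the design is that the application only ever touches the idle half, so no locking is needed between the hardware and the app.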

 

If you would like to see source code for similar implementations, take a look at the Linux ALSA drivers and alsa-lib to see how hardware (hw) devices are handled. When it comes to buffer handling, ALSA still gets closest to the bare hardware.

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers


 

Thanks for the reply. So ASIO still has user/kernel interaction, but the only kernel code used is for handling interrupts and DMA.

 

I think removing the user/kernel interaction would improve SQ. It seems the ideal would be for the driver to be able to access the data in RAM directly (preloaded from the user app) without user/kernel interaction, and the only way to do that is by modifying ALSA, since the Windows kernel is not open source.



 

That's how it works when you deal with PCI/PCIe or Thunderbolt (and usually also FireWire) sound devices on Linux with ALSA. The hardware DMA buffer is mapped directly into user space, so no extra memory copies are needed. Depending on the hardware, it is also possible with ASIO and WASAPI on Windows. Interrupts are translated into user-space wakeups, which is a very lightweight operation. The kernel-space driver still needs to take care of physical-to-virtual address mappings and of programming the DMA transfers. The frequency of interrupts is, of course, inversely proportional to how big a DMA buffer is possible.
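The relationship between buffer (period) size and interrupt load is simple arithmetic: one interrupt per completed period, so doubling the period halves the interrupt rate.

```python
def interrupt_rate_hz(sample_rate, period_frames):
    """Interrupts per second: one IRQ each time `period_frames` complete."""
    return sample_rate / period_frames

# At 44.1 kHz: a 64-frame period means ~689 IRQs/s,
# while a 4096-frame period drops that to ~11 IRQs/s.
for period in (64, 512, 4096):
    print(period, round(interrupt_rate_hz(44100, period), 1))
```

This is the trade-off behind the thread: small periods give low latency but many kernel/user transitions; large periods give few wakeups but higher latency.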

 

With USB devices you always have plenty of kernel-space operations performed by two drivers, the audio device driver and the USB controller driver, because the audio data needs to be packaged into USB transfer packets. USB controllers are too dumb to perform this in hardware, unlike (IIRC) FireWire controllers.
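As a rough illustration of why that packaging work exists: full-speed USB isochronous audio carries one packet per 1 ms frame, so a 44.1 kHz stream has no integer number of samples per frame and the driver has to alternate 44- and 45-sample packets. A sketch of that accumulator logic (simplified; real USB Audio Class drivers also deal with feedback endpoints and high-speed microframes):

```python
def packet_sizes(sample_rate, frames_per_second=1000):
    """Samples carried by each of one second's 1 ms USB frames.
    A fractional rate is handled by occasionally sending one extra
    sample, which is why 44.1 kHz alternates 44- and 45-sample packets."""
    sizes, acc = [], 0
    for _ in range(frames_per_second):
        acc += sample_rate
        n, acc = divmod(acc, frames_per_second)
        sizes.append(n)
    return sizes

sizes = packet_sizes(44100)
print(min(sizes), max(sizes), sum(sizes))  # 44 45 44100
```

A rate like 48 kHz divides evenly (48 samples per frame), which is one reason it is the "native" rate for much USB gear.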

 

My recommendation: forget Windows and use Linux instead ;)



 

I'm not sure WASAPI would map the buffer like that; it's a layer on top of WDM KS, which still needs IOCTLs to tell the kernel where the data is.

 

ALSA would still need to tell the user app when to fill the buffer, or is this done on a timer? I think event-driven buffer filling has always sounded best on Windows.

 

What I would try to do with Linux would be for it to just read the buffers from RAM without any user-app interaction, i.e. the user app tells it the start and end addresses of the data in RAM, and then kernel code just reads the data from start to end, so there is no requirement for the user app to time or react to a buffer-fill event. Not good for functionality, but it would be interesting to hear whether it sounded any better.



 

Yes, the wakeup that tells the application to fill the buffer is done from the audio driver's interrupt handler.

 


 

Having the kernel read your own buffer from a start address to an end address is the wrong way around, because in most cases you cannot go from virtual to physical addresses for DMA buffers, so it would involve an extra copy from your buffer to the DMA buffer. Technically nothing prevents you from implementing the entire player in kernel space if you like, but whether that is any better than a user-space implementation is questionable. You still need to queue work from the interrupt handler, because otherwise you end up with potential deadlocks and other side effects (the amount of work you can do in an interrupt handler is very limited).

 

So the way it works is that the kernel driver allocates a buffer that can be used for DMA, which is then mapped into user space. The audio hardware reads this buffer directly. Whenever the audio hardware has completed N samples, it generates an interrupt, which is passed on to user space to tell the application to put more data into the buffer. ASIO and WASAPI can do this on Windows too, but since they have a fixed memory layout, it depends on whether the hardware's memory layout is the same. This is not a problem with CoreAudio on Mac or ALSA on Linux, because they support different memory layouts and it is the application's job to adapt to whatever the hardware requires. Typically there are hardware limits on the size of the DMA buffer, such as 64k samples when the sample counter is 16-bit. Another reason for limits is the availability of physically contiguous memory, because audio devices don't commonly support scatter-gather DMA.
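The bookkeeping for such a mapped ring buffer can be modelled in a few lines, in the spirit of ALSA's hw_ptr/appl_ptr pair: the device consumes at one monotonic pointer, the application writes at another, and each interrupt makes more of the ring writable. This is a toy model, not the ALSA API; the sizes are illustrative.

```python
# Toy mmap-style ring buffer: hardware reads at hw_ptr, the app
# writes at appl_ptr. Both pointers are monotonic frame counts.

RING = 16  # frames; real hardware often caps this (e.g. 16-bit counter)

class Ring:
    def __init__(self):
        self.buf = [0.0] * RING
        self.hw_ptr = 0    # frames consumed by the device
        self.appl_ptr = 0  # frames written by the application

    def writable(self):
        """Free space: ring size minus frames queued but not yet played."""
        return RING - (self.appl_ptr - self.hw_ptr)

    def write(self, frames):
        """App fills free space after a wakeup; returns frames written."""
        n = min(frames, self.writable())
        for i in range(n):
            self.buf[(self.appl_ptr + i) % RING] = 1.0  # dummy sample
        self.appl_ptr += n
        return n

    def irq(self, frames):
        """Device completed `frames`; the interrupt wakes the app."""
        self.hw_ptr += min(frames, self.appl_ptr - self.hw_ptr)

r = Ring()
r.write(16)          # fill the whole ring
r.irq(4)             # device played 4 frames
print(r.writable())  # 4 frames are writable again
```

Because both sides only advance their own pointer, no copy and no lock is needed between the hardware and the application, which is exactly the "zero extra copies" property described above.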



 

OK, thanks for the interesting insights. WaveRT uses scatter-gather, but not many device drivers support it.

