GPU Computing again....

packoman

Hi all,
I've been following the news on GPU computing around OpenCL, nVidia's CUDA and AMD's stream computing for a while now, and it sounds very exciting for the realm of audio processing. There have been quite a number of posts here along the lines of "hey, we need GPU computing", but I was wondering where exactly the problems are, and what the possible solutions and opportunities for using GPU computing would be.
The advantage of GPUs, as has been stated everywhere, is their enormous processing power for data-parallel tasks. That power should be very applicable to digital audio processing, if I am not mistaken, which made me think about getting into a little coding in this respect (once I finish my thesis)...
Now Paul wrote the following in a post on GPU-Computing:
"GPGPU's are very powerful but also very latency-inducing. They are not designed for use in realtime, low latency contexts. If you were doing "offline rendering" they would offer huge amounts of power, but their current design adds many milliseconds to delays to signal processing."
I was wondering if someone with the knowledge could maybe elaborate a little further on that.
Ardour does have complete latency compensation (again, if I am not mistaken), meaning that at least some of the audio effects could well be processed online on a GPU, as long as that compensation works correctly. Why would this not be viable?
I am not aware of how Ardour (or other programs, for that matter) handles effect processing, and I am also not very familiar with LV2 or LADSPA (although I did read up on them a little).
Judging from the complexities being discussed, I suspect that the effect plugins do not open their own threads on the CPU, or do they? If they do, wouldn't it be possible to code them in OpenCL and use latency compensation?
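To make the idea concrete, here is a minimal, hypothetical sketch of what such a data-parallel operation (just a gain) looks like in OpenCL, written as a stand-alone C program rather than as an Ardour or LV2 plugin; the copies of the buffer to and from the card are exactly where the extra latency Paul mentions would come from:

/* Hypothetical sketch: applying a gain to one block of samples with OpenCL.
 * Not an Ardour/LV2 plugin; error checking is largely omitted for brevity.
 * Typical build on Linux: gcc gain_cl.c -lOpenCL
 */
#include <CL/cl.h>
#include <stdio.h>

static const char *kernel_src =
    "__kernel void gain(__global float *buf, const float g, const uint n) {\n"
    "    size_t i = get_global_id(0);\n"
    "    if (i < n) buf[i] *= g;\n"
    "}\n";

int main(void)
{
    enum { N = 4096 };               /* one block of samples */
    float audio[N];
    for (unsigned i = 0; i < N; ++i) /* fill with a dummy signal */
        audio[i] = (float)i / N;

    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "gain", NULL);

    /* Copy the block to the GPU, run the kernel, copy it back.
     * These transfers (plus the kernel launch) are where the added latency comes from. */
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(audio), audio, NULL);
    float g = 0.5f;
    cl_uint n = N;
    clSetKernelArg(k, 0, sizeof(cl_mem), &buf);
    clSetKernelArg(k, 1, sizeof(float), &g);
    clSetKernelArg(k, 2, sizeof(cl_uint), &n);

    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(audio), audio, 0, NULL, NULL);

    printf("last sample after gain: %f\n", audio[N - 1]);

    clReleaseMemObject(buf);
    clReleaseKernel(k);
    clReleaseProgram(prog);
    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
    return 0;
}

A real effect would of course do something more interesting than a gain, but the host-side setup and the transfers stay the same for every processed block.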
Just to clarify: this isn't meant as a "please do this" post; rather, I am wondering whether there is any sense to my idea of trying this as a personal project...
Hope this all doesn't sound too ridiculous, given that my knowledge of these things at the moment is still very rudimentary...
Thanks in advance for a reply,
Michael

packoman

Oh yeah.
And maybe someone could point me in the right direction to read up a little on these things...

deva

http://www.gpgpu.org/index.php?s=audio would be a nice place to start I think.
As to the LADSPA and LV2 specs, they can be found at http://www.ladspa.org/ and http://lv2plug.in/ respectively.
They are both very simple to use and have pretty good documentation (use the source, Luke ;-) ).

packoman

Hi.
Thanks for the reply. I checked out http://www.gpgpu.org/index.php?s=audio but unfortunately most of the links there seem to be dead. I also briefly looked into some LV2 sample code (again). The thing I don't understand (and that was my main question before) is how the plugins work and interact with the host software. Can an LV2 plug-in open its own thread (i.e. does it have its own process), or does it somehow run "through" the host? (I remember Paul saying, for example, that the Ardour audio processing section is not yet multi-threading capable. Does that only relate to the actual audio routing, or also to the effect plugins?)
I stumbled across this very interesting link:
http://www.kvraudio.com/forum/printview.php?t=222978&page=1
So at least with VST it seems to be possible to do GPU audio processing using CUDA. I haven't tried out the plug-in yet, but I will once I get around to it.
So if someone with insight into the deeper workings of LV2 (or the compilation process, for that matter) could comment on this, I'd be grateful.

packoman

Could anyone give a short reply on this?
I'd be very thankful,
Michael

linuxdsp

For what it's worth, this is my understanding of LV2:

An LV2 plugin is a shared library. The shared library contains a few functions to instantiate the plugin (set up various data structures etc.) and a 'run' function that the host calls every time it wants to process a block of audio samples. The host calls the function with pointers to the buffers containing the audio and also specifies the number of samples to process; the function does its thing with the samples and returns. Essentially the shared library just provides a few functions that the host will call as and when it needs them. When you load a plugin into the host, the host loads the shared library into memory and calls the various functions to instantiate the plugin. (A minimal skeleton is sketched at the end of this post.)

There is nothing to prevent you from starting up a new thread from within the plugin when it is instantiated - or even fork/exec-ing another process - but there are various rules governing what should / can execute in which thread. Have a look at the lv2.h header, which you need in order to compile an LV2 plugin - it pretty much encapsulates the API.

I'm not entirely convinced yet about LV2's ability to handle GUI extensions etc. across all hosts - but this seems to be a topic of some debate. There is a FAQ on my site that covers some of the issues I've encountered while trying to develop LV2 plugins:

http://www.linuxdsp.co.uk

You can use the contact info on my site and I can give a more detailed description; the LV2 devs may be able to shed some light on things - it looks to me as though LV2 is still evolving...
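
Since a skeleton says more than a paragraph, here is roughly what such a shared library looks like in C. This is a hypothetical, stripped-down gain plugin just to show the shape of the API described above - the URI and port layout are made up, and a real plugin also needs the .ttl metadata the spec requires:

/* Minimal, hypothetical LV2 gain plugin skeleton. See lv2.h and
 * http://lv2plug.in/ for the real details. */
#include <stdlib.h>
#include <stdint.h>
#include <lv2.h>

#define EXAMPLE_URI "http://example.org/plugins/simple-gain" /* made up */

enum { PORT_INPUT = 0, PORT_OUTPUT = 1, PORT_GAIN = 2 };

typedef struct {
    const float *input;
    float       *output;
    const float *gain;
} SimpleGain;

/* Called once when the host loads the plugin. A plugin could also start
 * a worker thread or set up other resources here, but whatever run() does
 * must stay realtime-safe. */
static LV2_Handle
instantiate(const LV2_Descriptor *descriptor, double sample_rate,
            const char *bundle_path, const LV2_Feature *const *features)
{
    return (LV2_Handle)calloc(1, sizeof(SimpleGain));
}

/* The host tells us where each port's buffer lives. */
static void
connect_port(LV2_Handle instance, uint32_t port, void *data)
{
    SimpleGain *self = (SimpleGain *)instance;
    switch (port) {
    case PORT_INPUT:  self->input  = (const float *)data; break;
    case PORT_OUTPUT: self->output = (float *)data;       break;
    case PORT_GAIN:   self->gain   = (const float *)data; break;
    }
}

/* Called by the host for every block of audio it wants processed. */
static void
run(LV2_Handle instance, uint32_t n_samples)
{
    SimpleGain *self = (SimpleGain *)instance;
    const float g = self->gain ? *self->gain : 1.0f;
    for (uint32_t i = 0; i < n_samples; ++i)
        self->output[i] = self->input[i] * g;
}

static void
cleanup(LV2_Handle instance)
{
    free(instance);
}

static const LV2_Descriptor descriptor = {
    EXAMPLE_URI,
    instantiate,
    connect_port,
    NULL,          /* activate */
    run,
    NULL,          /* deactivate */
    cleanup,
    NULL           /* extension_data */
};

/* The host looks this symbol up in the shared library. */
const LV2_Descriptor *
lv2_descriptor(uint32_t index)
{
    return index == 0 ? &descriptor : NULL;
}

You would compile it into a shared library (something like gcc -shared -fPIC simple_gain.c -o simple_gain.so), and the host then loads that library, calls lv2_descriptor(), and drives instantiate / connect_port / run from there.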