I am developing ODE-based models of biological systems, and since I usually need to simulate such models many times (with different initial conditions), I am searching for ways to parallelize the process.

So far I have used only CPU-level parallel processing to speed things up. Since an infrastructure of Nvidia GPUs is available in my department, I have been exploring the possibility of using it. However, a crucial requirement is an ODE solver that runs on the GPU, which I have not yet found.

Therefore, I was wondering whether you have anything available, or whether you are close to releasing ODE solvers for GPUs. I look forward to your answer; thank you in advance for your time.

I know that many ArrayFire users have been able to use GFOR along with other functions to implement ODE solvers themselves on the GPU. I believe that those who have been successful derive the main parallelism in two ways:

1) when a batch of ODEs can be run simultaneously, you can run them all in a GFOR loop,

or

2) when the linear algebra used to solve an individual ODE is sufficiently large that data-parallel benefits are available with straight-up ArrayFire linear algebra functions.
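GFOR itself is an ArrayFire C++ construct, but the batching idea behind manner (1) can be sketched in a language-agnostic way: advance every ODE instance through the same time step simultaneously, with the batch laid out along one array axis. Below is a minimal NumPy sketch of that pattern (the function `batched_euler` and its arguments are my own names for illustration, not an ArrayFire API):

```python
import numpy as np

def batched_euler(f, y0, t0, t1, n_steps):
    """Explicit Euler over a whole batch of ODE instances at once.

    y0 : array of shape (batch, dim) -- one initial condition per instance.
    f  : right-hand side f(t, y), vectorized over the batch axis.
    """
    dt = (t1 - t0) / n_steps
    y = y0.copy()
    t = t0
    for _ in range(n_steps):
        # One step for ALL instances simultaneously; on a GPU this
        # elementwise update is what a GFOR loop would batch.
        y = y + dt * f(t, y)
        t += dt
    return y

# Example: dy/dt = -k*y with a different decay rate k per instance.
batch = 1000
k = np.linspace(0.5, 2.0, batch).reshape(-1, 1)   # (batch, 1)
y0 = np.ones((batch, 1))
y1 = batched_euler(lambda t, y: -k * y, y0, 0.0, 1.0, 10000)

# Check against the exact solution y(1) = exp(-k).
assert np.allclose(y1, np.exp(-k), atol=1e-3)
```

In ArrayFire the same structure would put the per-instance step inside a `gfor` loop (or plain vectorized `af::array` arithmetic when the update is elementwise), so all instances advance in one GPU launch per step rather than one launch per instance.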

I'm posting here to see if anyone else has thoughts on ODEs with ArrayFire.

Cheers,

John