GPU computing on large datasets

[Old posts from the commercial version of ArrayFire] Discussion of ArrayFire using CUDA or OpenCL.

Moderator: pavanky

GPU computing on large datasets

Postby melonakos » Tue Jan 18, 2011 6:25 pm

One programmer recently emailed the following question to us:

I'm trying to construct a MATLAB-based system which performs operations on 3-D matrices tens of gigabytes in size.
Toward this end I intend to purchase a workstation with several tens of GBs of RAM. I was further contemplating whether I could utilize one of NVIDIA's massively parallel GPU cards to make the handling of such matrices faster. Specifically, I understand that there might be an issue with the limited size of on-board RAM (on the GPU).

Could you comment on the feasibility of this using your software and point me to further relevant reading?

Great question. The GPUs with the largest memory sizes today are the Teslas: the Tesla C1060 with 4 GB, the Tesla C2050 with 3 GB, and the Tesla C2070 with 6 GB. Therefore, your computation on any given GPU would have to fit into that memory footprint.

However, if you can split up your computation to work on smaller chunks (say, 2 GB at a time), then you could put multiple GPUs in a single system and have each of those GPUs work in concert with the CPU to solve your problem, via Jacket MGL.
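To make the chunking idea concrete, here is a minimal sketch in plain Python of how you might partition an out-of-core dataset into GPU-sized pieces and assign them round-robin across several GPUs. This is purely illustrative bookkeeping, not Jacket's API; the names and the 2 GB / 4 GPU figures are assumptions matching the example above.

```python
# Illustrative sketch (not Jacket code): split a dataset larger than GPU
# memory into fixed-size pieces, then assign the pieces round-robin
# across several GPUs for processing.

def split_into_chunks(total_bytes, chunk_bytes):
    """Return a list of (offset, size) pairs covering total_bytes."""
    chunks = []
    offset = 0
    while offset < total_bytes:
        size = min(chunk_bytes, total_bytes - offset)
        chunks.append((offset, size))
        offset += size
    return chunks

GB = 1 << 30

# Hypothetical figures: a 40 GB dataset split into 2 GB chunks,
# spread over 4 GPUs -> 20 chunks, 5 per GPU.
chunks = split_into_chunks(40 * GB, 2 * GB)
assignments = {gpu: [] for gpu in range(4)}
for i, chunk in enumerate(chunks):
    assignments[i % 4].append(chunk)

print(len(chunks))                              # number of chunks
print([len(v) for v in assignments.values()])   # chunks per GPU
```

In a real multi-GPU setup the inner loop would copy each chunk to its GPU, run the kernel, and copy results back, overlapping transfers with compute where possible.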

For an example of this, check out the GTC 2010 talk by Nolan Davis of SAIC. He was able to process very large chunks of data efficiently using GPUs and Jacket.

Hope this helps. If you have any further comments, please reply on these forums!

John Melonakos
Posts: 503
Joined: Tue Jun 10, 2008 9:49 am

Return to [archive-commercial] Programming & Development with ArrayFire