Graphics Processing Units (GPUs, aka video cards) are specialized processors that act like a mini-supercomputer inside an ordinary computer and excel at tasks that require parallel computation. The department recently purchased a consumer-grade card (a GeForce GTX 970) for experimentation, and, thanks to a grant from NVIDIA, we have also received a state-of-the-art Tesla K40c card for experimentation and production work.

The GPU cards are installed in computers in the Newton lab, so the department can access them either directly at the machine or via a remote connection (e.g., ssh).

The cards can be programmed at several levels:

  1.  Low-level access for custom algorithms uses the CUDA language (an extension of C); a minimal kernel sketch follows this list.
  2.  Mid-level access goes through the many existing C/Fortran libraries (see https://developer.nvidia.com/gpu-accelerated-libraries); a cuBLAS sketch appears after the MATLAB example below.
  3.  High-level access is available from Python and MATLAB. See http://www.mathworks.com/discovery/matlab-gpu.html?
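
To give a flavor of option 1, here is a minimal CUDA sketch (not production code) that adds two vectors, one element per GPU thread. The file name vecadd.cu, the problem size, and the fill values are arbitrary illustration choices, error checking is omitted, and it assumes the CUDA toolkit is installed (compile with something like nvcc vecadd.cu -o vecadd).

// vecadd.cu -- add two vectors, one element per GPU thread
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// each thread computes one entry of the output
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;                    // one million elements
    const size_t bytes = n * sizeof(float);

    // allocate and fill the host arrays
    float *ha = (float*)malloc(bytes);
    float *hb = (float*)malloc(bytes);
    float *hc = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // allocate device arrays and copy the inputs to the GPU
    float *da, *db, *dc;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // launch enough 256-thread blocks to cover all n elements
    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    // copy the result back and spot-check it
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %g (expect 3)\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}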

For example, using the new cards from MATLAB requires almost no code changes.
Here's an example showing roughly a 4x speedup for a 10,000 x 10,000 matrix multiplication (times below are in seconds):

$ ssh bromwich.colorado.edu
$ matlab -nodesktop -nosplash
>> A = randn(1e4); B = randn(1e4);
>> matmat_cpu = @() A*B;
>> timeit(matmat_cpu)

ans =

    6.7866

>> Ag = gpuArray(A); Bg = gpuArray(B);   % move the matrices to GPU memory
>> matmat_gpu = @() Ag*Bg;
>> gputimeit(matmat_gpu)

ans =

    1.6792
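
The same kind of matrix multiply can also be driven from C through one of the GPU-accelerated libraries mentioned in option 2 above; the hedged sketch below uses cuBLAS. The matrix size, fill values, and file name are arbitrary, error checking is omitted, and it assumes the CUDA toolkit is installed (compile with something like nvcc matmul_gpu.cu -lcublas -o matmul_gpu).

// matmul_gpu.cu -- C = A*B on the GPU via cuBLAS
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main()
{
    const int n = 1024;                          // multiply two n x n matrices
    const size_t bytes = (size_t)n * n * sizeof(float);

    // host matrices with a simple fill so the answer is easy to check
    float *hA = (float*)malloc(bytes);
    float *hB = (float*)malloc(bytes);
    float *hC = (float*)malloc(bytes);
    for (int i = 0; i < n * n; ++i) { hA[i] = 1.0f; hB[i] = 2.0f; }

    // device copies of the inputs and space for the result
    float *dA, *dB, *dC;
    cudaMalloc(&dA, bytes);
    cudaMalloc(&dB, bytes);
    cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    // C = 1.0*A*B + 0.0*C, computed on the GPU (column-major, as in Fortran/MATLAB)
    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                n, n, n, &alpha, dA, n, dB, n, &beta, dC, n);

    // bring the result back and spot-check one entry
    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);
    printf("C(1,1) = %g (expect %d)\n", hC[0], 2 * n);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    free(hA); free(hB); free(hC);
    return 0;
}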

Details:

bromwich.colorado.edu has an NVIDIA Tesla K40c, CUDA "compute
capability" 3.5. This is the nicer card (about $2500 retail).

cantor.colorado.edu has an NVIDIA GeForce GTX 970, CUDA "compute
capability" 5.2. This is the experimentation card.