Numba Example

An example program I wrote (actually adapted from an existing one) some time ago, showing the speed-up achieved when switching from CPU to GPU computation. It also served to test the setup and benchmark my systems.

The example can be run as a Jupyter notebook or in a terminal.

The program generates a fractal image, the Mandelbrot set, and measures the time the computation takes.
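In rough terms, the CPU version looks like the sketch below: a Numba-jitted escape-time kernel that fills a NumPy array pixel by pixel, with a simple timer around the call. The function names, image size, and iteration count here are illustrative and may differ from the exact code in the repository.

```python
import numpy as np
from numba import njit
from timeit import default_timer as timer

@njit
def mandel(real, imag, max_iters):
    # Escape-time iteration: count how many steps z = z*z + c takes
    # to leave the radius-2 disk, capped at max_iters.
    c = complex(real, imag)
    z = 0.0j
    for i in range(max_iters):
        z = z * z + c
        if (z.real * z.real + z.imag * z.imag) >= 4.0:
            return i
    return max_iters

@njit
def create_fractal(min_x, max_x, min_y, max_y, image, iters):
    # Map each pixel to a point in the complex plane and colour it
    # by its escape count.
    height, width = image.shape
    pixel_size_x = (max_x - min_x) / width
    pixel_size_y = (max_y - min_y) / height
    for x in range(width):
        real = min_x + x * pixel_size_x
        for y in range(height):
            imag = min_y + y * pixel_size_y
            image[y, x] = mandel(real, imag, iters)

image = np.zeros((1024, 1536), dtype=np.uint8)
start = timer()  # note: the first call also includes JIT compilation
create_fractal(-2.0, 1.0, -1.0, 1.0, image, 20)
print(f"CPU time: {timer() - start:.3f} s")
```

The resulting array can then be displayed with matplotlib's imshow, which works both in a notebook and from a terminal script.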

Image generated by the CPU and GPU computations

Unsurprisingly, there is a considerable speed-up when switching from CPU to GPU.
On my systems, the CPU version takes around 4.4 s to compute the image, while the GPU version does it in around 0.3 s, quite a time saver, I'd say.

The other interesting thing here is how easy it is to use your NVIDIA card with Python and take advantage of these speed-ups.
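To give an idea of how little changes, here is a hedged sketch of a CUDA variant using numba.cuda: the per-pixel function becomes a device function, the outer loops become a 2D kernel launch, and the grid and block sizes below are illustrative assumptions rather than the exact values from the repository.

```python
import numpy as np
from numba import cuda
from timeit import default_timer as timer

@cuda.jit(device=True)
def mandel_gpu(real, imag, max_iters):
    # Same escape-time iteration, written with real arithmetic.
    zr = 0.0
    zi = 0.0
    for i in range(max_iters):
        new_zr = zr * zr - zi * zi + real
        zi = 2.0 * zr * zi + imag
        zr = new_zr
        if zr * zr + zi * zi >= 4.0:
            return i
    return max_iters

@cuda.jit
def mandel_kernel(min_x, max_x, min_y, max_y, image, iters):
    # Each thread computes exactly one pixel.
    height, width = image.shape
    pixel_size_x = (max_x - min_x) / width
    pixel_size_y = (max_y - min_y) / height
    x, y = cuda.grid(2)  # absolute (x, y) thread coordinates
    if x < width and y < height:
        real = min_x + x * pixel_size_x
        imag = min_y + y * pixel_size_y
        image[y, x] = mandel_gpu(real, imag, iters)

image = np.zeros((1024, 1536), dtype=np.uint8)
d_image = cuda.to_device(image)

threads_per_block = (16, 16)
blocks_per_grid = (
    (image.shape[1] + threads_per_block[0] - 1) // threads_per_block[0],
    (image.shape[0] + threads_per_block[1] - 1) // threads_per_block[1],
)

start = timer()
mandel_kernel[blocks_per_grid, threads_per_block](-2.0, 1.0, -1.0, 1.0, d_image, 20)
cuda.synchronize()  # kernel launches are asynchronous, so wait before timing
print(f"GPU time: {timer() - start:.3f} s")

image = d_image.copy_to_host()
```

The bounds check inside the kernel matters because the grid can be slightly larger than the image when its dimensions are not multiples of the block size; it keeps out-of-range threads from writing past the array.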

My code is available on GitHub; feel free to use it, comment, or share.