Using a graphics card for neural networks

Just a short blurb of text today, because I ran into an issue that ever so slightly annoyed me last weekend. I recently bought an old second-hand desktop machine, because I wanted to try out a couple of things that are not particularly useful on a laptop (having a server to log on to/ssh into, hosting my own JupyterHub, doing some I/O-heavy things with images from the internet, etc.). One more thing I wanted to try was using TensorFlow with a GPU in the machine. The idea was to check whether this is indeed as simple as just installing tensorflow-gpu (note that from version 1.15 onward, the plain tensorflow package includes GPU support, so there is no need to install the CPU and GPU variants separately) and letting the software figure out by itself whether or not to use the GPU.
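As a quick sanity check, you can verify which TensorFlow version you have and whether that build ships with CUDA support (a minimal sketch, assuming a reasonably recent TensorFlow install):

```python
import tensorflow as tf

# Print the installed TensorFlow version.
print(tf.__version__)

# True if this build of TensorFlow was compiled with CUDA (GPU) support;
# from 1.15 onward the standard package should report True.
print(tf.test.is_built_with_cuda())
```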

The TensorFlow logo, taken from the Wikipedia article quoted below.

“TensorFlow is a free and open-source software library for dataflow and differentiable programming across a range of tasks. It is a symbolic math library, and is also used for machine learning applications such as neural networks. It is used for both research and production at Google.” (source: Wikipedia)

I use it mostly with the lovely Keras API, which allows easy set-up of neural networks in TensorFlow. I have used it before, for example (disclaimer: old code…), in a blog post titled “Beyond the recognition of handwritten digits”.
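To give a flavour of why I like the Keras API, here is a minimal sketch of setting up a small classifier (not code from that post; the layer sizes are arbitrary, just to illustrate the API):

```python
import tensorflow as tf
from tensorflow import keras

# A tiny fully-connected classifier for 28x28 images (e.g. handwritten digits).
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.summary()
```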

It turned out that TensorFlow will indeed do this for you, but not with just any graphics card. I have an NVIDIA GeForce GT 620. This is quite an old model indeed, but it still has 96 CUDA cores and a gigabyte of memory. Sounded good to me. After importing tensorflow in a Python session, you can have it detect the graphics card, so you know whether it can be used for the calculations:

This is a screenshot of a console in Jupyter Lab.
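For reference, the check in that screenshot boils down to something like this (a minimal sketch; on TensorFlow 2.1 and later the call is tf.config.list_physical_devices, on 2.0 it lives under tf.config.experimental):

```python
import tensorflow as tf

# List all GPUs TensorFlow can use; an empty list means no usable
# GPU was detected.
gpus = tf.config.list_physical_devices("GPU")
print(gpus)
```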

As you can see, it did not detect a graphics card… There is a hint in the terminal session running behind my Jupyter session, though:

Apologies for the sub-optimal reading experience here…

Apparently, the CUDA compute capability of my graphics card is lower than it needs to be. Only cards with a compute capability of 3.5 or higher will work (I know the terminal says 3.0, but the documentation says 3.5). On this NVIDIA page, there is a list of their cards and the corresponding compute capabilities. Please check that out before trying to run TensorFlow on a card (and before getting frustrated). And when you consider buying a (second-hand?) graphics card to play with, definitely make sure you buy a useful one!
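If you want to query the compute capability of a card directly, without going through TensorFlow, one option is the third-party pycuda package (a hedged sketch, assuming pycuda is installed and the NVIDIA driver is working):

```python
import pycuda.driver as cuda

# Initialize the CUDA driver API and report each device's compute capability.
cuda.init()
for i in range(cuda.Device.count()):
    dev = cuda.Device(i)
    major, minor = dev.compute_capability()
    print(f"{dev.name()}: compute capability {major}.{minor}")
```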
