deployment – How to package and distribute a TensorFlow GPU desktop application

I am developing a desktop application that utilises TensorFlow. The aim of the application is to let users easily train a given model and use it for inference within the app. I want to support training and inference on the GPU, if one is available on the end-user’s machine.

The primary issue appears to be setting up the Nvidia dependencies: the driver, the CUDA Toolkit and cuDNN. All three must be present on the end-user’s machine for GPU support.
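For context, within the app I plan to detect GPU availability at runtime and fall back to the CPU. A minimal sketch of that check using standard TensorFlow 2.x APIs (`tf.sysconfig.get_build_info()` only reports CUDA/cuDNN versions on GPU-enabled builds):

```python
import tensorflow as tf

# List the GPUs visible to TensorFlow; an empty list means CPU fallback.
gpus = tf.config.list_physical_devices("GPU")

if gpus:
    print("GPU(s) available:", [gpu.name for gpu in gpus])
    # The build info reports which CUDA/cuDNN versions this particular
    # TensorFlow build was compiled against.
    build = tf.sysconfig.get_build_info()
    print("Expected CUDA:", build.get("cuda_version"))
    print("Expected cuDNN:", build.get("cudnn_version"))
else:
    print("No GPU found; training and inference will run on the CPU.")
```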

Ultimately, I don’t want the end-user to faff about installing dependencies for my application to work.

Some ideas that only half work:

1. Use Docker

I can create a Docker image that contains the required components, e.g. one based on the official `tensorflow/tensorflow:latest-gpu` image. The user would then only have to install an Nvidia driver on the host, plus the NVIDIA Container Toolkit so containers can access the GPU. Unfortunately, this approach requires the end-user to first install and configure Docker itself (on Windows, Docker Desktop with the WSL 2 backend). Not ideal. A sketch of how the app could drive such a container follows.
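For illustration, launching a GPU-enabled container from the app could look roughly like this, assuming the `docker` Python SDK (docker-py) is installed and Docker plus the NVIDIA Container Toolkit are present on the host; the image and command here are just placeholders:

```python
import docker

client = docker.from_env()

# Request all available GPUs for the container; this is the SDK
# equivalent of `docker run --gpus all`.
gpu_request = docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])

# Run a throwaway container from the official GPU image to verify that
# TensorFlow inside the container can see the host GPU.
output = client.containers.run(
    "tensorflow/tensorflow:latest-gpu",
    ["python", "-c",
     "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"],
    device_requests=[gpu_request],
    remove=True,  # clean up the container once it exits
)
print(output.decode())
```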

2. Package CUDA libraries and dependencies… somehow

First I would need to check whether Nvidia’s licences allow redistribution, but in theory I could bundle the libraries in my application’s installer and point the dynamic loader at them at runtime? I am assuming I would need some C++ knowledge for this. A rough sketch of what the runtime side might look like follows.
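To make the idea concrete, here is what that runtime side might look like, assuming the installer has placed the CUDA runtime and cuDNN libraries in a hypothetical `cuda_libs` directory inside the install location (the directory name and layout are made up):

```python
import os
import sys
from pathlib import Path

# Hypothetical directory where the installer would have bundled the
# CUDA runtime and cuDNN libraries (name and layout are assumptions).
bundled_libs = Path(sys.prefix) / "cuda_libs"

if os.name == "nt":
    # On Windows (Python 3.8+), extra DLL search paths can be added at
    # runtime, as long as this runs before TensorFlow is imported.
    os.add_dll_directory(str(bundled_libs))
else:
    # On Linux, LD_LIBRARY_PATH is read at process start-up, so a
    # launcher script (not this process) would have to export it:
    #   LD_LIBRARY_PATH="$APP_DIR/cuda_libs:$LD_LIBRARY_PATH" ./app
    pass

import tensorflow as tf  # imported only after the search path is set
print(tf.config.list_physical_devices("GPU"))
```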

How can I achieve this?


I am developing on Linux. The main end-user demographic is on Windows, but ideally the application would be cross-platform.