[Solved] nvidia-docker: Connect nvidia-docker as remote Python interpreter in PyCharm

Hi,

Has someone found a way to connect a Python interpreter from a container launched with nvidia-docker as a remote interpreter in PyCharm?

I use PyCharm Professional edition, which provides tools for Docker integration (with docker-machine or docker-compose).
I followed a tutorial about this tool in PyCharm (the one based on docker-machine), but it seems to connect only to an image launched with docker, not with nvidia-docker!
(tutorial with docker-machine: https://www.jetbrains.com/help/pycharm/2016.3/configuring-remote-interpreters-via-docker.html)

I would like to build a very efficient deep learning dev environment that uses nvidia-docker technology.

Thank you very much for your help.

Julien

26 Answers

✔️Accepted Answer

Quick & dirty workaround: just set the Docker "default-runtime" to "nvidia"

...by adding the line "default-runtime": "nvidia" to the file /etc/docker/daemon.json

For me, simply setting the Docker "default-runtime" to "nvidia", restarting Docker, and proceeding with the PyCharm (Professional edition) Docker integration tools works. Setting the default runtime is done by adding the line "default-runtime": "nvidia" to the /etc/docker/daemon.json file, so with nvidia-docker installed the file may look like this:

{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}

Restart Docker, e.g. by typing systemctl restart docker on the command line; you may need to reload the daemon config first with systemctl daemon-reload.
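If Docker fails to restart after the edit, a malformed daemon.json is a common cause. As a quick sanity check, here is a minimal sketch that validates the config shown above (the JSON is pasted inline for illustration; in practice you would read /etc/docker/daemon.json itself):

```python
import json

# The daemon.json contents from above, inlined for a quick check.
daemon_json = """
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
"""

# json.loads raises a ValueError if the JSON is malformed
config = json.loads(daemon_json)

# Check that the nvidia runtime is both registered and the default
assert config.get("default-runtime") == "nvidia"
assert "nvidia" in config.get("runtimes", {})
print("daemon.json looks OK")
```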

I have GPU acceleration in Docker and docker-compose projects in PyCharm using e.g. an official NVIDIA image such as nvcr.io/nvidia/pytorch:18.07-py3 (from NVIDIA GPU CLOUD), and I test nvidia-gpu-acceleration with a small PyTorch script:

import torch

# Check whether any CUDA device is visible inside the container
num_devices = torch.cuda.device_count()
if num_devices > 0:
    device_id = torch.cuda.current_device()
    print(torch.cuda.device(device_id))           # device context object
    print(torch.cuda.get_device_name(device_id))  # the GPU model name
else:
    print('no gpu-device attached')

This should print your GPU model name. I'm a newbie regarding nvidia-/docker, but still:

  • this seems to work for the moment
  • normal docker-compose.yml files work within my limited use cases
  • non-NVIDIA images are not affected by the "nvidia" default-runtime setting.

A short glance into the nvidia-docker script reveals that it only adds --runtime=nvidia to the docker run and docker create commands, so this simple workaround may work for at least some use cases with PyCharm. Anyway, I'll be happy to read any better/more generalized solutions...
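To illustrate that point, the wrapper's behavior can be sketched roughly like this (a simplified, hypothetical reimplementation for illustration only, not the actual nvidia-docker script):

```python
def wrap_docker_args(argv):
    """Mimic the nvidia-docker wrapper: inject --runtime=nvidia into
    'docker run' and 'docker create' commands, pass everything else
    through to docker unchanged."""
    if argv and argv[0] in ("run", "create"):
        return ["docker", argv[0], "--runtime=nvidia"] + argv[1:]
    return ["docker"] + argv

# Example: 'nvidia-docker run -it my-image' becomes:
print(wrap_docker_args(["run", "-it", "my-image"]))
# → ['docker', 'run', '--runtime=nvidia', '-it', 'my-image']
```

This is why setting "default-runtime": "nvidia" has the same effect for PyCharm's integration: Docker then behaves as if every run/create already carried that flag.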
