Recently I helped a Patron set up an instance of my Emby docker-swarm recipe, but with the extra bonus of having all transcoding done using his GPU. Note that this would work equally well for Plex, or any other “Dockerized” application which technically supports GPU processing.
This is a companion discussion topic for the original entry at https://www.funkypenguin.co.nz/note/gpu-transcoding-with-emby-plex-using-docker-nvidia/
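For anyone following along at home, the relevant additions boil down to the nvidia runtime plus a couple of environment variables. A minimal sketch, assuming nvidia-docker2 is already installed on the host (the image, service name, and paths below are illustrative, not the full recipe):
emby:
  image: emby/embyserver
  runtime: nvidia
  environment:
    - NVIDIA_VISIBLE_DEVICES=all
    # "video" is what exposes NVENC/NVDEC to the transcoder
    - NVIDIA_DRIVER_CAPABILITIES=compute,utility,video
  volumes:
    - /var/data/emby:/config   # illustrative path
    - /srv/media:/media        # illustrative path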
I have two Windows 10 desktops in my home. One has an NVIDIA GTX 1060 and the other has an AMD R470.
Does AMD have a similar setup so both hosts can transcode properly?
Also, with this setup, would you recommend I run Docker on a VM running Linux, or just run Docker for Windows?
Thanks.
Sorry, missed this. Best bet here is running the GTX under nvidia-docker on Linux. I don’t think it’s at all possible in Windows.
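As a quick sanity check on the Linux host, the standard nvidia-docker2 smoke test should print your GPU from inside a container (adjust the CUDA image tag to suit your driver version):
docker run --rm --runtime=nvidia nvidia/cuda:9.0-base nvidia-smi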
lyoko37, September 16, 2018, 8:39pm (#4)
Hey @funkypenguin,
I’m trying out this guide and I think I’ve got everything set correctly. I have the runtime set to nvidia and the env variable set for Plex. I can see the NVIDIA card in the Plex container, but when the server is running I’m getting this error:
11230:Sep 16, 2018 20:15:00.432 [0x7f1b733ff700] DEBUG - Codecs: hardware transcoding: opening hw device failed - probably not supported by this system, error: Invalid argument
Any ideas?
Hey @lyoko37, what’s the result of df and env if you’re shelled into the container?
D
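To get those, something like the following drops you into a shell in the running container (a sketch; assumes the container is named plex, as in the compose file posted further down):
docker exec -it plex /bin/bash
df
env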
lyoko37, September 16, 2018, 9:33pm (#6)
Thanks for the quick reply, @funkypenguin. Here’s the output of df:
Filesystem 1K-blocks Used Available Use% Mounted on
overlay 49272632 30685544 16061136 66% /
tmpfs 65536 0 65536 0% /dev
tmpfs 32931428 0 32931428 0% /sys/fs/cgroup
storage 42890405376 19690219520 23200185856 46% /movies
/dev/sdi1 49272632 30685544 16061136 66% /transcode
shm 65536 8 65528 1% /dev/shm
tmpfs 32931428 12 32931416 1% /proc/driver/nvidia
udev 32902140 0 32902140 0% /dev/nvidia0
tmpfs 32931428 0 32931428 0% /proc/acpi
tmpfs 32931428 0 32931428 0% /proc/scsi
tmpfs 32931428 0 32931428 0% /sys/firmware
root@TheHoard:/dev# env
HOSTNAME=TheHoard
PUID=1000
TERM=xterm
LC_ALL=C.UTF-8
PGID=1000
NVIDIA_VISIBLE_DEVICES=all
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/dev
LANG=C.UTF-8
SHLVL=1
HOME=/config
affinity:container==351dadd5522594140ce25ef75aa43c46cd10c375ea2f91f29211ebe251da9b87
CHANGE_CONFIG_DIR_OWNERSHIP=true
_=/usr/bin/env
OLDPWD=/
Inside the container I also see:
root@TheHoard:/dev# ls
core fd full mqueue null nvidia0 nvidiactl ptmx pts random shm stderr stdin stdout tty urandom zero
root@TheHoard:/dev# nvidia-smi
Sun Sep 16 21:32:45 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.87 Driver Version: 390.87 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Quadro P2000 Off | 00000000:02:00.0 Off | N/A |
| 0% 27C P0 16W / 75W | 0MiB / 5057MiB | 3% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
root@TheHoard:/dev#
It seems like the card is showing up in the container there but I’m not sure.
lyoko37, September 16, 2018, 9:40pm (#7)
Just for reference as well, my docker-compose file looks like this:
plex:
  image: plexinc/pms-docker:plexpass
  container_name: plex
  runtime: nvidia
  environment:
    - PGID=1000
    - PUID=1000
    - NVIDIA_VISIBLE_DEVICES=all
  volumes:
    - /storage/container_configs/plex:/config
    - /storage/media/tv_shows:/tvshows
    - /storage/media/movies_4k:/tvshows_4k
    - /storage/media/movies:/movies
  network_mode: "host"
  restart: always
Aha.
Try adding this additional env variable:
NVIDIA_DRIVER_CAPABILITIES=compute,utility
lyoko37, September 16, 2018, 9:53pm (#9)
Hmm, still seeing this when it attempts to transcode:
Sep 16, 2018 21:53:16.412 [0x7f8d9dfff700] DEBUG - Codecs: hardware transcoding: testing API vaapi
Sep 16, 2018 21:53:16.412 [0x7f8d9dfff700] DEBUG - Codecs: hardware transcoding: opening hw device failed - probably not supported by this system, error: Invalid argument
Sep 16, 2018 21:53:16.414 [0x7f8d9dfff700] DEBUG - Codecs: hardware transcoding: testing API vaapi
Sep 16, 2018 21:53:16.414 [0x7f8d9dfff700] DEBUG - Codecs: hardware transcoding: opening hw device failed - probably not supported by this system, error: Invalid argument
Updated compose file:
plex:
  image: plexinc/pms-docker:plexpass
  container_name: plex
  runtime: nvidia
  environment:
    - PGID=1000
    - PUID=1000
    - NVIDIA_VISIBLE_DEVICES=all
    - NVIDIA_DRIVER_CAPABILITIES=compute,utility
  volumes:
    - /storage/container_configs/plex:/config
    - /storage/media/tv_shows:/tvshows
    - /storage/media/movies_4k:/tvshows_4k
    - /storage/media/movies:/movies
  network_mode: "host"
  restart: always
Here’s the output from a working Emby container with an NVIDIA GPU exposed:
/ # df
Filesystem 1K-blocks Used Available Use% Mounted on
overlay 3914269104 2476888104 1238524504 67% /
tmpfs 65536 0 65536 0% /dev
tmpfs 32964632 0 32964632 0% /sys/fs/cgroup
/dev/md1 3914269104 2476888104 1238524504 67% /config
prod1:/storage 242960993280 216413934592 21663166464 91% /media
/dev/md1 3914269104 2476947852 1238464756 67% /etc/resolv.conf
/dev/md1 3914269104 2476947884 1238464724 67% /etc/hostname
/dev/md1 3914269104 2476947924 1238464684 67% /etc/hosts
shm 65536 0 65536 0% /dev/shm
tmpfs 32964632 12 32964620 0% /proc/driver/nvidia
/dev/md1 3914269104 2476948060 1238464548 67% /usr/bin/nvidia-smi
/dev/md1 3914269104 2476948108 1238464500 67% /usr/bin/nvidia-debugdump
/dev/md1 3914269104 2476948148 1238464460 67% /usr/bin/nvidia-persistenced
/dev/md1 3914269104 2476948188 1238464420 67% /usr/bin/nvidia-cuda-mps-control
/dev/md1 3914269104 2476948236 1238464372 67% /usr/bin/nvidia-cuda-mps-server
/dev/md1 3914269104 2476948268 1238464340 67% /usr/lib64/libnvidia-ml.so.396.54
/dev/md1 3914269104 2476948316 1238464292 67% /usr/lib64/libnvidia-cfg.so.396.54
/dev/md1 3914269104 2476948352 1238464256 67% /usr/lib64/libcuda.so.396.54
/dev/md1 3914269104 2476948396 1238464212 67% /usr/lib64/libnvidia-opencl.so.396.54
/dev/md1 3914269104 2476948444 1238464164 67% /usr/lib64/libnvidia-ptxjitcompiler.so.396.54
/dev/md1 3914269104 2476948492 1238464116 67% /usr/lib64/libnvidia-fatbinaryloader.so.396.54
/dev/md1 3914269104 2476948540 1238464068 67% /usr/lib64/libnvidia-compiler.so.396.54
/dev/md1 3914269104 2476948584 1238464024 67% /usr/lib64/libvdpau_nvidia.so.396.54
/dev/md1 3914269104 2476948620 1238463988 67% /usr/lib64/libnvidia-encode.so.396.54
/dev/md1 3914269104 2476948668 1238463940 67% /usr/lib64/libnvcuvid.so.396.54
And environment:
/ # env | grep NVIDIA
NVIDIA_DRIVER_CAPABILITIES=compute,utility,video
NVIDIA_VISIBLE_DEVICES=all
/ #
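Note the extra video capability in that working container. Carrying it back to the Plex compose file above would be a one-line change (a sketch; untested against that exact setup):
  environment:
    - PGID=1000
    - PUID=1000
    - NVIDIA_VISIBLE_DEVICES=all
    - NVIDIA_DRIVER_CAPABILITIES=compute,utility,video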
I don’t think so. nvidia-docker adds an additional runtime to your Docker environment, and it’s unlikely this’ll work with UnRAID.
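For reference, that extra runtime is what the nvidia-docker2 package registers in /etc/docker/daemon.json on a stock Linux host, roughly like so (a sketch; your file may carry other settings):
cat /etc/docker/daemon.json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}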
I only see (hw) appear on the encoding side and not the decoding side for Plex on Ubuntu 16.04. Is this a known issue, or did I do something wrong?
EDIT: Seems to be a limitation of the Plex transcoder that should eventually get updated. Hardware Accelerated Decode (Nvidia) for Linux - Feature Suggestions - Plex Forum
Thank you for this mini tutorial; it really helped me get my own Emby container running with NVENC!
You’re welcome! FYI, there’s a bug in the current stable version which occasionally disables your transcoding settings. It’s fixed in the recent betas and will be addressed in stable 3.6.0.