GPU acceleration seemingly not used in headless rendering with EGL
Hi,
I am running pyrender in a Docker container with the NVIDIA runtime (nvidia/cudagl:10.2-runtime-ubuntu18.04).
I can verify that GPU passthrough works by checking the output of nvidia-smi.
Pyrender works fine with PYOPENGL_PLATFORM=egl, and I can produce the expected image. I know that pyrender is at least checking for the presence of the GPU, because if I fail to enable the GPU in the container, it exits with an error.
So - functionally it seems all fine.
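For context, this is roughly what my headless setup looks like. It is a minimal sketch: the mesh path and camera pose are placeholders, but the PYOPENGL_PLATFORM / OffscreenRenderer part matches what my program does.

```python
import os
# Must be set before pyrender / PyOpenGL are imported.
os.environ['PYOPENGL_PLATFORM'] = 'egl'

import numpy as np
import trimesh
import pyrender

# Placeholder asset; my real program loads the geometry for each video frame.
tm = trimesh.load('mesh.obj')
mesh = pyrender.Mesh.from_trimesh(tm)

scene = pyrender.Scene()
scene.add(mesh)

camera = pyrender.PerspectiveCamera(yfov=np.pi / 3.0)
camera_pose = np.eye(4)
camera_pose[2, 3] = 2.0  # pull the camera back along +Z
scene.add(camera, pose=camera_pose)

renderer = pyrender.OffscreenRenderer(viewport_width=640, viewport_height=480)
color, depth = renderer.render(scene, flags=pyrender.RenderFlags.FLAT)
renderer.delete()
```

This runs and produces the expected image, so the EGL path itself does not seem to be the problem.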
However: I don't think the GPU is actually being used. When I run my program (which produces a video using pyrender), the CPU is pegged at 100%, while I would expect the execution to be GPU-bound. Moreover, the performance I see is consistent with the speed of software rendering.
I did a deep dive into the code, trying to isolate where the problem is. I use flags=RenderFlags.FLAT. I disabled the reading of the depth buffer (it took 40 ms per frame), optimized the caching of the shader program to some extent, etc., until I had to conclude that the bottleneck is the draw call (glDrawElementsInstanced). If I understand correctly, calling this should not be a CPU-bound operation, since the GPU does most of the work.
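To be concrete, this is roughly how I narrowed it down (a sketch; render_frames stands in for my actual per-frame loop and reuses the renderer and scene from the snippet above):

```python
import cProfile
import pstats

def render_frames(renderer, scene, n=100):
    # One render call per video frame, same flags as my real program.
    for _ in range(n):
        renderer.render(scene, flags=pyrender.RenderFlags.FLAT)

cProfile.runctx('render_frames(renderer, scene)', globals(), locals(), 'render.prof')
pstats.Stats('render.prof').sort_stats('cumulative').print_stats(20)
# In my case, the cumulative time was dominated by glDrawElementsInstanced
# inside pyrender's Renderer.
```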
So I'm thinking that somehow pyrender is not actually using hardware acceleration.
How can I debug this issue?
One question that I have is the following: where can I check that the renderer is actually a hardware renderer? In platforms/ I didn't see an obvious place to get the name of the renderer.
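The best I've come up with so far is to query the GL strings through PyOpenGL after constructing the OffscreenRenderer (this assumes the EGL context it creates is still current on the calling thread, which seems to be the case):

```python
import os
os.environ['PYOPENGL_PLATFORM'] = 'egl'

import pyrender
from OpenGL.GL import glGetString, GL_VENDOR, GL_RENDERER, GL_VERSION

# Constructing the OffscreenRenderer creates the EGL context; as far as I can
# tell it is left current on this thread, so glGetString is valid here.
renderer = pyrender.OffscreenRenderer(viewport_width=64, viewport_height=64)

print('GL_VENDOR  :', glGetString(GL_VENDOR).decode())
print('GL_RENDERER:', glGetString(GL_RENDERER).decode())
print('GL_VERSION :', glGetString(GL_VERSION).decode())

renderer.delete()
```

A hardware context should report something like "NVIDIA Corporation" / "GeForce RTX 2080 ...", while software rendering typically reports "llvmpipe" or similar. Is this the right way to check, or is there a supported way inside pyrender?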
I am using Python packages:
PyOpenGL (3.1.0)
PyOpenGL-accelerate (3.1.0)
pyrender (0.1.43, /project/pyrender)
Docker base image: nvidia/cudagl:10.2-runtime-ubuntu18.04, plus the APT packages libglfw3-dev, libgles2-mesa-dev, and freeglut3-dev.
nvidia-smi from the container:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.100      Driver Version: 440.100      CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce RTX 208...  Off  | 00000000:65:00.0  On |                  N/A |
| 35%   35C    P8    21W / 260W |    536MiB / 11018MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+
@KalraA here is what I used:
I also tried without NVIDIA’s base image by installing libglvnd but I couldn’t get that to work yet.
Hi, I tested pyrender with PYOPENGL_PLATFORM=egl in Docker, but I got the error "ValueError: Invalid device ID(0)". Have you run into this problem?