GPU acceleration seemingly not used in headless rendering with EGL

See original GitHub issue

Hi,

I am running pyrender in a Docker container with nvidia runtime. (nvidia/cudagl:10.2-runtime-ubuntu18.04)

I can check that the GPU passthrough works by seeing the output of nvidia-smi.

Pyrender works fine with PYOPENGL_PLATFORM=egl, and I can produce the expected image. I know that Pyrender is at least checking the presence of the GPU because if I fail to enable the GPU in the container it exits with an error.
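For reference, the headless setup described above looks roughly like this (a sketch: `render_one_frame`, the viewport size, and the scene contents are illustrative, not taken from the issue). The key detail is that `PYOPENGL_PLATFORM` must be set before PyOpenGL (and hence pyrender) is imported, because PyOpenGL selects its platform module at import time:

```python
import os

# Must be set before PyOpenGL/pyrender are imported: PyOpenGL binds its
# platform backend at import time, so setting this later has no effect.
os.environ["PYOPENGL_PLATFORM"] = "egl"


def render_one_frame():
    """Illustrative offscreen render; the scene contents are placeholders."""
    import pyrender

    scene = pyrender.Scene()
    # OffscreenRenderer needs a camera node in the scene to render from.
    scene.add(pyrender.PerspectiveCamera(yfov=1.0))
    renderer = pyrender.OffscreenRenderer(viewport_width=640,
                                          viewport_height=480)
    color, depth = renderer.render(scene, flags=pyrender.RenderFlags.FLAT)
    renderer.delete()
    return color
```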

So - functionally it seems all fine.

However: I don’t think the GPU is actually being used. When I run my program (which produces a video using pyrender), the CPU is pegged at 100%, while I would expect the execution to be GPU-bound. Moreover, the performance I see is consistent with software rendering.

I did a deep dive into the code, trying to isolate the problem. I use flags = FLAT. I disabled the reading of the depth buffer (it took 40 ms per frame), optimized the caching of the shader program to some extent, etc., until I had to conclude that the bottleneck is the drawing itself (glDrawElementsInstanced). If I understand correctly, calling this should not be a CPU-bound operation, as the GPU does most of the work.

So I’m thinking that somehow PyRender is not actually using the acceleration.

How can I debug this issue?

One question that I have is the following: where can I check that the renderer is actually a hardware renderer? In platforms/ I didn’t see an obvious place to get the name of the renderer.
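One way to answer this (a sketch, not pyrender's own API): while an OpenGL context is current — e.g. right after constructing an `OffscreenRenderer`, assuming its context is still bound — query `GL_RENDERER` via PyOpenGL. Software rasterizers announce themselves with names like `llvmpipe`:

```python
# Common software-rasterizer names (heuristic, not exhaustive).
SOFTWARE_RENDERERS = ("llvmpipe", "softpipe", "swiftshader")


def is_software_renderer(renderer_name: str) -> bool:
    """Heuristic match against well-known software rasterizer names."""
    name = renderer_name.lower()
    return any(sw in name for sw in SOFTWARE_RENDERERS)


def report_gl_renderer():
    # Only valid while an OpenGL context is current (e.g. call this right
    # after pyrender.OffscreenRenderer has been created).
    from OpenGL.GL import GL_RENDERER, GL_VENDOR, glGetString

    renderer = glGetString(GL_RENDERER).decode()
    vendor = glGetString(GL_VENDOR).decode()
    print(f"GL_VENDOR={vendor!r} GL_RENDERER={renderer!r}")
    if is_software_renderer(renderer):
        print("WARNING: software rendering is in use")
    return renderer
```

On a working NVIDIA setup the renderer string should name the GPU (e.g. something like "GeForce RTX 2080 Ti/PCIe/SSE2"), not llvmpipe.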

I am using Python packages:

PyOpenGL (3.1.0)
PyOpenGL-accelerate (3.1.0)
pyrender (0.1.43, /project/pyrender)

Docker base image nvidia/cudagl:10.2-runtime-ubuntu18.04 plus APT packages libglfw3-dev libgles2-mesa-dev freeglut3-dev.

nvidia-smi from the container:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.100      Driver Version: 440.100      CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce RTX 208...  Off  | 00000000:65:00.0  On |                  N/A |
| 35%   35C    P8    21W / 260W |    536MiB / 11018MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+

Issue Analytics

  • State: open
  • Created: 3 years ago
  • Reactions: 5
  • Comments: 7

Top GitHub Comments

1 reaction
m3at commented, Nov 26, 2020

@KalraA here is what I used:

FROM nvidia/cudagl:11.0-runtime-ubuntu18.04

ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES compute,graphics,utility,video

ENV LANG C.UTF-8
ENV LC_ALL C.UTF-8

RUN apt-get update && apt-get install -qq \
    curl \
    ca-certificates \
    sudo \
    git \
    bzip2 \
    libx11-6 \
    autoconf \
    automake \
    build-essential \
    cmake \
    wget \
    libjpeg-dev \
    libpng-dev

RUN apt-get install -qq python3 python3-dev python3-pip
RUN ln -s /usr/bin/pip3 /usr/bin/pip
RUN ln -s /usr/bin/python3 /usr/bin/python
RUN pip install -U pip

RUN pip install scikit-build opencv-python matplotlib jupyter notebook scikit-learn

RUN apt-get install -qq libglfw3-dev libgles2-mesa-dev freeglut3-dev
RUN pip install PyOpenGL PyOpenGL_accelerate pyrender

I also tried without NVIDIA’s base image by installing libglvnd but I couldn’t get that to work yet.
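For completeness, an image like the one above needs to be run with the NVIDIA runtime and graphics capabilities exposed. A sketch of a typical invocation (the image tag and script name are placeholders; `--gpus all` requires Docker 19.03+, older setups use `--runtime=nvidia` instead):

```shell
docker run --rm --gpus all \
    -e NVIDIA_DRIVER_CAPABILITIES=compute,graphics,utility \
    -e PYOPENGL_PLATFORM=egl \
    my-pyrender-image \
    python render_video.py
```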

1 reaction
SLTK1 commented, Oct 19, 2020

Hi, I tested pyrender with PYOPENGL_PLATFORM=egl in Docker, but I got the error "ValueError: Invalid device ID(0)". Have you met this problem?
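That error usually means EGL enumerated no devices (or fewer than the index asked for). In pyrender 0.1.4x the EGL backend selects the device from the `EGL_DEVICE_ID` environment variable; a sketch of checking this (hedged: `query_devices()` is an internal pyrender helper and may change between versions):

```python
import os

# Must be set before pyrender creates its EGL context.
os.environ["EGL_DEVICE_ID"] = "0"


def list_egl_devices():
    # Hedged: query_devices() is an internal helper of pyrender 0.1.4x.
    # If it returns an empty list, EGL sees no devices at all, and any
    # device id -- including 0 -- raises "Invalid device ID".
    from pyrender.platforms import egl

    return [repr(d) for d in egl.query_devices()]
```

An empty device list inside the container typically points at a missing NVIDIA EGL vendor library, i.e. `graphics` absent from NVIDIA_DRIVER_CAPABILITIES.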
