Cannot re-initialize CUDA in forked subprocess

See original GitHub issue

Hello

I am trying to create a REST API (Flask/uWSGI) for sentence embedding.

Here is my code:

from sentence_transformers import SentenceTransformer

# Loading embedding model (SentenceBERT)
print("Loading the model…")
model = SentenceTransformer('bert-base-nli-mean-tokens')
print("Model loaded.")

def logic(sentences):
    print("SENTENCES:", sentences)
    # Embedding the sentences with model
    result_array = model.encode(sentences)
    print("ENCODDED")
    return result_array.tolist()
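
For context, a hypothetical Flask wiring for this function might look like the sketch below; the route name and JSON payload shape are assumptions for illustration and are not part of the original post.

from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical endpoint (not from the original post) that calls logic() above.
@app.route("/embed", methods=["POST"])
def embed():
    sentences = request.get_json()["sentences"]
    return jsonify({"embeddings": logic(sentences)})

Under uWSGI's default configuration, the workers that handle such requests are forked from a master process that has already imported this module.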

Here is the traceback of the error:

File "./app/controllers.py", line 19, in logic
    result_array = model.encode(sentences)
  File "/home/christophe/HEG_microservices/sentenceembedding/venv/lib/python3.6/site-packages/sentence_transformers/SentenceTransformer.py", line 151, in encode
    self.to(device)
  File "/home/christophe/HEG_microservices/sentenceembedding/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 607, in to
    return self._apply(convert)
  File "/home/christophe/HEG_microservices/sentenceembedding/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 354, in _apply
    module._apply(fn)
  File "/home/christophe/HEG_microservices/sentenceembedding/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 354, in _apply
    module._apply(fn)
  File "/home/christophe/HEG_microservices/sentenceembedding/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 354, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "/home/christophe/HEG_microservices/sentenceembedding/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 376, in _apply
    param_applied = fn(param)
  File "/home/christophe/HEG_microservices/sentenceembedding/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 605, in convert
    return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
  File "/home/christophe/HEG_microservices/sentenceembedding/venv/lib/python3.6/site-packages/torch/cuda/__init__.py", line 185, in _lazy_init
    "Cannot re-initialize CUDA in forked subprocess. " + msg)
RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method

Do you have an idea of how to solve this? Best regards
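
The error message points at CUDA being touched in a process that was forked after CUDA had already been initialized in the parent. Below is a minimal sketch of one common workaround, assuming the model can be created lazily inside each worker rather than at import time (the get_model helper is hypothetical and not part of the original code):

from sentence_transformers import SentenceTransformer

_model = None

def get_model():
    # Minimal sketch (assumption, not from this thread): create the model on
    # first use, so CUDA is initialized inside the already-forked worker.
    global _model
    if _model is None:
        _model = SentenceTransformer('bert-base-nli-mean-tokens')
    return _model

def logic(sentences):
    print("SENTENCES:", sentences)
    result_array = get_model().encode(sentences)
    return result_array.tolist()

Alternatively, uWSGI's lazy-apps option loads the application separately in each worker instead of in the master, and the 'spawn' start method named in the error applies when the forking is done by Python's multiprocessing module rather than by the server.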

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Reactions: 2
  • Comments: 7 (2 by maintainers)

Top GitHub Comments

1 reaction
zengjie617789 commented, Apr 7, 2022

What about using the model for inference without a DataLoader?
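
A minimal sketch of what that could look like, assuming the same model as the original post: encode() accepts a plain Python list of strings, so no user-side DataLoader is involved.

from sentence_transformers import SentenceTransformer

# Minimal sketch (assumption): encode a plain list of strings directly.
model = SentenceTransformer('bert-base-nli-mean-tokens')
embeddings = model.encode(["This is an example sentence.", "Another one."])
print(embeddings.shape)  # (2, 768) for this model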

1 reaction
zolekode commented, Jan 16, 2021

Just solved it by setting the number of workers to 0 on the DataLoader/Dataset.
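
A minimal sketch of that setting with a standard PyTorch DataLoader (the dataset contents are placeholders): with num_workers=0, batches are produced in the calling process, so no worker subprocess is forked after CUDA has been initialized.

import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(128, 16))  # placeholder data
# num_workers=0 keeps data loading in the current process: no forked workers.
loader = DataLoader(dataset, batch_size=32, num_workers=0)
for (batch,) in loader:
    pass  # iterates without forking any subprocess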


Top Results From Across the Web

Cannot re-initialize CUDA in forked subprocess #40403 - GitHub
I am getting CUDA re-initialize error. I am using below code to generate synthetic dataset on GPU. To perform distributed training I am...

Cannot re-initialize CUDA in forked subprocess - Stack Overflow
I load the model in the parent process and it's accessible to each forked worker process. The problem occurs when creating a CUDA-backed...

RuntimeError: Cannot re-initialize CUDA in forked subprocess ...
I'm getting the above error even though I'm not using multiprocessing.

"Cannot re-initialize CUDA in forked subprocess" Displayed in ...
When PyTorch is used to start multiple processes, the following error message is displayed: RuntimeError: Cannot re-initialize CUDA in forked subprocess ...

Cannot re-initialize CUDA in forked subprocess and CUDA error ...
RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method.
