Segmentation fault when trying to load models

See original GitHub issue

We are using Azure ML pipelines to train our transformers models. It had been working for a few weeks, but recently (we just noticed it a few days ago) we started getting a segmentation fault when trying to initialize a model.

I tried loading the models locally this morning and hit the same issue. See the snippet below.

    # config_class, tokenizer_class, and model_class are set elsewhere in our code
    # (see the class mapping in the update below)
    config = config_class.from_pretrained(model_name, num_labels=10)
    tokenizer = tokenizer_class.from_pretrained(model_name, do_lower_case=False)
    model = model_class.from_pretrained("distilroberta-base", from_tf=False, config=config)

I also tried downloading the *_model.bin file and passing a local path instead of the model name, and got the same segmentation fault. Using bert-base-uncased instead of distilroberta-base also failed the same way.
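For reference, a minimal sketch of the local-path variant that also segfaulted (the directory name here is hypothetical; it just needs to contain the downloaded config, vocab, and weights files):

    # Sketch only: load from a hypothetical local directory instead of the model name.
    # The directory should contain config.json, the vocab files, and the *_model.bin weights.
    local_dir = "./local-distilroberta"

    config = config_class.from_pretrained(local_dir, num_labels=10)
    tokenizer = tokenizer_class.from_pretrained(local_dir, do_lower_case=False)
    model = model_class.from_pretrained(local_dir, from_tf=False, config=config)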

I am running on Ubuntu, with the following package versions:

torch==1.3.0
tokenizers==0.0.11
transformers==2.4.1
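As a quick sanity check (sketch only), something like this confirms which versions are actually active, e.g. to compare the Azure ML pipeline environment against a local run:

    # Sketch: print the versions active in the running environment,
    # to compare the Azure ML pipeline image against a local install.
    import torch
    import transformers

    print("torch:", torch.__version__)
    print("transformers:", transformers.__version__)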

UPDATE:

I hacked some example scripts and had success, so I think the issue is that our code uses…

    "roberta": (RobertaConfig, RobertaForTokenClassification, RobertaTokenizer),
    "mroberta": (RobertaConfig, RobertaForMultiLabelTokenClassification, RobertaTokenizer),    # our custom multilabel class

instead of what the example scripts use…

    AutoConfig,
    AutoModelForTokenClassification,
    AutoTokenizer,

Was there a recent breaking change to the model files that would mean the “non-auto” classes are no longer usable?
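For reference, a minimal sketch of the two loading paths being compared (transformers 2.x API assumed; our custom RobertaForMultiLabelTokenClassification class is omitted here since it lives in our own code):

    # Sketch only: explicit ("non-auto") classes vs. Auto classes.
    from transformers import (
        AutoConfig,
        AutoModelForTokenClassification,
        AutoTokenizer,
        RobertaConfig,
        RobertaForTokenClassification,
        RobertaTokenizer,
    )

    model_name = "distilroberta-base"

    # Explicit classes, as in our mapping above
    config = RobertaConfig.from_pretrained(model_name, num_labels=10)
    tokenizer = RobertaTokenizer.from_pretrained(model_name)
    model = RobertaForTokenClassification.from_pretrained(model_name, config=config)

    # Auto classes, as in the example scripts
    auto_config = AutoConfig.from_pretrained(model_name, num_labels=10)
    auto_tokenizer = AutoTokenizer.from_pretrained(model_name)
    auto_model = AutoModelForTokenClassification.from_pretrained(model_name, config=auto_config)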

UPDATE 2:

Our original code does not cause a segmentation fault on Windows.

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments: 7 (2 by maintainers)

Top GitHub Comments

22 reactions
yaof20 commented, Jul 2, 2020

Downgrading to sentencepiece==0.1.91 solved it. I am using PyTorch 1.2.0 + transformers 3.0.0.
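If that pin matters in a pipeline, a small guard like this (sketch only, using setuptools' pkg_resources) can catch an accidental upgrade at startup:

    # Sketch: fail fast if sentencepiece drifts away from the pinned version.
    import pkg_resources

    sp_version = pkg_resources.get_distribution("sentencepiece").version
    assert sp_version == "0.1.91", f"expected sentencepiece 0.1.91, got {sp_version}"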

4 reactions
michaelcapizzi commented, Jun 25, 2020

Bumping to torch==1.5.1 fixes this issue. But it’s still unclear why.
