AttributeError: 'NoneType' object has no attribute 'from_pretrained'
This code was working yesterday but doesn’t work today:

from transformers import AutoTokenizer
AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-fr")
Right, I was using

Thanks, `pip install sentencepiece` fixed the issue!

It looks like the tokenizer previously output torch tensors and now outputs lists. Is this intended? It breaks existing code.
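What the comment describes, as a minimal sketch (assuming `sentencepiece` is installed so the Marian tokenizer can load; the example sentence is arbitrary):

```python
from transformers import AutoTokenizer

# Loading this tokenizer requires the sentencepiece package.
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-fr")

encoded = tokenizer("Hello, world!")
# On current transformers versions the encoding fields are plain Python lists,
# not framework-specific tensors.
print(type(encoded["input_ids"]))  # <class 'list'>
```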
Yes, this was a bug. Tokenizers are framework-agnostic and should not output a specific framework’s tensor. The implementation of the Marian tokenizer was not respecting the API in that regard.
Tokenizers can still output torch tensors; you just need to specify that you want them:
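For example, the standard `return_tensors="pt"` argument asks the tokenizer for PyTorch tensors (a minimal sketch; the sentence is arbitrary):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-fr")

# return_tensors="pt" requests PyTorch tensors; "tf" and "np" are also accepted.
encoded = tokenizer("Hello, world!", return_tensors="pt")
print(type(encoded["input_ids"]))  # <class 'torch.Tensor'>
```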
I guess in your situation it has to do with `prepare_seq2seq_batch`:
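A hedged sketch of what that looks like, assuming a transformers version from around the time of this issue that still provides `prepare_seq2seq_batch` (it was later deprecated in favor of calling the tokenizer directly):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-fr")

# On those versions, prepare_seq2seq_batch builds a source (and optionally
# target) batch; without return_tensors it yields lists, with
# return_tensors="pt" it yields PyTorch tensors.
batch = tokenizer.prepare_seq2seq_batch(
    src_texts=["Hello, world!"],
    return_tensors="pt",
)
print(type(batch["input_ids"]))  # <class 'torch.Tensor'>
```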