OSError: Can't load config for 'bert-base-uncased'
Environment info
It happens on my local machine, in Colab, and for my colleagues as well.
- transformers version:
- Platform: Windows, Colab
- Python version: 3.7
- PyTorch version (GPU?): 1.8.1 (GPU yes)
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
Who can help
@LysandreJik, as it has to do with 'bert-base-uncased'.
Information
Hi, I'm suddenly getting this error as of this afternoon. Everything was fine for days before that. It happens on my local machine, in Colab, and for my colleagues as well. I can access the file in a browser at https://huggingface.co/bert-base-uncased/resolve/main/config.json with no problem. By the way, I'm in Singapore. Any urgent help would be appreciated, because I'm rushing a project and am stuck on this.
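For reference, the failure is reproducible with nothing more than the standard from_pretrained calls (a minimal sketch, no project-specific code):

```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

# Each of these fails as soon as the config download returns the 403 shown below
config = AutoConfig.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
```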
Thanks

```
403 Client Error: Forbidden for url: https://huggingface.co/bert-base-uncased/resolve/main/config.json

HTTPError                                 Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
    505                 use_auth_token=use_auth_token,
--> 506                 user_agent=user_agent,
    507             )

6 frames

HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/bert-base-uncased/resolve/main/config.json

During handling of the above exception, another exception occurred:

OSError                                   Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
    516                 f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a {CONFIG_NAME} file\n\n"
    517             )
--> 518             raise EnvironmentError(msg)
    519
    520         except json.JSONDecodeError:

OSError: Can't load config for 'bert-base-uncased'. Make sure that:

- 'bert-base-uncased' is a correct model identifier listed on 'https://huggingface.co/models'

- or 'bert-base-uncased' is the correct path to a directory containing a config.json file
```
It's still not working online, but I managed to get it working locally:
```
git clone https://huggingface.co/bert-base-uncased
```

```python
# Requires adapter-transformers (the AdapterHub fork), which installs under the transformers package name
from transformers import AutoModelWithHeads, AutoTokenizer

BERT_LOCAL_PATH = "./bert-base-uncased"  # directory created by the git clone above

# model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained(BERT_LOCAL_PATH, local_files_only=True)

# tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained(BERT_LOCAL_PATH, local_files_only=True)

# localpath and config are the local adapter directory and adapter config used in my project
adapter_name = model.load_adapter(localpath, config=config, model_name=BERT_LOCAL_PATH)
```
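As a side note, if your transformers/adapter-transformers version supports it, the offline environment variable does roughly the same thing as local_files_only=True globally. A minimal sketch, assuming a release that honors TRANSFORMERS_OFFLINE (the path is just a placeholder for the cloned directory):

```python
import os

# TRANSFORMERS_OFFLINE must be set before transformers is imported;
# it makes the library skip all calls to huggingface.co
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import AutoTokenizer

# Placeholder path: point this at the directory created by the git clone above
tokenizer = AutoTokenizer.from_pretrained("./bert-base-uncased")
```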
With additional testing, I've found that this issue only occurs with adapter-transformers, the AdapterHub.ml modified version of the transformers module. With the stock HuggingFace transformers package, we can pull pretrained weights without issue.
Using adapter-transformers, this is now working again from Google Colab, but it is still failing locally and from servers running in AWS. Interestingly, with adapter-transformers I get a 403 even if I try to load a nonexistent model (e.g. fake-model-that-should-fail). I would expect that to fail with a 401, since there is no corresponding config.json on huggingface.co. The fact that it fails with a 403 suggests that something in front of the web host is rejecting the request before the web host has a chance to respond with a not-found error.
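A quick way to see what the Hub actually returns is to hit the resolve URL directly with plain requests, once with the default User-Agent and once with a library-style one (the header value below is only an illustrative guess, not the exact string adapter-transformers sends):

```python
import requests

URL = "https://huggingface.co/bert-base-uncased/resolve/main/config.json"

# Request with requests' default User-Agent (roughly what a browser-like client sees)
print("default UA:", requests.head(URL, allow_redirects=True).status_code)

# Same request with a library-style User-Agent; the value is a hypothetical example,
# meant only to test whether something upstream filters on this header
headers = {"user-agent": "adapter-transformers/2.0.0; python/3.7"}
print("custom UA: ", requests.head(URL, allow_redirects=True, headers=headers).status_code)
```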