OSError: Error no file named ['pytorch_model.bin', 'tf_model.h5', 'model.ckpt.index', 'flax_model.msgpack']


My Embeddings were indexed using txtai==3.1.0

embeddings = Embeddings({"method": "transformers", "path": "clip-ViT-B-32", "modelhub": False})
embeddings.index(images())

But now, using txtai==3.2.0, I get the following error after initialising Embeddings for a Streamlit application, following the Image.py example:

embeddings = Embeddings({"path": os.path.join(os.path.dirname(os.path.realpath(__file__)), 'clip-ViT-B-32') , "method": "transformers", "modelhub": True})

streamlit_1  | Traceback (most recent call last):
streamlit_1  |   File "/usr/local/lib/python3.7/site-packages/streamlit/script_runner.py", line 350, in _run_script
streamlit_1  |     exec(code, module.__dict__)
streamlit_1  |   File "/app/app.py", line 129, in <module>
streamlit_1  |     app()
streamlit_1  |   File "/app/app.py", line 102, in app
streamlit_1  |     embeddings = build(embeddings_path)
streamlit_1  |   File "/usr/local/lib/python3.7/site-packages/streamlit/caching.py", line 543, in wrapped_func
streamlit_1  |     return get_or_create_cached_value()
streamlit_1  |   File "/usr/local/lib/python3.7/site-packages/streamlit/caching.py", line 525, in get_or_create_cached_value
streamlit_1  |     return_value = func(*args, **kwargs)
streamlit_1  |   File "/app/app.py", line 68, in build
streamlit_1  |     embeddings = Embeddings({"path": clippath, "method": "transformers", "modelhub": True})
streamlit_1  |   File "/usr/local/lib/python3.7/site-packages/txtai/embeddings/base.py", line 53, in __init__
streamlit_1  |     self.model = self.loadVectors() if self.config else None
streamlit_1  |   File "/usr/local/lib/python3.7/site-packages/txtai/embeddings/base.py", line 385, in loadVectors
streamlit_1  |     return VectorsFactory.create(self.config, self.scoring)
streamlit_1  |   File "/usr/local/lib/python3.7/site-packages/txtai/vectors/factory.py", line 41, in create
streamlit_1  |     return TransformersVectors(config, scoring)
streamlit_1  |   File "/usr/local/lib/python3.7/site-packages/txtai/vectors/base.py", line 24, in __init__
streamlit_1  |     self.model = self.load(config["path"])
streamlit_1  |   File "/usr/local/lib/python3.7/site-packages/txtai/vectors/transformers.py", line 36, in load
streamlit_1  |     return MeanPooling(path, device=deviceid)
streamlit_1  |   File "/usr/local/lib/python3.7/site-packages/txtai/vectors/pooling.py", line 33, in __init__
streamlit_1  |     self.model = AutoModel.from_pretrained(path)
streamlit_1  |   File "/usr/local/lib/python3.7/site-packages/transformers/models/auto/auto_factory.py", line 384, in from_pretrained
streamlit_1  |     return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
streamlit_1  |   File "/usr/local/lib/python3.7/site-packages/transformers/modeling_utils.py", line 1223, in from_pretrained
streamlit_1  |     f"Error no file named {[WEIGHTS_NAME, TF2_WEIGHTS_NAME, TF_WEIGHTS_NAME + '.index', FLAX_WEIGHTS_NAME]} found in "
streamlit_1  | OSError: Error no file named ['pytorch_model.bin', 'tf_model.h5', 'model.ckpt.index', 'flax_model.msgpack'] found in directory /app/clip-ViT-B-32 or `from_tf` and `from_flax` set to False.

I am copying the clip-ViT-B-32 directory into the container in the Dockerfile to avoid Streamlit downloading it. I’m not sure if the path is correct.
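
For reference, AutoModel.from_pretrained only accepts a local directory that contains a config.json plus one of the weight files named in the error, so it may be worth checking what actually landed in the container (a minimal sketch; the /app/clip-ViT-B-32 path is taken from the traceback below):

import os

# Path copied into the container (taken from the traceback)
model_dir = "/app/clip-ViT-B-32"

# AutoModel.from_pretrained(model_dir) needs config.json plus one of the
# weight files listed in the error message
weight_files = {"pytorch_model.bin", "tf_model.h5", "model.ckpt.index", "flax_model.msgpack"}

files = set(os.listdir(model_dir))
print(sorted(files))
print("config.json present:", "config.json" in files)
print("weights present:", bool(files & weight_files))

If the directory only holds sentence-transformers module subfolders (for example 0_CLIPModel/) rather than a top-level pytorch_model.bin, the model is a sentence-transformers package rather than a plain transformers checkpoint, which ties in with the fix suggested further down.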

Tested locally with docker-compose up --build streamlit, using streamlit-cdk-fargate as a guide. I have successfully deployed the Streamlit app this way; all that has changed now is the new embeddings and possibly updated Python requirements.

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 5 (3 by maintainers)

Top GitHub Comments

1 reaction
davidmezzetti commented, Aug 24, 2021

This should patch your index:

import pickle

# Load the config stored with the index and switch the vector method
with open("/content/my-embeddings/config", "rb") as f:
  config = pickle.load(f)
  config["method"] = "sentence-transformers"

# Write the updated config back
with open("/content/my-embeddings/config", "wb") as f:
  pickle.dump(config, f, protocol=4)
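
With method switched to sentence-transformers in the stored config, reloading the index should pick the change up (a minimal sketch, assuming the same /content/my-embeddings index path as above):

from txtai.embeddings import Embeddings

# Reload the patched index; txtai reads the stored config, which now routes
# clip-ViT-B-32 through the sentence-transformers loader
embeddings = Embeddings()
embeddings.load("/content/my-embeddings")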

1 reaction
edanweis commented, Aug 24, 2021

@davidmezzetti This problem has come up again when attempting to load embeddings created with txtai 3.0.0 / 3.1.0 into txtai 3.2.0 (installed with txtai[similarity]) using embeddings.load("3.0.0-embeddings"), e.g.:

# txtai==3.2.0
embeddings = Embeddings({"method": "sentence-transformers", "path": "clip-ViT-B-32"})
embeddings.load("3.0.0-embeddings")
  warnings.warn(
404 Client Error: Not Found for url: https://huggingface.co/clip-ViT-B-32/resolve/main/config.json
Traceback (most recent call last):
  File "/home/ec2-user/miniconda3/envs/txtai/lib/python3.9/site-packages/transformers/configuration_utils.py", line 512, in get_config_dict
    resolved_config_file = cached_path(
  File "/home/ec2-user/miniconda3/envs/txtai/lib/python3.9/site-packages/transformers/file_utils.py", line 1370, in cached_path
    output_path = get_from_cache(
  File "/home/ec2-user/miniconda3/envs/txtai/lib/python3.9/site-packages/transformers/file_utils.py", line 1541, in get_from_cache
    r.raise_for_status()
  File "/home/ec2-user/miniconda3/envs/txtai/lib/python3.9/site-packages/requests/models.py", line 953, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/clip-ViT-B-32/resolve/main/config.json

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ec2-user/23-08-21.py", line 167, in <module>
    main()
  File "/home/ec2-user/23-08-21.py", line 139, in main
    query(img_emb='precedent-images-textai-embedding', query_emb='clip-ViT-B-32-multilingual-v1')
  File "/home/ec2-user/23-08-21.py", line 116, in query
    embeddings.load(f"./{img_emb}")
  File "/home/ec2-user/miniconda3/envs/txtai/lib/python3.9/site-packages/txtai/embeddings/base.py", line 329, in load
    self.model = self.loadVectors()
  File "/home/ec2-user/miniconda3/envs/txtai/lib/python3.9/site-packages/txtai/embeddings/base.py", line 385, in loadVectors
    return VectorsFactory.create(self.config, self.scoring)
  File "/home/ec2-user/miniconda3/envs/txtai/lib/python3.9/site-packages/txtai/vectors/factory.py", line 41, in create
    return TransformersVectors(config, scoring)
  File "/home/ec2-user/miniconda3/envs/txtai/lib/python3.9/site-packages/txtai/vectors/base.py", line 24, in __init__
    self.model = self.load(config["path"])
  File "/home/ec2-user/miniconda3/envs/txtai/lib/python3.9/site-packages/txtai/vectors/transformers.py", line 36, in load
    return MeanPooling(path, device=deviceid)
  File "/home/ec2-user/miniconda3/envs/txtai/lib/python3.9/site-packages/txtai/vectors/pooling.py", line 33, in __init__
    self.model = AutoModel.from_pretrained(path)
  File "/home/ec2-user/miniconda3/envs/txtai/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 378, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "/home/ec2-user/miniconda3/envs/txtai/lib/python3.9/site-packages/transformers/models/auto/configuration_auto.py", line 450, in from_pretrained
    config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/home/ec2-user/miniconda3/envs/txtai/lib/python3.9/site-packages/transformers/configuration_utils.py", line 532, in get_config_dict
    raise EnvironmentError(msg)
OSError: Can't load config for 'clip-ViT-B-32'. Make sure that:

- 'clip-ViT-B-32' is a correct model identifier listed on 'https://huggingface.co/models'

- or 'clip-ViT-B-32' is the correct path to a directory containing a config.json file
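
The 404 for https://huggingface.co/clip-ViT-B-32/resolve/main/config.json suggests the config stored inside the old index still has method set to "transformers", so the bare clip-ViT-B-32 path is handed to AutoConfig as a Hub model id; on load(), the stored config replaces the one passed to the constructor. The same pickle patch as above, applied to this index, should work around it (a sketch, assuming the ./precedent-images-textai-embedding path from the traceback):

import pickle

# Config file stored inside the 3.0.x-era index (path taken from the traceback)
path = "./precedent-images-textai-embedding/config"

with open(path, "rb") as f:
    config = pickle.load(f)

# Switch the vector method, as in the patch above
config["method"] = "sentence-transformers"

with open(path, "wb") as f:
    pickle.dump(config, f, protocol=4)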