OSError: Error no file named ['pytorch_model.bin', 'tf_model.h5', 'model.ckpt.index', 'flax_model.msgpack']
My Embeddings were indexed using txtai==3.1.0:
embeddings = Embeddings({"method": "transformers", "path": "clip-ViT-B-32", "modelhub": False})
embeddings.index(images())
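For reference, images() here would be a generator yielding (id, data, tags) tuples for txtai to index. A minimal sketch of such a helper (the directory and file pattern are placeholders, not taken from the original):

import glob
from PIL import Image

def images(directory="photos"):
    # Yield (id, data, tags) tuples; txtai indexes the PIL image data
    for path in glob.glob(directory + "/*.jpg"):
        yield (path, Image.open(path), None)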
But now, using txtai==3.2.0, I get the following error after initialising Embeddings for a Streamlit application, following the Image.py example:
embeddings = Embeddings({"path": os.path.join(os.path.dirname(os.path.realpath(__file__)), 'clip-ViT-B-32') , "method": "transformers", "modelhub": True})
streamlit_1 | Traceback (most recent call last):
streamlit_1 | File "/usr/local/lib/python3.7/site-packages/streamlit/script_runner.py", line 350, in _run_script
streamlit_1 | exec(code, module.__dict__)
streamlit_1 | File "/app/app.py", line 129, in <module>
streamlit_1 | app()
streamlit_1 | File "/app/app.py", line 102, in app
streamlit_1 | embeddings = build(embeddings_path)
streamlit_1 | File "/usr/local/lib/python3.7/site-packages/streamlit/caching.py", line 543, in wrapped_func
streamlit_1 | return get_or_create_cached_value()
streamlit_1 | File "/usr/local/lib/python3.7/site-packages/streamlit/caching.py", line 525, in get_or_create_cached_value
streamlit_1 | return_value = func(*args, **kwargs)
streamlit_1 | File "/app/app.py", line 68, in build
streamlit_1 | embeddings = Embeddings({"path": clippath, "method": "transformers", "modelhub": True})
streamlit_1 | File "/usr/local/lib/python3.7/site-packages/txtai/embeddings/base.py", line 53, in __init__
streamlit_1 | self.model = self.loadVectors() if self.config else None
streamlit_1 | File "/usr/local/lib/python3.7/site-packages/txtai/embeddings/base.py", line 385, in loadVectors
streamlit_1 | return VectorsFactory.create(self.config, self.scoring)
streamlit_1 | File "/usr/local/lib/python3.7/site-packages/txtai/vectors/factory.py", line 41, in create
streamlit_1 | return TransformersVectors(config, scoring)
streamlit_1 | File "/usr/local/lib/python3.7/site-packages/txtai/vectors/base.py", line 24, in __init__
streamlit_1 | self.model = self.load(config["path"])
streamlit_1 | File "/usr/local/lib/python3.7/site-packages/txtai/vectors/transformers.py", line 36, in load
streamlit_1 | return MeanPooling(path, device=deviceid)
streamlit_1 | File "/usr/local/lib/python3.7/site-packages/txtai/vectors/pooling.py", line 33, in __init__
streamlit_1 | self.model = AutoModel.from_pretrained(path)
streamlit_1 | File "/usr/local/lib/python3.7/site-packages/transformers/models/auto/auto_factory.py", line 384, in from_pretrained
streamlit_1 | return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
streamlit_1 | File "/usr/local/lib/python3.7/site-packages/transformers/modeling_utils.py", line 1223, in from_pretrained
streamlit_1 | f"Error no file named {[WEIGHTS_NAME, TF2_WEIGHTS_NAME, TF_WEIGHTS_NAME + '.index', FLAX_WEIGHTS_NAME]} found in "
streamlit_1 | OSError: Error no file named ['pytorch_model.bin', 'tf_model.h5', 'model.ckpt.index', 'flax_model.msgpack'] found in directory /app/clip-ViT-B-32 or `from_tf` and `from_flax` set to False.
I am copying the clip-ViT-B-32 directory into the container in the Dockerfile to avoid Streamlit downloading it. I’m not sure if the path is correct.
Tested locally with docker-compose up --build streamlit, using streamlit-cdk-fargate as a guide. I have successfully deployed the Streamlit app this way before; all that has changed now is the new embeddings and possibly updated Python requirements.
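For what it's worth, the error comes from AutoModel.from_pretrained, which expects a pytorch_model.bin (or TF/Flax equivalent) directly inside /app/clip-ViT-B-32. The sentence-transformers clip-ViT-B-32 package keeps its weights in a subfolder rather than at the top level (an assumption based on how that model is distributed), so a quick sanity check of what actually got copied into the container might look like this:

import os

# Hypothetical diagnostic: list the model directory inside the container and
# check whether the transformers weight file is at the top level
clippath = os.path.join(os.path.dirname(os.path.realpath(__file__)), "clip-ViT-B-32")
print(sorted(os.listdir(clippath)))
print(os.path.exists(os.path.join(clippath, "pytorch_model.bin")))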
This should patch your index:
@davidmezzetti This problem has come up again when attempting to load embeddings created using txtai 3.0.0 / txtai 3.1.0 into txtai 3.2.0 installed with txtai[similarity], using embedding.load("3.0.0-embeddings"). E.g.:
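(The snippet that followed was not captured here; based on the call named in the comment, it was presumably along these lines.)

from txtai.embeddings import Embeddings

# Load an index built with txtai 3.0.0/3.1.0 into txtai 3.2.0
embeddings = Embeddings()
embeddings.load("3.0.0-embeddings")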