AttributeError: 'GPT2LMHeadModel' object has no attribute 'generate'
🐛 Bug
The example script `run_generation.py` is broken with the error message `AttributeError: 'GPT2LMHeadModel' object has no attribute 'generate'`.
To Reproduce
Steps to reproduce the behavior:
- In a terminal, `cd` to `transformers/examples` and then run `python run_generation.py --model_type=gpt2 --model_name_or_path=gpt2`
- After the model binary is downloaded to cache, enter anything when prompted "Model prompt >>>"
- You will then see the error:
Traceback (most recent call last):
File "run_generation.py", line 236, in <module>
main()
File "run_generation.py", line 216, in main
output_sequences = model.generate(
File "C:\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 585, in __getattr__
type(self).__name__, name))
AttributeError: 'GPT2LMHeadModel' object has no attribute 'generate'
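For reference, the failure is not specific to the script: any call to `generate()` on a model loaded with transformers 2.2.2 fails the same way, since the method did not exist in that release. A minimal sketch (illustrative only, not taken from `run_generation.py`):

```python
# Illustrative sketch, not part of the original report: reproduces the same
# AttributeError on transformers 2.2.2, where PreTrainedModel has no generate().
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = torch.tensor([tokenizer.encode("Model prompt >>> hello")])

# Raises AttributeError on 2.2.2; works once a release that ships generate() is installed.
output = model.generate(input_ids=input_ids, max_length=20)
print(tokenizer.decode(output[0].tolist()))
```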
Expected behavior
Environment
- OS: Windows 10
- Python version: 3.7.3
- PyTorch version: 1.3.1
- PyTorch Transformers version (or branch): 2.2.2
- Using GPU? N/A
- Distributed or parallel setup? N/A
- Any other relevant information:
I'm running the latest version of `run_generation.py`. Here is the permanent link: https://github.com/huggingface/transformers/blob/ce50305e5b8c8748b81b0c8f5539a337b6a995b9/examples/run_generation.py
Additional context
I have found the reason.

It turns out that the `generate()` method of the `PreTrainedModel` class is newly added, even newer than the latest release (2.3.0). Quite understandable, since this library is iterating very fast.

So to make `run_generation.py` work, install this library from the repository root like this: `pip install -e .` (don't forget the dot), and then run `run_generation.py` again. I'll leave this ticket open until the `generate()` method is incorporated into the latest release.

@jsh9's solution worked for me! Also, if you want to avoid the manual steps, you can just `pip install` directly from the `master` branch.
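As a quick check after reinstalling (a minimal sketch, assuming a source install of the current `master`; not part of the original comments), you can confirm that the new method is actually available before re-running the script:

```python
# Sketch: verify the source install exposes generate() before re-running run_generation.py.
import transformers
from transformers import GPT2LMHeadModel

print(transformers.__version__)  # should report a version/dev build newer than 2.3.0

model = GPT2LMHeadModel.from_pretrained("gpt2")
print(hasattr(model, "generate"))  # True once PreTrainedModel ships generate()
```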