"ValueError: Please make sure to properly initialize your"... when setting LOGLEVEL=DEBUG
System Info
- `Accelerate` version: 0.14.0
- Platform: Linux-5.4.0-124-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.13
- Numpy version: 1.21.5
- PyTorch version (GPU?): 1.13.0+cu117 (True)
- `Accelerate` default config:
Not found
```
Traceback (most recent call last):
  File "/workspace/diffusers/examples/dreambooth/train_dreambooth.py", line 689, in <module>
    main(args)
  File "/workspace/diffusers/examples/dreambooth/train_dreambooth.py", line 346, in main
    accelerator = Accelerator(
  File "/opt/conda/envs/dreambooth/lib/python3.10/site-packages/accelerate/accelerator.py", line 203, in __init__
    trackers = filter_trackers(log_with, self.logging_dir)
  File "/opt/conda/envs/dreambooth/lib/python3.10/site-packages/accelerate/tracking.py", line 580, in filter_trackers
    logger.debug(f"{log_with}")
  File "/opt/conda/envs/dreambooth/lib/python3.10/logging/__init__.py", line 1835, in debug
    self.log(DEBUG, msg, *args, **kwargs)
  File "/opt/conda/envs/dreambooth/lib/python3.10/site-packages/accelerate/logging.py", line 47, in log
    if self.isEnabledFor(level) and self._should_log(main_process_only):
  File "/opt/conda/envs/dreambooth/lib/python3.10/site-packages/accelerate/logging.py", line 32, in _should_log
    state = AcceleratorState()
  File "/opt/conda/envs/dreambooth/lib/python3.10/site-packages/accelerate/state.py", line 78, in __init__
    raise ValueError(
ValueError: Please make sure to properly initialize your accelerator via `accelerator = Accelerator()` before using any functionality from the `accelerate` library.
```
Information
- The official example scripts
- My own modified scripts
Tasks
- One of the scripts in the examples/ folder of Accelerate, or an officially supported no_trainer script in the examples folder of the transformers repo (such as run_no_trainer_glue.py)
- My own task or dataset (give details below)
Reproduction
In the train_dreambooth.py example from the diffusers repo, just before the logger = get_logger(__name__) line, patch the following code into the file and set the LOGLEVEL env var to DEBUG:

```python
import logging
import os

LOGLEVEL = os.environ.get('LOGLEVEL', 'WARNING').upper()
logging.basicConfig(level=LOGLEVEL)
```

_Originally posted by @0xdevalias in https://github.com/huggingface/accelerate/issues/834#issuecomment-1309708720_
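As a quick illustration of why this patch affects accelerate at all (this snippet is mine, not part of the patch): logging.basicConfig configures the root logger, so any library logger that has no explicit level of its own inherits the DEBUG threshold through propagation.

```python
# Illustration only (not part of the patch above): basicConfig sets the level
# on the *root* logger, and library loggers with no explicit level inherit it.
# "accelerate.tracking" is used here purely as an example logger name.
import logging

logging.basicConfig(level="DEBUG")

lib_logger = logging.getLogger("accelerate.tracking")
print(lib_logger.getEffectiveLevel() == logging.DEBUG)  # True: inherited from root
print(lib_logger.isEnabledFor(logging.DEBUG))           # True as well
```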
Then run with something like:
LOGLEVEL=DEBUG conda run -n dreambooth --live-stream \
accelerate launch train_dreambooth.py \
--pretrained_model_name_or_path="$MODEL_NAME_OR_PATH" \
--instance_data_dir="$INSTANCE_DIR" \
--class_data_dir="$CLASS_REGULARISATION_IMAGE_DIR" \
--output_dir="$OUTPUT_DIR" \
--with_prior_preservation --prior_loss_weight=1.0 \
--instance_prompt="$INSTANCE_PROMPT" \
--class_prompt="$CLASS_REGULARISATION_PROMPT" \
--resolution=512 \
--train_batch_size=1 \
--gradient_accumulation_steps=2 --gradient_checkpointing \
--use_8bit_adam \
--learning_rate=5e-6 \
--lr_scheduler="constant" \
--lr_warmup_steps=0 \
--num_class_images="$NUM_CLASS_REGULARISATION_IMAGES_TO_USE" \
--max_train_steps="$MAX_TRAIN_STEPS" \
--seed=$SEED
The error is raised by code inside accelerate while the Accelerator initialiser is being called here:
- https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth.py#L332-L337

The code that seems to trigger the error is the logger.debug(f"{log_with}") call in filter_trackers (accelerate/tracking.py, line 580 in the traceback above).
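If it helps, the chain in the traceback seems to reduce to a minimal reproduction like the following (a sketch on my side, assuming accelerate 0.14.0): once the root logger is at DEBUG, a debug call through accelerate's logging adapter constructs AcceleratorState() before any Accelerator() exists.

```python
# Minimal sketch of the failure path shown in the traceback (assumes accelerate 0.14.0).
import logging

logging.basicConfig(level="DEBUG")  # same effect as the LOGLEVEL=DEBUG patch above

from accelerate.logging import get_logger

logger = get_logger(__name__)
# isEnabledFor(DEBUG) is now True, so the adapter's log() calls _should_log(),
# which constructs AcceleratorState() before any Accelerator() exists and
# should raise the ValueError above.
logger.debug("this should reproduce the crash")
```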
Expected behavior
The accelerate library wouldn’t crash, and would correctly output DEBUG level logs.
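For anyone hitting this in the meantime, a possible workaround (my assumption, not something confirmed by the maintainers) is to scope the DEBUG level to the training script's own logger rather than the root logger, so accelerate's internal loggers keep their default WARNING level and never reach the AcceleratorState() check; alternatively, only call logging.basicConfig(level=...) after Accelerator() has been constructed.

```python
# Possible workaround sketch (untested assumption, not from the maintainers):
# raise the level only on this script's logger instead of the root logger,
# so accelerate's internal loggers keep inheriting WARNING.
import logging
import os

LOGLEVEL = os.environ.get("LOGLEVEL", "WARNING").upper()

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s"))

script_logger = logging.getLogger(__name__)  # only this module's logger
script_logger.setLevel(LOGLEVEL)
script_logger.addHandler(handler)
```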
@0xdevalias would you be willing to try this again, building accelerate from the latest source and setting the environment variable LOG_LEVEL?
Actually, it looks like the pip repo hadn't updated yet, so I had to build from source.