[Grid] You must call wandb.init() before wandb.log()
See original GitHub issue

🐛 Bug
I'm reopening #1356 because I'm getting this error running my code on grid.ai.
I get the following error:
wandb.errors.error.Error: You must call wandb.init() before wandb.log()
Please reproduce using the BoringModel
Not possible, since Colab has only one GPU, unlike grid.ai.
To Reproduce
On grid.ai or any multi-GPU machine, create a trainer with a WandbLogger and do not specify an accelerator. Run with gpus=-1 and you hit this error.
Despite https://github.com/PyTorchLightning/pytorch-lightning/pull/2029, the default is still ddp_spawn, which triggers this error on grid.ai:
/opt/conda/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py:52: UserWarning: You requested multiple GPUs but did not specify a backend, e.g. `Trainer(accelerator="dp"|"ddp"|"ddp2")`. Setting `accelerator="ddp_spawn"` for you.
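The mechanics behind the error: ddp_spawn launches each worker as a fresh Python process, so the run state created by wandb.init() in the main process is not inherited, and the first wandb.log() call in a worker trips the guard. A minimal stdlib-only sketch of that situation (no wandb or Lightning involved; the child code below merely imitates wandb's guard, it is not wandb's actual implementation):

```python
import subprocess
import sys

# In the parent we pretend wandb.init() ran and created a run object.
RUN = "dummy-run"

# ddp_spawn starts workers as brand-new interpreters; nothing from the
# parent's module state carries over, so the worker's "run" is unset.
# The child below imitates the guard inside wandb.log().
CHILD_CODE = (
    "run = None\n"  # no wandb.init() ever happened in this process
    "if run is None:\n"
    "    print('You must call wandb.init() before wandb.log()')\n"
)

def worker_error():
    """Run the imitation worker in a fresh process and return its output."""
    out = subprocess.run(
        [sys.executable, "-c", CHILD_CODE],
        capture_output=True, text=True,
    )
    return out.stdout.strip()

print(worker_error())  # the fresh worker process never sees the parent's RUN
```

This is why explicitly calling wandb.init() in the main process alone does not help under spawn-based launchers: each spawned worker starts from a clean slate.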
Workarounds:
- In main, run
import wandb
wandb.init(project...)
(This seems redundant and potentially dangerous/foot-gunny, since you are already passing a WandbLogger to the trainer.)
- Make sure the trainer has accelerator="ddp" defined.
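Putting the two workarounds together, the training script looks roughly like this. This is an illustrative sketch only: MyModel and the project name are placeholders, and it assumes pytorch-lightning 1.2.x, an installed wandb package, and a logged-in wandb account, so it is not something that can run standalone.

```python
import wandb
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import WandbLogger

# Workaround 1: initialise wandb explicitly in the main process,
# even though WandbLogger would normally do this for you.
wandb.init(project="my-project")  # placeholder project name

logger = WandbLogger(project="my-project")

# Workaround 2: pin the accelerator to "ddp" so Lightning does not
# fall back to "ddp_spawn" on a multi-GPU machine.
trainer = Trainer(gpus=-1, accelerator="ddp", logger=logger)

trainer.fit(MyModel())  # MyModel is a placeholder LightningModule
```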
Expected behavior
The wandb logger works when the trainer is given a WandbLogger and gpus=-1 with no accelerator defined, and no duplicate wandb.init() call is needed.
Environment
grid.ai
- CUDA:
  - GPU:
    - Tesla M60
    - Tesla M60
  - available: True
  - version: 10.2
- Packages:
  - numpy: 1.20.2
  - pyTorch_debug: False
  - pyTorch_version: 1.8.1+cu102
  - pytorch-lightning: 1.2.7
  - tqdm: 4.60.0
- System:
  - OS: Linux
  - architecture: 64bit
  - processor: x86_64
  - python: 3.7.10
  - version: #1 SMP Tue Mar 16 04:56:19 UTC 2021
Issue Analytics
- State: closed
- Created 2 years ago
- Reactions: 1
- Comments: 6 (6 by maintainers)
Thanks. I tried this and can see where the problem is. Do the following: replace
wandb.log({"examples": ...})
with
self.logger.experiment.log(...)
This should work :) I can see the audio samples in the wandb run online. It doesn't play, but I think that's because this dummy sample is too short.
Furthermore, we currently don't support images, audio, etc. in self.log(), since the API depends on the specific logger. There are efforts to standardize this in #6720. So for these custom objects, you have to call self.logger.experiment.log() (which is basically the same as wandb.log()).
EDIT: I tried your code with DDP as well. The fix above applies.
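Applying that fix inside a LightningModule would look roughly like the sketch below. This is illustrative only: the class, the forward call, and the 16 kHz sample rate are placeholder assumptions, and it requires pytorch-lightning and wandb at runtime.

```python
import wandb
from pytorch_lightning import LightningModule

class AudioModel(LightningModule):  # illustrative skeleton
    def validation_step(self, batch, batch_idx):
        audio = self(batch)  # placeholder forward pass producing a waveform
        # self.log() only handles scalar metrics, so send rich media
        # straight to the underlying wandb run via the logger's
        # experiment handle (equivalent to calling wandb.log directly):
        self.logger.experiment.log(
            {"examples": [wandb.Audio(audio.cpu().numpy(), sample_rate=16000)]}
        )
```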
I see. Thanks.
I'm not exactly sure how to make it more clear, but the headline "Manual Logging" is maybe a bit off-base for me. "Manual Logging to a Supported or Custom Logger"?