[bug] Conan remove --locks doesn't work
Environment Details (include every applicable attribute)
- Windows 10
- Mingw64 gcc 9.2
- Conan 1.27.0
- Python version: 3.7
Steps to reproduce (Include if Applicable)
I tried to install ZMQ (https://conan.io/center/zmq/4.3.2/) and entered the password wrong (also, plain-text visibility on the password…?? what…?). Conan apparently couldn't handle it, and after an hour of nothing I hit Ctrl+C, tried to quit, and nothing worked. I manually closed the Git Bash window, tried again, and was met with:
$ conan install .. -s compiler=gcc
Configuration:
[settings]
arch=x86_64
arch_build=x86_64
build_type=Release
compiler=gcc
os=Windows
os_build=Windows
[options]
[build_requires]
[env]
zmq/4.3.2@bincrafters/stable is locked by another concurrent conan process, wait...
If not the case, quit, and do 'conan remove --locks'
You pressed Ctrl+C!
ERROR: Exiting with code: 3
So I ran:
$ conan remove --locks
Cache locks removed
and tried again. This time I got:
$ conan install .. -s compiler=gcc
Configuration:
[settings]
arch=x86_64
arch_build=x86_64
build_type=Release
compiler=gcc
os=Windows
os_build=Windows
[options]
[build_requires]
[env]
zmq/4.3.2@bincrafters/stable is locked by another concurrent conan process, wait...
If not the case, quit, and do 'conan remove --locks'
You pressed Ctrl+C!
ERROR: Exiting with code: 3
the exact same thing.
I checked my .conan folder and, lo and behold, stable.count.lock and stable.count are both still there.
stable.count contains a -1; I don't know if that means anything. stable.count.lock is empty.
So apparently removing the lock didn't remove the lock.
So after manually removing the lock files, I was able to restart the installation. Unfortunately the install still didn't work, even after entering the correct credentials. Conan just sits there and gives no feedback or indication that anything is happening. I suspect this has something to do with proxy permissions, but without any feedback it is impossible to tell.
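For reference, the manual cleanup just meant deleting the two stale files by hand. A minimal sketch in Python, assuming the default cache location under ~/.conan (adjust if CONAN_USER_HOME is set):

# Delete the stale lock files left behind for zmq/4.3.2@bincrafters/stable.
# The cache path below is an assumption based on the default ~/.conan layout.
import os

user_dir = os.path.expanduser("~/.conan/data/zmq/4.3.2/bincrafters")
for name in ("stable.count", "stable.count.lock"):
    path = os.path.join(user_dir, name)
    if os.path.exists(path):
        os.remove(path)
        print("removed", path)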
I had a similar problem and looked into this a bit more. It seems locks are only cleaned if there is a sibling directory with the same name prefix. In @blackliner's case, there was no directory .conan/data/ceres-solver/1.13.0/_/_, so list_folder_subdirs doesn't return it, and its sibling lock files aren't cleaned up: https://github.com/conan-io/conan/blob/687eb9fd97aace7e3159de46071401d8209c68e2/conans/client/cache/cache.py#L265-L269
There is a way in which this situation can arise: issuing conan install some/version@my/channel and being asked for a password. If you abort with Ctrl+C at that point, some/version/my/channel.count{,.lock} are created, but some/version/my/channel/ is not.
@memsharded Is it safe to change the lock cleanup code to go only 3 levels deep and indiscriminately remove *.count and *.count.lock in all of those directories (roughly as sketched below)? If so, I can send a pull request.
This is good feedback. We started to try to use sqlite for multi-process sync, but didn't go deeper. We might want to investigate and try this idea further.
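A minimal sketch of the cleanup proposed above, assuming the default data layout (name/version/user/channel) under the cache folder; the function name is just for illustration:

# Walk exactly three levels below <cache>/data (name/version/user) and remove
# every *.count / *.count.lock found there, regardless of whether the matching
# channel directory exists.
import glob
import os

def clean_stale_locks(cache_folder):
    user_dirs = glob.glob(os.path.join(cache_folder, "data", "*", "*", "*"))
    for folder in user_dirs:
        locks = glob.glob(os.path.join(folder, "*.count"))
        locks += glob.glob(os.path.join(folder, "*.count.lock"))
        for path in locks:
            os.remove(path)

clean_stale_locks(os.path.expanduser("~/.conan"))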
Yes, reads can take as long as the consumer of a package is building. So when you are building one package from source, you must lock all of that package's transitive dependencies for reading. This should still allow other packages building in parallel to read the same transitive dependencies (multiple readers). And when one writer takes control of a package (while building it from source), it should completely block all of its consumers from reading. The problem is that a build-from-source operation can take many minutes; a build of 10-20 minutes is not unusual. Implementing a simple mutex would easily reduce concurrency of the cache to purely sequential access in practice. So we believe a good readers-writer implementation is necessary for reasonable concurrency. If it can be implemented robustly with the Python sqlite module, that would be fantastic.
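To make the semantics concrete, here is a minimal single-process sketch of the readers-writer rule described above, using only Python threading primitives. The real cache needs this behaviour across processes (which is where sqlite or OS file locks would come in), so this is illustration only:

import threading

class ReadersWriterLock:
    # Multiple readers may hold the lock at once; a writer gets exclusive
    # access and blocks both new readers and other writers.
    def __init__(self):
        self._readers = 0
        self._counter_lock = threading.Lock()
        self._no_readers = threading.Condition(self._counter_lock)
        self._writer = threading.Lock()

    def acquire_read(self):
        with self._writer:                 # wait while a writer is active
            with self._counter_lock:
                self._readers += 1

    def release_read(self):
        with self._counter_lock:
            self._readers -= 1
            if self._readers == 0:
                self._no_readers.notify_all()

    def acquire_write(self):
        self._writer.acquire()             # keep new readers and writers out
        with self._counter_lock:
            while self._readers:           # wait for in-flight readers to finish
                self._no_readers.wait()

    def release_write(self):
        self._writer.release()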