connection was closed in the middle of operation
See original GitHub issue

- asyncpg version: 0.15.0
- PostgreSQL version: 10.3
- Local PostgreSQL:
- Python version: 3.6.5
- Platform: Ubuntu 14.04.5
- Do you use pgbouncer?: no
- Did you install asyncpg with pip?: yes
I use asyncpg with Sanic. Before the server starts, a connection pool is created and attached to the app. In every route, the handler acquires a connection if PostgreSQL access is needed. The problem is that if the pool sits idle for too long, the newly acquired connection is not usable:
asyncpg.exceptions.ConnectionDoesNotExistError: connection was closed in the middle of operation
After this one-time exception, the next acquisition works normally again, and queries raise no exceptions. Which is really weird to me.
I’ve tried tweaking the max_inactive_connection_lifetime parameter, with no luck. Actually, I don’t quite understand this parameter. Why would I need it?
Any help is welcomed.
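A minimal workaround sketch for the behaviour described above: since the first acquisition after a long idle period fails but the next one succeeds, retrying once on a dead connection is often enough. The helper name `run_with_retry` is hypothetical; real code would catch `asyncpg.exceptions.ConnectionDoesNotExistError`, but the built-in `ConnectionError` stands in here so the sketch runs without asyncpg installed.

```python
import asyncio


async def run_with_retry(pool, query_fn, retries=1):
    """Run query_fn(conn) on a pooled connection, retrying if the
    connection turns out to be dead on first use (hypothetical helper)."""
    last_exc = None
    for _ in range(retries + 1):
        try:
            # asyncpg's Pool.acquire() supports `async with` in the same way.
            async with pool.acquire() as conn:
                return await query_fn(conn)
        except ConnectionError as exc:
            # Stand-in for asyncpg.exceptions.ConnectionDoesNotExistError:
            # the pool hands out a fresh connection on the next attempt.
            last_exc = exc
    raise last_exc
```

Separately, `max_inactive_connection_lifetime` (the number of seconds an idle pooled connection may live before asyncpg closes and replaces it) can be set lower than whatever network-level idle timeout is killing connections, so the pool discards stale connections before handlers ever see them.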
Issue Analytics

- State:
- Created: 5 years ago
- Reactions: 6
- Comments: 23 (3 by maintainers)
@Jeffwhen We had the same problem when deploying with docker-swarm.
In our case, the root cause was that ipvs, used by swarm to route packets, has a default expiration time of 900 seconds for idle connections. So if a connection had no activity for more than 15 minutes, ipvs broke it. 900 seconds is significantly less than the default Linux TCP keepalive setting (7200 seconds) used by most services that send TCP keepalive packets to keep connections from going idle.
The same problem is described here: https://github.com/moby/moby/issues/31208
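For reference, the kernel keepalive defaults discussed in this comment can be inspected through the standard procfs paths; a minimal sketch:

```shell
# Inspect the Linux TCP keepalive defaults (7200 s idle by default)
cat /proc/sys/net/ipv4/tcp_keepalive_time    # seconds of idle before the first probe
cat /proc/sys/net/ipv4/tcp_keepalive_intvl   # seconds between unanswered probes
cat /proc/sys/net/ipv4/tcp_keepalive_probes  # unanswered probes before the kernel gives up
```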
To fix this we had to set keepalive options in postgresql.conf. These settings force PostgreSQL to keep connections from going idle by sending keepalive packets more often than the ipvs default allows (we can’t change that default in docker-swarm, sadly). I guess the same could be achieved by changing the corresponding Linux settings (net.ipv4.tcp_keepalive_time and the like), because PostgreSQL uses them by default, but in our case changing those was a bit more cumbersome.

This bug happens only with uvloop installed.
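The concrete postgresql.conf values did not survive in this copy of the comment; a plausible sketch, assuming values chosen so that keepalive probes fire well before the 900-second ipvs timeout, would be:

```ini
# postgresql.conf: server-side TCP keepalives (values are illustrative)
tcp_keepalives_idle = 600        # idle seconds before the first keepalive probe
tcp_keepalives_interval = 30     # seconds between unanswered probes
tcp_keepalives_count = 3         # unanswered probes before the connection is dropped
```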