Flo*_*erz 14 postgresql google-cloud-sql google-cloud-platform
Google Cloud SQL PostgreSQL databases allow relatively few connections. Depending on the plan, this is somewhere between 25 and 500, while the MySQL limits in Google Cloud SQL are between 250 and 4000, reaching 4000 quite quickly as the tiers go up.
We currently have a number of trial instances for different customers running on Kubernetes, all backed by the same Google Cloud SQL Postgres server. Each instance uses a separate set of databases, roles, and connections (one per service). We have already hit our plan's connection limit (50), without coming anywhere near the memory or CPU limits. Connection pooling does not appear to be an option, because the connections are made with different users. I am now wondering why the limit is so low, and whether there is a way to increase it without having to upgrade to a more expensive plan.
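To see how the quota is actually being used, the per-role and per-database connection counts can be read from pg_stat_activity. Below is a minimal sketch in Python, assuming psycopg2 is installed and the DSN placeholder is replaced with real credentials (for example via the Cloud SQL proxy on localhost):

import psycopg2

# Placeholder DSN; adjust host, user and password for your setup.
dsn = "host=127.0.0.1 port=5432 dbname=postgres user=postgres password=secret"

with psycopg2.connect(dsn) as conn:
    with conn.cursor() as cur:
        # Count open backends per role and database.
        cur.execute("""
            SELECT usename, datname, count(*)
            FROM pg_stat_activity
            GROUP BY usename, datname
            ORDER BY count(*) DESC
        """)
        for user, db, n in cur.fetchall():
            print(f"{user}/{db}: {n} connections")

        # Compare against the server-enforced limit.
        cur.execute("SHOW max_connections")
        print("max_connections =", cur.fetchone()[0])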
Vic*_*rez 13
There is a feature request in the public issue tracker to expose max_connections in PostgreSQL and make it configurable. This comment (which I am copying here) explains the reasoning behind how the connection limits are currently set:
Per-tier max_connections is now fully rolled out. As shown on
https://cloud.google.com/sql/faq#sizeqps, the limits are now:
Memory size, in GiB | Maximum concurrent connections
--------------------+-------------------------------
0.6 (db-f1-micro) | 25
1.7 (db-g1-small) | 50
3.75 up to 6 | 100
6 up to 7.5 | 150
7.5 up to 15 | 200
15 up to 30 | 250
30 up to 60 | 300
60 up to 120 | 400
120 and above | 500
I understand your frustration about the micro/small instances having fewer than 100
concurrent connections and the lack of control of this flag. We arrived at these values by
taking the available RAM, reducing it by overhead, shared buffers, autovacuum memory and
then dividing the remaining RAM by typical per-connection memory and rounding off. This
gives us the number of connections that can be used without risk of hitting an
out-of-memory condition.
The basic premise of a fully managed service with an attached SLA is that we provide safe
hosting. This is what motivates us to use a max_connections value that is safe against OOM.
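To make the arithmetic in that comment concrete, here is a rough sketch of the calculation it describes. Every constant below is an illustrative assumption; Google's actual overhead, shared_buffers and per-connection figures are not published in the thread:

# Illustrative sketch of the max_connections sizing described above.
# Every constant here is an assumption, not Google's actual value.
def estimate_max_connections(ram_gib: float,
                             overhead_gib: float = 0.5,
                             shared_buffers_frac: float = 0.25,
                             autovacuum_gib: float = 0.25,
                             per_connection_mib: float = 20.0) -> int:
    """Take available RAM, subtract overhead, shared buffers and
    autovacuum memory, then divide what is left by a typical
    per-connection footprint."""
    remaining_mib = (ram_gib
                     - overhead_gib
                     - ram_gib * shared_buffers_frac
                     - autovacuum_gib) * 1024
    return max(0, int(remaining_mib // per_connection_mib))

# Example: with these assumed constants, a 3.75 GiB tier comes out
# around 105, in the same ballpark as the 100 in the table above.
print(estimate_max_connections(3.75))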
Since you have ruled out connection pooling, you are left with using an instance with more memory.
Update:
As mentioned in the comments of the thread above, the maximum connection settings have changed and are now: