Squidex almost exhausts all the connections in Mongo Atlas

Hello,

We are testing out Squidex with Mongo Atlas. We have noticed that a total of 28 connections is created on Mongo Atlas when starting Squidex. This appears to happen per replica, so with 3 replicas we end up with approximately 80 connections on Mongo. These connections are not being released.

We are using a Shared Cluster on Mongo Atlas, which has a maximum of 100 connections. Squidex is almost exhausting all the connections available on our Mongo Atlas cluster.

We are not doing any processing other than starting the process. This scenario is limiting our operations.

Hi,

Thank you for your feedback. Do you use a single Squidex instance and the same configuration for all MongoDB settings?

We are using the default Squidex configuration.

You can try to configure the connection pool size: https://docs.mongodb.com/v2.4/reference/connection-string/#connection-string-options
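
As a rough sketch of what I mean (maxPoolSize is a standard MongoDB connection string option; the STORE__MONGODB__CONFIGURATION variable is the one used in the Squidex docker-compose samples, so adjust it to whatever configuration mechanism you use; host, credentials and the chosen limit are placeholders):

```
# Hedged sketch: cap the driver connection pool per Squidex instance via the connection string.
# maxPoolSize is a standard MongoDB connection string option; host, credentials
# and the value 20 are placeholders for illustration only.
STORE__MONGODB__CONFIGURATION=mongodb+srv://user:password@cluster0.example.mongodb.net/?maxPoolSize=20
```

With 3 replicas that would bound the application pools at roughly 3 × 20 connections (drivers also keep a few monitoring connections per cluster node on top of that), which should leave headroom under the 100-connection limit of the shared cluster.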

Thanks. I limited the number of connections. I also increased the max pool size just in case.

One question (asking here rather than opening a new one)… CosmosDB claims to be fully compatible with the Mongo API, simply by changing the connection string. Would you recommend such a database for Squidex?

They say so, but it is not true. It is missing one essential feature that I need for the event store. Therefore it is not possible yet. One year ago I tried to support CosmosDb with the help of other developers, but it did not work out.

Ouch, there goes my plan for hosting our production environment in Azure… And I guess the planned support for SQL isn’t coming anytime soon, right?

Yes, personally I have used many big cloud providers before: S3, Azure, GCE and IBM (IBM sucks), and I am very happy that with Kubernetes I am no longer coupled to them and can host my stuff wherever I want.

SQL isn’t coming soon, because I do not have the resources for it and there are more important features in the pipeline.