Squidex intermittent 404 error

We have load balanced Squidex, deployed in Docker containers on two different servers. We have a central MongoDB cluster, so both instances connect to the same MongoDB with the same connection string. Both servers sit behind an ELB, which does the load balancing.
We are getting intermittent 404 errors for everything (schema creation, data creation, etc.).

We have kept most of the defaults in appSettings and only updated the MongoDB connection strings. Are we missing a setting?

Everything worked perfectly fine when we had just one server and one Docker container in Dev.

Have you enabled clustering? https://github.com/Squidex/squidex/blob/master/backend/src/Squidex/appsettings.json#L361

You also need a health check so that an instance is not served traffic while it is still starting.
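A minimal sketch of such a health check as a Docker Compose fragment, assuming the default squidex/squidex image listening on port 80; the /healthz path is an assumption based on ASP.NET Core conventions, so verify it against your Squidex version:

```yaml
# Hedged sketch: a container-level health check so the orchestrator
# (and, via it, the load balancer) only routes traffic once Squidex is ready.
# The /healthz path and port 80 are assumptions; check your deployment.
services:
  squidex:
    image: squidex/squidex
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:80/healthz"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 60s
```

If you rely on the ELB instead, point its target group health check at the same endpoint so a still-booting instance is kept out of rotation.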

Thanks for the reply. Could the setting below also cause this problem? I saw similar posts where you suggested setting caching.replicated.enable to false. Should we do that as well, along with "clustering": "MongoDB"?

"caching": {
    // Set to true, to use strong etags.
    "strongETag": false,

    // Restrict the surrogate keys to the number of characters.
    "maxSurrogateKeysSize": 0,

    "replicated": {
        // Set to true to enable a replicated cache for apps, schemas and rules. Increases performance but reduces consistency.
        "enable": true
    }
}

Just try it out, I would say.

I have used the settings below now, with only clustering updated to MongoDB, and I am getting the errors below. Of the two containers, the first one always works without any issue, but the second one fails as follows.
"orleans": {
    "clustering": "MongoDB",
    "siloPort": "11111",
    "gatewayPort": "40000",
    "ipAddress": ""
}

"category": "Orleans.Runtime.TypeManager",
"exception": {
    "type": "Orleans.Runtime.OrleansMessageRejectionException",
    "message": "Exception while sending message: Orleans.Runtime.Messaging.ConnectionFailedException: Unable to connect to S172.24.0.2:11111:372540031, will retry after 0.7663ms\n at Orleans.Runtime.Messaging.ConnectionManager.ConnectionEntry.ThrowIfRecentConnectionFailure()\n at Orleans.Runtime.Messaging.ConnectionManager.GetConnectionAsync(SiloAddress endpoint)\n at Orleans.Runtime.Messaging.OutboundMessageQueue.\u003CSendMessage\u003Eg__SendAsync|10_0(ValueTask\u00601 c, Message m)",
    "stackTrace": " at Orleans.Internal.OrleansTaskExtentions.\u003CToTypedTask\u003Eg__ConvertAsync|4_0[T](Task\u00601 asyncTask)\n at Orleans.Runtime.Scheduler.AsyncClosureWorkItem\u00601.Execute()\n at Orleans.Runtime.TypeManager.GetTargetSiloGrainInterfaceMap(SiloAddress siloAddress)"
}

Just to add, we are connecting to central MongoDB cluster from both instances.
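One detail worth noting: the rejected address in the log, 172.24.0.2, looks like a Docker bridge address. If each silo registers its container-internal IP in the MongoDB membership table, the silo on the other host can never reach it. Assuming the ipAddress setting in the orleans section is the address a silo announces to the cluster (verify this against the Squidex documentation), a hedged sketch would be to set it per node, e.g.:

```json
// Hedged sketch: announce a host-reachable address instead of the
// Docker-internal one. 203.0.113.10 is a placeholder for this host's
// IP as seen from the other server; set it differently on each node.
"orleans": {
    "clustering": "MongoDB",
    "siloPort": "11111",
    "gatewayPort": "40000",
    "ipAddress": "203.0.113.10"
}
```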

Perhaps your servers cannot communicate with each other?

As far as I understand, both servers should be able to talk on port 11111, right? There is another port, 40000; does that one need to be open as well? Most likely the port is not currently allowed between the servers, and we would need to open it. Do we need both 11111 and 40000, or just 11111?

Only port 11111 is needed for node to node communication.
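To make that concrete: when the silos run in containers on separate hosts, port 11111 has to be both published on each host and permitted by the security group or firewall between the two servers. A hedged Compose sketch of the relevant fragment (image name and mapping per your setup):

```yaml
# Hedged sketch: publish the Orleans silo port so nodes on different
# hosts can reach each other. 11111 matches the siloPort setting above.
services:
  squidex:
    image: squidex/squidex
    ports:
      - "11111:11111"   # Orleans node-to-node communication
```

On AWS, also add an inbound rule for TCP 11111 between the two instances' security groups, since publishing the port on the host is not enough if the security group blocks it.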