Server restarts on saving content

Hi, I have an issue with the stability of a Squidex instance running in Docker on Azure (I followed the install documentation). After logging in to the management UI, going to a dedicated app (in our case a blog) and creating a Post, the server restarts.
At first glance the issue seemed to be related to assets, but in a few further tests the server also restarted without the asset image being pasted into the content or any other field.

Initially I thought this could be an issue with the beta Squidex version (4.0.0-beta1), but after migrating to version 4.0.3 the behavior is still the same.

Nothing was logged in the current logs (maybe we need to enable the showPII flag?); it seems no log entries are written after the server restarts.

I then searched older logs from a few days ago and found something related to MongoDB: MongoDB.Driver.MongoConnectionException.

Here is also a screenshot:

  • [x] Checked the logs and have uploaded a log file and provided a link because I found something suspicious there. Please do not post the log file in the topic because very often something important is missing.

I’m submitting a…

  • [ ] Regression (a behavior that stopped working in a new release)
  • [x] Bug report
  • [ ] Performance issue
  • [ ] Documentation issue or request

Current behavior

The server restarts when saving new Post content, with or without an added asset.
Expected behavior

The server keeps running (does not restart) when saving content, with or without assets.

Minimal reproduction of the problem

  1. Create a new Post content item
  2. (optional) Add an asset
  3. Save

Step 2 is optional; it leads to the same error with and without an added asset.
Sometimes step 3 cannot even be reached because the server restarts first.


  • [x] Self hosted with docker (Azure) (setup from documentation, B1 instance)
  • [ ] Self hosted with IIS
  • [ ] Self hosted with other version
  • [ ] Cloud version

Version: 4.0.3 / Previously on 4.0.0


  • [x] Chrome (desktop)
  • [ ] Chrome (Android)
  • [ ] Chrome (iOS)
  • [ ] Firefox
  • [ ] Safari (desktop)
  • [ ] Safari (iOS)
  • [ ] IE
  • [ ] Edge


We have also set up the Squidex Identity Server and are using GitHub and Google OAuth. We run a single instance (and have already checked that Orleans is set to Development).

Thanks for any help with this.

There must be something in the logs.

Here’s the log file (for 4.0.0; in the new version 4.0.3 this issue wasn’t logged):

I only see that you lose the connection to MongoDB very often.

Thanks for the reply. Tomorrow morning I’ll set the showPII flag and post a new log. I also searched the MongoDB logs, but didn’t find anything disturbing.

I am very sure that you will not find anything useful with showPII. It is only used by the identity server and authentication to hide sensitive personal data.

OK, so I’ll just post a new log. Do you have any ideas what it could be, or what I can check?

For now this log contains just the Post request, but no errors.

No, there are only a few reasons that can cause a restart:

  1. Out of memory
  2. Stack overflow issues.
  3. A monitoring system or observer decided to restart the service (e.g. kubernetes, docker)

I have never seen 1 or 2 happen, so I would dig into 3.
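For anyone investigating reason 3: assuming you have shell access to the Docker host, the container's last exit state tells you whether Docker's restart policy kicked in and whether the kernel killed the process. The container name `squidex` below is a placeholder; adjust it to your setup.

```shell
# Show why the container last stopped: exit code, OOM flag, restart count.
docker inspect squidex --format \
  'ExitCode={{.State.ExitCode}} OOMKilled={{.State.OOMKilled}} Restarts={{.RestartCount}}'

# Exit code 137 = 128 + 9, i.e. the process received SIGKILL, which is
# what the kernel OOM killer sends when memory runs out (reason 1 above).

# Tail the container's recent output to catch messages just before a restart.
docker logs --tail 100 --timestamps squidex
```

`OOMKilled=true` would point directly at reason 1 rather than an external observer; a non-zero restart count with exit code 0 would suggest something is stopping the container deliberately.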

Yes, but you can explore the app and nothing breaks; the issue only appears when you try to save content. Also, while the server is up, we can fetch all Post data (via GraphQL) from another app and display it without any issues.

I’ll also check the Docker logs tomorrow. Thanks for the help so far.

Hi @Sebastian
Here’s the new log file: !Ajl6ZcJ9oiNd-l7-yBN79uuriOdM

I also saw that we have assets with the same file name (our editors upload them that way); could that also be an issue? I checked, and identical names are not the problem. Today, just selecting an asset from the popup window was enough to restart the application.

No idea what it could be. I only have this log file.

It always happens when we add an asset to content and save it.

It also restarts when trying to open the assets section, so it must be something connected with querying them. Is there any additional way in Squidex to debug asset queries?

Are you also using docker in Azure? I cannot reproduce it locally and also not in the squidex cloud.

Yes. So I guess this might be an issue specific to this combination of tools.

Might be. There are a lot of instances running in Docker, and it does not seem to be a general error; otherwise I would have seen it before.

We solved this issue. It was related to server resources; we had to scale the instance up. Thanks for the help.
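For anyone hitting the same symptom: before scaling up, you can confirm the memory squeeze on the Docker host. Again, the container name `squidex` is a placeholder, and the B1 size figure is an assumption based on Azure's published specs.

```shell
# One-off snapshot of memory use per container. On a small plan such as
# Azure's B1 (roughly 1.75 GB RAM), the OS, MongoDB and Squidex all share
# that budget, so watch MemPerc climb while saving content.
docker stats --no-stream --format 'table {{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}'
```

If memory use spikes close to the limit on save, that matches the OOM-driven restarts described above.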

Keeping an eye on this topic: Hardware requirements?