Total on contents view doesn't match the number of paged results

I have…

  • [x] Checked the logs and have uploaded a log file and provided a link because I found something suspicious there. Please do not post the log file in the topic because very often something important is missing.

I’m submitting a…

  • [ ] Regression (a behavior that stopped working in a new release)
  • [x] Bug report
  • [ ] Performance issue
  • [ ] Documentation issue or request

Current behavior

‘Total’ on the Contents view (and when querying via the API or Squidex.ClientLibrary) shows vastly more results than actually exist.
Clicking ‘Next page’ shows an empty page with only the back button and the page size visible.
Deleting permanently via the API does not fail, but it also doesn’t delete anything. Deleting via the UI works.

Expected behavior

‘Total’ matches the number of documents.

Minimal reproduction of the problem

No idea, sorry!


  • [x] Self hosted with docker
  • [ ] Self hosted with IIS
  • [ ] Self hosted with other version
  • [ ] Cloud version

Version: 7.2.0


  • [x] Chrome (desktop)
  • [ ] Chrome (Android)
  • [ ] Chrome (iOS)
  • [ ] Firefox
  • [ ] Safari (desktop)
  • [ ] Safari (iOS)
  • [ ] IE
  • [ ] Edge

This has happened a few times for us in shared development environments, so it may not be a bug in Squidex (it was happening on v7.1.0 too, so it’s not a regression). Perhaps it is caused by us performing concurrent actions like deleting and inserting at the same time, or deploying halfway through a deletion. I just wondered if you had any ideas on how we could get into this state? Also, do you have any thoughts on how we could fix it without deleting and recreating the schema?

Actually, I do not believe that, because I have API tests for this use case, which obviously use the API.

Squidex has a cache for the total count, because counting is actually a very slow operation. Is the wrong value only temporary, or is it constant? Have you tried deleting this cache?

It is a mongodb collection ending with _Count.


Thank you very much! Simply deleting the record for the problematic schema in “[app name]Content.States_Contents_All3_Count” immediately fixed the issue.

Yes, sorry, the deletion issue was just a side effect in our code of the total being incorrect, and it wasn’t us directly hitting the API; we were just using the Squidex.ClientLibrary. Our code does the following:

  1. Make an initial lightweight request to get the ‘Total’ of all documents in a schema (this was returning the wrong cached value).
  2. Make multiple requests to fetch 500 documents at a time (just their IDs) until we hit that total (this was failing, as we would never hit that total).
  3. Use the Bulk API to delete all the documents by their IDs.
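The failure mode in step 2 can be sketched like this (a Python stand-in for our C# job, with an in-memory list playing the schema; all names are made up for illustration, only the 500-document page size matches what we do):

```python
PAGE_SIZE = 500

def fetch_ids(store, skip, take):
    """Stand-in for a paged query returning only document IDs."""
    return store[skip:skip + take]

def collect_ids_until_total(store, reported_total):
    """Step 2 as we wrote it: page until we have `reported_total` IDs.

    If `reported_total` comes from a stale cache and exceeds the real
    document count, the loop just keeps seeing empty pages, so we bail
    out when a page comes back empty to demonstrate the failure.
    """
    ids, skip = [], 0
    while len(ids) < reported_total:
        page = fetch_ids(store, skip, PAGE_SIZE)
        if not page:
            # Real documents are exhausted but the cached total says
            # there should be more -- this is where our job got stuck.
            return ids, False
        ids.extend(page)
        skip += PAGE_SIZE
    return ids, True

store = [f"doc-{i}" for i in range(1200)]   # 1200 actual documents
ids, reached = collect_ids_until_total(store, reported_total=5000)
print(len(ids), reached)   # 1200 False: we never "hit" the cached total
```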

It seems to be constant; at least, it had been the wrong value for hours. Is there a built-in cache expiration or action that is supposed to make it expire? Is there a way we can force cache expiration via the API or Squidex.ClientLibrary?

Edit: All I can find are options to not return a Total at all, nothing to make it recalculate the total.

No, but it is only 10 sec:

I would just query until nothing is returned anymore.

Yeah, looking at the code again, I have definitely overcomplicated it, and there is even already a bit of code (which I must have added in the past) basically doing exactly what you are saying, making that initial call to get the total completely redundant!

if (results.Count == 0)
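Expanded, the query-until-empty approach is just a loop like this (a Python sketch of the same logic, not our real C# code; `fetch_page` and `bulk_delete` are invented placeholders for the client calls):

```python
def delete_all(fetch_page, bulk_delete, page_size=500):
    """Query-until-empty: no up-front total needed.

    Because each pass deletes what it just fetched, we always fetch the
    first page again rather than advancing a skip offset.
    """
    while True:
        ids = fetch_page(take=page_size)
        if len(ids) == 0:      # the same `results.Count == 0` check
            break
        bulk_delete(ids)

# Exercise it against a fake list-backed store:
store = list(range(1234))

def fetch_page(take):
    return store[:take]

def bulk_delete(ids):
    del store[:len(ids)]

delete_all(fetch_page, bulk_delete)
print(len(store))   # 0
```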

Ah right, so possibly for some reason actualCount != cachedCount or isOutdated is evaluating to false. I will let you know if this recurs for us and try to give more insight into how we got into this state. Interesting to know this will only happen when there are more than 5000 documents!

I think I have found a bug:

See this line:

I am not sure how familiar you are with .NET, but the ct variable is a CancellationToken. It is used to cancel an operation, e.g. a database call. Such a cancellation token can be populated from multiple sources, and in this case it is…

  1. The cancellation token from the request.
  2. The hard timeout for queries.

But this line is supposed to run in the background, so it does not make sense to cancel it at all.
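The effect can be mimicked outside .NET. Here is a Python sketch (the function name, the loop, and the 1200 figure are all invented for illustration) of a background refresh that silently dies when it inherits the request's cancellation signal, versus one that is detached from it:

```python
import asyncio

async def refresh_count(cancelled: asyncio.Event, honour_cancellation: bool):
    """Stand-in for the background count refresh."""
    for _ in range(5):
        if honour_cancellation and cancelled.is_set():
            return None            # refresh abandoned; the cached total stays stale
        await asyncio.sleep(0)     # pretend to do a slice of the slow count
    return 1200                    # the freshly computed total

async def main():
    cancelled = asyncio.Event()
    cancelled.set()                # the request that triggered the refresh has ended
    # Wiring the request's cancellation into the background job: it never finishes.
    stale = await refresh_count(cancelled, honour_cancellation=True)
    # Detached from the request, the same job completes fine.
    fresh = await refresh_count(cancelled, honour_cancellation=False)
    return stale, fresh

stale, fresh = asyncio.run(main())
print(stale, fresh)                # None 1200
```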


That makes far more sense than what I was saying! So it would just continue to run in the background until it hits an actual error? Would it not still need to be able to time out, just with a much longer timeout?

Also while you’re in that area there’s a minor typo…

I have pushed a fix… docker tag dev-7390


This topic was automatically closed after 2 days. New replies are no longer allowed.