[LOW_PRIO] Unable to access content

I have…

  • [ ] Checked the logs and have uploaded a log file and provided a link because I found something suspicious there. Please do not post the log file in the topic because very often something important is missing.

I’m submitting a…

  • [ ] Regression (a behavior that stopped working in a new release)
  • [ ] Bug report
  • [x] Performance issue
  • [ ] Documentation issue or request

Current behavior

The app is very slow. We have content with >1000 records; we can’t scroll down to see them all, and we can’t edit them. After waiting a while, the browser reports that the site has crashed.

Expected behavior

The app should load fast and let us see and edit all the records.

Minimal reproduction of the problem


  • [ ] Self hosted with docker
  • [ ] Self hosted with IIS
  • [ ] Self hosted with other version
  • [x] Cloud version

Version: [VERSION]


  • [x] Chrome (desktop)
  • [ ] Chrome (Android)
  • [ ] Chrome (iOS)
  • [ ] Firefox
  • [ ] Safari (desktop)
  • [ ] Safari (iOS)
  • [ ] IE
  • [ ] Edge


What are your app name and schema name? I do not experience any issues.

I need the app name as it is used in the URL. I cannot see it from the screenshot.

Hey it’s -
(in the same team)


I have never seen a schema like this. I am not sure if there is an easy fix for this, or if there is a fix at all.

Such a big content item with thousands of inputs brings the browser to its limits. So the only option would be to use virtualization, but it is very complicated on the root field.

So there are 3 options:

  1. Build a custom editor for your content
  2. Use a simple schema with just (key, value) pairs and use inline editing => Benefit: you can search.
  3. Use a singleton content with an array field (because it already supports virtualization).
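Option 2 could be modeled roughly like this. This is an illustrative sketch only, not the actual Squidex schema format; the type and function names are made up.

```typescript
// Hypothetical sketch of the (key, value) approach: instead of thousands
// of named schema fields, the content is one flat list of entries.
// Names here are illustrative, not the real Squidex schema API.
interface KeyValueEntry {
  key: string;
  value: string;
}

// With a flat list, the UI can render a simple inline table and, as a
// bonus, support searching across keys and values:
function search(entries: KeyValueEntry[], term: string): KeyValueEntry[] {
  const t = term.toLowerCase();
  return entries.filter(
    e => e.key.toLowerCase().includes(t) || e.value.toLowerCase().includes(t)
  );
}

const entries: KeyValueEntry[] = [
  { key: 'title.en', value: 'Home' },
  { key: 'title.de', value: 'Startseite' },
  { key: 'footer.en', value: 'Contact us' },
];

console.log(search(entries, 'title').length); // 2
```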

Hello, Sebastian!
I understand your concern about the complexity, but our schema worked just fine before the new Squidex release. Maybe we should talk about some new features that are causing this behavior?

You mean the feature with the design? Because it was only CSS work.

How do you create the schema fields? Is it an automated process, and if yes: perhaps you have just added too many fields over time.

I cannot find the topic right now, but a while ago I worked on virtualization for the array editor. The problem was a content item with a lot of array items, and I did some profiling with Chrome. When everything was rendered properly I made a simple test: I just clicked an input field and checked the performance. It took around 500ms just to click an input field. It was very interesting because there was no JavaScript at all in the debugger output.

We are creating new fields by using the UI that Squidex provides. A schema field is just a JSON field, right?

If so, I don’t see how we can reach a maximum, because JSON files are pretty flexible in terms of field capacity.

The problem is not the JSON or the API. The main problem is the UI at the moment, because it cannot render so many input fields or editors.


Thanks, we will look into some solution

The other problem is the following:

Let’s say I would write the optimal code and there was no overhead from the JavaScript or from the way I write CSS. Then there is still the problem that Chrome cannot render so many inputs.

So sooner or later the only solution to the problem is virtualization, which basically means that while you scroll around the field editors are created and destroyed on the fly to keep the size of the DOM small.
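The core of the virtualization idea can be sketched as a windowing calculation: from the scroll offset and viewport height, compute which row editors should exist in the DOM at all. All numbers and names below are illustrative.

```typescript
// Minimal sketch of list virtualization: only the rows inside (or near)
// the viewport get real editor components; everything else is destroyed.
interface VisibleRange {
  start: number; // first row index to render
  end: number;   // one past the last row index to render
}

function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  totalRows: number,
  overscan = 3 // extra rows above/below to smooth fast scrolling
): VisibleRange {
  const first = Math.floor(scrollTop / rowHeight);
  const count = Math.ceil(viewportHeight / rowHeight);
  return {
    start: Math.max(0, first - overscan),
    end: Math.min(totalRows, first + count + overscan),
  };
}

// 10,000 rows of 40px each, but only 21 editors exist at any moment:
const range = visibleRange(4000, 600, 40, 10000);
console.log(range.start, range.end); // 97 118
```

On scroll, the range is recomputed and editors outside it are torn down, which is exactly what makes the two problems below tricky.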

There are two problems:

  1. Some editors, like reference editors, store their state in the component itself, so they fetch the references on the fly when they are initialized. Using virtualization would be a problem because they would have to re-fetch the state all the time. This can probably be solved with caching, I guess.

  2. Some other editors, like the rich text and markdown editors, are very heavy, and recreating them on the fly while you scroll around would also cause issues and make scrolling not a great experience.
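The caching idea from point 1 could look roughly like this. This is a sketch under assumptions: `fetchReference` is a hypothetical stand-in for the real content API call, not an actual Squidex function.

```typescript
// Rough sketch of caching for reference editors: a shared cache so that
// an editor recreated by virtualization does not re-fetch references it
// already loaded. Everything here is illustrative.
type Reference = { id: string; title: string };

const cache = new Map<string, Promise<Reference>>();

// Hypothetical stand-in for an HTTP request to the content API.
async function fetchReference(id: string): Promise<Reference> {
  return { id, title: `Item ${id}` };
}

function getReference(id: string): Promise<Reference> {
  // Cache the promise itself, so two editors asking for the same id
  // concurrently share a single in-flight request.
  let cached = cache.get(id);
  if (!cached) {
    cached = fetchReference(id);
    cache.set(id, cached);
  }
  return cached;
}
```

Caching the promise (rather than the resolved value) is the usual trick to avoid duplicate requests while the first fetch is still in flight.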

So there are a lot of challenges around this and I am not sure if there is a good solution that works for all cases.

Is there a recommended limit on how many input fields there should be in a schema?

No, I have no benchmark or anything like that. But I think for a normal model it is hard to build something with more than 30 fields.