Upgrading Squidex from 5.5.0 to 7.7.0

Hi Sebastian,

I’m planning to upgrade my Squidex instance from 5.5.0 to 7.7.0. I’m currently running Squidex on Azure AKS using the Kubernetes MongoDB Community Operator with a 3-member replica set, on MongoDB 4.2.6.
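For context, a 3-member replica set under the Community Operator looks roughly like this as a MongoDBCommunity resource (a sketch only; the resource name is an assumption):

```yaml
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: squidex-mongodb   # assumed name
spec:
  type: ReplicaSet
  members: 3              # the 3-member replica set mentioned above
  version: "4.2.6"
  security:
    authentication:
      modes: ["SCRAM"]
```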

I’ve read the changelog (https://github.com/Squidex/squidex/blob/master/CHANGELOG.md) and found a few entries that I think might affect the upgrade:

  • Orleans has been removed
  • RabbitMQ: Removed the RabbitMQ event consumer.

Question 1: Do you think these two changes will affect anything?

Then I saw the following:

  • Identity: Add anti forgery tokens to all login and profile pages to prevent CSRF attacks.

Question 2:

Using antiforgery tokens in ASP.NET Core requires sticky sessions, so I’m assuming I will need to enable them in my ingress-nginx configuration? (If so, then this is sort of a breaking change; maybe update CHANGELOG.md to reflect it?)
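If it is needed, enabling cookie-based session affinity in ingress-nginx would look roughly like this (a sketch; the cookie name and host are my assumptions, not anything Squidex prescribes):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: squidex
  annotations:
    # Pin each client to one backend pod via a cookie set by ingress-nginx
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "squidex-affinity"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "3600"
spec:
  rules:
    - host: squidex.example.com   # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: squidex
                port:
                  number: 80
```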

Here’s some more information about sticky sessions if you want to read up on it:



Hi,

let me answer your questions:

Question 1

I think nobody has used RabbitMQ. It was built long before rules, as a kind of predecessor of them; if we ever need it again, we should implement it with rules.

Orleans is relevant. You need one dedicated worker instance, so I would do the following (a sketch follows after the list):

  1. Duplicate your deployment.
  2. Change the labels so that the new deployment does not serve web requests, i.e. remove it from the service selector.
  3. Set its replica count to 1.
  4. Set CLUSTERING__WORKER=true for this deployment.
  5. Set CLUSTERING__WORKER=false for the main deployment.
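A minimal sketch of the worker deployment under those assumptions (the names and label keys are mine, not from the official manifests):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: squidex-worker        # assumed name for the duplicated deployment
spec:
  replicas: 1                 # step 3: exactly one worker instance
  selector:
    matchLabels:
      app: squidex-worker     # step 2: must NOT match the web service's selector
  template:
    metadata:
      labels:
        app: squidex-worker
    spec:
      containers:
        - name: squidex
          image: squidex/squidex:7.7.0
          env:
            - name: CLUSTERING__WORKER
              value: "true"   # step 4; the main deployment sets this to "false" (step 5)
```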

Question 2

Where did you get the sticky-sessions requirement from? I have only seen a cookie in my tests.

EDIT: I think the problem with sticky sessions for some users is the encryption key. By default it is stored on the file system, so two instances could have different encryption keys. But we store it in the database, so I don’t see a problem here.

e.g. https://stackoverflow.com/questions/23402210/the-anti-forgery-token-could-not-be-decrypted/53870092#53870092

Thanks Sebastian, will check the things you mentioned.

Will get back to you regarding sticky sessions.

About Orleans: it was removed in v7, but it is still mentioned in this article:

https://docs.squidex.io/01-getting-started/installation/platforms/install-on-kubernetes

I’m a bit confused as to why Orleans is still mentioned here:

> ### Common Issues
>
> **Warning for ServerGC**
>
> ```
> info: Orleans.Runtime.Silo[100404]
>       Silo starting with GC settings: ServerGC=False GCLatencyMode=Interactive
> warn: Orleans.Runtime.Silo[100405]
>       Note: Silo not running with ServerGC turned on - recommend checking app config
> warn: Orleans.Runtime.Silo[100405]
>       Note: ServerGC only kicks in on multi-core systems (settings enabling ServerGC have no effect on single-core machines).
> ```
>
> This is not a critical warning. ServerGC is a special Garbage Collector; it has no positive or negative impact when running with a single core. You can just ignore it.
>
> Solution: Request more than 1 CPU
>
> ```yaml
> resources:
>   requests:
>     cpu: 2
> ```

It just has not been updated yet.

EDIT: Has been updated.

Awesome, thanks! :slight_smile:
Please keep this thread open, I will add more info later.


Hi Sebastian,

I managed to get Squidex up and running in a new AKS cluster testing environment.

I followed the instructions from this doc:

https://docs.squidex.io/01-getting-started/installation/platforms/install-on-kubernetes

I’m looking at the Helm chart templates, values.yml, etc. (https://github.com/Squidex/squidex/tree/master/helm/squidex7), trying to understand how all of the Kubernetes resources get created.

Question 1: One thing that I don’t really understand is how the StatefulSet gets created. I couldn’t find any template for it in the Helm chart GitHub repo.

```
❯ k describe statefulsets.apps squidex-mongodb-replicaset
Name:               squidex-mongodb-replicaset
Namespace:          mse
CreationTimestamp:  Thu, 27 Jul 2023 13:39:45 +0200
Selector:           app=mongodb-replicaset,release=squidex
Labels:             app=mongodb-replicaset
                    app.kubernetes.io/managed-by=Helm
                    chart=mongodb-replicaset-3.9.6
                    heritage=Helm
                    release=squidex
Annotations:        meta.helm.sh/release-name: squidex
                    meta.helm.sh/release-namespace: mse
Replicas:           3 desired | 3 total
Update Strategy:    RollingUpdate
  Partition:        0
Pods Status:        3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:       app=mongodb-replicaset
                release=squidex
  Annotations:  checksum/config: 955d248e96a48d052e38ecd28cfbcc15f0ae8e7fc13fc97509ad8235d4c53fda
  Init Containers:
   copy-config:
    Image:      busybox:1.29.3
    Port:       <none>
    Host Port:  <none>
    Command:
      sh
    Args:
      -c
      set -e
      set -x

      cp /configdb-readonly/mongod.conf /data/configdb/mongod.conf

    Environment:  <none>
    Mounts:
      /configdb-readonly from config (rw)
      /data/configdb from configdir (rw)
      /work-dir from workdir (rw)
   install:
    Image:      unguiculus/mongodb-install:0.7
    Port:       <none>
    Host Port:  <none>
    Args:
      --work-dir=/work-dir
    Environment:  <none>
    Mounts:
      /work-dir from workdir (rw)
   bootstrap:
    Image:      mongo:3.6
    Port:       <none>
    Host Port:  <none>
    Command:
      /work-dir/peer-finder
    Args:
      -on-start=/init/on-start.sh
      -service=squidex-mongodb-replicaset
    Environment:
      POD_NAMESPACE:   (v1:metadata.namespace)
      REPLICA_SET:    rs0
      TIMEOUT:        900
    Mounts:
      /data/configdb from configdir (rw)
      /data/db from datadir (rw)
      /init from init (rw)
      /work-dir from workdir (rw)
  Containers:
   mongodb-replicaset:
    Image:      mongo:3.6
    Port:       27017/TCP
    Host Port:  0/TCP
    Command:
      mongod
    Args:
      --config=/data/configdb/mongod.conf
      --dbpath=/data/db
      --replSet=rs0
      --port=27017
      --bind_ip=0.0.0.0
    Liveness:     exec [mongo --eval db.adminCommand('ping')] delay=30s timeout=5s period=10s #success=1 #failure=3
    Readiness:    exec [mongo --eval db.adminCommand('ping')] delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /data/configdb from configdir (rw)
      /data/db from datadir (rw)
      /work-dir from workdir (rw)
  Volumes:
   config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      squidex-mongodb-replicaset-mongodb
    Optional:  false
   init:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      squidex-mongodb-replicaset-init
    Optional:  false
   workdir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
   configdir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
Volume Claims:
  Name:          datadir
  StorageClass:
  Labels:        <none>
  Annotations:   <none>
  Capacity:      10Gi
  Access Modes:  [ReadWriteOnce]
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  53m   statefulset-controller  create Pod squidex-mongodb-replicaset-0 in StatefulSet squidex-mongodb-replicaset successful
  Normal  SuccessfulCreate  51m   statefulset-controller  create Pod squidex-mongodb-replicaset-1 in StatefulSet squidex-mongodb-replicaset successful
  Normal  SuccessfulCreate  51m   statefulset-controller  create Pod squidex-mongodb-replicaset-2 in StatefulSet squidex-mongodb-replicaset successful
```

Question 2: What are these and how are they created?

```
Image:      busybox:1.29.3
Image:      unguiculus/mongodb-install:0.7
Image:      mongo:3.6
```

Question 3: I came across the following article. Do you use any of the four methods mentioned in the article? If so, which one? I’m currently using https://github.com/mongodb/mongodb-kubernetes-operator in production and it works fine.

We just reference an existing MongoDB Helm chart as a dependency. I am not 100% sure, but I think it is the first one, from Bitnami. I migrated everything from a deprecated chart a while ago.
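That also answers where the StatefulSet comes from: a chart dependency pulls in all of the referenced chart’s templates, so they are rendered as part of the release without appearing in the squidex repo itself. Declaring such a dependency looks roughly like this (a sketch; only the chart name and version are taken from your describe output above, the rest is assumed):

```yaml
# Chart.yaml (illustrative sketch, not the actual squidex chart contents)
apiVersion: v2
name: squidex
version: 1.0.0
dependencies:
  - name: mongodb-replicaset      # matches chart=mongodb-replicaset-3.9.6 in the labels above
    version: 3.9.6
    repository: https://charts.helm.sh/stable   # assumed; the archived stable charts repo
    condition: mongodb.enabled                   # assumed toggle
```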