Developing/Migrating from Pre-Orleans to Post-Orleans

When migrating from an older version to 7+, where Orleans was dropped and local in-memory caching was implemented instead, I have a few questions:

  1. With the change removing Orleans, are there risks of the in-memory caches becoming out of sync with each other in a multi-node environment?
  2. When one local cache gets an update, how do the other caches know about it?
  3. Is there a possibility of consecutive requests getting different read states with round-robin load balancing of incoming requests across Squidex instances?
  4. If cache state ends up replicated on every node that serves the same kinds of requests, does that imply the vertical memory needs per instance increase compared to Orleans, unless traffic is routed to specific nodes based on some condition (i.e. if we are using simple round-robining of requests to arbitrary Squidex instances)?
  • By vertical memory needs I mean that each instance might be expected to need enough memory for all apps/schemas/etc.

It might be mostly okay, in that writes going through an instance with stale data would presumably get rejected with the error “Another user has modified this.” But that might also mean that reads could return stale data several times in a row if the read requests keep hitting the stale instance. I’m mostly trying to figure out whether I’m missing something and what trade-offs come with upgrading from the Orleans-based design to this one. Orleans seemed to handle the problem elegantly by partitioning singletons, meaning there was only one instance of any given state in the cluster. With copies of the same state on various nodes, it seems like consistency could rear its head.
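
For intuition, here is a minimal, generic sketch of that kind of version check (optimistic concurrency). It is not Squidex’s actual code; the store interface, function and names are made up for illustration:

```python
class ConflictError(Exception):
    """Raised when the caller's expected version no longer matches the store."""


def update_record(store, record_id, expected_version, changes):
    # Read the authoritative record (e.g. from MongoDB), not the local cache.
    current = store.load(record_id)

    # An instance that served a stale cached read hands back an old version
    # number, so its write is rejected instead of overwriting newer data.
    if current.version != expected_version:
        raise ConflictError("Another user has modified this.")

    store.save(record_id, changes, new_version=current.version + 1)
```

Under that model, stale caches cost rejected writes and possibly repeated stale reads, but not silently lost updates.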

We are on Kubernetes in AKS. It looks like the AKS load balancer defaults might route traffic based on IP hashing. That might imply that, as long as we expose the client source IP, requests from the same source will route to the same application instance. This might improve some of the aspects above, in that responses would at least be consistent per client; but it still allows that some people might get a stale version for the duration of the cache if they hit a different instance after an update occurs. This might be acceptable. Does this seem like a correct understanding? I suppose for such behavior with any ClusterIP-type load balancer I need to set session affinity: https://kubernetes.io/docs/reference/networking/virtual-ips/#session-affinity
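
For reference, the session-affinity setting from the linked docs looks roughly like this on a plain ClusterIP Service in front of the Squidex pods (the names, ports and timeout below are placeholders, not taken from our manifests):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: squidex              # placeholder name
spec:
  type: ClusterIP
  selector:
    app: squidex             # placeholder pod label
  ports:
    - port: 80
      targetPort: 80
  # Route all requests from one client IP to the same backend pod.
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800  # 3 hours, the Kubernetes default
```

If the traffic instead enters through a Service of type LoadBalancer, preserving the client source IP usually also requires externalTrafficPolicy: Local.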

Short answer for now: I would just run Squidex without caching. There is a memory cache for apps and schemas, but MongoDB is fast enough for these records.

When you enable that caching, Squidex uses pub-sub (over Redis, MongoDB, RabbitMQ or Google Pub/Sub) to distribute cache invalidations to the other instances.
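
This is not Squidex’s implementation, just a minimal sketch of that pattern over Redis pub/sub so the mechanics are concrete (the channel name, cache shape and save_to_db callback are assumptions):

```python
import redis

CHANNEL = "cache-invalidation"   # channel name chosen for illustration
local_cache = {}                 # stands in for one instance's in-memory cache


def write_and_invalidate(r: redis.Redis, key: str, value, save_to_db):
    # Persist the change, refresh the local cache, then notify the other nodes.
    save_to_db(key, value)
    local_cache[key] = value
    r.publish(CHANNEL, key)


def listen_for_invalidations(r: redis.Redis):
    # Every instance runs a subscriber and evicts keys that other instances
    # changed, so a stale entry lives only until the message arrives.
    pubsub = r.pubsub()
    pubsub.subscribe(CHANNEL)
    for message in pubsub.listen():
        if message["type"] == "message":
            local_cache.pop(message["data"].decode(), None)
```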

Please note: you can only have one worker instance (this is the instance that runs all the background processes).
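
On Kubernetes, one way to respect that constraint is to split Squidex into two Deployments and pin the worker to a single replica. This is only a sketch: the names and image tag are placeholders, and the env var that marks an instance as the worker is an assumption, so take the actual setting from the Squidex configuration docs.

```yaml
# API pods: stateless request handling, scale these as needed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: squidex-api                 # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels: { app: squidex-api }
  template:
    metadata:
      labels: { app: squidex-api }
    spec:
      containers:
        - name: squidex
          image: squidex/squidex:7  # pin to the exact tag you deploy
---
# Worker pod: exactly one replica runs the background processes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: squidex-worker              # placeholder name
spec:
  replicas: 1
  strategy:
    type: Recreate                  # avoid two workers running during a rollout
  selector:
    matchLabels: { app: squidex-worker }
  template:
    metadata:
      labels: { app: squidex-worker }
    spec:
      containers:
        - name: squidex
          image: squidex/squidex:7
          env:
            # Assumed setting name; check the Squidex docs for the real one.
            - name: CLUSTERING__WORKER
              value: "true"
```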
