Pods fail with Exit Code 139

I have…

I’m submitting a…

  • [ ] Regression (a behavior that stopped working in a new release)
  • [ ] Bug report
  • [ ] Performance issue
  • [x] Documentation issue or request

Current behavior

I am installing Squidex via the Kubernetes Helm chart on the Open Telekom Cloud hosting service.
The installation fails to spin up the requested pods. Specifically the Squidex pod fails (not the MongoDB pods, which I got working by changing storageClass to csi-disk, as Open Telekom supports).
The pod fails with Exit Code 139. Here is what I have found about that error code:

  • This indicates that container received SIGSEGV
  • SIGSEGV indicates a segmentation fault. This occurs when a program attempts to access a memory location that it’s not allowed to access, or attempts to access a memory location in a way that’s not allowed.
  • From the Docker container standpoint, this either indicates an issue with the application code or sometimes an issue with the base images used by the container.
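
For reference, the exit code of the crashed container can be confirmed with kubectl (pod name and namespace taken from the log below):

kubectl get pod squidex-1646399944-574f5b58d4-xxt8k -n app \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'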

We got Squidex running on http (but not https) a while ago.
But when we tried retracing our steps, we could not get it working again, so we are not sure how. This also indicates to me that it might be a cloud provider issue, and not the Squidex code, as one would presume from the description of error code 139 above.

Error log:

{"logLevel":"Information","message":"Application started","environment":{"applicationname":"Squidex","aspnetcore_urls":"http://\u002B:80","aspnet_version":"5.0.0","assets:defaultpagesize":"200","assets:deletepermanent":"False","assets:deleterecursive":"True","assets:folderperapp":"False","assets:maxresults":"200","assets:maxsize":"5242880","assets:timeoutfind":"00:00:01","assets:timeoutquery":"00:00:05","assetstore:amazons3:accesskey":"\u003CMY_KEY\u003E","assetstore:amazons3:bucket":"squidex-test","assetstore:amazons3:bucketfolder":"squidex-assets","assetstore:amazons3:forcepathstyle":"False","assetstore:amazons3:regionname":"eu-central-1","assetstore:amazons3:secretkey":"\u003CMY_SECRET\u003E","assetstore:amazons3:serviceurl":"","assetstore:azureblob:connectionstring":"UseDevelopmentStorage=true","assetstore:azureblob:containername":"squidex-assets","assetstore:exposesourceurl":"False","assetstore:folder:path":"Assets","assetstore:ftp:password":"","assetstore:ftp:path":"Assets","assetstore:ftp:serverhost":"","assetstore:ftp:serverport":"21","assetstore:ftp:username":"","assetstore:googlecloud:bucket":"squidex-assets","assetstore:mongodb:bucket":"fs","assetstore:mongodb:configuration":"mongodb://localhost","assetstore:mongodb:database":"SquidexAssets","assetstore:type":"MongoDb","caching:maxsurrogatekeyssize":"0","caching:replicated:enable":"True","caching:strongetag":"False","contentroot":"/app","contents:defaultpagesize":"200","contents:maxresults":"200","contents:timeoutfind":"00:00:01","contents:timeoutquery":"00:00:05","dotnet_running_in_container":"true","dotnet_version":"5.0.0","email:notifications:existinguserbody":"Dear User,\r\n\r\n$ASSIGNER_NAME ($ASSIGNER_EMAIL) has invited you to join App $APP_NAME at Squidex Headless CMS.\r\n\r\nLogin or reload the Management UI to see the App.\r\n\r\nThank you very much,\r\nThe Squidex Team\r\n\r\n\u003C\u003CStart now!\u003E\u003E [$UI_URL]","email:notifications:existingusersubject":"[Squidex CMS] You have been invited to join App $APP_NAME","email:notifications:newuserbody":"Welcome to Squidex\r\nDear User,\r\n\r\n$ASSIGNER_NAME ($ASSIGNER_EMAIL) has invited you to join Project (also called an App) $APP_NAME at Squidex Headless CMS. 
Login with your Github, Google or Microsoft credentials to create a new user account and start editing content now.\r\n\r\nThank you very much,\r\nThe Squidex Team\r\n\r\n\u003C\u003CStart now!\u003E\u003E [$UI_URL]","email:notifications:newusersubject":"You have been invited to join Project $APP_NAME at Squidex CMS","email:notifications:usagebody":"Dear User,\r\n\r\nYou you are about to reach your usage limit for App $APP_NAME at Squidex Headless CMS.\r\n\r\nYou have already used $API_CALLS of your monthy limit of $API_CALLS_LIMIT API calls.\r\n\r\nPlease check your clients or upgrade your plan!\r\n\r\n\u003C\u003CGo to Squidex!\u003E\u003E [$UI_URL]","email:notifications:usagesubject":"[Squidex CMS] You you are about to reach your usage limit for App $APP_NAME","email:smtp:enablessl":"True","email:smtp:password":"","email:smtp:port":"587","email:smtp:sender":"hello@squidex.io","email:smtp:server":"","email:smtp:username":"","eventpublishers:alltorabbitmq:configuration":"amqp://guest:guest@localhost/","eventpublishers:alltorabbitmq:enabled":"False","eventpublishers:alltorabbitmq:eventsfilter":".*","eventpublishers:alltorabbitmq:exchange":"squidex","eventpublishers:alltorabbitmq:type":"RabbitMq","eventstore:mongodb:configuration":"mongodb://squidex-1646399944-mongodb-replicaset-0.squidex-1646399944-mongodb-replicaset.app.svc.cluster.local,squidex-1646399944-mongodb-replicaset-1.squidex-1646399944-mongodb-replicaset.app.svc.cluster.local,squidex-1646399944-mongodb-replicaset-2.squidex-1646399944-mongodb-replicaset.app.svc.cluster.local","eventstore:mongodb:database":"Squidex","eventstore:type":"MongoDb","exposedconfiguration:version":"squidex:version","fulltext:elastic:configuration":"http://localhost:9200","fulltext:elastic:indexname":"squidex","fulltext:type":"default","healthz:gc:threshold":"4096","home":"/root","hostname":"squidex-1646399944-574f5b58d4-xxt8k","identity:adminclientid":"","identity:adminclientsecret":"","identity:adminemail":"rasmuslc@brain-plus.com","identity:adminpassword":"Represent","identity:adminrecreate":"true","identity:allowpasswordauth":"true","identity:githubclient":"211ea00e726baf754c78","identity:githubsecret":"d0a0d0fe2c26469ae20987ac265b3a339fd73132","identity:googleclient":"1006817248705-t3lb3ge808m9am4t7upqth79hulk456l.apps.googleusercontent.com","identity:googlesecret":"QsEi-fHqkGw2_PjJmtNHf2wg","identity:lockautomatically":"false","identity:microsoftclient":"b55da740-6648-4502-8746-b9003f29d5f1","identity:microsoftsecret":"idWbANxNYEF4cB368WXJhjN","identity:microsofttenant":"","identity:oidcauthority":"","identity:oidcclient":"","identity:oidcgetclaimsfromuserinfoendpoint":"false","identity:oidcmetadataaddress":"","identity:oidcname":"OIDC","identity:oidconsignoutredirecturl":"","identity:oidcresponsetype":"id_token","identity:oidcscopes":"[]","identity:oidcscopes:0":"email","identity:oidcsecret":"","identity:privacyurl":"https://squidex.io/privacy","identity:showpii":"true","kafka:bootstrapservers":"","kubernetes_port":"tcp://10.247.0.1:443","kubernetes_port_443_tcp":"tcp://10.247.0.1:443","kubernetes_port_443_tcp_addr":"10.247.0.1","kubernetes_port_443_tcp_port":"443","kubernetes_port_443_tcp_proto":"tcp","kubernetes_service_host":"10.247.0.1","kubernetes_service_port":"443","kubernetes_service_port_https":"443","languages:custom":"","logging:applicationinsights:connectionstring":"InstrumentationKey=[key];IngestionEndpoint=https://[datacenter].in.applicationinsights.azure.com/","logging:applicationinsights:enabled":"false","logging:colors":"false","log
ging:human":"false","logging:level":"INFORMATION","logging:logrequests":"true","logging:otlp:enabled":"false","logging:otlp:endpoint":"","logging:stackdriver:enabled":"false","logging:storeenabled":"true","logging:storeretentationindays":"90","logging:storeretentionindays":"90","mode:isreadonly":"False","news:appname":"squidex-website","news:clientid":"squidex-website:default","news:clientsecret":"QGgqxd7bDHBTEkpC6fj8sbdPWgZrPrPfr3xzb3LKoec=","notifo:apikey":"","notifo:apiurl":"https://app.notifo.io","notifo:appid":"","orleans:clustering":"MongoDB","orleans:gatewayport":"40000","orleans:ipaddress":"","orleans:kubernetes":"true","orleans:siloport":"11111","orleans_cluster_id":"squidex","orleans_service_id":"squidex-5.9.0","paas_pod_id":"afeff30b-d5e0-49c1-8ef1-f0b918c13d33","path":"/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","plugins:0":"Squidex.Extensions.dll","pod_ip":"172.16.0.106","pod_name":"squidex-1646399944-574f5b58d4-xxt8k","pod_namespace":"app","rebuild:apps":"False","rebuild:assetfiles":"False","rebuild:assets":"False","rebuild:contents":"False","rebuild:indexes":"False","rebuild:rules":"False","rebuild:schemas":"False","robots:text":"User-agent: *\nAllow: /api/assets/*","rules:executiontimeoutinseconds":"10","running_in_container":"true","squidex_1646399944_port":"tcp://10.247.205.20:80","squidex_1646399944_port_80_tcp":"tcp://10.247.205.20:80","squidex_1646399944_port_80_tcp_addr":"10.247.205.20","squidex_1646399944_port_80_tcp_port":"80","squidex_1646399944_port_80_tcp_proto":"tcp","squidex_1646399944_service_host":"10.247.205.20","squidex_1646399944_service_port":"80","squidex_1646399944_service_port_http":"80","store:mongodb:configuration":"mongodb://squidex-1646399944-mongodb-replicaset-0.squidex-1646399944-mongodb-replicaset.app.svc.cluster.local,squidex-1646399944-mongodb-replicaset-1.squidex-1646399944-mongodb-replicaset.app.svc.cluster.local,squidex-1646399944-mongodb-replicaset-2.squidex-1646399944-mongodb-replicaset.app.svc.cluster.local","store:mongodb:contentdatabase":"SquidexContent","store:mongodb:database":"Squidex","store:type":"MongoDb","translations:deepl:authkey":"","translations:deepl:mapping:zh-cn":"zh-CN","translations:deepl:mapping:zh-tw":"zh-TW","translations:googlecloud:projectid":"","twitter:clientid":"QZhb3HQcGCvE6G8yNNP9ksNet","twitter:clientsecret":"Pdu9wdN72T33KJRFdFy1w4urBKDRzIyuKpc0OItQC2E616DuZD","ui:google:analyticsid":"UA-99989790-2","ui:hidedatebuttons":"False","ui:hidedatetimemodebutton":"False","ui:hidenews":"False","ui:hideonboarding":"False","ui:map:googlemaps:key":"AIzaSyB_Z8l3nwUxZhMJykiDUJy6bSHXXlwcYMg","ui:map:type":"OSM","ui:onlyadminscancreateapps":"False","ui:redirecttologin":"False","ui:referencesdropdownitemcount":"100","ui:regexsuggestions:email":"^[a-zA-Z0-9.!#$%\u0026\u2019*\u002B\\/=?^_\u0060{|}~-]\u002B@[a-zA-Z0-9-]\u002B(?:.[a-zA-Z0-9-]\u002B)*$","ui:regexsuggestions:phone":"^\\(*\\\u002B*[1-9]{0,3}\\)*-*[1-9]{0,3}[-. /]*\\(*[2-9]\\d{2}\\)*[-. /]*\\d{3}[-. 
/]*\\d{4} *e*x*t*\\.* *\\d{0,4}$","ui:regexsuggestions:slug":"^[a-z0-9]\u002B(\\-[a-z0-9]\u002B)*$","ui:regexsuggestions:url":"^(?:http(s)?:\\/\\/)?[\\w.-]\u002B(?:\\.[\\w\\.-]\u002B)\u002B[\\w\\-\\._~:\\/?#%[\\]@!\\$\u0026\u0027\\(\\)\\*\\\u002B,;=.]\u002B$","ui:showinfo":"False","urls":"http://\u002B:80","urls:basepath":"","urls:baseurl":"https://frontend.80.158.54.207.nip.io/webapp3","urls:enableforwardheaders":"True","urls:enforcehost":"False","urls:enforcehttps":"true","version":"5.0.0","webapp1_svc_port":"tcp://10.247.201.77:80","webapp1_svc_port_80_tcp":"tcp://10.247.201.77:80","webapp1_svc_port_80_tcp_addr":"10.247.201.77","webapp1_svc_port_80_tcp_port":"80","webapp1_svc_port_80_tcp_proto":"tcp","webapp1_svc_service_host":"10.247.201.77","webapp1_svc_service_port":"80","webapp2_svc_port":"tcp://10.247.230.227:80","webapp2_svc_port_80_tcp":"tcp://10.247.230.227:80","webapp2_svc_port_80_tcp_addr":"10.247.230.227","webapp2_svc_port_80_tcp_port":"80","webapp2_svc_port_80_tcp_proto":"tcp","webapp2_svc_service_host":"10.247.230.227","webapp2_svc_service_port":"80"},"timestamp":"2022-03-04T13:22:11Z","app":{"name":"Squidex","version":"5.9.0.0","sessionId":"b613383d-269d-4c31-91e6-4b86331559c1"}}
{"logLevel":"Information","initialize":["ValidationInitializer","SerializationInitializer","LanguagesInitializer","MongoAssetFolderRepository","MongoAssetRepository","MongoContentRepository","MongoEventStore","MongoGridFsAssetStore","MongoHistoryEventRepository","MongoMigrationStatus","MongoRequestLogRepository","MongoRoleStore","MongoRuleEventRepository","MongoSchemasHash","MongoTextIndex","MongoTextIndexerState","MongoUsageRepository","MongoUserStore","TokenStoreInitializer","CreateAdminInitializer"],"timestamp":"2022-03-04T13:22:11Z","app":{"name":"Squidex","version":"5.9.0.0","sessionId":"b613383d-269d-4c31-91e6-4b86331559c1"}}
{"logLevel":"Information","initializedSystem":"ValidationInitializer","timestamp":"2022-03-04T13:22:11Z","app":{"name":"Squidex","version":"5.9.0.0","sessionId":"b613383d-269d-4c31-91e6-4b86331559c1"}}
{"logLevel":"Information","initializedSystem":"SerializationInitializer","timestamp":"2022-03-04T13:22:11Z","app":{"name":"Squidex","version":"5.9.0.0","sessionId":"b613383d-269d-4c31-91e6-4b86331559c1"}}
{"logLevel":"Information","initializedSystem":"LanguagesInitializer","timestamp":"2022-03-04T13:22:11Z","app":{"name":"Squidex","version":"5.9.0.0","sessionId":"b613383d-269d-4c31-91e6-4b86331559c1"}}
{"logLevel":"Information","initializedSystem":"MongoAssetFolderRepository","timestamp":"2022-03-04T13:22:11Z","app":{"name":"Squidex","version":"5.9.0.0","sessionId":"b613383d-269d-4c31-91e6-4b86331559c1"}}
{"logLevel":"Information","initializedSystem":"MongoAssetRepository","timestamp":"2022-03-04T13:22:11Z","app":{"name":"Squidex","version":"5.9.0.0","sessionId":"b613383d-269d-4c31-91e6-4b86331559c1"}}
{"logLevel":"Information","initializedSystem":"MongoContentRepository","timestamp":"2022-03-04T13:22:11Z","app":{"name":"Squidex","version":"5.9.0.0","sessionId":"b613383d-269d-4c31-91e6-4b86331559c1"}}
{"logLevel":"Information","initializedSystem":"MongoEventStore","timestamp":"2022-03-04T13:22:11Z","app":{"name":"Squidex","version":"5.9.0.0","sessionId":"b613383d-269d-4c31-91e6-4b86331559c1"}}

{"logLevel":"Error","message":"QueueWorkItem was called on a non-null context [SystemTarget: S172.16.0.106:11111:384096130*stg/13/0000000d@S0000000d] but there is no valid WorkItemGroup for it.","eventId":{"id":101231},"timestamp":"2022-03-04T13:22:41Z","app":
{"name":"Squidex","version":"5.9.0.0","sessionId":"b613383d-269d-4c31-91e6-4b86331559c1"},"category":"Orleans.Runtime.Scheduler.OrleansTaskScheduler"}
{"logLevel":"Error","message":"QueueWorkItem was called on a non-null context [SystemTarget: S172.16.0.106:11111:384096130*stg/13/0000000d@S0000000d] but there is no valid WorkItemGroup for it.","eventId":{"id":101231},"timestamp":"2022-03-04T13:22:41Z","app":
{"name":"Squidex","version":"5.9.0.0","sessionId":"b613383d-269d-4c31-91e6-4b86331559c1"},"category":"Orleans.Runtime.Scheduler.OrleansTaskScheduler"}

Unhandled exception. System.TimeoutException: A timeout occurred after 30000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 }, OperationsCountServerSelector }. Client view of cluster state is { ClusterId : "1", Type : "Unknown", State : "Disconnected", Servers : [{ ServerId: "{ ClusterId : 1, EndPoint : "Unspecified/localhost:27017" }", EndPoint: "Unspecified/localhost:27017", ReasonChanged: "Heartbeat", State: "Disconnected", ServerVersion: , TopologyVersion: , Type: "Unknown", HeartbeatException: "MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server.
 ---> System.Net.Sockets.SocketException (101): Network is unreachable
   at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.CreateException(SocketError error, Boolean forAsyncThrow)
   at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.ConnectAsync(Socket socket)
   at System.Net.Sockets.Socket.ConnectAsync(EndPoint remoteEP, CancellationToken cancellationToken)
   at System.Net.Sockets.Socket.ConnectAsync(EndPoint remoteEP)
   at MongoDB.Driver.Core.Connections.TcpStreamFactory.ConnectAsync(Socket socket, EndPoint endPoint, CancellationToken cancellationToken)
   at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine)
   at MongoDB.Driver.Core.Connections.TcpStreamFactory.ConnectAsync(Socket socket, EndPoint endPoint, CancellationToken cancellationToken)
   at MongoDB.Driver.Core.Connections.TcpStreamFactory.CreateStreamAsync(EndPoint endPoint, CancellationToken cancellationToken)
   at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
   at System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1.AsyncStateMachineBox`1.MoveNext(Thread threadPoolThread)
   at System.Threading.Tasks.AwaitTaskContinuation.RunOrScheduleAction(IAsyncStateMachineBox box, Boolean allowInlining)
   at System.Threading.Tasks.Task.RunContinuations(Object continuationObject)
   at System.Threading.Tasks.Task.FinishSlow(Boolean userDelegateExecute)
   at System.Threading.Tasks.Task.TrySetException(Object exceptionObject)
   at System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1.SetException(Exception exception, Task`1& taskField)
   at MongoDB.Driver.Core.Connections.TcpStreamFactory.ConnectAsync(Socket socket, EndPoint endPoint, CancellationToken cancellationToken)
   at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
   at System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1.AsyncStateMachineBox`1.MoveNext(Thread threadPoolThread)
   at System.Threading.Tasks.AwaitTaskContinuation.RunOrScheduleAction(IAsyncStateMachineBox box, Boolean allowInlining)
   at System.Threading.Tasks.Task.RunContinuations(Object continuationObject)
   at System.Threading.Tasks.Task.FinishSlow(Boolean userDelegateExecute)
   at System.Threading.Tasks.Task.TrySetException(Object exceptionObject)
   at System.Threading.Tasks.ValueTask.ValueTaskSourceAsTask.<>c.<.cctor>b__4_0(Object state)
   at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.InvokeContinuation(Action`1 continuation, Object state, Boolean forceAsync, Boolean requiresExecutionContextFlow)
   at System.Net.Sockets.SocketAsyncContext.OperationQueue`1.ProcessSyncEventOrGetAsyncEvent(SocketAsyncContext context, Boolean skipAsyncEvents, Boolean processAsyncEvents)
   at System.Net.Sockets.SocketAsyncContext.HandleEvents(SocketEvents events)
   at System.Net.Sockets.SocketAsyncEngine.System.Threading.IThreadPoolWorkItem.Execute()
   at System.Threading.ThreadPoolWorkQueue.Dispatch()
--- End of stack trace from previous location ---
   at MongoDB.Driver.Core.Connections.TcpStreamFactory.ConnectAsync(Socket socket, EndPoint endPoint, CancellationToken cancellationToken)
   at MongoDB.Driver.Core.Connections.TcpStreamFactory.CreateStreamAsync(EndPoint endPoint, CancellationToken cancellationToken)
   at MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelperAsync(CancellationToken cancellationToken)
   --- End of inner exception stack trace ---
   at MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelperAsync(CancellationToken cancellationToken)
   at MongoDB.Driver.Core.Servers.ServerMonitor.InitializeConnectionAsync(CancellationToken cancellationToken)
   at MongoDB.Driver.Core.Servers.ServerMonitor.HeartbeatAsync(CancellationToken cancellationToken)", LastHeartbeatTimestamp: "2022-03-04T13:22:41.0998087Z", LastUpdateTimestamp: "2022-03-04T13:22:41.0998089Z" }] }.
   at MongoDB.Driver.Core.Clusters.Cluster.ThrowTimeoutException(IServerSelector selector, ClusterDescription description)
   at MongoDB.Driver.Core.Clusters.Cluster.WaitForDescriptionChangedHelper.HandleCompletedTask(Task completedTask)
   at MongoDB.Driver.Core.Clusters.Cluster.WaitForDescriptionChangedAsync(IServerSelector selector, ClusterDescription description, Task descriptionChangedTask, TimeSpan timeout, CancellationToken cancellationToken)
   at MongoDB.Driver.Core.Clusters.Cluster.SelectServerAsync(IServerSelector selector, CancellationToken cancellationToken)
   at MongoDB.Driver.MongoClient.AreSessionsSupportedAfterServerSelectionAsync(CancellationToken cancellationToken)
   at MongoDB.Driver.MongoClient.AreSessionsSupportedAsync(CancellationToken cancellationToken)
   at MongoDB.Driver.MongoClient.StartImplicitSessionAsync(CancellationToken cancellationToken)
   at MongoDB.Driver.MongoDatabaseImpl.UsingImplicitSessionAsync[TResult](Func`2 funcAsync, CancellationToken cancellationToken)
   at Squidex.Assets.MongoGridFsAssetStore.InitializeAsync(CancellationToken ct)
   at Squidex.Hosting.DelegateInitializer.InitializeAsync(CancellationToken ct)
   at Squidex.Hosting.InitializerHost.StartAsync(CancellationToken cancellationToken)
   at Microsoft.Extensions.Hosting.Internal.Host.StartAsync(CancellationToken cancellationToken)
   at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.RunAsync(IHost host, CancellationToken token)
   at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.RunAsync(IHost host, CancellationToken token)
   at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.Run(IHost host)
   at Squidex.Program.Main(String[] args) in /src/src/Squidex/Program.cs:line 22

Expected behavior

Expected behavior would be that the pods spin up and reach the ready state.

Minimal reproduction of the problem

As this might be a problem with the cloud provider, it may be hard to reproduce.
But here are the steps I took on Open Telekom:

  • Created a new Cloud Container Engine
  • Created an ingress-nginx-controller from the default Helm chart provided by Nginx
  • Created cert-manager for https requests

Then I executed the Squidex helm command with a few modifications:
helm install Squidex/squidex --generate-name -f values.yaml
My values.yaml for Squidex:

labels:
service:
  type: ClusterIP
  port: 80
deployment:
  replicaCount: 1
selectors:
  component: squidex
  partOf: ""
  version: ""
image:
  repository: squidex/squidex
  tag: ""
  pullPolicy: IfNotPresent

resources: { }
nodeSelector: { }
tolerations: [ ]
affinity: { }

clusterSuffix: cluster.local

ingress:  
  ## If true, Squidex Ingress will be created.
  ##
  enabled: false

  ## Squidex Ingress annotations
  annotations:
    ingressClassName: ingress-nginx
    kubernetes.io/ingress.class: ingress-nginx
    kubernetes.io/tls-acme: "true"
  hostName: my.domain.com/

  tls: [ ]
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

env:
  # Define the type of the event store
  EVENTSTORE__TYPE: MongoDb
  EVENTSTORE__MONGODB__DATABASE: "Squidex"

  # CREATE LOCAL ADMIN USER
  IDENTITY__ADMINEMAIL: "my@email.com"
  IDENTITY__ADMINPASSWORD: "mypassword"
  IDENTITY__ADMINRECREATE: "true" # Recreate the admin if it does not exist or the password does not match
  IDENTITY__ALLOWPASSWORDAUTH: "true" # Enable password auth. Set this to false if you want to disable local login, leaving only 3rd party login options
  IDENTITY__LOCKAUTOMATICALLY: "false" # Lock new users automatically, the administrator must unlock them
  IDENTITY__SHOWPII: "true" # Set to true to show PII (Personally Identifiable Information) in the logs
  IDENTITY__PRIVACYURL: "https://squidex.io/privacy" # The url to your privacy statement, if you host squidex yourself

  # Settings for Google auth (keep empty to disable)
  IDENTITY__GOOGLECLIENT: null
  IDENTITY__GOOGLESECRET: null

  # Settings for Github auth (keep empty to disable)
  IDENTITY__GITHUBCLIENT: null
  IDENTITY__GITHUBSECRET: null

  # Settings for Microsoft auth (keep empty to disable)
  # NOTE: Tenant is optional for using a specific AzureAD tenant
  IDENTITY__MICROSOFTCLIENT: null
  IDENTITY__MICROSOFTSECRET: null
  IDENTITY__MICROSOFTTENANT: null

  # Settings for your custom oidc server
  IDENTITY__OIDCNAME: null
  IDENTITY__OIDCAUTHORITY: null
  IDENTITY__OIDCCLIENT: null
  IDENTITY__OIDCSECRET: null
  IDENTITY__OIDCMETADATAADDRESS: null
  IDENTITY__OIDCSCOPES: [] # ["email"]
  IDENTITY__OIDCRESPONSETYPE: null # id_token or code
  IDENTITY__OIDCGETCLAIMSFROMUSERINFOENDPOINT: false
  IDENTITY__OIDCSINGOUTREDIRECTURL: null

  LETSENCRYPT_HOST: null
  LETSENCRYPT_EMAIL: null

  # LOGGING SETTINGS
  LOGGING__LEVEL: INFORMATION # Trace, Debug, Information, Warning, Error, Fatal
  LOGGING__HUMAN: false # Setting the flag to true enables well-formatted json logs
  LOGGING__COLORS: false # Set to true, to use colors
  LOGGING__LOGREQUESTS: true # Set to false to disable logging of http requests
  LOGGING__STOREENABLED: true # False to disable the log store
  LOGGING__STORERETENTIONINDAYS: 90 # The number of days request log items will be stored
  LOGGING__STACKDRIVER__ENABLED: false # True, to enable stackdriver integration
  LOGGING__OTLP__ENABLED: false # True, to enable OpenTelemetry Protocol integration
  LOGGING__OTLP__ENDPOINT: null # The endpoint to the agent
  LOGGING__APPLICATIONINSIGHTS__ENABLED: false # True, to enable application insights integration
  LOGGING__APPLICATIONINSIGHTS__CONNECTIONSTRING: null # "instrumentationkey=keyvalue"

  # Define the clustering type
  ORLEANS__CLUSTERING: MongoDB # SUPPORTED: MongoDB, Development
  ORLEANS__KUBERNETES: true # Tell Orleans it is running in kubernetes
  
  # Define the type of the read store
  STORE__TYPE: MongoDb
  STORE__MONGODB__DATABASE: "Squidex"
  STORE__MONGODB__CONTENTDATABASE: "SquidexContent"
  
  # Assets
  ASSETSTORE__TYPE: MongoDb

  URLS__BASEURL: https://my.domain.com/ # Set the base url of your application, to generate correct urls in background processes
  URLS__ENFORCEHTTPS: true # Set it to true to redirect the user from http to https permanently

mongodb-replicaset:
  enabled: true
  replicas: 3
  
  auth:
    enabled: false
    existingKeySecret: ""
    existingAdminSecret: ""
    existingMetricsSecret: ""
    # adminUser: username
    # adminPassword: password
    # metricsUser: metrics
    # metricsPassword: password
    # key: keycontent
  
  persistentVolume:
    enabled: true
    ## mongodb-replicaset data Persistent Volume Storage Class
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is
    ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
    ##   GKE, AWS & OpenStack)
    ##
    storageClassName: "csi-disk"
    storageClass: "csi-disk"
    accessModes:
      - ReadWriteOnce
    size: 10Gi

  nodeSelector: {}
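
Note: csi-disk is the storage class Open Telekom Cloud supports; the classes actually available on a cluster can be listed with:

kubectl get storageclass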

Environment

  • [ ] Self hosted with docker
  • [ ] Self hosted with IIS
  • [x] Self hosted with other version
  • [ ] Cloud version

Version: latest

Browser:

  • [x] Chrome (desktop)

Others:
The 2 nodes we are running are: 4 cores | 8 GB memory | 100 GB disks.
I have managed to get a random online tutorial running with 2 web apps using https and the same setup.

Thanks for the very nice application, which we chose over some others we tested. We like it a lot, and hope to get this working soon. :slight_smile:
Thanks in advance for any help. We have been trying for weeks to get this running. Let me know if any more details are needed.

You have to configure your mongodb connection string to point to your replica set.

Thank you for your swift reply. Sounds really simple, but I cannot seem to figure out which parameter you want me to change and to what? I am guessing it is somewhere in values.yaml.
I am also guessing that the value needs to be mongodb-replicaset, but I cannot see any mongodb connection string key anywhere in the values.yaml.

Hope you or someone else who knows, can elaborate on the details.
Sorry, I am complete noob in dev ops environments, but trying to learn.

It's these two environment variables:

Thank you for clarifying. I checked those values (in the deployed deployment.yaml) and the following were already in there without me editing anything; they seem to be filled in automatically:

  • name: EVENTSTORE__MONGODB__CONFIGURATION
    value: 'mongodb://squidex-mongodb-replicaset-0.squidex-mongodb-replicaset.app.svc.cluster.local,squidex-mongodb-replicaset-1.squidex-mongodb-replicaset.app.svc.cluster.local,squidex-mongodb-replicaset-2.squidex-mongodb-replicaset.app.svc.cluster.local'

  • name: STORE__MONGODB__CONFIGURATION
    value: 'mongodb://squidex-mongodb-replicaset-0.squidex-mongodb-replicaset.app.svc.cluster.local,squidex-mongodb-replicaset-1.squidex-mongodb-replicaset.app.svc.cluster.local,squidex-mongodb-replicaset-2.squidex-mongodb-replicaset.app.svc.cluster.local'

So I am a bit confused. Are these values incorrect?

I think because you have named your replica set "mongodb-replicaset", the host name for the individual members should be mongodb-replicaset-0.mongodb-replicaset.app.svc.cluster.local.

It is more a kubernetes issue.

Just to be clear, I did not name the replica sets anything; they seem to name themselves during installation.
I know for sure I did not put "mongodb-replicaset" anywhere.

Tried your suggestion, but it did not help. I am also wondering how removing the squidex part from the string should help. Because the actual mongodb pod contains squidex in its name, would I not just point to something that does not exist?
My pod names are in fact equal to squidex-mongodb-replicaset-0, squidex-mongodb-replicaset-1 and so on. Not mongodb-replicaset-0.

Looking at the log I posted, I thought it might have something to do with Orleans.

{"logLevel":"Error","message":"QueueWorkItem was called on a non-null context [SystemTarget: S172.16.0.106:11111:384096130*stg/13/0000000d@S0000000d] but there is no valid WorkItemGroup for it.","eventId":{"id":101231},"timestamp":"2022-03-04T13:22:41Z","app":
{"name":"Squidex","version":"5.9.0.0","sessionId":"b613383d-269d-4c31-91e6-4b86331559c1"},"category":"Orleans.Runtime.Scheduler.OrleansTaskScheduler"}
{"logLevel":"Error","message":"QueueWorkItem was called on a non-null context [SystemTarget: S172.16.0.106:11111:384096130*stg/13/0000000d@S0000000d] but there is no valid WorkItemGroup for it.","eventId":{"id":101231},"timestamp":"2022-03-04T13:22:41Z","app":
{"name":"Squidex","version":"5.9.0.0","sessionId":"b613383d-269d-4c31-91e6-4b86331559c1"},"category":"Orleans.Runtime.Scheduler.OrleansTaskScheduler"}

After installing with helm, I have an upgrade option in my web interface, where I can change the values for the chart.
In the list, I see these values which are empty (have no value):
ORLEANS_SERVICE_ID
ORLEANS_CLUSTER_ID
POD_NAMESPACE
POD_NAME
POD_IP
They are not a part of the values.yaml, but somehow appear there after installing.
However, they do not have any values. Could that be an issue, or is it unrelated?
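
For reference, such variables usually carry no literal value in the chart because they are meant to be injected at runtime via the Kubernetes downward API, roughly like this sketch (not necessarily the chart's exact template), so an empty value in the upgrade UI is not necessarily wrong:

- name: POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP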

Sorry, I thought the replica set was named differently.

The only exception that matters is the one you posted:

Unhandled exception. System.TimeoutException: A timeout occurred after 30000ms

This means that Squidex tried to connect to MongoDB for 30 seconds and then failed. I do not understand why. Perhaps something is different with DNS and it cannot resolve the host names. That is what I am trying to find out. Something seems to be wrong there.
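
One way to test the DNS theory from inside the cluster is a throwaway dnsutils pod (image as in the Kubernetes DNS debugging docs), for example:

kubectl run dnsutils -n app --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 --restart=Never -- sleep 3600
kubectl exec -ti dnsutils -n app -- nslookup squidex-mongodb-replicaset-0.squidex-mongodb-replicaset.app.svc.cluster.local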

Okay, so the missing Orleans values do not seem to have anything to do with it, I guess?!
I will try to have a closer look at your suggested point of failure. Thanks.

Also, I think I did not explain clearly that this squidex-hosting repo works for us on http (not https, but that is a different issue):
https://github.com/Squidex/squidex-hosting/tree/master/kubernetes/helm

This is the repo we are trying to get to work and which this thread is about:
https://github.com/Squidex/squidex/tree/master/helm

Just to be clear :slight_smile:

Perhaps the cluster suffix is different in your case?

clusterSuffix: cluster.local

It is used to calculate the host names of the mongo containers.
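
For a StatefulSet member behind a headless service, the host name follows the pattern

<pod-name>.<service-name>.<namespace>.svc.<clusterSuffix>

so with the defaults here it would be, e.g., squidex-mongodb-replicaset-0.squidex-mongodb-replicaset.app.svc.cluster.local.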

Not sure how to check what it could be. I have nothing mentioning a suffix on my cluster information page.
Also, the values calculated for the mongo containers match the names of the containers as far as I can see.

From deployed deployment.yaml

 - name: EVENTSTORE__MONGODB__CONFIGURATION
   value: 'mongodb://squidex-mongodb-replicaset-0.squidex-mongodb-replicaset.app.svc.cluster.local,squidex-mongodb-replicaset-1.squidex-mongodb-replicaset.app.svc.cluster.local'
 - name: STORE__MONGODB__CONFIGURATION
   value: 'mongodb://squidex-mongodb-replicaset-0.squidex-mongodb-replicaset.app.svc.cluster.local,squidex-mongodb-replicaset-1.squidex-mongodb-replicaset.app.svc.cluster.local'

From squidex pod

    - name: EVENTSTORE__MONGODB__CONFIGURATION
      value: 'mongodb://squidex-mongodb-replicaset-0.squidex-mongodb-replicaset.app.svc.cluster.local,squidex-mongodb-replicaset-1.squidex-mongodb-replicaset.app.svc.cluster.local'
    - name: STORE__MONGODB__CONFIGURATION
      value: 'mongodb://squidex-mongodb-replicaset-0.squidex-mongodb-replicaset.app.svc.cluster.local,squidex-mongodb-replicaset-1.squidex-mongodb-replicaset.app.svc.cluster.local'

From squidex-mongodb-replicaset-0 yaml file

apiVersion: v1
kind: Pod
metadata:
  name: squidex-mongodb-replicaset-0
  generateName: squidex-mongodb-replicaset-
  namespace: app
  selfLink: /api/v1/namespaces/app/pods/squidex-mongodb-replicaset-0
  uid: 375c7cf8-ffa7-4d70-a252-a8711e07c1b7
  resourceVersion: '8661519'
  creationTimestamp: '2022-03-10T15:56:36Z'
  labels:
    app: mongodb-replicaset
    controller-revision-hash: squidex-mongodb-replicaset-64b677d67d
    release: squidex
    statefulset.kubernetes.io/pod-name: squidex-mongodb-replicaset-0

Can you try to find out the host name of this pod?

In the pod yaml, I have this:

 hostname: squidex-mongodb-replicaset-0
 subdomain: squidex-mongodb-replicaset

That's not the host name: https://medium.com/kubernetes-tutorials/kubernetes-dns-for-services-and-pods-664804211501
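
For example, the FQDN the pod itself reports can be checked directly (assuming the image ships the hostname tool, as the Debian-based mongo image does):

kubectl exec -ti squidex-mongodb-replicaset-0 -n app -- hostname -f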

Hm, okay. I have tried to find a way to get that hostname, starting with the article, but I cannot seem to find it. Among other articles, I tried this one, but it did not give me anything on the pod's hostname:

It did give a kind of confirmation that cluster.local is used on my cloud host.

ubuntu@ecs-admin:~$ kubectl exec -ti dnsutils -- nslookup my.external.ip.address 
my.external.ip.address.in-addr.arpa      name = app-ingress-ingress-nginx-controller.ingress.svc.cluster.local.

Maybe I cannot retrieve it because it does not have one?! (as you mentioned, Squidex cannot connect to it)

Some details about the service:
The internal service name is: squidex-mongodb-replicaset
The internal domain name for this service is: squidex-mongodb-replicaset.app.svc.cluster.local
Access Address is: none
Access Type is: Headless Service
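
Since it is a headless service, resolving the service name itself should return one A record per ready member; a quick check with the same dnsutils pod:

kubectl exec -ti dnsutils -- nslookup squidex-mongodb-replicaset.app.svc.cluster.local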

kubectl describe pod gives me:

Name: squidex-mongodb-replicaset-0
Namespace: app
Priority: 0
Node: 10.0.1.10/10.0.1.10
Start Time: Fri, 11 Mar 2022 09:02:03 +0000
Labels: app=mongodb-replicaset
controller-revision-hash=squidex-mongodb-replicaset-64b677d67d
release=squidex
statefulset.kubernetes.io/pod-name=squidex-mongodb-replicaset-0
Annotations: checksum/config: ff02334879fa26821a473b211afeeeb2be8aa12e2a4ef4a9e42c1f60666d3023
k8s.v1.cni.cncf.io/network-status:
[{
"name": "default-network",
"ips": [
"172.16.0.32"
],
"default": true
}]
kubernetes.io/psp: psp-global
Status: Running
IP: 172.16.0.32
IPs:
IP: 172.16.0.32
Controlled By: StatefulSet/squidex-mongodb-replicaset
Init Containers:
copy-config:
Container ID: docker://b51eb96e84e91bfd407f6d6f0fe8e47afe11b13be4b41d3276fb954f8b35b900
Image: busybox:1.29.3
Image ID: docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796
Port:
Host Port:
Command:
sh
Args:
-c
set -e
set -x

  cp /configdb-readonly/mongod.conf /data/configdb/mongod.conf
  
State:          Terminated
  Reason:       Completed
  Exit Code:    0
  Started:      Fri, 11 Mar 2022 09:02:21 +0000
  Finished:     Fri, 11 Mar 2022 09:02:21 +0000
Ready:          True
Restart Count:  0
Environment:    <none>
Mounts:
  /configdb-readonly from config (rw)
  /data/configdb from configdir (rw)
  /var/run/secrets/kubernetes.io/serviceaccount from default-token-nrkr2 (ro)
  /work-dir from workdir (rw)

install:
Container ID: docker://0b450216fe80e2ef5b56f0fcfd96681579ad7d0e039a2d4905883c6b51731cf5
Image: unguiculus/mongodb-install:0.7
Image ID: docker-pullable://unguiculus/mongodb-install@sha256:a3a0154bf476b5a46864a09934457eeea98c4e7f240c8e71044fce91dc4dbb8b
Port:
Host Port:
Args:
--work-dir=/work-dir
State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 11 Mar 2022 09:02:22 +0000
Finished: Fri, 11 Mar 2022 09:02:22 +0000
Ready: True
Restart Count: 0
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-nrkr2 (ro)
/work-dir from workdir (rw)
bootstrap:
Container ID: docker://8dd7be1881a9b024045db0918d2c64ee4ad396c36d5434db365a92df4dd8dd9e
Image: mongo:3.6
Image ID: docker-pullable://mongo@sha256:146c1fd999a660e697aac40bc6da842b005c7868232eb0b7d8996c8f3545b05d
Port:
Host Port:
Command:
/work-dir/peer-finder
Args:
-on-start=/init/on-start.sh
-service=squidex-mongodb-replicaset
State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 11 Mar 2022 09:02:23 +0000
Finished: Fri, 11 Mar 2022 09:02:27 +0000
Ready: True
Restart Count: 0
Environment:
POD_NAMESPACE: app (v1:metadata.namespace)
REPLICA_SET: rs0
TIMEOUT: 900
Mounts:
/data/configdb from configdir (rw)
/data/db from datadir (rw)
/init from init (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-nrkr2 (ro)
/work-dir from workdir (rw)
Containers:
mongodb-replicaset:
Container ID: docker://657f5faf99be8120ca069981b23ee88da0b516048278dc218790fc563e644ebe
Image: mongo:3.6
Image ID: docker-pullable://mongo@sha256:146c1fd999a660e697aac40bc6da842b005c7868232eb0b7d8996c8f3545b05d
Port: 27017/TCP
Host Port: 0/TCP
Command:
mongod
Args:
--config=/data/configdb/mongod.conf
--dbpath=/data/db
--replSet=rs0
--port=27017
--bind_ip=0.0.0.0
State: Running
Started: Fri, 11 Mar 2022 09:02:27 +0000
Ready: True
Restart Count: 0
Liveness: exec [mongo --eval db.adminCommand('ping')] delay=30s timeout=5s period=10s #success=1 #failure=3
Readiness: exec [mongo --eval db.adminCommand('ping')] delay=5s timeout=1s period=10s #success=1 #failure=3
Environment:
Mounts:
/data/configdb from configdir (rw)
/data/db from datadir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-nrkr2 (ro)
/work-dir from workdir (rw)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
datadir:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: datadir-squidex-mongodb-replicaset-0
ReadOnly: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: squidex-mongodb-replicaset-mongodb
Optional: false
init:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: squidex-mongodb-replicaset-init
Optional: false
workdir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit:
configdir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit:
default-token-nrkr2:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-nrkr2
Optional: false
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:

Don’t know if that helps.

Can you book an appointment for next week where we have a look together? https://calendly.com/squidex-meeting/30min

Btw: Are you from Germany?

Thank you for offering that. I can, and I suggest Tuesday, 15 March at 14:00 if that is possible for you? Otherwise let me know.
I am from Denmark :slight_smile:

Note: I am on Central European Time (CET), UTC +1

Thanks for trying to help today.

Can I ask what distribution and kernel version you used in the test on Google Cloud where you got it working?

So I got a reply and a solution from Open Telekom.
As you also mentioned, the main issue is that the app cannot connect to MongoDB.
Quote from the reply in the Open Telekom community:
BEGIN QUOTE

To be more precise: the application has 3 different db connection strings:

"store:mongodb:configuration": "mongodb://squidex-1646399944-mongodb-replicaset-0.squidex-1646399944-mongodb-replicaset.app.svc.cluster.local,squidex-1646399944-mongodb-replicaset-1.squidex-1646399944-mongodb-replicaset.app.svc.cluster.local,squidex-1646399944-mongodb-replicaset-2.squidex-1646399944-mongodb-replicaset.app.svc.cluster.local" ,
"eventstore:mongodb:configuration": "mongodb://squidex-1646399944-mongodb-replicaset-0.squidex-1646399944-mongodb-replicaset.app.svc.cluster.local,squidex-1646399944-mongodb-replicaset-1.squidex-1646399944-mongodb-replicaset.app.svc.cluster.local,squidex-1646399944-mongodb-replicaset-2.squidex-1646399944-mongodb-replicaset.app.svc.cluster.local" ,
"assetstore:mongodb:configuration": "mongodb://localhost" ,

As ASSETSTORE__MONGODB__CONFIGURATION is not set, the application uses the default of "mongodb://localhost" (as can be seen in the application logs), which obviously does not accept connections on the MongoDB port (27017) and causes the failure:

While I am unsure why it would work on Google Cloud, my best guess is that they changed the value of ASSETSTORE__TYPE and allowed it to use the "assetstore:googlecloud:bucket": "squidex-assets" default instead.

The problem ultimately is caused by missing default configuration on the helm chart and can be fixed in the following ways:

  • Override the default by setting it in values.yaml or by using --set argument of helm to set:
env.ASSETSTORE__MONGODB__CONFIGURATION=mongodb://squidex-mongodb-replicaset-0.squidex-mongodb-replicaset.app.svc.cluster.local,squidex-mongodb-replicaset-1.squidex-mongodb-replicaset.app.svc.cluster.local,squidex-mongodb-replicaset-2.squidex-mongodb-replicaset.app.svc.cluster.local
  • Use Kustomize to directly override helm manifest
  • Fork the chart to a repo and fix the missing default in templates/deployment.yaml

I have noticed another problem with the chart after fixing the database problem: it requires pod read permissions:

The resources for this requirement are present in the Squidex chart repo but not inside the Helm chart. They can be applied with:

kubectl apply -n <your_namespace> -f https://raw.githubusercontent.com/Squidex/squidex/master/helm/setup-roles.yml

Alternatively, if a custom fork of the helm chart is chosen as the solution in the previous step, setup-roles.yml can simply be added to the templates directory of the chart so it is applied automatically when the chart is deployed.

END QUOTE
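
For reference, the first option from the quote would look roughly like this in my values.yaml (host names as generated in my cluster, merged with the existing env section above):

env:
  ASSETSTORE__MONGODB__CONFIGURATION: "mongodb://squidex-mongodb-replicaset-0.squidex-mongodb-replicaset.app.svc.cluster.local,squidex-mongodb-replicaset-1.squidex-mongodb-replicaset.app.svc.cluster.local,squidex-mongodb-replicaset-2.squidex-mongodb-replicaset.app.svc.cluster.local"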

I also had to change the livenessProbe and readinessProbe port from http to 80 to get those working.
And I added initialDelaySeconds: 300 to the readinessProbe to ensure the app was running before probing.
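
In deployment terms, the probe sections ended up roughly like this (the /healthz path here is illustrative; only the port and the initial delay were changed):

livenessProbe:
  httpGet:
    path: /healthz
    port: 80
readinessProbe:
  httpGet:
    path: /healthz
    port: 80
  initialDelaySeconds: 300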

The solution they provided works, so now all the pods are in the ready state. But unfortunately I am facing a new issue.
When accessing the url in my browser I get this:

And I am not sure why this happens. Any thoughts or ideas on how to fix this hopefully final issue?
If anyone has any ideas on how to fix it, I would appreciate it. Otherwise I will reply here when I hopefully find a solution.

EDIT: I found out that squidex is looking for the resources on the main url https://www.myurl.com, while I want to access it through https://www.myurl.com/squidex
Example:
Currently wrong url: https://www.myurl.com/scripts/outdatedbrowser/outdatedbrowser.min.css
Needed correct url: https://www.myurl.com/squidex/scripts/outdatedbrowser/outdatedbrowser.min.css
Still working on figuring out how this happens…

I think it should work if you set URLS__BASEURL to https://www.myurl.com/squidex or check

URLS__BASEPATH: https://github.com/Squidex/squidex/blob/master/backend/src/Squidex/appsettings.json#L12
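
i.e., something along these lines in the chart's env section (an untested sketch; depending on the version, one or both may be needed):

env:
  URLS__BASEURL: https://www.myurl.com/squidex
  URLS__BASEPATH: /squidex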

You should see the actual value for ASSETSTORE__TYPE in the logs.