Elasticsearch upsert is not working as expected locally

I am running Squidex on a local machine using Docker. The upsert request to Elasticsearch is not working.

Current behavior

You can check the error logs here.
And the Elasticsearch rule parameters here.

Expected behavior

Any action like content-updated or published should trigger an upsert request to Elasticsearch and insert/update the information there.
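For reference, this is roughly the kind of upsert I would expect to land in Elasticsearch (just a sketch; the index name, document id and fields are placeholders, not necessarily the exact request Squidex sends):

# Hypothetical upsert using the standard Elasticsearch update API with doc_as_upsert
curl -X POST "http://localhost:9200/app_name/_update/some-content-id" \
  -H "Content-Type: application/json" \
  -d '{ "doc": { "title": "Hello" }, "doc_as_upsert": true }'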

Minimal reproduction of the problem

  1. Download the docker-compose and .env file from squidex-docker
  2. Replace the docker-compose.yml file with
version: '2.1'
services:
  squidex_mongo:
    image: mongo:latest
    volumes:
      - mongo:/data/db
    networks:
      - internal
    restart: unless-stopped

  squidex_squidex:
    image: "squidex/squidex:dev"
    ports:
      - "5000:80"
    hostname: ${SQUIDEX_DOMAIN}
    environment:
      - URLS__BASEURL=${SQUIDEX_PROTOCOL}://${SQUIDEX_DOMAIN}/
      - URLS__ENFORCEHTTPS=${SQUIDEX_FORCE_HTTPS}
      - EVENTSTORE__MONGODB__CONFIGURATION=mongodb://squidex_mongo
      - STORE__MONGODB__CONFIGURATION=mongodb://squidex_mongo
      - IDENTITY__ADMINEMAIL=${SQUIDEX_ADMINEMAIL}
      - IDENTITY__ADMINPASSWORD=${SQUIDEX_ADMINPASSWORD}
      - IDENTITY__GOOGLECLIENT=${SQUIDEX_GOOGLECLIENT}
      - IDENTITY__GOOGLESECRET=${SQUIDEX_GOOGLESECRET}
      - IDENTITY__GITHUBCLIENT=${SQUIDEX_GITHUBCLIENT}
      - IDENTITY__GITHUBSECRET=${SQUIDEX_GITHUBSECRET}
      - IDENTITY__MICROSOFTCLIENT=${SQUIDEX_MICROSOFTCLIENT}
      - IDENTITY__MICROSOFTSECRET=${SQUIDEX_MICROSOFTSECRET}
      - IDENTITY__SHOWPII=true
      - IDENTITY__PRIVACYURL=http://research.squidex.com/privacy.html
      - REBUILD_APPS=true
      - LETSENCRYPT_HOST=${SQUIDEX_DOMAIN}
      - LETSENCRYPT_EMAIL=${SQUIDEX_ADMINEMAIL}
      - UI__ONLYADMINSCANCREATEAPPS=True
    depends_on:
      - squidex_mongo
    volumes:
      - squidex_assets:/app/Assets
    networks:
      - internal
    restart: unless-stopped

networks:
  internal:
    driver: bridge

volumes:
  mongo:
    driver: local
  squidex_assets:
    driver: local
  3. Update the .env file to:
SQUIDEX_PROTOCOL=http
SQUIDEX_FORCE_HTTPS=False
SQUIDEX_DOMAIN=localhost:5000
SQUIDEX_ADMINEMAIL=user@example.com
SQUIDEX_ADMINPASSWORD=user@123!
SQUIDEX_GITHUBCLIENT=
SQUIDEX_GITHUBSECRET=
SQUIDEX_GOOGLECLIENT=
SQUIDEX_GOOGLESECRET=
SQUIDEX_MICROSOFTCLIENT=
SQUIDEX_MICROSOFTSECRET=
  4. Run docker-compose up and access the app at http://localhost:5000.
  5. Create a rule for Elasticsearch and fill in the required information:
Host: http://localhost:9200/
Index Name: app_name
  6. Now create new content and check the logs for the Elasticsearch request (see the check after this list).
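A quick way to confirm from the host whether anything was written (the index name app_name is the one from the rule above):

# List all indices; the rule's target index should appear once the first upsert succeeds
curl "http://localhost:9200/_cat/indices?v"
# Search the index directly; if the rule worked there should be at least one hit
curl "http://localhost:9200/app_name/_search?pretty"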

Environment

  • [x] Self hosted with docker
  • [ ] Self hosted with IIS
  • [ ] Self hosted with other version
  • [ ] Cloud version

Version: [VERSION]

Browser:

  • [x] Chrome (desktop)
  • [ ] Chrome (Android)
  • [ ] Chrome (iOS)
  • [ ] Firefox
  • [ ] Safari (desktop)
  • [ ] Safari (iOS)
  • [ ] IE
  • [ ] Edge

Can you post the content of the logs for the failed events? How can I read the logs from a screenshot?

Sorry for the late response, this was the output from docker logs containerId:

squidex_squidex_1  | {
squidex_squidex_1  |   "logLevel": "Information",
squidex_squidex_1  |   "filters": {
squidex_squidex_1  |     "appId": "96ef7fb8-6e39-49cf-a096-8bbe79ff085a",
squidex_squidex_1  |     "appName": "gwell-test",
squidex_squidex_1  |     "userId": "5ee8728eba46000001dca454",
squidex_squidex_1  |     "clientId": "squidex-frontend",
squidex_squidex_1  |     "costs": 1
squidex_squidex_1  |   },
squidex_squidex_1  |   "elapsedRequestMs": 411,
squidex_squidex_1  |   "app": {
squidex_squidex_1  |     "name": "Squidex",
squidex_squidex_1  |     "version": "4.0.0.0",
squidex_squidex_1  |     "sessionId": "e7aa10f5-0f44-408c-a2fd-f2eedb2f7b12"
squidex_squidex_1  |   },
squidex_squidex_1  |   "web": {
squidex_squidex_1  |     "requestId": "|2b8956f9-46d833981a6724dc.",
squidex_squidex_1  |     "requestPath": "/api/content/gwell-test/articles/5ed4093f-82a2-4b42-ab90-ac2bda9745e6",
squidex_squidex_1  |     "requestMethod": "PUT"
squidex_squidex_1  |   },
squidex_squidex_1  |   "timestamp": "2020-06-18T09:15:58Z"
squidex_squidex_1  | }
squidex_squidex_1  | 
squidex_squidex_1  | {
squidex_squidex_1  |   "logLevel": "Information",
squidex_squidex_1  |   "filters": {
squidex_squidex_1  |     "appId": "96ef7fb8-6e39-49cf-a096-8bbe79ff085a",
squidex_squidex_1  |     "appName": "gwell-test",
squidex_squidex_1  |     "userId": "5ee8728eba46000001dca454",
squidex_squidex_1  |     "clientId": "squidex-frontend",
squidex_squidex_1  |     "costs": 0.1
squidex_squidex_1  |   },
squidex_squidex_1  |   "elapsedRequestMs": 3,
squidex_squidex_1  |   "app": {
squidex_squidex_1  |     "name": "Squidex",
squidex_squidex_1  |     "version": "4.0.0.0",
squidex_squidex_1  |     "sessionId": "e7aa10f5-0f44-408c-a2fd-f2eedb2f7b12"
squidex_squidex_1  |   },
squidex_squidex_1  |   "web": {
squidex_squidex_1  |     "requestId": "|2b8956fb-46d833981a6724dc.",
squidex_squidex_1  |     "requestPath": "/api/apps/gwell-test/history",
squidex_squidex_1  |     "requestMethod": "GET"
squidex_squidex_1  |   },
squidex_squidex_1  |   "timestamp": "2020-06-18T09:15:58Z"
squidex_squidex_1  | }
squidex_squidex_1  | 
squidex_squidex_1  | {
squidex_squidex_1  |   "logLevel": "Warning",
squidex_squidex_1  |   "message": "Task [Id=1, Status=RanToCompletion] in WorkGroup [Activation: S127.0.0.1:11111:330167726*grn/1E6BF8FC/be8b96a0@98bad8a7 #GrainType=Squidex.Domain.Apps.Entities.Contents.Text.Lucene.LuceneTextIndexGrain Placement=RandomPlacement State=Valid] took elapsed time 0:00:00.2483516 for execution, which is longer than 00:00:00.2000000. Running on thread 15",
squidex_squidex_1  |   "eventId": {
squidex_squidex_1  |     "id": 101215
squidex_squidex_1  |   },
squidex_squidex_1  |   "task": "[Id=1, Status=RanToCompletion]",
squidex_squidex_1  |   "grainContext": "[Activation: S127.0.0.1:11111:330167726*grn/1E6BF8FC/be8b96a0@98bad8a7 #GrainType=Squidex.Domain.Apps.Entities.Contents.Text.Lucene.LuceneTextIndexGrain Placement=RandomPlacement State=Valid]",
squidex_squidex_1  |   "duration": "0:00:00.2483516",
squidex_squidex_1  |   "turnWarningLengthThreshold": "00:00:00.2000000",
squidex_squidex_1  |   "thread": "15",
squidex_squidex_1  |   "app": {
squidex_squidex_1  |     "name": "Squidex",
squidex_squidex_1  |     "version": "4.0.0.0",
squidex_squidex_1  |     "sessionId": "e7aa10f5-0f44-408c-a2fd-f2eedb2f7b12"
squidex_squidex_1  |   },
squidex_squidex_1  |   "timestamp": "2020-06-18T09:15:58Z",
squidex_squidex_1  |   "category": "Orleans.Runtime.Scheduler.WorkItemGroup"
squidex_squidex_1  | }

If you click the gears icon of an event in your first screenshot, you should see some details.

It just contains Elapsed Time.

You can check the Screenshot

Can you also give me the full logs? PM is also fine. Please upload them as a file somewhere.
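If it helps, the complete container log (including stderr) can be dumped to a file like this, using the container name from the log prefix above:

# Write the full Squidex container log to a file for sharing
docker logs squidex_squidex_1 > squidex.log 2>&1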

You can find all the docker logs here.

If you can debug it, it would be great. I guess it is just a stupid bug or so.

The only reason I can think of is that the rule execution was not completed within 2 seconds. Is there a way I can raise this value and check whether that works?

Then you would get a timeout. Such a short execution time points more to something like a null reference exception or similar.

Yes, I got a similar message in the log: took elapsed time 0:00:00.2483516 for execution, which is longer than 00:00:00.2000000. Running on thread 15.

It didn’t take much time when I inserted the data into Elasticsearch using curl.

I will try to dig into this issue, thank you for guiding me so far.

Oh, I guess it is just a Docker problem: you cannot use localhost when you want to make a connection to the host machine, because inside the container localhost refers to the container itself.

Does this mean Elasticsearch should be running on another machine?

You can just add it to the same docker-compose file and then use the service name from the compose file as the hostname, e.g. squidex_elastic or so, similar to the MongoDB connection string.
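Once the service is in the compose file, you can check the connectivity from inside the Squidex container, assuming curl is available in the image (the service name below is just an example, use whatever name you give the service):

# Hit Elasticsearch by its compose service name from inside the Squidex container
docker exec -it squidex_squidex_1 curl http://squidex_elasticsearch:9200/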

I have updated my docker-compose.yml file to include Elasticsearch as well:

version: '2.1'
services:
  squidex_mongo:
    image: mongo:latest
    volumes:
      - mongo:/data/db
    networks:
      - internal
    restart: unless-stopped
   
  squidex_elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    container_name: squidex_elasticsearch
    ports:
      - "9200:9200"
    hostname: ${SQUIDEX_DOMAIN}
    environment:
      - cluster.initial_master_nodes=squidex_elasticsearch
    volumes:
      - elasticsearch:/usr/share/elasticsearch/data
    networks:
      - internal
    restart: unless-stopped

  squidex_squidex:
    image: "squidex/squidex:dev"
    ports:
      - "5000:80"
    hostname: ${SQUIDEX_DOMAIN}
    environment:
      - URLS__BASEURL=${SQUIDEX_PROTOCOL}://${SQUIDEX_DOMAIN}/
      - URLS__ENFORCEHTTPS=${SQUIDEX_FORCE_HTTPS}
      - EVENTSTORE__MONGODB__CONFIGURATION=mongodb://squidex_mongo
      - STORE__MONGODB__CONFIGURATION=mongodb://squidex_mongo
      - IDENTITY__ADMINEMAIL=${SQUIDEX_ADMINEMAIL}
      - IDENTITY__ADMINPASSWORD=${SQUIDEX_ADMINPASSWORD}
      - IDENTITY__GOOGLECLIENT=${SQUIDEX_GOOGLECLIENT}
      - IDENTITY__GOOGLESECRET=${SQUIDEX_GOOGLESECRET}
      - IDENTITY__GITHUBCLIENT=${SQUIDEX_GITHUBCLIENT}
      - IDENTITY__GITHUBSECRET=${SQUIDEX_GITHUBSECRET}
      - IDENTITY__MICROSOFTCLIENT=${SQUIDEX_MICROSOFTCLIENT}
      - IDENTITY__MICROSOFTSECRET=${SQUIDEX_MICROSOFTSECRET}
      - IDENTITY__SHOWPII=true
      - IDENTITY__PRIVACYURL=http://research.squidex.com/privacy.html
      - REBUILD_APPS=true
      - LETSENCRYPT_HOST=${SQUIDEX_DOMAIN}
      - LETSENCRYPT_EMAIL=${SQUIDEX_ADMINEMAIL}
      - UI__ONLYADMINSCANCREATEAPPS=True
    depends_on:
      - squidex_elasticsearch
      - squidex_mongo
    volumes:
      - squidex_assets:/app/Assets
    networks:
      - internal
    restart: unless-stopped

networks:
  internal:
    driver: bridge

volumes:
  mongo:
    driver: local
  squidex_assets:
    driver: local
  elasticsearch:
    driver: local

All containers are up and running.

But the data is still not being upserted to Elasticsearch.

And there was this warning when I checked the log in the Elasticsearch container:

{"type": "server", "timestamp": "2020-06-19T04:48:17,473Z", "level": "WARN", "component": "o.e.c.c.ClusterFormationFailureHelper", "cluster.name": "docker-cluster", "node.name": "localhost", "message": "master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [squidex_elasticsearch] to bootstrap a cluster: have discovered [{localhost}{a4pLlzRcSqqSZxOLF-6I2A}{azxhuWRlRa2EJucNq3wElg}{172.21.0.2}{172.21.0.2:9300}{dilmrt}{ml.machine_memory=16629026816, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}]; discovery will continue using [127.0.0.1:9300, 127.0.0.1:9301, 127.0.0.1:9302, 127.0.0.1:9303, 127.0.0.1:9304, 127.0.0.1:9305] from hosts providers and [{localhost}{a4pLlzRcSqqSZxOLF-6I2A}{azxhuWRlRa2EJucNq3wElg}{172.21.0.2}{172.21.0.2:9300}{dilmrt}{ml.machine_memory=16629026816, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0" }

I updated my docker-compose file with Elasticsearch again, this time with:

version: '2.1'
services:
  squidex_mongo:
    image: mongo:latest
    volumes:
      - mongo:/data/db
    networks:
      - internal
    restart: unless-stopped
   
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - internal
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data02:/usr/share/elasticsearch/data
    networks:
      - internal
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data03:/usr/share/elasticsearch/data
    networks:
      - internal

  squidex_squidex:
    image: "squidex/squidex:dev"
    ports:
      - "5000:80"
    hostname: ${SQUIDEX_DOMAIN}
    environment:
      - URLS__BASEURL=${SQUIDEX_PROTOCOL}://${SQUIDEX_DOMAIN}/
      - URLS__ENFORCEHTTPS=${SQUIDEX_FORCE_HTTPS}
      - EVENTSTORE__MONGODB__CONFIGURATION=mongodb://squidex_mongo
      - STORE__MONGODB__CONFIGURATION=mongodb://squidex_mongo
      - IDENTITY__ADMINEMAIL=${SQUIDEX_ADMINEMAIL}
      - IDENTITY__ADMINPASSWORD=${SQUIDEX_ADMINPASSWORD}
      - IDENTITY__GOOGLECLIENT=${SQUIDEX_GOOGLECLIENT}
      - IDENTITY__GOOGLESECRET=${SQUIDEX_GOOGLESECRET}
      - IDENTITY__GITHUBCLIENT=${SQUIDEX_GITHUBCLIENT}
      - IDENTITY__GITHUBSECRET=${SQUIDEX_GITHUBSECRET}
      - IDENTITY__MICROSOFTCLIENT=${SQUIDEX_MICROSOFTCLIENT}
      - IDENTITY__MICROSOFTSECRET=${SQUIDEX_MICROSOFTSECRET}
      - IDENTITY__SHOWPII=true
      - IDENTITY__PRIVACYURL=http://research.squidex.com/privacy.html
      - REBUILD_APPS=true
      - LETSENCRYPT_HOST=${SQUIDEX_DOMAIN}
      - LETSENCRYPT_EMAIL=${SQUIDEX_ADMINEMAIL}
      - UI__ONLYADMINSCANCREATEAPPS=True
    depends_on:
      - squidex_mongo
    volumes:
      - squidex_assets:/app/Assets
    networks:
      - internal
    restart: unless-stopped

networks:
  internal:
    driver: bridge

volumes:
  mongo:
    driver: local
  squidex_assets:
    driver: local
  elasticsearch:
    driver: local
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local

and the output of localhost:9200 is:

{
  "name" : "es01",
  "cluster_name" : "es-docker-cluster",
  "cluster_uuid" : "79G-nLh-SRqvnFu5lFDqpA",
  "version" : {
    "number" : "7.8.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "757314695644ea9a1dc2fecd26d1a43856725e65",
    "build_date" : "2020-06-14T19:35:50.234439Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

which clearly shows that Elasticsearch is up and running in Docker, but the upsert to Elasticsearch is still failing.

Elasticsearch Rule info

Server URL: http://localhost:9200/
index-name: example-index

I am still getting this message in the Squidex container log:

{
  "logLevel": "Warning",
  "message": "Task [Id=1, Status=RanToCompletion] in WorkGroup [LowPrioritySystemTarget: S127.0.0.1:11111:330249566*stg/0/00000000@S00000000] took elapsed time 0:00:00.2149219 for execution, which is longer than 00:00:00.2000000. Running on thread 17",
  "eventId": {
    "id": 101215
  },
  "task": "[Id=1, Status=RanToCompletion]",
  "grainContext": "[LowPrioritySystemTarget: S127.0.0.1:11111:330249566*stg/0/00000000@S00000000]",
  "duration": "0:00:00.2149219",
  "turnWarningLengthThreshold": "00:00:00.2000000",
  "thread": "17",
  "app": {
    "name": "Squidex",
    "version": "4.0.0.0",
    "sessionId": "a4b2a699-ccef-49ce-a890-a970802e5ee3"
  },
  "timestamp": "2020-06-19T07:59:28Z",
  "category": "Orleans.Runtime.Scheduler.WorkItemGroup"
}

This log is not related, but I can have a look as well.

Sure, please have a look. I don’t know whether this is failing only on my machine.

I tested it locally and it works fine, but I have not tested it with docker-compose. I am going to deploy a small fix that makes debugging easier. It is definitely a socket error.

What I would try:

Add this to your docker-compose file:

  squidex_elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.2
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      discovery.type: single-node
      http.cors.enabled: "true"
      http.cors.allow-origin: "*"
    networks:
      - squidex_network
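      # note: the network name must match the one defined in your compose file ("internal" in the examples above)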
    restart: always

Then use http://squidex_elasticsearch:9200/ as the server URL.
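Once the rule points at the service name, you can verify from the host that documents actually arrive (the index name is the one from your rule, example-index above):

# Count the documents in the rule's target index; should be greater than zero after creating content
curl "http://localhost:9200/example-index/_count?pretty"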

Thank you very much, this worked with docker-compose as well.
