Non-Standard Port Installation

I’m submitting a…

  • [ ] Regression (a behavior that stopped working in a new release)
  • [x] Bug report
  • [ ] Performance issue
  • [ ] Documentation issue or request

Current behavior

  • Using non-standard port 8080 instead of 80, as port 80 is already in use by Nginx running on the server. Modified the port mapping in docker-compose.yml to 8080:80 and the base URL to use port 8080.
  • Squidex site loads.
  • Can log in successfully.
  • Login popup closes.
  • Page doesn’t refresh.
  • Manual refresh gets stuck on the Squidex loading spinner.

Expected behavior

  • Squidex site loads.
  • Can log in successfully.
  • Login popup closes.
  • Page refreshes.
  • Dashboard loads.

Minimal reproduction of the problem

On OSX or a server, set the port to 8080 instead of the standard port 80.

Environment

  • [x] Self hosted with docker
  • [ ] Self hosted with IIS
  • [ ] Self hosted with other version
  • [ ] Cloud version

Version: Latest

Browser:

  • [x] Chrome (desktop)
  • [ ] Chrome (Android)
  • [ ] Chrome (iOS)
  • [ ] Firefox
  • [ ] Safari (desktop)
  • [ ] Safari (iOS)
  • [ ] IE
  • [ ] Edge

This does not work with Docker on localhost.

The reason is that Squidex needs to make a request to itself. With the port mapping 8080 => 80, the container itself still listens on port 80, so when Squidex makes a request to localhost:8080 it is targeting the container rather than your host machine, and the request fails.

What you can try:

  • Change the port with the following environment variable: ASPNETCORE_URLS=http://+:8080
  • Change the port mapping to 8080:8080 or just 8080 (see the sketch below)
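
For illustration, a minimal sketch of how those two changes could look together in a compose file (the service name and the URLS__BASEURL value are assumptions here, not an official example; adjust them to your own setup):

services:
  squidex:
    image: "squidex/squidex:latest"
    ports:
      # host port 8080 maps to the same port inside the container
      - "8080:8080"
    environment:
      # make the app listen on port 8080 inside the container as well
      - ASPNETCORE_URLS=http://+:8080
      - URLS__BASEURL=http://localhost:8080/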

If you have a public domain or another domain which is not localhost it should work.

Hi Sebastian,

I was trying this with a domain on a server as well as with localhost on my machine with the same result on both.

I’m not sure where that ASPNETCORE_URLS variable should be set? I can’t see it in the docker-compose.yml file. What I have set is URLS__BASEURL=${SQUIDEX_PROTOCOL}://${SQUIDEX_DOMAIN}:8080/

If I set the port to 8080:8080 in the docker-compose.yml file then visiting the domain:8080 doesn’t load anything.

Here’s the docker-compose.yml I have that gets it to the state where the site loads and login works, but I can’t get past that.

The docker-compose file

version: '2.1'
services:
  squidex_mongo:
    image: mongo:latest
    ports:
      - "27017:27017"
    volumes:
      - /etc/squidex/mongo/db:/data/db
    networks:
      - internal
    restart: unless-stopped

  squidex_squidex:
    image: "squidex/squidex:latest"
    ports:
      - "5000:80"
    environment:
      - URLS__BASEURL=${SQUIDEX_PROTOCOL}://${SQUIDEX_DOMAIN}:8080/
      - URLS__ENFORCEHTTPS=${SQUIDEX_FORCE_HTTPS}
      - EVENTSTORE__MONGODB__CONFIGURATION=mongodb://squidex_mongo
      - STORE__MONGODB__CONFIGURATION=mongodb://squidex_mongo
      - IDENTITY__ADMINEMAIL=${SQUIDEX_ADMINEMAIL}
      - IDENTITY__ADMINPASSWORD=${SQUIDEX_ADMINPASSWORD}
      - IDENTITY__GOOGLECLIENT=${SQUIDEX_GOOGLECLIENT}
      - IDENTITY__GOOGLESECRET=${SQUIDEX_GOOGLESECRET}
      - IDENTITY__GITHUBCLIENT=${SQUIDEX_GITHUBCLIENT}
      - IDENTITY__GITHUBSECRET=${SQUIDEX_GITHUBSECRET}
      - IDENTITY__MICROSOFTCLIENT=${SQUIDEX_MICROSOFTCLIENT}
      - IDENTITY__MICROSOFTSECRET=${SQUIDEX_MICROSOFTSECRET}
      - LETSENCRYPT_HOST=${SQUIDEX_DOMAIN}
      - LETSENCRYPT_EMAIL=${SQUIDEX_ADMINEMAIL}
      - VIRTUAL_HOST=${SQUIDEX_DOMAIN}
    depends_on:
      - squidex_mongo
    volumes:
      - /etc/squidex/assets:/app/Assets
    networks:
      - internal
    restart: unless-stopped

  squidex_proxy:
    image: squidex/nginx-proxy
    ports:
      - "8080:80"
      - "443:443"
    volumes:
      - /etc/squidex/nginx/vhost.d:/etc/nginx/vhost.d
      - /etc/squidex/nginx/certs:/etc/nginx/certs:ro
      - /etc/squidex/nginx/html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
    depends_on:
      - squidex_squidex
    networks:
      - internal
    labels:
      - com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy

  squidex_encrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    volumes:
      - /etc/squidex/nginx/certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro
    volumes_from:
      - squidex_proxy
    depends_on:
      - squidex_proxy
    networks:
      - internal
    restart: unless-stopped
    
networks:
  internal:
    driver: bridge

The env file

SQUIDEX_PROTOCOL=http
SQUIDEX_FORCE_HTTPS=False
SQUIDEX_DOMAIN=squidex.evaske.com
SQUIDEX_ADMINEMAIL=admin@email.com
SQUIDEX_ADMINPASSWORD=password
SQUIDEX_GITHUBCLIENT=
SQUIDEX_GITHUBSECRET=
SQUIDEX_GOOGLECLIENT=
SQUIDEX_GOOGLESECRET=
SQUIDEX_MICROSOFTCLIENT=
SQUIDEX_MICROSOFTSECRET=

If you have a public host name like in your example, you do not need the port mapping, but if you do not want to use HTTPS, you can also get rid of nginx.

I would remove nginx and set the port mapping to "8080:80" and it should work.
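
A rough sketch of what that could look like, based on the compose file above with the squidex_proxy and squidex_encrypt services removed (the mongo service stays unchanged; the LETSENCRYPT_* and VIRTUAL_HOST variables are only used by the proxy, so they are dropped here):

  squidex_squidex:
    image: "squidex/squidex:latest"
    ports:
      # map host port 8080 straight to the container's port 80, no nginx in front
      - "8080:80"
    environment:
      - URLS__BASEURL=${SQUIDEX_PROTOCOL}://${SQUIDEX_DOMAIN}:8080/
      - URLS__ENFORCEHTTPS=${SQUIDEX_FORCE_HTTPS}
      - EVENTSTORE__MONGODB__CONFIGURATION=mongodb://squidex_mongo
      - STORE__MONGODB__CONFIGURATION=mongodb://squidex_mongo
      - IDENTITY__ADMINEMAIL=${SQUIDEX_ADMINEMAIL}
      - IDENTITY__ADMINPASSWORD=${SQUIDEX_ADMINPASSWORD}
    depends_on:
      - squidex_mongo
    volumes:
      - /etc/squidex/assets:/app/Assets
    networks:
      - internal
    restart: unless-stopped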

Hi Sebastian,

I would have liked to use SSL, but I can’t seem to find a way to get it working.

Managed to get it working without SSL, so I guess that will have to do, which is a shame.

I am not sure if you can get SSL working on a non-443 port with letsencrypt. Why can you not use port 443 instead? Your other proxy can just forward the requests as well.

I already have NGINX running on the server for the standard websites.

So I have a Digital Ocean droplet that is my “dev” environment. It has a load of sites set up on it using Laravel Forge, and I just wanted to add another “site” that was Squidex, running on ports other than 80 and 443 so that it wouldn’t interfere with my server’s NGINX.

The way I got it working before was by proxying a site set up in NGINX to port 5000 of Squidex, which worked without SSL. I couldn’t see a way of getting SSL working that way though.

So I’ve literally just created a brand new Digital Ocean droplet to test this out. Only changed the domain name and admin credentials in the .env file. Left the ports as the standard 80 and 443.

Created the MongoDB folder as per the install instructions.

The site loads on the SSL url: https://squid.watchraffle.co.uk/

I go to log in and the login box comes up correctly. I enter the admin credentials and then nothing. I refresh the page and just get the spinning Squidex loader and it never goes any further…

I get the following correctly in the console log:
UserManager.signinPopup: signinPopup successful, signed in sub: 5dbb41acfe9c680001fbba13

Eventually I get a gateway timeout:

https://squid.watchraffle.co.uk/api/apps 504

Can you send me the logs?

Logs: https://pastebin.com/dA0HU2XV

Looks normal, can you PM me your username and password?

I set the PII to false to get some more information. This error appears in there:

Unable to get config file at https://squid.watchraffle.co.uk/identity-server/.well-known/openid-configuration

But I can browse to it.

Can you ping the domain from within your container?

I can ping it but not connect to it with wget, for example.

Seems not…

Resolving squid.watchraffle.co.uk (squid.watchraffle.co.uk)… 209.97.191.80

Connecting to squid.watchraffle.co.uk (squid.watchraffle.co.uk)|209.97.191.80|:443…

Times out

It seems this is only an issue with accessing its own address. I can curl/wget any other domain name.

Managed to get this working… not sure why this is required:

Had to add this to the /etc/hosts file of the squidex_squidex container:

IP_OF_PROXY_CONTAINER squid.watchraffle.co.uk

It was then able to access the site correctly and I can log in. Is this something that should be done automatically and isn’t?
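
If the entry turns out to be needed permanently, a sketch of how the same /etc/hosts mapping could be applied at container start via extra_hosts in the squidex_squidex service (the IP stays a placeholder as above, and a container IP can change between restarts, so this is a workaround rather than a robust fix):

  squidex_squidex:
    extra_hosts:
      # pins the public domain to the proxy container's internal IP inside this container
      - "squid.watchraffle.co.uk:IP_OF_PROXY_CONTAINER"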

I have never needed it to be honest. What does it mean?

It basically forces connections to squid.watchraffle.co.uk to use the internal IP address of the proxy container instead of the public IP address, which it doesn’t seem to be able to connect to.

One entry in google? :wink:

Could be something special to DigitalOcean. I have tried setups on Azure, AWS, Vultr and a few Kubernetes clusters and never had this problem before.

Potentially. This was using the Digital Ocean Docker image so maybe they’ve added something a bit odd with that.
