
502 Gateway error in control panel after install and reboot

I installed Countly on a fresh Ubuntu 18.04 server and the installation completed successfully. Everything works fine until I reboot the server.
After the reboot I get a 502 gateway error on the control panel. I have checked system resources and everything looks fine there, no major issues.


Comments

6 comments
  • Official comment

    Hello to everyone who has the same issue. :)

    There is an issue with the latest release on Ubuntu 18 & 20 that will be fixed in the next minor release. Please run these commands after a fresh installation:

    cd `countly dir`
    curl https://raw.githubusercontent.com/Countly/countly-server/master/bin/commands/systemd/install.sh > bin/commands/systemd/install.sh
    rm -f /etc/systemd/system/mongod.service
    bash bin/scripts/detect.init.sh
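
    To confirm the fix took effect, something along these lines should work (the unit names countly.service and mongod.service are assumptions based on this thread; adjust if yours differ):

    # Reload systemd and check both services; unit names are assumed, not guaranteed
    sudo systemctl daemon-reload
    sudo systemctl status mongod countly --no-pager
    countly status
    # Optionally reboot and confirm the dashboard loads without a 502.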

  • Hello,

    HTTP 502 means NGINX is running but it can't reach the Countly API. Can you share the output of `countly status`?
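
    If it helps, a quick way to check whether anything is listening on the upstream ports is something like the following; the ports 3001 (API) and 6001 (dashboard) are the defaults and are an assumption here, so match them to your nginx config:

    # Check whether the Countly upstreams are listening (assumed ports 3001/6001)
    ss -ltnp | grep -E ':(3001|6001)'
    # Hit the dashboard upstream directly, bypassing nginx
    curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:6001/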

  • I've run into this issue twice now! The first time I thought I had made a mistake and reinstalled. A few days later it was the same situation. It got so bad that the error.log file from nginx was north of 100 GB.

     

    nginx always logs these errors:

    "http://127.0.0.1:3001/i?app_key=my-app-key&timestamp=1607626035129&hour=19&dow=4&tz=60&sdk_version=20.04.4&sdk_name=java-native-android&begin_session=1&metrics=%7B%22_device%22%3A%22SM-G930F%22%2C%22_os%22%3A%22Android%22%2C%22_os_version%22%3A%228.0.0%22%2C%22_carrier%22%3A%22MEDIONmobile%22%2C%22_resolution%22%3A%221080x1920%22%2C%22_density%22%3A%22XXHDPI%22%2C%22_locale%22%3A%22de_DE%22%2C%22_app_version%22%3A%229.10.3%22%2C%22_store%22%3A%22com.android.vending%22%7D&aid=%7B%22adid%22%3A%22f3c6c3d9-d5dc-4748-8480-646479f09e01%22%7D&device_id=85f7721d8bf90715&checksum=15323f91ba991180d1e21a6d2b988a67881ce300", host: "backend.domain.com"

     

    Countly status

     

    ● countly.service - countly-supervisor
    Loaded: loaded (/etc/systemd/system/countly.service; enabled; vendor preset: enabled)
    Active: active (running) since Thu 2020-12-10 20:19:18 CET; 4min 46s ago
    Docs: http://count.ly
    Main PID: 958 (supervisord)
    Tasks: 23 (limit: 4915)
    CGroup: /system.slice/countly.service
    ├─ 958 /usr/bin/python /usr/bin/supervisord --nodaemon --configuration /root/countly/bin/config/supervisord.conf
    ├─1463 countly: dashboard node /root/countly/frontend/express/app.
    └─1464 countly: api master node /root/countly/api/api.js

    Dec 10 20:19:18 v220201139069133430 systemd[1]: Started countly-supervisor.
    Dec 10 20:19:19 v220201139069133430 supervisord[958]: 2020-12-10 20:19:19,666 CRIT Set uid to user 0
    Dec 10 20:19:19 v220201139069133430 supervisord[958]: 2020-12-10 20:19:19,694 CRIT Server 'unix_http_server' running without any HTTP authentication checking

     

    nginx config:

     

    server {
        server_name backend.domain.com;

        access_log off;

        location = /i {
            if ($http_content_type = "text/ping") {
                return 404;
            }
            proxy_pass http://127.0.0.1:3001;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Real-IP $remote_addr;
        }

        location ^~ /i/ {
            if ($http_content_type = "text/ping") {
                return 404;
            }
            proxy_pass http://127.0.0.1:3001;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Real-IP $remote_addr;
        }

        location = /o {
            if ($http_content_type = "text/ping") {
                return 404;
            }
            proxy_pass http://127.0.0.1:3001;
        }

        location ^~ /o/ {
            if ($http_content_type = "text/ping") {
                return 404;
            }
            proxy_pass http://127.0.0.1:3001;
        }

        location / {
            if ($http_content_type = "text/ping") {
                return 404;
            }
            proxy_pass http://127.0.0.1:6001;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Real-IP $remote_addr;
        }

        listen [::]:443 ssl ipv6only=on; # managed by Certbot
        listen 443 ssl; # managed by Certbot
        ssl_certificate /etc/letsencrypt/live/backend.domain.com/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/backend.domain.com/privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    }

    server {
        if ($host = backend.domain.com) {
            return 301 https://$host$request_uri;
        } # managed by Certbot

        listen 80;
        listen [::]:80 ipv6only=on;
        server_name backend.domain.com;
        return 404; # managed by Certbot
    }

     

    Trying to access the dashboard throws a 502 Bad Gateway response, and it seems no user data gets logged. Any idea? There was literally no change since I last accessed the dashboard, maybe a day or two ago.

     

    Thanks!

  • Hello,

    Yes, NGINX's error.log can fill up with upstream errors when the server receives a huge number of requests while Countly isn't working. You need to either disable NGINX error logging or set up log rotation for it.
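
    For example, raising the error-log threshold in the server block (or globally in nginx.conf) keeps failed upstream connections from filling the disk; the path and level below are just illustrative:

    # Log only critical messages instead of every failed upstream connection
    error_log /var/log/nginx/error.log crit;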

    I can't see Countly's worker processes in the output of `countly status`, so can you run the `countly restart` command, wait 2 minutes, and share the content of `countly-api.log`, which is located under the `log/` directory in Countly's root directory?
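
    In other words, roughly the following from Countly's root directory should produce what I'm after:

    countly restart
    sleep 120                  # give the processes a couple of minutes to come up
    countly status             # worker processes should now be listed
    less log/countly-api.log   # this is the file to share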

  • Here are the last 100 lines of Countly's log:

    tail -n 100 countly-api.log
    hourly: '2020.12.10.16',
    weekly: 50,
    month: '12',
    day: '10',
    hour: '16'
    },
    app_user: {
    _id: '90d991a044426c0f37fd0c56d0663713a0f06dcf',
    uid: '1Ar',
    did: '3fc383685f730daf',
    cc: 'DE',
    cty: 'Sibbesse',
    last_req: 'dfaafcaf35cd3722efdddd2767d3183c84242bc21607613261425',
    loc: { gps: false, geo: [Object], date: 1606382436692 },
    rgn: 'NI',
    tz: 60,
    av: '9:10:3',
    c: 'Congstar',
    d: 'SM-G973F',
    dnst: 'aXXHDPI',
    fs: 1584959895,
    la: 'de',
    lac: 1607613261,
    lo: 'de_DE',
    ls: 1607613245,
    mt: false,
    p: 'Android',
    pv: 'a10',
    r: '1080x2042',
    sc: 463,
    src: 'com.android.vending',
    sd: 16,
    tsd: 31370,
    lest: 1607600851,
    lbst: 1607613245,
    fac: 1584959895000,
    hadAnyFatalCrash: 1607600851,
    hadAnyNonfatalCrash: 1607600851,
    last_sync: 1607613271,
    sdk: { name: 'java-native-android', version: '20.04.4' },
    lv: 'Catalogs',
    lvt: 1603562434,
    vc: 0,
    dt: 'mobile',
    ingested: false,
    lsid: '51a70accfdf4141662141f42254773a60769b6801607613245785_1Ar_1607613245785',
    data: { events: 2 },
    hos: true
    },
    request_hash: 'dfaafcaf35cd3722efdddd2767d3183c84242bc21607613261425',
    preservedEvents: '[{"key":"CB_Stats","count":1,"timestamp":1607600851694,"hour":12,"dow":4,"segmentation":{"Compaction_duration":"5ms"},"sum":0},{"key":"my key","count":1,"timestamp":1607613247979,"hour":16,"dow":4,"segmentation":{"other key":"false"},"sum":0}]',
    logging_is_allowed: true,
    log_processed: true,
    previous_session: '51a70accfdf4141662141f42254773a60769b6801607613245785_1Ar_1607613245785',
    previous_session_start: 1607613245,
    request_id: 'dfaafcaf35cd3722efdddd2767d3183c84242bc21607613261425_1Ar_1607613261425',
    session_duration: 16,
    views: [],
    viewsNamingMap: {},
    response: { code: 200, body: '{"result":"Success"}' }
    }
    2020-12-10T15:14:32.017Z: ERROR [batcher] Error updating documents BulkWriteError: The dollar ($) prefixed field '$ - Grand fouet#' in 'meta_v2.entered item.$ - Grand fouet#' is not valid for storage.
    at UnorderedBulkOperation.handleWriteError (/root/countly/node_modules/mongodb/lib/bulk/common.js:1257:9)
    at UnorderedBulkOperation.handleWriteError (/root/countly/node_modules/mongodb/lib/bulk/unordered.js:117:18)
    at resultHandler (/root/countly/node_modules/mongodb/lib/bulk/common.js:521:23)
    at handler (/root/countly/node_modules/mongodb/lib/core/sdam/topology.js:942:24)
    at /root/countly/node_modules/mongodb/lib/cmap/connection_pool.js:350:13
    at handleOperationResult (/root/countly/node_modules/mongodb/lib/core/sdam/server.js:558:5)
    at MessageStream.messageHandler (/root/countly/node_modules/mongodb/lib/cmap/connection.js:277:5)
    at MessageStream.emit (events.js:315:20)
    at processIncomingData (/root/countly/node_modules/mongodb/lib/cmap/message_stream.js:144:12)
    at MessageStream._write (/root/countly/node_modules/mongodb/lib/cmap/message_stream.js:42:5)
    at writeOrBuffer (_stream_writable.js:352:12)
    at MessageStream.Writable.write (_stream_writable.js:303:10)
    at Socket.ondata (_stream_readable.js:719:22)
    at Socket.emit (events.js:315:20)
    at addChunk (_stream_readable.js:309:12)
    at readableAddChunk (_stream_readable.js:284:9) {
    driver: true,
    code: 52,
    writeErrors: [ WriteError { err: [Object] }, WriteError { err: [Object] } ],
    result: BulkWriteResult {
    result: {
    ok: 1,
    writeErrors: [Array],
    writeConcernErrors: [],
    insertedIds: [],
    nInserted: 0,
    nUpserted: 0,
    nMatched: 0,
    nModified: 0,
    nRemoved: 0,
    upserted: []
    }
    }
    }
    2020-12-10T15:30:50.378Z: INFO [jobs:manager] Starting job manager in 1596
    2020-12-10T19:08:42.142Z: INFO [jobs:manager] Starting job manager in 1551
    2020-12-10T19:18:08.629Z: INFO [jobs:manager] Starting job manager in 2047
    2020-12-10T19:19:21.922Z: INFO [jobs:manager] Starting job manager in 1464

     

    And the nginx config is fine:

     

    sudo nginx -t
    nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
    nginx: configuration file /etc/nginx/nginx.conf test is successful

     

    I hope this info helps in solving this issue! I don't feel like reinstalling just to run into the issue in a few days again...

  • Hello,

    I don't need the last n lines of the log file but all of it since the last restart, as I mentioned before. By the way, is it a standalone MongoDB server or a sharded cluster? Can you provide the `log/countly-api.log` and `api/config.js` files, please?
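
    If it's easier, something like this from Countly's root directory will bundle both files for sending (the archive name is just an example):

    tar czf countly-debug.tar.gz log/countly-api.log api/config.js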

    Feel free to contact me via our Countly Community Slack or by email (kk@count.ly) to send the files.

