Logging Topology and Configuration



The CLI’s logging configuration is located in ~/.cloudify/config.yaml, under the logging directive.
The structure of the logging directive is coupled to the logic implemented by the CLI’s logging facility (located at https://github.com/cloudify-cosmo/cloudify-cli/blob/4.3/cloudify_cli/logger.py).

  • If the config.yaml file is missing, it is created using hard-coded defaults.
  • Otherwise, it is read and parsed by the CLI’s logging facility.

To configure the logging level of individual loggers, you will need to list these loggers under the logging directive.
The default logging configuration sends all logs into ~/.cloudify/logs/cli.log, and enables only a couple of loggers at INFO level.
If -vvv is provided in the command-line, the CLI will automatically set all of its configured loggers to DEBUG level, regardless of the logging configuration.
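As an illustration, a minimal ~/.cloudify/config.yaml might look like the following. The logger names and level values here are assumptions for illustration only; the hard-coded defaults generated by the CLI are the authoritative reference for the exact structure.

```yaml
logging:
  # All CLI logs are appended to this file.
  filename: /home/myuser/.cloudify/logs/cli.log
  loggers:
    # Each key is a logger name; each value is its minimum level.
    # These names are illustrative; your generated defaults may differ.
    cloudify.rest_client.http: info
    cloudify.cli.main: debug
```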


For this section, we will use the token AGENT_DIR to signify the location of the agent’s installation.

  • On Linux, AGENT_DIR is a subdirectory of the agent user’s home directory (specified in the agent_config property). The subdirectory is named after the node instance ID of the relevant cloudify.nodes.Compute node. For example, if the agent’s user is centos, its home directory is /home/centos, and the node instance ID is server_a1b2c3, then AGENT_DIR would be /home/centos/server_a1b2c3.
  • On Windows, the location is C:\Program Files\Cloudify Agents\<node-instance-id>.
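Using the Linux example above (agent user centos, node instance ID server_a1b2c3), AGENT_DIR can be composed as follows. The values are the illustrative ones from the text, not real deployment values.

```shell
# Compose AGENT_DIR on Linux from the agent user's home directory
# and the node instance ID (example values from the text above).
AGENT_USER=centos
NODE_INSTANCE_ID=server_a1b2c3
AGENT_DIR="/home/${AGENT_USER}/${NODE_INSTANCE_ID}"
echo "$AGENT_DIR"
```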


In AGENT_DIR/work, you will find one log file per Celery process. Each file name begins with the node instance ID (always the same one: the node instance ID of that same Compute node), followed by a number, which is the PID of the specific Celery process. Each Celery process logs into its own file, for serialization purposes. One of these processes (you can identify them by looking for python or celery in the OS’s process list) is the Celery master process; the others are worker processes.

  • The master process doesn’t perform tasks; its role is to connect to RabbitMQ and wait for tasks. Once a task is received, it is dispatched to an available worker process. Occasionally, the master process will kill worker processes and restart them, resulting in new log files being created (as new processes will get their own PID).
  • A worker process actually performs orchestration tasks. You will find logs related to task execution in the worker processes’ log files.
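One way to tell the master apart from its workers is the parent PID: workers are children of the master process. A sketch (its output naturally depends on whether Celery is running on the host where you try it):

```shell
# List candidate Celery processes with their parent PIDs; the
# bracketed "[c]" keeps grep from matching itself. In the output,
# each worker's PPID column points at the master process.
PROCS=$(ps -eo pid,ppid,args | grep -i '[c]elery' \
        || echo "no celery processes on this host")
echo "$PROCS"
```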

I haven’t yet figured out what this log file is for. I do know that it’s maintained by the Celery master process, and it logs (among other things) REST API calls to the manager.

AGENT_DIR/work/logs/tasks: *.err / *.out

This directory is new in 4.3. You will find sets of files there; each set consists of a base name, which is a UUID, with the extensions .out and .err.
These files contain the standard output and standard error streams of scripts invoked by the script plugin, on that agent. The files are written-to in real time. Tailing them may be very useful for troubleshooting.
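For example, to inspect both streams of a script task (the UUID below is a placeholder; substitute an actual task ID from the directory listing), the layout is fabricated here so the commands can be tried anywhere:

```shell
# Fabricate the tasks layout; on a real agent, TASKS would be
# AGENT_DIR/work/logs/tasks and TASK_ID an actual task's UUID.
TASKS=$(mktemp -d)
TASK_ID=6f7c2a1e-9b4d-4e61-8c3f-1a2b3c4d5e6f   # placeholder UUID
echo "script stdout line" > "$TASKS/$TASK_ID.out"
echo "script stderr line" > "$TASKS/$TASK_ID.err"
# For live troubleshooting of a running script, use "tail -f"
# on these files instead of "cat":
cat "$TASKS/$TASK_ID.out" "$TASKS/$TASK_ID.err"
```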


The vast majority of the Cloudify Manager’s logs are located in /var/log/cloudify. The only exception is Riemann (covered at the end of this section).

REST Service

This is likely to be the starting point when troubleshooting any “internal server error” or “HTTP 500” error messages.
The REST service’s logs are located at /var/log/cloudify/rest.

  • gunicorn.log contains the log file of gunicorn. Errors there usually mean that the REST service has a system-level problem that isn’t really related to the Cloudify REST functionality, so you may want to pay attention to this file.
  • gunicorn-access.log contains a summary of each REST API call processed by the REST service.
  • cloudify-rest-service.log contains the log of our actual REST API implementation. This is usually the most useful file when it comes to troubleshooting.
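A common first move when troubleshooting an HTTP 500 is to grep the REST service log for error-level lines around the time of the failing call. The log line format below is fabricated so the command can be tried anywhere; on a manager, grep /var/log/cloudify/rest/cloudify-rest-service.log directly, adjusting the pattern to the format you actually see.

```shell
# Pull error-level lines out of a REST service log; a scratch
# file with invented sample lines stands in for the real log.
LOG=$(mktemp)
printf '%s\n' \
  '2018-05-01 10:00:00 INFO  handling GET /api/v3/status' \
  '2018-05-01 10:00:01 ERROR unhandled exception in request' > "$LOG"
ERRORS=$(grep ' ERROR ' "$LOG")
echo "$ERRORS"
```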

Management Workers

The most useful log files are those produced by the management workers, under /var/log/cloudify/mgmtworker.

Logging Level

By default, the management workers log at INFO level.
To change that:

  • Edit /etc/sysconfig/cloudify-mgmtworker
  • Change the value of the CELERY_LOG_LEVEL variable from INFO to something else (such as DEBUG).
  • Restart the management workers (note: currently-running workflows will stop and will not be resumable):

    sudo systemctl restart cloudify-mgmtworker
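The edit in step 2 can be scripted. The sketch below applies it to a scratch copy so it is safe to try anywhere; on a real manager you would run the sed (with sudo) against /etc/sysconfig/cloudify-mgmtworker itself and then restart as shown above.

```shell
# Flip CELERY_LOG_LEVEL from INFO to DEBUG. A scratch file
# stands in for /etc/sysconfig/cloudify-mgmtworker here.
CONF=$(mktemp)
echo 'CELERY_LOG_LEVEL=INFO' > "$CONF"
sed -i 's/^CELERY_LOG_LEVEL=.*/CELERY_LOG_LEVEL=DEBUG/' "$CONF"
cat "$CONF"
```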


This file is shared by all management worker processes, and contains basic information about tasks from Celery’s perspective:

  • Connections to RabbitMQ.
  • Task acceptance.
  • Task completion, including error details.

Note that messages written to the Cloudify context logger do not arrive here.


This log file contains logging of system workflows. If you are having problems with, for example, plugin installation or snapshot operations, this file may be useful.


Nginx

The Nginx logs are located in /var/log/cloudify/nginx.
Nginx logs access and errors for the following components:

  Component     Access Log            Error Log
  -----------   -------------------   ------------------
  File Server   cloudify-files.log    error.log
  REST API      cloudify.access.log   cloudify.error.log
  • An “Access Log” shows basic HTTP request and response information for all HTTP requests.
  • An “Error Log” shows HTTP request and response information for all HTTP requests that ended with a non-OK response code (4xx and 5xx HTTP response codes).

There is also an access.log file; however, under normal circumstances it should be zero-length. If it is not, please let us know, as that implies there is a gap in Nginx’s logging configuration.
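Assuming the common Nginx access-log format (status code in the ninth whitespace-separated field; verify this against your own log lines), failed requests can be filtered out of an access log like this. The sample lines below are fabricated so the command can be tried anywhere:

```shell
# Filter an access log for 4xx/5xx responses. On a manager, feed
# /var/log/cloudify/nginx/cloudify.access.log to awk instead of
# the fabricated sample lines below.
FAILED=$(printf '%s\n' \
  '10.0.0.1 - - [01/May/2018:10:00:00 +0000] "GET /api/v3/status HTTP/1.1" 200 123' \
  '10.0.0.1 - - [01/May/2018:10:00:01 +0000] "POST /api/v3/executions HTTP/1.1" 500 45' \
  | awk '$9 ~ /^[45]/')
echo "$FAILED"
```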

Manager Installer

The Manager’s installation script (cfy_manager), new in 4.3, logs to /var/log/cloudify/manager/cfy_manager.log.
This file is only updated during installation or re-configuration of the manager.


User Interface (Stage)

The UI logs are located in /var/log/cloudify/stage.

  • The apps directory contains the logs of the actual UI application.
  • access.log contains information about incoming HTTP requests, as well as HTTP response code for each request.
  • errors.log contains a subset of access.log: only HTTP requests that ended up with errors will be shown here.


Logstash

Logstash’s logs are located in /var/log/cloudify/logstash.
Logstash is a Java application, hence:

  • logstash.stdout contains the standard output stream of the JVM in which Logstash runs. You normally won’t find much help in these logs.
  • logstash.err contains the standard error stream of the JVM. You are unlikely to find anything useful here, unless the JVM ended abnormally.
  • logstash.log contains the actual log of the Logstash application.

The only log file of (limited) interest here is logstash.log. Should you ever need to adjust Logstash’s logging configuration, refer to: https://www.elastic.co/guide/en/logstash/current/logging.html


InfluxDB

InfluxDB’s logs are located in /var/log/cloudify/influxdb.


PostgreSQL

PostgreSQL’s logs are actually written to /var/lib/pgsql/9.5/data/pg_log, but we maintain a symlink to this directory at /var/log/cloudify/postgresql/pg_log.


RabbitMQ

RabbitMQ’s logs are located at /var/log/cloudify/rabbitmq.

  • rabbit@<hostname>.log is the main RabbitMQ log file.
  • rabbit@<hostname>-sasl.log contains logging information pertaining to authentication and authorization.


Composer

The Composer’s logs are located in /var/log/cloudify/composer.


Riemann

Riemann has three log files:

  • /var/log/riemann/riemann.log contains nothing.
  • /var/log/cloudify/riemann/riemann.log contains the actual Riemann logs.
  • /tmp/riemann.log contains all Riemann’s logging before we actually configure Riemann logging. For most purposes, this file is entirely useless.

