Structured Logging

Structured Logging makes Liquibase operation data easily available and machine-readable. You can use monitoring and analysis tools to read, query, and act upon this data in automated workflows. Liquibase not only handles the tricky work of database schema versioning and management; it also helps you understand the data around these operations and how they fit into your overall DevOps and CI/CD performance.

Tools you can use with Liquibase Structured Logging include AWS CloudWatch, Grafana, OpenSearch, Sematext, Splunk, Elasticsearch, and other monitoring and analysis tools. With Structured Logging, these tools can identify and act upon both real-time and long-term trend data for Liquibase usage. Structured logs also surface data on performance, errors, security, auditability and outcome tracking, and even standard DORA DevOps metrics.

Liquibase uses a Mapped Diagnostic Context (MDC) to generate structured logs in a JSON format.

This feature is available in Liquibase 4.21.0 and later. Additional functionality is available in later releases.

Examples

[2023-02-27 16:37:00] FINE [liquibase.integration] Performance monitoring disabled
[2023-02-27 16:37:00] FINE [liquibase.configuration] No configuration value for liquibase.outputFile found
[2023-02-27 16:37:00] FINE [liquibase.configuration] No configuration value for liquibase.strict found

Tip: For example dashboards using tools like AWS CloudWatch, Datadog, Elastic, and Splunk, see the GitHub repository liquibase/Observability-Dashboards.

Enable Structured Logging

  1. Set the --license-key property using your Liquibase Pro license key. See How to Apply Your Liquibase Pro License Key.
  2. Set the --log-format property to JSON or JSON_PRETTY:
    • CLI: liquibase --log-format=JSON or JSON_PRETTY
    • Properties file: liquibase.logFormat=JSON or JSON_PRETTY
    • Environment Variable: LIQUIBASE_LOG_FORMAT=JSON or JSON_PRETTY
  3. Set the --log-level property to a value other than OFF. Use this to define what to include in your structured logs. Valid values are FINE, INFO, WARN, and SEVERE.
  4. (Optional) Set the --log-file property so the log data is sent to a file rather than to the console. This is useful if you want to capture a file to be ingested by your log analysis tool, such as Elastic or CloudWatch.

    To do so, set the --log-file property to the log file of your choice in your properties file, as an environment variable, or in the CLI (a combined configuration sketch appears after this section).

    Example: liquibase --log-format=JSON_PRETTY --log-level=INFO update --log-file=mylogfile.log

Structured Logging is now enabled.

Note: By default, console messages are included in your log. If you need to disable this, set mirror-console-messages-to-log to false.
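
As a quick reference, the same settings can be combined in any of the three mechanisms from step 2. The sketch below assumes a properties file and environment variables that follow the liquibase.logFormat and LIQUIBASE_LOG_FORMAT naming conventions shown above; adjust the file name and the license key placeholder for your environment.

  # liquibase.properties (illustrative sketch)
  liquibase.licenseKey=<your-license-key>
  liquibase.logFormat=JSON_PRETTY
  liquibase.logLevel=INFO
  liquibase.logFile=mylogfile.log

  # Equivalent environment variables (assumed to follow the LIQUIBASE_LOG_FORMAT naming pattern)
  export LIQUIBASE_LICENSE_KEY=<your-license-key>
  export LIQUIBASE_LOG_FORMAT=JSON_PRETTY
  export LIQUIBASE_LOG_LEVEL=INFO
  export LIQUIBASE_LOG_FILE=mylogfile.log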

Analyze and Use Structured Log Data

After you enable Liquibase structured logs, you can configure tools outside of Liquibase to send the data to useful places:

  1. Select an external observability platform like AWS CloudWatch, Datadog, Dynatrace, Elastic, or Splunk. Your observability platform contains several tools to index logs and display them in human-readable dashboards.
  2. Configure a logging agent compatible with your observability platform, such as the AWS CloudWatch Agent, Datadog Agent, Dynatrace OneAgent, Filebeat or Logstash (Elastic Stack), Splunk Universal Forwarder, or Splunk HTTP Event Collector (HEC).
  3. Ingest Liquibase structured logs using your observability platform's logging agent (a minimal ingestion sketch appears after this list).
    • If your company has an observability subject matter expert (SME), consult them to determine what architecture to use to index your logs. You may want to ingest logs in multiple environments, each of which may have unique requirements.
    • If your company doesn't have an observability SME, contact Liquibase Support for assistance.
    • Note: Configuration of an external tool may not be entirely covered by your Liquibase support plan.

  4. Analyze (index) the logs you have ingested using your observability platform's log indexing tool, such as AWS CloudWatch Logs Insights, Datadog Logs Processing Pipeline, Dynatrace Log Management and Analytics, Elasticsearch or Logstash, or Splunk Indexer. Consult your observability SME for guidance.
  5. View your data by constructing a dashboard in your observability platform. You must query the logs you have ingested and indexed and send this data to your dashboard to represent it visually. The Liquibase Observability-Dashboards repository on GitHub contains several sample dashboards for common metrics. For a list of keys (endpoints) you can include in your queries, see Structured Logging Keys.
  6. Automate data monitoring and reporting in your CI/CD pipeline. Ensure that your logging jobs run automatically after every Liquibase operation in your pipeline. Using your indexed logs, write queries that automatically notify you of problematic behavior. You can also use Liquibase Operation Reports to receive information about common Liquibase operations.
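
The exact ingestion setup depends on your platform and agent. As one illustrative sketch for step 3, assuming --log-format=JSON writes one JSON object per line to the mylogfile.log file from the earlier example, a Filebeat filestream input could forward those records to Elasticsearch; the paths and hosts values are placeholders you would replace with your own:

  # Tail the Liquibase JSON log file and parse each line as a JSON document
  filebeat.inputs:
    - type: filestream
      id: liquibase-logs
      paths:
        - /path/to/mylogfile.log
      parsers:
        - ndjson:
            target: ""
            add_error_key: true

  # Send the parsed records to Elasticsearch for indexing
  output.elasticsearch:
    hosts: ["localhost:9200"]

For steps 4 and 5, a query against the indexed logs can surface problems. The example below is an AWS CloudWatch Logs Insights sketch that uses only the built-in @timestamp and @message fields to find recent SEVERE entries; fields specific to Liquibase's JSON structure are documented in Structured Logging Keys and are not assumed here:

  fields @timestamp, @message
  | filter @message like /SEVERE/
  | sort @timestamp desc
  | limit 20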

Disable Structured Logging

To disable structured logging, do one of the following:

  • Set --log-format to TEXT to explicitly use unstructured logs
  • Omit the --log-format parameter and Liquibase reverts to the default value of TEXT
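
For example, the following command runs the same update operation with standard text logs (database connection settings are omitted here, as in the earlier example):

    Example: liquibase --log-format=TEXT --log-level=INFO update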

Supported Commands

You can get structured logs from the output of the following commands:

Note: Structured Logging affects log output, not command output. Commands like status, history, and diff deliver command output directly in the console. Log output is fine-grained and often related to code or metadata.

Also, not all commands include structured data yet. These commands still produce a structured log, but their log objects do not contain additional data or content. The new content available in structured logs includes items like deploymentID, command parameters, label filter settings, nested objects, and more.

Keys

For a list of keys that Liquibase includes in the JSON output, see Structured Logging Keys.