Introduction

This document describes how to containerize the Tridion Content Delivery microservices. It provides detailed steps to build Docker images, explains how licensing works within containers, and lists a few recommended best practices for running containers.

The document is split into the following sections:

  • What is Docker?
  • Content Delivery microservices
  • Building a Docker image
  • Running a Docker container
  • Running a Docker container interactively
  • Running using Docker Compose
  • Licensing
  • Best practices

What is Docker?

Docker is an application that performs operating-system-level virtualization. It allows you to build and run self-contained applications.

Containers are isolated from each other and bundle their own application, tools, libraries and configuration files. They can communicate with each other through well-defined channels. Containers are run by a single operating system kernel and are thus more lightweight than virtual machines. Containers are created from images that specify their precise contents. Images are often created by combining and modifying standard images downloaded from public repositories.

Content Delivery microservices

Tridion’s Content Delivery microservices are well-suited to containerization, as each provides a particular capability within the Delivery platform.

The following microservices can be run as Docker containers:

  • Discovery Service: Provides capability registration and discovery, together with an OAuth token service.
  • Deployer Service: The primary content ingestion endpoint.
  • Deployer Worker Service: Processes deployed content and stores it within the database repositories.
  • Content Service: A delivery service for querying stored content.
  • Session-enabled Content Service: A delivery service for querying stored content which also supports Experience Manager.
  • Preview Service: A service which supports Experience Manager.
  • Context Service: A service which provides device properties and contextual data.
  • Contextual Image Delivery Service: A service which provides real-time image manipulation.
  • UGC Community Service: A service for contributing user-generated content.
  • UGC Moderation Service: A service for moderating user-generated content.
  • Index Service: A service for indexing content.
  • Query Service: A service for querying indexed content.
  • Audience Manager Service: A service to support Audience Manager.
  • Audience Manager Sync Service: A synchronization service for Audience Manager contacts.
  • Experience Optimization Management Service: A management service for Experience Optimization.
  • Experience Optimization Query Service: A query service for Experience Optimization.
  • Monitoring Service: A service for monitoring endpoints.

Please refer to the Tridion documentation for further details of these services.

Building a Docker image

To create a Docker image from any of the Content Delivery microservices, first ensure that you have the latest version of Docker installed for your operating system. You can create Docker containers based on Linux or Windows.

Then follow these steps for each microservice:

  • Create a folder for building the image
  • Copy the standalone service artifacts from the distribution layout into this folder
  • Create a Dockerfile which contains the following details:
# Always use a specific version of any base image, never the latest tag
# This image uses Alpine Linux to minimize image size
FROM anapsix/alpine-java:8u172b11_server-jre

# The start-up script checks for this location and behaves differently
RUN mkdir /cd-cis-service && chmod 755 /cd-cis-service

# This line assumes the service has been expanded into a folder named content/standalone
COPY content/standalone /cd-cis-service

# Chain related commands into a single RUN, as each instruction creates a new layer
RUN \
    chmod +x /cd-cis-service/bin/start.sh && \
    apk update && apk add curl

# Define the working directory
WORKDIR /data

# Optionally wait for the Discovery Service to be up and running
CMD echo "Waiting for discovery-service to be up and running" && \
    while [ "$(curl --write-out %{http_code} --silent --output /dev/null ${tokenurl})" -ne "200" ]; do sleep 1; done && \
    echo "discovery-service is up and running" && /bin/bash '/cd-cis-service/bin/start.sh' "$startparameters"

EXPOSE 8081
LABEL maintainer="My Company"
  • Run the following command to build the image:
docker build -t my-company/content:11.5.0 .

Running a Docker container

To run a Docker container, follow these steps:

  • Identify the name of the image
  • Run the container:
docker run -p 8081:8081 -t my-company/content:11.5.0

Running a Docker container interactively

To run a Docker container interactively, follow these steps:

  • Identify the name of the image
  • Run the container, passing in the name of a command:
docker run -it my-company/content:11.5.0 bash

Running using Docker Compose

Docker Compose provides a simple orchestration mechanism for running multiple containers. To run using Docker Compose, follow these steps:

  • Create a docker-compose.yml containing the desired container configurations:
version: "3"
services:
  mssql:
    image: my-company/mssql:11.5.0-SNAPSHOT
    ports:
      - "1433:1433"
    environment:
      - SA_PASSWORD=1234
    container_name: mssql
  discovery:
    image: my-company/discovery:11.5.0-SNAPSHOT
    ports:
      - "8082:8082"
    links:
      - mssql
    environment:
      - loglevel=info
      - dbdriver=com.microsoft.sqlserver.jdbc.SQLServerDriver
      - dbadapter=mssql
      - dbtype=MSSQL
      - dbhost=mssql
      - dbclass=com.microsoft.sqlserver.jdbc.SQLServerDataSource
      - dbname=cd_broker_content_db
      - dbuser=sa
      - dbpassword=1234
      - dbport=1433
      - dbvalidationquery=SELECT 1
      - discoveryurl=http://discovery:8082/discovery.svc
      - tokenurl=http://discovery:8082/token.svc
      - startparameters=--auto-register
      - environment=DockerCompose
      - service=discovery
      - cdenvironment=staging
    container_name: discovery
  content:
    image: my-company/content:11.5.0
    ports:
      - "8081:8081"
    links:
      - mssql
      - discovery
    environment:
      - discoveryurl=http://discovery:8082/discovery.svc
      - tokenurl=http://discovery:8082/token.svc
      - oauthenabled=true
      - loglevel=info
      - dbdriver=com.microsoft.sqlserver.jdbc.SQLServerDriver
      - dbadapter=mssql
      - dbtype=MSSQL
      - dbhost=mssql
      - dbclass=com.microsoft.sqlserver.jdbc.SQLServerDataSource
      - dbname=cd_broker_content_db
      - dbuser=sa
      - dbpassword=1234
      - dbport=1433
      - dbvalidationquery=SELECT 1
      - environment=DockerCompose
      - service=content
      - customer=my-company
      - startparameters=
    container_name: content
  • Run Docker Compose:
docker-compose up

Note that this example references a pre-built Microsoft SQL Server image. If required, you can build a new image yourself, or use an external database instance.
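If you want Docker Compose to track service health, a healthcheck can be added to a service definition. The fragment below is a sketch only: it assumes the content service image from the example above, that curl is available inside the image (the example Dockerfile installs it), and that the service exposes a /health endpoint on port 8081.

```yaml
services:
  content:
    image: my-company/content:11.5.0
    healthcheck:
      # Probe the service's /health endpoint; requires curl in the image
      test: ["CMD", "curl", "--fail", "http://localhost:8081/health"]
      interval: 30s
      timeout: 5s
      retries: 3
```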

Licensing

The Tridion Content Delivery platform comprises a set of standalone microservices, each providing a particular capability. Each service consumes a proportion of your Tridion Content Delivery license entitlement, which is based on virtual CPU (vCPU) core usage. When using containerized services, it is important to ensure that you allocate the correct capacity from your container host. The list below describes some common terms used in licensing within containers:

  • CPU core: The number of cores that a CPU has. Example: 4
  • Threads per core: The number of simultaneous operations (or threads) that a CPU core can process. Example: 2
  • vCPU: The number of virtual CPU cores allocated to a container (the number of CPU cores multiplied by the number of threads per core). Example: 8
  • CPU share: The number of CPU units assigned to the container. This weighting value is calculated by multiplying the number of vCPUs by 1024. Example: 8192

The microservices need not consume vCPUs evenly. For example, you could allocate 4 vCPU cores to the Content Service while the Discovery Service consumes only 1. Customers can start up any number of capabilities provided that the total vCPU core count does not exceed their licensed maximum.
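As a rough sketch of this budgeting, the shell snippet below totals a set of per-service vCPU allocations and checks them against a licensed maximum. The service names, the allocations and the 12-vCPU entitlement are illustrative assumptions, not prescribed figures.

```shell
# Licensed entitlement and illustrative per-service vCPU allocations
licensed_vcpus=12
content_vcpus=4
discovery_vcpus=1
deployer_vcpus=2

# Total the allocations and compare them against the entitlement
total=$((content_vcpus + discovery_vcpus + deployer_vcpus))
if [ "$total" -le "$licensed_vcpus" ]; then
    echo "OK: ${total} of ${licensed_vcpus} licensed vCPUs allocated"
else
    echo "Over-allocated: ${total} vCPUs exceeds the licensed ${licensed_vcpus}" >&2
    exit 1
fi
```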

Most cloud providers and container orchestrators allow you to limit how many vCPUs are exposed to individual containers, which makes it possible to apportion CPU license allocations correctly across the individual microservices. To do this, set hard CPU limits for each service; a service will consume the number of vCPUs configured on its container.

Customers who do not set limits on their containers may find that their container orchestration platform allows containers to consume more vCPUs than they are licensed to use (this is often described as a soft limit). It is recommended that hard limits are used when allocating CPUs, to ensure that you do not breach your license agreement.

Docker CPU controls

Docker provides several runtime options for controlling CPU usage. The two most commonly used are described below:

  • --cpus: The proportion of available CPU resources that the container may use. This is a hard limit and cannot exceed the physical number of CPUs in the host machine. Example: 4
  • --cpu-shares: Set this flag to a value greater or less than the default of 1024 to increase or reduce the container’s weight, giving it access to a greater or lesser proportion of the host machine’s CPU cycles. This is only enforced when CPU cycles are constrained; when plenty of CPU cycles are available, all containers use only as much CPU as they need, so in this respect it is a soft limit. It can be used in conjunction with the --cpus option. Example: 2048
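As a sketch of how these flags relate, the snippet below derives a --cpu-shares weight from a vCPU allocation and prints the corresponding docker run command. The image name, port and vCPU count are illustrative assumptions, and the command is printed rather than executed so it can be reviewed first.

```shell
# Illustrative vCPU allocation for a single container
vcpus=2

# Docker weights CPU shares in units of 1024 per vCPU
cpu_shares=$((vcpus * 1024))

# Print the docker run invocation rather than executing it
echo "docker run --cpus=${vcpus} --cpu-shares=${cpu_shares} -p 8081:8081 my-company/content:11.5.0"
```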

Sharing services

It is common for cloud deployments to share a cluster of machines among containers running software from multiple vendors. Our license agreement allows this, provided that the correct vCPU limits are associated with each container. In these setups, it is the customer’s responsibility to ensure that the third-party containers do not consume all of the vCPU capacity of the underlying cluster. There are two ways to orchestrate this, as described below:

Dedicated Docker hosts

The first is to have dedicated nodes for the Tridion Content Delivery containers, as illustrated below:

[Figure: Dedicated host]

In this example, the Tridion software is licensed to use 12 vCPUs across all containerized Content Delivery services. The customer has set up 2 dedicated Tridion Docker hosts to handle all the services, and each service is pre-configured to use a specific proportion of each host’s vCPUs. An additional Docker host serves third-party software containers.

Shared Docker hosts

An alternative configuration is to allow any Docker host to run the Tridion Content Delivery containerized services, as illustrated below:

[Figure: Shared Docker host]

In this example, the Tridion software is licensed to use 12 vCPUs across all containerized Content Delivery services. The customer has multiple shared Docker hosts handling services from both Tridion and third-party vendors. Each Tridion Content Delivery service is pre-configured to use a specific proportion of each host’s vCPUs. The orchestration environment will typically start a service task on whichever Docker host has available capacity.

Hard and soft limits

Most container orchestration systems support the concept of hard and soft limits. As discussed previously, Tridion Content Delivery containers should be configured using hard limits to ensure that you do not breach the terms of the license agreement.

Hard limits are also desirable for best performance, particularly where a Docker host is shared with third-party services. Setting a hard limit guarantees that the Tridion service has full access to the resources assigned to it and prevents other services from cannibalizing the shared host’s resources.
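In Docker Compose, such a hard limit can be expressed through the deploy section of a service definition (honored when deploying to a swarm, or when running docker-compose with the --compatibility flag). The fragment below is a sketch, reusing the illustrative content image from earlier:

```yaml
services:
  content:
    image: my-company/content:11.5.0
    deploy:
      resources:
        limits:
          # Hard limit: this container may use at most 4 vCPUs
          cpus: "4"
```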

Best practices

Below are some guidelines for deploying Content Delivery services in custom Docker containers.

  • Plan for sufficient vCPU capacity: Running services on constrained vCPU values will reduce their performance. Ensure you have sufficient license entitlement for your desired vCPU usage.
  • Plan for scalability: Consider how your cloud orchestration environment scales. A service configured to bring up new containers on demand will consume license capacity from the vCPU allocation. The license agreement does allow short-lived additional usage of license capacity for the duration of any failover operations.
  • Use appropriate instance types: Most cloud providers offer different instance types with the same vCPU allocations; the difference is often the underlying bare-metal hardware, memory allocation, network bandwidth or storage mechanisms. It is important that you choose the correct hardware for your performance needs as well as the appropriate vCPU license capacity. It may be more cost-effective to use a larger instance type, with more memory and network bandwidth, than multiple smaller instance types. If you are unsure, run performance tests against different configurations to determine the best use of resources; some Content Delivery services are CPU-intensive while others are memory-intensive, depending on your data size and expected load.
  • Avoid placing configuration within Dockerfiles: Docker images are immutable. Instead, use environment variables for placeholders within the configuration files.
  • Don’t store data in containers: Use configuration properties or Docker shared volumes instead.
  • Minimize Docker layers: When building Dockerfiles, use multi-stage builds to minimize the number of layers (see https://docs.docker.com/develop/develop-images/multistage-build/).
  • Always specify specific versions of dependent images: Do not use the latest tag, as it is a moving target.
  • Consider adding metadata to your images using the LABEL directive: See http://label-schema.org/rc1/ for details of a proposed convention.
  • Set memory correctly: When running your containers, ensure they have sufficient allocated memory to accommodate both the JVM and the operating system.
  • Use our endpoints: Make use of the /health and /info endpoints on all our services to help monitor and report on service status.