Continuous Deployment with Hippo CMS, Tutum and Docker

Mike Marmar

2015-01-08

 

Brands are investing in digital experiences to compete for customers and revenue. Those digital customer experiences are made up of the latest features and functions delivered and deployed as part of an agile software development lifecycle. With so much at stake, these companies must minimize time to market and address their customers’ expectations more frequently than ever before, continuously.

For leading brands, automated, rapid and no-risk releases require a collaborative focus by infrastructure, operations, application development and business leaders to organize into an agile and modern service delivery model. This organizational discipline is often referred to as "DevOps" (Development + Operations) and has given rise to concepts like "Continuous Deployment", "Continuous Delivery" and "Continuous Integration".

In this post, we examine an approach using Hippo CMS, Docker and Tutum.

Docker

Docker is an open-source technology used to package, ship and run an application. Docker has become synonymous with the concept of containers, which provide a complete environment for a software application to run in, including code, runtime, system tools and libraries. These containers are highly portable and are used to deploy applications across environments. Containers are not new; however, Docker has made them easier and safer to use, standardizing their use and their integration with other DevOps technology.

Specifically, Docker makes it possible to set up local development environments that exactly match a live production server, to run multiple development environments on the same host, each with its own software, operating system and configuration, and to test projects on new or different servers. It also allows everyone to work on the same project with exactly the same settings, regardless of the local host environment.


Walkthrough

Using Docker to implement Continuous Deployment for Hippo CMS is relatively straightforward. We have to make some minor modifications to the application so that it runs successfully in Docker, set up the application stack, and then automate the process with some simple scripts.


Implementation details

1. Dockerize Hippo

Starting from the Hippo Maven project, there are a few steps required to package Hippo as a Docker image:

  1. Add context and repository configurations for the Docker image
  2. Add a Dockerfile to the repository
  3. Create a new assembly definition that adds the right files to the Docker image
  4. Create a new Maven profile that uses the docker-maven-plugin to build a Docker image

The application running in Docker requires slightly different settings than one running locally. To allow for this, create two new files, conf/repository.xml and conf/docker-context.xml. By default Hippo uses the filesystem as the backing store for the Jackrabbit repository, which is fine for local development but not suited to a production system. These files configure Hippo to use MySQL instead, which is more appropriate for production.
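The exact contents of these files depend on your project, but docker-context.xml is typically a Tomcat context descriptor that declares the JDBC DataSource the repository connects through. The following is a minimal sketch only; the resource name jdbc/repositoryDS, the hippo-mysql hostname (matching the MySQL container we link in later) and the credentials are illustrative assumptions, not taken from a real project:

<!-- conf/docker-context.xml (illustrative): JDBC DataSource for the Jackrabbit repository -->
<Context>
    <Resource name="jdbc/repositoryDS" auth="Container" type="javax.sql.DataSource"
              driverClassName="com.mysql.jdbc.Driver"
              url="jdbc:mysql://hippo-mysql:3306/hippo?characterEncoding=utf8"
              username="hippo" password="hippo"/>
</Context>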

Next, create a Dockerfile that defines the Docker image we want to build:

FROM tomcat:jre8
ENV CATALINA_OPTS "-Djava.security.egd=file:/dev/./urandom -Drepo.bootstrap=true -Drepo.config=file:/usr/local/tomcat/conf/repository.xml -Djava.rmi.server.hostname=127.0.0.1 "
ENV JAVA_ENDORSED_DIRS "/usr/local/tomcat/endorsed"

ADD <YOUR ARTIFACT NAME HERE>-1.01.00-SNAPSHOT-distribution.tar.gz /usr/local/tomcat/

Make sure to replace <YOUR ARTIFACT NAME HERE> with the name of the Maven artifact you are building. This defines a new Docker image based on Tomcat 8, with the required environment variables set and the distribution assembly unpacked into the Tomcat directory. Dockerfiles allow you to specify a rich set of directives that modify the behavior of a Docker image. For more information, see the Dockerfile reference.

To generate the assembly, create src/main/assembly/docker-distribution.xml. This file defines the set of files that will get deployed into the Docker image.
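The exact descriptor depends on your project layout, but a minimal sketch might look like the one below. The conf fileSet and the war dependencySet are assumptions about a typical Hippo project, so adjust them to the artifacts your build actually produces. Note that the assembly id "distribution" is what gives the archive the -distribution.tar.gz suffix referenced in the Dockerfile above and in the Maven profile below:

<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2 http://maven.apache.org/xsd/assembly-1.1.2.xsd">
  <id>distribution</id>
  <formats>
    <format>tar.gz</format>
  </formats>
  <includeBaseDirectory>false</includeBaseDirectory>
  <fileSets>
    <!-- Ship the Docker-specific repository and context configuration -->
    <fileSet>
      <directory>${project.basedir}/conf</directory>
      <outputDirectory>conf</outputDirectory>
    </fileSet>
  </fileSets>
  <dependencySets>
    <!-- Place the CMS and site web applications in Tomcat's webapps directory -->
    <dependencySet>
      <useProjectArtifact>false</useProjectArtifact>
      <outputDirectory>webapps</outputDirectory>
      <includes>
        <include>*:war</include>
      </includes>
    </dependencySet>
  </dependencySets>
</assembly>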

Finally, create a new Maven profile to generate the Docker image:

<profile>
    <id>docker</id>
      <dependencies>
        <dependency>
          <groupId>org.slf4j</groupId>
          <artifactId>slf4j-log4j12</artifactId>
          <scope>provided</scope>
        </dependency>
        <dependency>
          <groupId>org.slf4j</groupId>
          <artifactId>jcl-over-slf4j</artifactId>
          <scope>provided</scope>
        </dependency>
        <dependency>
          <groupId>log4j</groupId>
          <artifactId>log4j</artifactId>
          <scope>provided</scope>
        </dependency>
        <dependency>
          <groupId>mysql</groupId>
          <artifactId>mysql-connector-java</artifactId>
          <version>5.1.37</version>
          <scope>provided</scope>
        </dependency>
      </dependencies>
    <build>
        <plugins>
          <plugin>
            <artifactId>maven-assembly-plugin</artifactId>
            <executions>
              <execution>
                <id>distro-assembly</id>
                <phase>validate</phase>
                <goals>
                  <goal>single</goal>
                </goals>
                <configuration>
                  <descriptors>
                    <descriptor>${project.basedir}/src/main/assembly/docker-distribution.xml</descriptor>
                  </descriptors>
                </configuration>
              </execution>
            </executions>
          </plugin>
          <plugin>
              <groupId>com.spotify</groupId>
              <artifactId>docker-maven-plugin</artifactId>
              <version>0.3.3</version>
              <configuration>
                  <imageName>labs/hippo</imageName>
                  <dockerDirectory>src/main/docker</dockerDirectory>
                  <resources>
                      <resource>
                          <targetPath>/</targetPath>
                          <directory>${project.build.directory}</directory>
                          <include>${project.build.finalName}-distribution.tar.gz</include>
                      </resource>
                  </resources>
              </configuration>
          </plugin>
        </plugins>
    </build>
</profile>

This build profile uses the docker-maven-plugin to build the Docker image, copying the assembly tarball to the right location under the target folder so that the Dockerfile can add it to the image.


2. Set up Tutum

Tutum combines the services of a Docker registry, Docker Engine and Docker Compose, provides a management GUI and API, and can also provision and manage cloud nodes through AWS, Azure, DigitalOcean and other cloud providers. As such, it is a very handy tool for deploying and managing Docker images, and the API makes automation straightforward (as we will see in section 6). To set up Tutum:

  1. Create an account at tutum.co. If you already have a Docker Hub account, you can use that.
  2. Provision a node. The full details are beyond the scope of this tutorial, but Tutum lets you either link a cloud provider and provision a node that way, or use a system you control with the "Bring your own node" option.
  3. Create a repository for the Hippo application. In the "Repositories" tab, click "Create new repository" and call it "hippo-cd". This repository will store the Hippo Docker image we configured in section 1.


3. Push Hippo Docker Image

Now that we have created a repository for the docker image, we should build and push the image:

  1. In the maven project we set up in section 1, run
    mvn clean package && mvn docker:build -P docker

    This will build the docker image.
     

  2. Tag the docker image:
    docker tag -f labs/hippo tutum.co/<YOUR TUTUM USERNAME>/hippo-cd

    replacing <YOUR TUTUM USERNAME> with your Tutum account username. This "marks" the image as belonging to the repository we created in the previous step.
     

  3. Log into the tutum repository:
    docker login tutum.co

     

  4. Push the image:
    docker push tutum.co/<YOUR TUTUM USERNAME>/hippo-cd

     

After running all of these steps, you can check on the Tutum repository tab to confirm that the image has been successfully pushed.
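If you want to verify the image before relying on the automated deployment, you can also smoke-test it on a local Docker host. The following is a sketch, assuming the container name hippo-mysql matches the database host configured in section 1 and the credentials match the repository configuration:

# Start a throwaway MySQL container with the database the image expects
docker run -d --name hippo-mysql \
  -e MYSQL_DATABASE=hippo -e MYSQL_USER=hippo -e MYSQL_PASSWORD=hippo \
  -e MYSQL_ROOT_PASSWORD=secret \
  mysql/mysql-server:latest

# Run the freshly built image, linking it to MySQL and exposing Tomcat on port 8080
docker run -d --name hippo --link hippo-mysql:hippo-mysql -p 8080:8080 labs/hippo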


4. Deploy Tutum Application Stack

One of the most useful features of Tutum is the ability to create Application Stacks. Similar to Docker Compose, these stacks allow you to declaratively define multiple Docker containers, configure them, and define the links between them.

In Tutum, go to the "Stacks" tab and create a new Stack. Paste the following into the Stackfile editor. Replace the text in brackets as appropriate:

hippo:
  image: 'tutum.co/<YOUR TUTUM USERNAME>/hippo-cd'
  autoredeploy: true
  ports:
    - '8080:8080'
  links:
    - hippo-mysql
hippo-mysql:
  image: 'mysql/mysql-server:latest'
  environment:
    - MYSQL_DATABASE=hippo
    - MYSQL_PASSWORD=hippo
    - 'MYSQL_ROOT_PASSWORD=<SOME RANDOM PASSWORD>'
    - MYSQL_USER=hippo
  expose:
    - '3306'

This creates a simple two-application Stack. The first application is a container running the hippo-cd Docker image we built in the previous section. autoredeploy: true specifies that the application will be automatically re-deployed any time Tutum detects that a new version of the image has been pushed to the registry. The ports directive defines the ports that this container will expose on the host machine; in order to access Hippo from the outside world, we have to expose port 8080. The links directive sets up the internal network to allow communication between containers in the stack. By specifying hippo-mysql, the hippo container will be able to access the hippo-mysql container over the stack's internal network.

The second application is a container running the latest version of MySQL, taken from Docker Hub. The environment directive allows us to define environment variables that control the behavior of the container. In this case, we define a new database called "hippo", as well as the credentials to access it. The expose directive specifies the ports exposed to the internal network; by exposing port 3306, the hippo container will be able to access the database on hippo-mysql.

After creating this stack, deploy it to the node. Once it has finished deploying, go to the "Stacks" tab and select the stack we just deployed. Open the "Endpoints" tab and you will see an endpoint on port 8080. You should be able to access Hippo there.
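You can also check from the command line that the site responds; for example, assuming the site webapp is mounted at /site (as in the health check configured later):

curl -I http://<ENDPOINT HOSTNAME>:8080/site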

We have now set up a simple continuous deployment system. Try it out: make a change to the code or configuration in the Maven project, build the project, build the Docker image, then tag and push it. The hippo container in Tutum will automatically re-deploy from the latest image and your change will take effect.

A word about Volumes

Open up the service definition for hippo-mysql. Under the "Configuration" tab there is a section called "Volumes", where a volume for /var/lib/mysql is defined. Volumes are Docker's way of providing persistent storage that outlives any individual container.

In this context it means that, in general, the content entered into Hippo (and hence into the MySQL database stored at /var/lib/mysql) will not be deleted when the MySQL container is re-deployed. To see this in action, make a change in Hippo. Then, in Tutum, re-deploy the hippo-mysql service. In the popup, make sure that "Reuse existing container volumes?" is set to "ON". After the MySQL container finishes deploying, check Hippo again and confirm that your change is still there. The net result is that content changes will not be lost when containers are redeployed for new builds, unless the redeploy is explicitly told not to reuse the existing volumes.
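The volume appears even though the Stackfile above never mentions it, most likely because the MySQL image itself declares /var/lib/mysql as a volume. If you prefer to make this explicit, Stackfiles (like Docker Compose files) accept a volumes directive; the following is an illustrative sketch of how the hippo-mysql entry could declare it:

hippo-mysql:
  image: 'mysql/mysql-server:latest'
  volumes:
    - /var/lib/mysql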

For more information see the Docker volumes documentation.

5. Implement Blue/Green Deployments

The system we have set up so far is sufficient for development or staging environments where downtime is not a concern. In production, the roughly two minutes of downtime caused by each redeploy is certainly going to be a problem. To overcome this, we will now extend the stack to support zero-downtime deployments.

The first step is to modify the Stackfile so that it can support a Blue/Green deployment:

hippo-blue:
  image: 'tutum.co/<YOUR TUTUM USERNAME>/hippo-cd:latest'
  environment:
    - 'HTTP_CHECK=OPTIONS / HTTP/1.1\r\nHost:\ www.<YOUR PRODUCTION SITE NAME>.com:8080/site'
  expose:
    - '8080'
  links:
    - hippo-mysql
hippo-green:
  image: 'tutum.co/<YOUR TUTUM USERNAME>/hippo-cd:latest'
  expose:
    - '8080'
  links:
    - hippo-mysql
hippo-lb:
  image: 'tutum/haproxy:latest'
  environment:
    - 'HEALTH_CHECK=check inter 500 rise 2 fall 4'
  links:
    - hippo-blue
    - hippo-green
  ports:
    - '1936:1936'
    - '8080:8080'
  restart: always
  roles:
    - global
hippo-mysql:
  image: 'mysql/mysql-server:latest'
  environment:
    - MYSQL_DATABASE=hippo
    - MYSQL_PASSWORD=hippo
    - 'MYSQL_ROOT_PASSWORD=<SOME RANDOM PASSWORD>'
    - MYSQL_USER=hippo
  expose:
    - '3306'

The idea behind a Blue/Green deployment is that there are two container definitions, "Blue" and "Green". Generally, only one of them is active. During a deployment we can then spin up the inactive container, wait for it to stabilize, and then bring down the previously active container. In order to implement this with docker containers, we create two containers hippo-blue and hippo-green with very similar definitions. Note that port 8080 is no longer exposed on the host as that would prevent both containers from running simultaneously. We use HAProxy as a load balancer in front of the containers. tutum/haproxy is a special build of HAProxy that automatically reconfigures itself based on the currently running containers.

So, when hippo-blue (or hippo-green) becomes available, HAProxy will detect the change and start routing traffic to it. Note the HEALTH_CHECK and HTTP_CHECK options defined in environment variables. These tell HAProxy how to health-check the two hippo containers to make sure they are actually serving content before passing requests to them. Also note that HAProxy exposes a status page on port 1936, which is useful for automating the Blue/Green deployment, as we can poll it to detect when servers are ready.
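You can also query the machine-readable CSV version of the status page directly; this is the same endpoint the automation script in section 6 polls (the default credentials for tutum/haproxy are stats/stats):

curl -u stats:stats "http://<NODE IP OR HOSTNAME>:1936/;csv"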

Now that this new stack is in place, we can manually walk through the stages of a Blue/Green deployment. First, ensure that only hippo-blue is running. hippo-green should be stopped.

  1. Start hippo-green
  2. Check the status page on port 1936. The username and password are both 'stats'
  3. hippo-green should appear on the status page, and show status "DOWN". Keep refreshing the page until both hippo-blue and hippo-green show status "UP"
  4. Stop hippo-blue

6. Automate

To automate the Blue/Green deployment, we can use the tutum-cli tool to run through the steps outlined in the previous section. Note that the Tutum API could be used for this as well:

#!/bin/bash

BLUE=$1
GREEN=$2
HAPROXY=$3

wait_up() {
  local SERVER=$1
  local UP=""
  local i=0
  echo "Waiting for $SERVER to come up"
  while [[ ! $UP ]]; do
    # Poll the HAProxy stats CSV until this server reports a passing L7 health check
    UP=$(curl -u stats:stats --silent "http://$HAPROXY:1936/;csv" | grep "$SERVER" | grep L7OK)
    sleep 1
    i=$((i + 1))

    # Wait at most 10 minutes
    if (( i > 600 )); then
      echo "$SERVER failed to start up"
      return 1
    fi
  done
}

# Determine which service is currently active, then swap to the other one
ISGREEN=$(tutum service inspect $GREEN | grep Running)
if [[ $ISGREEN ]]; then
  echo "Switching to Blue"
  tutum service redeploy --sync $BLUE
  wait_up $BLUE
  if [ $? -ne 0 ]; then
    # Blue never came up; stop it and fail the deployment
    tutum service stop $BLUE
    exit 1
  else
    tutum service stop $GREEN
  fi
else
  echo "Switching to Green"
  tutum service redeploy --sync $GREEN
  wait_up $GREEN
  if [ $? -ne 0 ]; then
    # Green never came up; stop it and fail the deployment
    tutum service stop $GREEN
    exit 1
  else
    tutum service stop $BLUE
  fi
fi

This script first uses tutum service inspect to figure out which service is currently active. Then it either redeploys Blue and stops Green, or vice versa. The wait_up function polls the HAProxy status page, waiting for the target service to show "UP" status; this is what ensures zero downtime. The script can be run by passing three arguments: the name of the Blue service, the name of the Green service, and the IP address (or hostname) of the node.
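For example, assuming the script is saved as blue-green.sh, the tutum CLI is installed and authenticated with your Tutum account, and the services are named as in the Stackfile above:

./blue-green.sh hippo-blue hippo-green <NODE IP OR HOSTNAME>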

Now that all the individual pieces are in place, the build can be automated with a CI tool such as Jenkins or Bamboo. The basic steps, sketched in the script after this list, are:

  1. Run the maven build
  2. Run unit and integration tests
  3. If tests pass, build the docker image and push it to Tutum
  4. Run the Blue/Green swap script
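A single CI shell step could chain these stages together. This is a minimal sketch only, assuming the Blue/Green script from section 6 is checked into the project as blue-green.sh, that TUTUM_USER and NODE_HOST are environment variables provided by the CI server, and that the build agent is already logged in to tutum.co:

#!/bin/bash
set -e  # stop the pipeline if any step fails

# 1-2: Build the project and run the tests
mvn clean package

# 3: Build the Docker image and push it to the Tutum registry
mvn docker:build -P docker
docker tag -f labs/hippo tutum.co/$TUTUM_USER/hippo-cd
docker push tutum.co/$TUTUM_USER/hippo-cd

# 4: Swap Blue and Green with zero downtime
./blue-green.sh hippo-blue hippo-green $NODE_HOST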

Conclusion and Further Reading

The use of containers is seeing mainstream adoption even in the most risk-averse industries, suggesting that containers should be considered by any organization looking to be more agile and responsive in their software development and deployment.

For additional details on Continuous Delivery, Continuous Integration and Continuous Deployment, check out our blog on the agile experience delivery model.

For further technical reading, check out the Docker documentation, the Tutum Stackfile reference, and the Tutum API documentation.
