WordPress on Google Container Engine

In this post, I’m going to describe how to deploy the stateless Docker container image from one of my previous posts onto Google Container Engine. Container Engine is a hosted Kubernetes service, which means this setup is based on Kubernetes-specific deployment configurations. Building our configurations on Kubernetes decouples us from the cloud provider, which lets us switch to another Kubernetes-based service or even host our own cluster. Google Container Engine provides a fully managed solution with great flexibility for scaling nodes, provisioning resources, and deploying Docker containers with all of our applications’ dependencies onto the nodes. We are also going to use the hosted MySQL service Cloud SQL for the WordPress database. This post is based on the stateless WordPress image from my previous post linked here.

Google Cloud Setup

If you haven’t already, register for Google Cloud and install the gcloud command line tools. We are going to use the command line for certain tasks; it is a powerful tool for managing almost all of your cloud resources. You can download and install it from the Google docs here.

We also install another command line tool called kubectl. This tool is specific to Kubernetes and is used to manage all the resources and configurations of the Kubernetes cluster.

gcloud components install kubectl

After you have installed the command line tools, connect them to your cloud account with the following command.

gcloud auth application-default login
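
To verify the installation and to avoid passing a project flag to every later command, you can also set a default project. A small sketch; $PROJECT_ID stands for your own project id.

# check that both command line tools are installed correctly
gcloud version
kubectl version --client
# set the default project for all following gcloud commands
gcloud config set project $PROJECT_ID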

Container Cluster Setup

First we have to set up a new container cluster. This can be done via the UI in the Google Cloud Console or with the following gcloud commands.

# gcloud create new container cluster

- List the available zones
> gcloud compute zones list

- Select a zone from the list

For easier command line use, we set our desired compute zone. This determines the data center in which the cluster will be started.
> gcloud config set compute/zone $zone

- Create a new cluster

To save costs, we provide some additional settings such as the machine type and node count.
> gcloud container clusters create example-cluster --machine-type g1-small --num-nodes 2 --no-enable-cloud-endpoints --no-enable-cloud-monitoring

- Fetch credentials so that the cluster can be controlled via the kubectl command line tool
> gcloud container clusters get-credentials example-cluster

# gcloud clean up
- To temporarily stop the cluster
> gcloud container clusters resize example-cluster --size=0

- To delete the cluster
> gcloud container clusters delete example-cluster

Cloud SQL Setup

In our setup, we are going to use the 2nd generation MySQL instances.
I would advise you to use the guided UI to create an instance. Visit the docs here.
Make sure to save your newly set root password.
Some settings that I chose for simplicity and cost savings:

- instance type: db-f1-micro
- storage: 10 GB
- disk type: HDD
- authorized networks: none
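
If you prefer the command line over the guided UI, an instance with roughly the settings above can also be created via gcloud. This is only a sketch: the instance name wordpress-db and the region are assumptions, and depending on your SDK version the command may live under "gcloud beta".

# create a 2nd generation MySQL instance with the settings listed above
gcloud sql instances create wordpress-db \
    --tier=db-f1-micro \
    --storage-size=10GB \
    --storage-type=HDD \
    --region=us-central1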

Now we have to create a service account to allow our Kubernetes cluster to talk to the Cloud SQL instance.


# Setup Kubernetes to connect to the Cloud SQL instance

[Link to Docs](https://cloud.google.com/sql/docs/mysql/connect-container-engine)

- Enable Cloud SQL API

[Link to UI](https://console.cloud.google.com/flows/enableapi?apiid=sqladmin)

- Create Service Account

[Link to UI](https://console.cloud.google.com/iam-admin/serviceaccounts/)

As the Role, select Cloud SQL > Cloud SQL Client.
Select "Furnish a new private key" with JSON format.
Download your private key.

- Create a custom User for access via the Container Engine Cluster

[Link to UI](https://console.cloud.google.com/sql/instances), then select your database -> Access Control -> Users.
Create a user "kubernetes" and choose a password.

- Register the Private Key as a secret in the Container Engine Cluster
> kubectl create secret generic cloudsql-instance-credentials --from-file=credentials.json=downloaded-privatekey.json

- Register the user and password as secrets in the Container Engine Cluster
> kubectl create secret generic cloudsql --from-literal=username=kubernetes --from-literal=password=kubernetes

- List the saved secrets to verify they were created
> kubectl get secrets
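
To double-check the stored values, kubectl can also print a secret’s contents. Note that secret data is base64 encoded, so it has to be decoded for display.

# inspect the registered keys of a secret
kubectl describe secret cloudsql
# decode a single value, e.g. the username
kubectl get secret cloudsql -o jsonpath='{.data.username}' | base64 --decode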

Importing Data into Cloud SQL

There are multiple ways to import your exported .sql database dumps. There is now even an “Import” option on your instance page in the Cloud Console to import data via the UI.
I prefer to use phpMyAdmin to manage the database. With Docker, we can run phpMyAdmin locally and connect it to the Cloud SQL instance. For this to work, we first have to run the Cloud SQL Proxy application, which creates a tunnel from a local port on our machine to the database in the cloud.

Starting a local Cloud SQL Proxy Container

Google provides a prebuilt Docker container image which includes the proxy application. To start a container on our machine, we can execute the following command.

docker run --name cloud_sql_proxy -d \
    -v /etc/ssl/certs:/etc/ssl/certs \
    -v $PRIVATE_KEY:/credential.json \
    -p 127.0.0.1:3306:3306 \
    b.gcr.io/cloudsql-docker/gce-proxy /cloud_sql_proxy \
    -instances=$INSTANCE=tcp:0.0.0.0:3306 -credential_file=/credential.json

Replace $PRIVATE_KEY with the full path to your previously downloaded service account private key file.
Replace $INSTANCE with the string found on your Cloud SQL instance details page under Properties > “Instance connection name”. If you cannot find it, the string consists of “project-id:region:instance-name”.
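
The connection name can also be fetched via the command line; $INSTANCE_NAME stands for the name you gave your Cloud SQL instance.

# print the "Instance connection name" of your Cloud SQL instance
gcloud sql instances describe $INSTANCE_NAME --format='value(connectionName)'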

Starting a local phpMyAdmin Container

Now that the Cloud SQL Proxy is running, we start another local Docker container with the publicly available phpmyadmin image and link it to the proxy. We map container port 80 to port 8090 on our local interfaces.

docker run --name phpmyadmin -d --link cloud_sql_proxy:db -p 8090:80 phpmyadmin/phpmyadmin

Open your browser at 127.0.0.1:8090.
You should see the phpMyAdmin login page. Log in with your root account credentials and import your WordPress database into the Cloud SQL instance.
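
If you prefer the plain mysql client over phpMyAdmin, the dump can also be imported directly through the running proxy. A sketch, assuming your dump file is called wordpress-dump.sql and your database is called wordpress:

# create the database and import the dump through the local proxy tunnel
mysql -h 127.0.0.1 -P 3306 -u root -p -e "CREATE DATABASE IF NOT EXISTS wordpress"
mysql -h 127.0.0.1 -P 3306 -u root -p wordpress < wordpress-dump.sql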

Uploading your WordPress Docker image to Google Container Registry

Before we are able to run our WordPress image on our cluster, we first have to upload it to Google Container Registry. The Registry is a hosted Docker image repository service which allows us to upload custom-built images and access them from our Container Engine cluster. To enable image uploads, we first have to activate the Registry API. In your Google Cloud Console project, go to API Manager -> search for “Registry” -> select & activate Google Container Registry.

The sample folder in the wordpress-stateless repo contains the Dockerfile that I’m using for this example.

In the sample folder, execute the following commands.

# build and tag the image locally with the Dockerfile in the current dir "." 
docker build -t my-wordpress .
# tag the image with the full url where the image will be hosted
docker tag my-wordpress us.gcr.io/$PROJECT_ID/my-wordpress:v1
# upload the image to Google Container Registry
gcloud docker -- push us.gcr.io/$PROJECT_ID/my-wordpress:v1

Replace $PROJECT_ID with your project id.
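
If you are not sure about your project id, you can read it from your gcloud configuration.

# print the currently configured project id
gcloud config get-value project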

Kubernetes

Deployment

Now we want to create a deployment configuration for our WordPress container image. For this, we create a new wordpress-deployment.yml file in the sample folder with the following contents.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-wordpress
spec:
  replicas: 1
  revisionHistoryLimit: 3
  template:
    metadata:
      labels:
        app: my-wordpress
    spec:
      containers:
        - image: $IMAGE_URL
          name: my-web
          env:
            - name: WORDPRESS_DEV
              # Show Error Logs
              value: "true"
            - name: WORDPRESS_DB_HOST
              # Connect to the SQL proxy over the local network on a fixed port.
              value: 127.0.0.1:3306
            - name: WORDPRESS_DB_NAME
              value: $WORDPRESS_DB_NAME
            # These secrets are required to start the pod.
            # [START cloudsql_secrets]
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cloudsql
                  key: password
            - name: WORDPRESS_DB_USER
              valueFrom:
                secretKeyRef:
                  name: cloudsql
                  key: username
            # [END cloudsql_secrets]
          ports:
            - containerPort: 80
              name: wordpress-nginx
        # Change $INSTANCE here to include your GCP
        # project, the region of your Cloud SQL instance and the name
        # of your Cloud SQL instance. The format is
        # -instances=$PROJECT:$REGION:INSTANCE=tcp:3306.
        # [START proxy_container]
        - image: gcr.io/cloudsql-docker/gce-proxy
          name: cloudsql-proxy
          command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                    "-instances=$INSTANCE=tcp:3306",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
            - name: ssl-certs
              mountPath: /etc/ssl/certs
            - name: cloudsql
              mountPath: /cloudsql
        # [END proxy_container]
      # [START volumes]
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials
        - name: ssl-certs
          hostPath:
            path: /etc/ssl/certs
        - name: cloudsql
          emptyDir: {}
      # [END volumes]

Replace $IMAGE_URL with the previously tagged full image url.
Replace $WORDPRESS_DB_NAME with the name of your imported database.
Replace $INSTANCE with the same “Instance connection name” string used above for the local Cloud SQL proxy.
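
Instead of editing the file by hand, the placeholders can also be filled in with a quick sed one-liner. A sketch with assumed example values:

# substitute the placeholders with your actual values (GNU sed; on macOS use sed -i '')
sed -i \
    -e 's|$IMAGE_URL|us.gcr.io/my-project/my-wordpress:v1|' \
    -e 's|$WORDPRESS_DB_NAME|wordpress|' \
    -e 's|$INSTANCE|my-project:us-central1:wordpress-db|' \
    wordpress-deployment.yml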

The deployment configuration looks a bit like a docker-compose.yml config file. There are some similarities, like the ability to define multiple containers and to define volumes that are mapped into the containers.
Kubernetes has a concept called Pods, which allows us to deploy multiple containers together as a single unit onto a node. Since we need the cloudsql-proxy container to connect to the Cloud SQL database instance, this is a perfect use case for a deployment configuration with a pod that consists of two containers: our WordPress image and the cloudsql-proxy image.

ENV Variables

We use the env config setting to pass data into our WordPress image. The image contains startup scripts that read those env variables and reconfigure the wp-config.php file.
To avoid storing sensitive password information in the deployment configuration file, we can use another Kubernetes feature that allows us to reference a secret defined in the cluster. In the Cloud SQL Setup section we registered the database user and password in our cluster; now we can reference them in our configuration.

Volumes

In the volumes config setting, we define 3 different types of data volumes for use with the volumeMounts setting in the container definition. The first volume, “cloudsql-instance-credentials”, references the service account private key secret in the cluster. The volume “ssl-certs” maps a local host directory from the cluster node into the container; this is necessary to give the cloudsql-proxy access to certificate authority files. The volume “cloudsql” is an empty directory in which the cloudsql-proxy can create a file socket, which could then be shared between multiple containers. In this config, however, we bind the cloudsql-proxy to port 3306 and connect from the WordPress container via localhost.

As you can see, only the cloudsql-proxy container uses volumeMounts to reference the defined volumes. The WordPress container does not need any outside volumes because it contains all the WordPress files inside the image we built beforehand. For the wp-content directory, we use a plugin to store uploaded files in Google Cloud Storage. The setup is intentionally done this way to enable the container to be scaled onto multiple nodes without having to set up a replicated filesystem.

Deploying the configuration onto the Cluster

Now we want to tell our cluster to download our image and deploy it onto the cluster. For this we are using the Kubernetes command line tool kubectl.

# Apply the deployment configuration
kubectl apply -f wordpress-deployment.yml
# View the deployment status
kubectl get deployments
# View the pods status
kubectl get pods
# Wait till both containers (wordpress container / cloudsql proxy) are running.
NAME                            READY     STATUS    RESTARTS   AGE
my-wordpress-1422364771-s1vfv   2/2       Running   0          2m
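
If the pod does not reach the Running state, the following commands are useful for debugging; use your own pod name from the output above.

# show events and scheduling information for the pod
kubectl describe pod my-wordpress-1422364771-s1vfv
# show the logs of a single container inside the pod
kubectl logs my-wordpress-1422364771-s1vfv -c my-web
kubectl logs my-wordpress-1422364771-s1vfv -c cloudsql-proxy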

Now let’s try to access the WordPress site to see if it is up and running. At the moment, the WordPress container is not yet accessible via a public IP. To check whether the deployment is running successfully, we can use the kubectl port-forward command to map a local port to a port on the container running in the cluster.

To listen on port 8080 locally, forwarding to port 80 in the container on the cluster, enter the following command.

# use the pod name listed by the "kubectl get pods" command
kubectl port-forward my-wordpress-1422364771-s1vfv 8080:80

Now when you point your browser at 127.0.0.1:8080, you will notice that you get redirected to your pre-configured WordPress domain with “/wp-signup.php?new=127.0.0.1:8080” appended to the URL. This is actually a sign that the site is running and that the database connection is working. WordPress stores your site URLs in the database; when it receives a request with no matching URL, it sends a redirect to the configured main site domain. To prevent this, we would have to replace the URLs in the database with a tool like WP-CLI, for example as sketched below. For now, we skip this and move on to putting a Service layer in front to access WordPress via a public IP.
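
As a sketch of how such a replacement could look: assuming WP-CLI is installed inside the WordPress image (it is not part of it by default), the URLs could be rewritten in place. The pod and container names are the ones from this example.

# dry-run first: shows what would change without touching the database
kubectl exec my-wordpress-1422364771-s1vfv -c my-web -- \
    wp search-replace 'www.mydomain.com' '127.0.0.1:8080' --skip-columns=guid --dry-run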

Service

Since our container pods can be started or scaled onto any node inside our Kubernetes cluster, we have to create a Service configuration to provide a unified way to access our WordPress installation. A Kubernetes Service gives us an abstraction layer on top of the deployments which automatically resolves the current location of our containers inside the cluster. The Service can also act as a load balancer when you scale your containers onto multiple nodes, and it can create a public IP address so that anyone can access it over the Internet.

We create a new file wordpress-service.yml with the following contents.

apiVersion: v1
kind: Service
metadata:
  name: wordpress-service
  labels:
    app: public-webservice
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: my-wordpress

Type

There are different types of Services; we are using the “LoadBalancer” type.
On Google Container Engine, this creates a Google network TCP load balancer with a public IP address.

Deploying the service onto the Cluster

# Apply the service configuration
kubectl apply -f wordpress-service.yml
# View the service status
kubectl get services
NAME                CLUSTER-IP     EXTERNAL-IP       PORT(S)        AGE
kubernetes          10.23.240.1    <none>            443/TCP        2d
wordpress-service   10.23.246.77   104.111.111.111   80:32744/TCP   1m

Note the newly created EXTERNAL-IP address. This address is allocated as a static IP in the Google Cloud Console networking backend. You will get billed for the static IP, so do not forget to release it when you don’t need it anymore (see the cleanup sketch below).
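
When you are done, the load balancer and its IP can be released by deleting the Service; afterwards you can verify that no addresses remain allocated.

# delete the service and release the load balancer / external IP
kubectl delete service wordpress-service
# list the remaining allocated addresses in the project
gcloud compute addresses list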

Now visit the EXTERNAL-IP in your browser. If everything worked correctly, you should again get redirected to your WordPress domain name with “/wp-signup.php…” appended. To be able to browse the site, we can temporarily add an entry with our configured WordPress domain name and the EXTERNAL-IP to the /etc/hosts file.

# Add Domain to the local name resolution
sudo sh -c "echo '104.111.111.111 www.mydomain.com' >> /etc/hosts"

On Windows, this file is in another location. You can google it, or perhaps rethink your OS selection.

Now when you refresh your browser, you should see your WordPress site. If not, try clearing your browser’s DNS cache or check the contents of your /etc/hosts file.

Next, you want to create a public DNS record for your domain pointing at the EXTERNAL-IP so anyone can visit your site. There are many DNS services that let you edit the DNS settings for your domain; with Google Cloud DNS it could look like the sketch below.
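
A sketch, assuming your domain’s DNS zone is hosted in Google Cloud DNS; the managed zone name my-zone is an assumption.

# add an A record pointing www.mydomain.com at the service's external IP
gcloud dns record-sets transaction start --zone=my-zone
gcloud dns record-sets transaction add --zone=my-zone \
    --name=www.mydomain.com. --type=A --ttl=300 "104.111.111.111"
gcloud dns record-sets transaction execute --zone=my-zone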

Closing

At first, the setup may seem quite complex, but if you want to leverage the flexibility and scalability of a Kubernetes cluster in conjunction with the power of the Google Cloud, creating only 3 configuration files (Dockerfile, deployment.yml, service.yml) is not that complex compared to building a similar setup yourself. Using Container Engine, you can also deploy other types of applications next to your WordPress site to build any kind of service.

You can improve and tweak this setup to fit your needs. I personally added another load balancer layer based on Nginx to manage my sites and do SSL termination. I’m also going to add a caching container to store session information and other shared data to improve the scaling of containers.

If you have any questions about the setup, feel free to leave a comment below.


Blogception – How this Blog is Set Up

Welcome to the first post on my personal blog.
In this post, I’m going to outline how I built this blog and which technologies and hosting setup I’m using.

You may have already guessed which platform this blog is built on just by looking at the current theme or checking the source of this page.
The blog is running on WordPress.

WordPress is easy to set up and really extensible. In the past, I’ve tried and even built different blog engines, but I came back to WordPress.
After all, at the time of writing, WordPress powers about 25% of all websites.

The setup

WordPress is the most popular CMS on the Internet, which also makes it lucrative for hackers to find security vulnerabilities and mass-attack WordPress installations to distribute malware.
Therefore, security is a big priority in my setup.

The first step toward more security is installing WordPress inside a Docker container. There are many WordPress Dockerfiles available on Docker Hub, ready to download. There is even an official Docker image; you can check it out here. I decided to build my own image for reasons I will explain later on. This post simply describes the architecture of my setup, not every detailed configuration.
I will write a follow-up post with a detailed explanation of the configurations and Dockerfiles and post them to my GitHub page.

Requirements

Stateless

An important requirement for me was to have a “stateless setup”. Running WordPress in a container enables me to scale and replicate it to multiple nodes. To achieve this, I had to make some adjustments to how the Docker image is set up.

WordPress by itself is heavily stateful. The WordPress code is saved on the filesystem (which also represents a state in some way). For instance, your wp-config.php, which contains important configuration details, is saved on the filesystem. The installed themes and plugins are saved in the wp-content directory, and of course the database contains all the posts and the configuration of your site. Updates of WordPress itself also need to update the PHP code stored on the filesystem.

The database component will never be truly stateless; we will run MySQL as a separate, dedicated instance (in my case the managed Cloud SQL service described below). The scaling of the database itself should also be handled independently.

In a typical WordPress setup, we would install WordPress directly on the filesystem and modify the configuration via a text editor. We would add our own themes directly into the installation. This approach is hard to maintain, and if you want to run your WordPress blog on 2 separate nodes, you have to copy all the files. Updating WordPress via the UI would also not work with multiple nodes.

Support for different Environments

I also wanted to be able to set up different environments, to develop locally and later deploy to test, staging, and production servers.

Easy way to handle WordPress Updates

WordPress updates should be testable and applicable to production nodes without too much manual work.

Steps to support all our Requirements

Eliminating Filesystem state

So first I needed to get rid of the filesystem as the main state representation. There are different ways to solve this problem.

Version Control WordPress

One possibility would be to use Git to version control the WordPress code itself, including your own theme and plugin code. This way, one could write scripts that execute a “git pull” on your production nodes to update all production servers with the newest version of the code. This would also enable us to have different environments. However, any updates or plugin changes done via the WordPress Admin UI would have to be manually checked into Git and distributed across all nodes.

Use Docker to build a WordPress Version

Docker images can be tagged and stored in a central registry, and they are built via Dockerfiles. In a Dockerfile, we can download the WordPress code and make our own modifications to it in a reproducible way. The official WordPress Docker image does exactly that, but it also creates a Docker volume for the installation itself. When we run the official container, a separate directory for this Docker volume is created on our local filesystem. This enables the user to log into the WordPress Admin and install plugins and themes, and after restarting the container the data is still available. That makes the filesystem stateful, and we do not want that.

To solve this, I built my own Dockerfile and skipped the creation of a Docker volume for the WordPress installation. I added all my own themes and plugins via the Dockerfile while building the image with “docker build”. This provides a big advantage: the state is now represented in the Dockerfile, and the filesystem is stateless; anytime the Docker container is restarted, the filesystem is reset to the state defined in the Dockerfile. The drawback is that any manual changes introduced via the WordPress Admin are lost during a restart of the container. But I can accept this drawback in exchange for a stateless filesystem and much better security.

Other ways

I’m sure there are other ways to deploy and scale a WordPress installation, but for this setup I have used the approach via the Dockerfile.

WordPress Media Files

The media library also constitutes local state, with all the files stored under wp-content/uploads. To fix this, we can offload our media files to a storage/CDN provider like Google Cloud Storage or Amazon S3. I have used a WordPress plugin for that.
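
Creating such a bucket for the media files is a one-liner with the gsutil tool that ships with the Cloud SDK; the bucket name is an assumption and has to be globally unique.

# create a Cloud Storage bucket for the WordPress media uploads
gsutil mb -l us-central1 gs://my-wordpress-media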

Multiple Environments

To support multiple environments, we can either add entrypoint scripts to the Docker container that replace WordPress configuration variables, or fully map a wp-config.php file into the container via Docker volume mappings.
In conjunction with Docker Compose, this lets us run a local dev environment and use the same Docker image in production.
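
As a minimal sketch of the volume-mapping variant: a local dev container could be started with a development wp-config.php mapped over the baked-in one. The image name and the path /var/www/html are assumptions based on this setup.

# run the image locally with a dev config mapped into the container
docker run --name wordpress-dev -d \
    -p 8080:80 \
    -v $(pwd)/wp-config.dev.php:/var/www/html/wp-config.php \
    my-wordpress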

Frontend Server

This blog runs on the web, which means it is accessible via the HTTP protocol. We therefore need a webserver that can execute the WordPress PHP code every time an HTTP request is made by the browser. I have used Nginx together with PHP-FPM; Nginx executes the PHP code via FastCGI.

I added Nginx directly into the Dockerfile that includes the WordPress installation.

Google Cloud

There are many hosting providers that let you deploy a WordPress installation. Some are fully managed, others are more flexible. For my setup, I chose the Google Cloud because it enables fine-grained resource control and scaling to multiple nodes. There is also extensive Docker support via a hosted Container Registry and Google Container Engine.

CloudSQL

For the database, I decided to use the Google-managed MySQL service Cloud SQL. I chose the managed service because it frees me from managing yet another piece of software. Monitoring the database and applying updates or managing data with backup scripts is a very resource-intensive task. For this reason, I’m willing to pay a little more than I would if I ran a MySQL instance myself.

Kubernetes

For the actual deployment of the Docker container that includes WordPress, I’m using the open source cluster manager Kubernetes.

Kubernetes is developed by Google engineers together with the open source community. The concepts of Kubernetes are derived from a Google internal Cluster Manager called Borg. Kubernetes makes it easy to deploy Docker containers onto a Cluster of compute instances.

We could set up Kubernetes on any hosting provider, but this again requires maintenance to monitor and update all the services needed to run a Kubernetes cluster. Luckily, Google also provides a service for that, called Google Container Engine. Container Engine offers a fully managed master node that is automatically updated. The worker nodes show up as normal compute instances running a special image which includes all the services required to run Kubernetes.

Summary

So that’s it for now. To summarize in one sentence: this blog runs on WordPress, versioned in a Docker image, served via Nginx, and deployed via Kubernetes, managed by Google Container Engine running on Google Compute Engine.
More detailed posts will follow, and I will open source all of the Dockerfiles needed for this setup.