Blog

Book Review – The Bulletproof Diet

Link to the book

Review

The book is written by Dave Asprey, one of the first biohackers to actively publish his results online. The term biohacking describes manipulating the environmental inputs to your body to trigger a specific output. It covers a broad range of practices, from taking substances to improve mental performance to simply being more selective about food choices to improve overall body function. Dave used to weigh about 300 lbs, and his mental performance was severely impaired: he suffered from constant brain fog and an inability to focus.

In this book, Dave describes the diet that he developed over many years of trial and error to improve & regain his own health and mental performance. He calls it the “Bulletproof Diet”.

At the beginning, Dave talks about environmental toxins that harm our bodies and cause weight gain and brain fog. He mentions mold toxins that can occur in food that was not processed with enough quality control. Coffee beans can contain such mold, invisible to the human eye. In America, a lot of commercial coffee allegedly contains mold that causes an instant decline in mental performance when consumed.

The Bulletproof Diet does not encourage counting calories. The reasoning behind this is that it is simply not possible to accurately measure the calories you consume.
Dave describes our brain as separable into two parts.
The first part, which we can think of as the “reptile brain”, controls low-level processes like temperature regulation. This part always needs enough nutrients in order to stay alive. The other part, the “limbic brain”, is referred to as the “Labrador retriever” part of the brain. It controls the instincts that keep our species alive, like searching for food. When you deprive yourself of calories, this part triggers urges to eat anything in sight to make sure you do not die of starvation. The goal of the Bulletproof Diet is to balance your body’s hormones and keep the hunger hormones ghrelin (increases appetite) and leptin (decreases appetite) in check.

The diet itself is based on unprocessed foods and a lot of animal products and healthy fat sources. The macronutrient intake should consist of moderate protein, high fat and low to moderate carbohydrate. Dave recommends eating a lot of healthy, unprocessed fats and animal meats.
The book provides a roadmap with a list of foods ranked from red to green. Red indicates toxic and green indicates Bulletproof. He states that you should try to eat in the green range if you want to gain maximum mental and physical performance.
The Roadmap is available as a Poster here:

The diet is not a ketogenic diet, although carbohydrates are kept low during the day. The last meal of the day should contain some carbohydrates; the book recommends around 30g on most days, with 1-2 nights a week of a higher intake in the range of 100-150g. The carbs should come from healthy starches like sweet potatoes.

For breakfast, Dave recommends a high-fat meal. He also introduces a concept of intermittent fat fasting, a modified version of intermittent fasting where you are allowed to consume calories in the morning as long as no big insulin release is triggered. This way the benefits of fasting remain, your hunger signals are turned off, and your brain feels more energized. Consuming a fat-only meal does not raise insulin; therefore, Dave recommends a special coffee recipe called “Bulletproof Coffee” which contains butter and MCT oil.
Dave also created a product called “Brain Octane Oil” which contains only the C8 fat (caprylic acid) from MCT (medium-chain triglyceride) oil. The C8 fat is extremely potent and very easily converted to ketones in your body.
Dave states that making Bulletproof Coffee with Brain Octane Oil enables your body to enter nutritional ketosis (> 0.5 mmol) with just one cup of coffee in the morning while fasting until lunch.

The last chapter explains that different cooking methods can have negative or positive effects on your body. Smoking, frying, and grilling can damage foods, and consuming them can cause inflammation in the body.
The book ends with a whole list of Bulletproof recipes to allow you to have a vast variety of dishes in your diet.

Rating

It is definitely noticeable that Dave put a lot of research and trial and error into developing the Bulletproof Diet. Consuming all that information condensed into a book is a very effective way to learn about optimizing your body and mental performance. The recommendations are solid, backed by scientific references and based on current research. The book is also not too complex to understand, as Dave gives some great explanations.
The Bulletproof Coffee recipe may be a bit of marketing for his own products, but the products have valid science behind them.

My overall rating is 8/10.

Quotes

“The best thing you can do to live a long time is to eat the highest-quality food.”

“Eating carbs in the morning will set you up for an energy spike and crash along with food cravings throughout the entire day. If you decide to test this for yourself, it will be blindingly obvious. Try having just Bulletproof Coffee instead of your usual breakfast and see how long it takes you to want food. For most people, it turns off the desire for food for at least 5 to 6 hours.”

“Leptin is produced by fat cells, and your leptin levels are proportionate to your body-fat levels. This means that the fatter you are, the more leptin you have in your body.”

Actions

I have personally tried many different diet variations, so naturally I had to try the Bulletproof Diet on myself.
I’ve moved all of my carbohydrates towards the evening and eat a higher-fat diet during the day.
I started consuming Bulletproof Coffee after 12 hours of overnight fasting to keep hunger levels down and push my first real meal to 2 pm. That first meal is always high in fat with moderate protein.

I also bought the Brain Octane product and put it in my morning coffee. I measured my ketones after consuming the coffee and mostly hit levels above 0.4 mmol, which means my body is using ketones for fuel and the product actually works.

I noticed that I have much better focus during the day and I even lost some fat without much food cravings.

The only negative effect the diet had for me was that I lost a little bit of strength in the gym, probably because my carbohydrate intake was too low for my activity level. If you are doing heavy strength training, I would suggest eating at least 100-200g of carbs on most nights.

Recommendation

I recommend this book to anyone interested in optimizing body and mental performance, as well as anyone who has tried to lose weight in the past by restricting fat and overall calories so much that they just felt weak and miserable. With the Bulletproof Diet approach, it is certainly possible to lose weight in a healthier way.

WordPress on Google Container Engine

In this post, I’m going to describe how to deploy the stateless Docker container image from one of my last posts onto Google Container Engine. Container Engine is a hosted Kubernetes service, which means this setup is based on Kubernetes-specific deployment configurations. Building our configurations on Kubernetes decouples us from the cloud provider, enabling us to switch to another Kubernetes-based service or even host our own cluster. Google Container Engine provides a fully managed solution with great flexibility for scaling up nodes, provisioning resources, and deploying Docker containers with all of our applications’ dependencies onto the nodes. We are also going to use the hosted MySQL service Cloud SQL for the WordPress database. This post is based on the stateless WordPress image from my previous post linked here.

Google Cloud Setup

If you haven’t already, register for Google Cloud and install the gcloud command-line tools. We are going to use the command line for certain tasks; it is a powerful tool to manage almost all of your cloud resources. You can download and install it from the Google docs here.

We also install another command-line tool called kubectl. This tool is specific to Kubernetes and is used to manage all the resources and configurations of the Kubernetes cluster.

gcloud components install kubectl

After you have installed the command-line tools, connect them to your cloud account via the following command.

gcloud auth application-default login

Container Cluster Setup

First we have to set up a new container cluster. This can be done via the UI in the Google Cloud Console or via the following gcloud commands.

# gcloud create new container cluster

- List the available zones

For easier command-line use, we set our desired compute zone. This dictates in which data center the cluster is going to be started.
> gcloud compute zones list

- Select a zone from the list
> gcloud config set compute/zone $zone

- Create new cluster

To save costs, we provide some additional settings like machine type and node count.
> gcloud container clusters create example-cluster --machine-type g1-small --num-nodes 2 --no-enable-cloud-endpoints --no-enable-cloud-monitoring

- Fetch credentials to allow controlling the cluster via the kubectl command-line tool
> gcloud container clusters get-credentials example-cluster

# gcloud clean up
- To temporarily stop the cluster
> gcloud container clusters resize example-cluster --size=0

- To delete the cluster
> gcloud container clusters delete example-cluster

CloudSQL Setup

In our setup, we are going to use the 2nd generation MySQL instances.
I would advise you to use the guided UI to create an instance. Visit the docs here.
Make sure to save your newly set root password.
Some settings that I chose for simplicity and cost savings:

instance type: db-f1-micro
storage: 10GB
disktype: HDD
authorized networks: None
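
- Create the instance via gcloud (optional)

If you prefer the command line over the guided UI, a roughly equivalent gcloud command is sketched below. The instance name "example-db" and the region are assumptions; adjust them to your project.

# create a minimal 2nd generation Cloud SQL instance (name and region are placeholders)
> gcloud sql instances create example-db --tier=db-f1-micro --storage-type=HDD --storage-size=10GB --region=us-central1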

Now we have to create a service account to allow our Kubernetes Cluster to talk to the CloudSQL instance.


# Setup Kubernetes to connect to the Cloud SQL instance

[Link to Docs](https://cloud.google.com/sql/docs/mysql/connect-container-engine)

- Enable Cloud SQL API

[Link to UI](https://console.cloud.google.com/flows/enableapi?apiid=sqladmin)

- Create Service Account

[Link to UI](https://console.cloud.google.com/iam-admin/serviceaccounts/)

As Role select Cloud SQL > Cloud SQL Client.
Select "Furnish a new private key" with JSON format.
Download your private key.

- Create a custom User for access via the Container Engine Cluster

[Link to UI](https://console.cloud.google.com/sql/instances) select your database -> Access Control -> Users
Create a user "kubernetes" and choose a password.

- Register the Private Key as a secret in the Container Engine Cluster
> kubectl create secret generic cloudsql-instance-credentials --from-file=credentials.json=downloaded-privatekey.json

- Register User/Password as a secrets in the Container Engine Cluster
> kubectl create secret generic cloudsql --from-literal=username=kubernetes --from-literal=password=kubernetes

- List saved secrets for control
> kubectl get secrets

Importing Data into Cloud SQL

There are multiple ways to import your exported .sql database dumps. There is now even an “Import” option in the Cloud Console on your instance page to import data via the UI.
I prefer to use phpMyAdmin to manage the database. With Docker, we can run phpMyAdmin locally and connect to the Cloud SQL instance. For this to work, we first have to run the Cloud SQL Proxy application, which creates a tunnel from a local port on our machine to the database in the cloud.

Starting local Cloud SQL Proxy Container

Google provides a pre-built Docker container image which includes the proxy application. To start a container on our machine, we can execute the following command.

docker run --name cloud_sql_proxy -d \
    -v /etc/ssl/certs:/etc/ssl/certs \
    -v $PRIVATE_KEY:/credential.json \
    -p 127.0.0.1:3306:3306 \
    b.gcr.io/cloudsql-docker/gce-proxy /cloud_sql_proxy \
    -instances=$INSTANCE=tcp:0.0.0.0:3306 -credential_file=/credential.json

Replace $PRIVATE_KEY with the full path pointing to your previously downloaded service account private key file.
Replace $INSTANCE with the string found on your Cloud SQL instance details page under Properties > “Instance connection name”. If you cannot find it, the string has the format “project-id:region:instance-name”.
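
The connection name can also be printed with gcloud instead of digging through the UI (the instance name below is a placeholder):

# print the instance connection name
> gcloud sql instances describe example-db --format='value(connectionName)'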

Starting local PHPMyAdmin Container

Now that the Cloud SQL Proxy is running, we start another local Docker container with the publicly available phpmyadmin image and link it to the proxy. We map container port 80 to port 8090 on our local interfaces.

docker run --name phpmyadmin -d --link cloud_sql_proxy:db -p 8090:80 phpmyadmin/phpmyadmin

Open your browser at 127.0.0.1:8090.
You should see the phpMyAdmin login page. Log in with your root account credentials and import your WordPress database into the Cloud SQL instance.
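
Alternatively, while the proxy container is running, you can import the dump directly with a local mysql client through the tunnel. The database name and dump filename below are placeholders.

# import a dump through the proxy tunnel on 127.0.0.1:3306
mysql -h 127.0.0.1 -P 3306 -u root -p wordpress_db < wordpress-dump.sql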

Uploading your WordPress Docker image to Google Container Registry

Before we are able to run our WordPress image on our cluster, we first have to upload it to Google Container Registry. The Registry is a hosted Docker image repository service which allows us to upload custom-built images and access them from our Container Engine cluster. To enable image uploads, we have to activate the Registry API first: in your Google Cloud Console project, go to API Manager -> search for “Registry” -> select and activate Google Container Registry.

The sample folder in the wordpress-stateless repo contains the Dockerfile that I’m using for this example.

In the sample folder, execute the following commands.

# build and tag the image locally with the Dockerfile in the current dir "." 
docker build -t my-wordpress .
# tag the image with the full url where the image will be hosted
docker tag my-wordpress us.gcr.io/$PROJECT_ID/my-wordpress:v1
# upload the image to Google Container Registry
gcloud docker -- push us.gcr.io/$PROJECT_ID/my-wordpress:v1

Replace $PROJECT_ID with your project id.

Kubernetes

Deployment

Now we want to create a deployment configuration for our WordPress container image. For this, we create a new wordpress-deployment.yml file in the sample folder with the following contents.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-wordpress
spec:
  replicas: 1
  revisionHistoryLimit: 3
  template:
    metadata:
      labels:
        app: my-wordpress
    spec:
      containers:
        - image: $IMAGE_URL
          name: my-web
          env:
            - name: WORDPRESS_DEV
              # Show Error Logs
              value: "true"
            - name: WORDPRESS_DB_HOST
              # Connect to the SQL proxy over the local network on a fixed port.
              value: 127.0.0.1:3306
            - name: WORDPRESS_DB_NAME
              value: $WORDPRESS_DB_NAME
            # These secrets are required to start the pod.
            # [START cloudsql_secrets]
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cloudsql
                  key: password
            - name: WORDPRESS_DB_USER
              valueFrom:
                secretKeyRef:
                  name: cloudsql
                  key: username
            # [END cloudsql_secrets]
          ports:
            - containerPort: 80
              name: wordpress-nginx
        # Change $INSTANCE here to include your GCP
        # project, the region of your Cloud SQL instance and the name
        # of your Cloud SQL instance. The format is
        # -instances=$PROJECT:$REGION:INSTANCE=tcp:3306.
        # [START proxy_container]
        - image: gcr.io/cloudsql-docker/gce-proxy
          name: cloudsql-proxy
          command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                    "-instances=$INSTANCE=tcp:3306",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
            - name: ssl-certs
              mountPath: /etc/ssl/certs
            - name: cloudsql
              mountPath: /cloudsql
        # [END proxy_container]
      # [START volumes]
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials
        - name: ssl-certs
          hostPath:
            path: /etc/ssl/certs
        - name: cloudsql
          emptyDir: {}
      # [END volumes]

Replace $IMAGE_URL with the previously tagged full image url.
Replace $WORDPRESS_DB_NAME with the name of your imported database.
Replace $INSTANCE with the same string “Instance connection name” used above for the local cloudsql proxy.

The deployment configuration looks a bit like a docker-compose.yml file. There are some similarities, like the ability to define multiple containers and to define volumes that are mapped into the containers.
Kubernetes has a concept called Pods, which allows us to deploy multiple containers together as a single entity onto a node. Since we need the cloudsql-proxy container to connect to the Cloud SQL database instance, this is a perfect use case for a deployment configuration with a pod consisting of two containers: our WordPress image and the cloudsql-proxy image.

ENV Variables

We use the env config setting to pass data into our WordPress image. The image contains startup scripts that read those env variables and reconfigure the wp-config.php file.
To avoid storing sensitive password information in the deployment configuration file, we can use another Kubernetes feature that allows us to reference a secret defined in the cluster. In the Cloud SQL Setup section we registered the database user and password in our cluster; now we can reference them in our configuration.
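
To double-check what the pod will see, you can inspect the registered secrets with kubectl; note that secret values are stored base64 encoded:

# show the full secret, values are base64 encoded
kubectl get secret cloudsql -o yaml
# decode a single field for verification
kubectl get secret cloudsql -o jsonpath='{.data.username}' | base64 --decode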

Volumes

In the volumes config setting we define three different data volumes for use with the volumeMounts setting in the container definition. The first volume, “cloudsql-instance-credentials”, references the service account private key secret in the cluster. The volume “ssl-certs” maps a host directory from the cluster node into the container; this is necessary to provide access to certificate authority files for the cloudsql-proxy. The volume “cloudsql” is an empty directory where the cloudsql-proxy can save a file socket entry, which could then be referenced in multiple containers. In this config, however, we bind the cloudsql-proxy to port 3306 and connect from the WordPress container via localhost.

As you can see, only the cloudsql-proxy container uses volumeMounts to reference the defined volumes. The WordPress container does not need any outside volumes because it contains all the WordPress files inside the image we built beforehand. For the wp-content directory, we are using a plugin to store uploaded files in Google Cloud Storage. The setup is intentionally done this way so the container can be scaled onto multiple nodes without having to set up a replicated file system.
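
Because the WordPress container is stateless, scaling it out later is a single kubectl command (the replica count here is just an example):

# run three replicas of the pod instead of one
kubectl scale deployment my-wordpress --replicas=3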

Deploying the configuration onto the Cluster

Now we want to tell our cluster to download our image and deploy it. For this we use the Kubernetes command-line tool kubectl.

# Apply the deployment configuration
kubectl apply -f wordpress-deployment.yml
# View the deployment status
kubectl get deployments
# View the pods status
kubectl get pods
# Wait until both containers in the pod ( wordpress container / cloudsql proxy ) are running.
NAME READY STATUS RESTARTS AGE
my-wordpress-1422364771-s1vfv 2/2 Running 0 2m

Now let’s try to access the WordPress site to see if it is up and running. At the moment, the WordPress container is not yet accessible via a public IP. To check whether the deployment is running successfully, we can use the kubectl port-forward command to map a local port to a port on the container running in the cluster.

To listen on port 8080 locally, forwarding to port 80 in the container on the cluster, enter the following command.

# use the pod name listed by the "kubectl get pods" command
kubectl port-forward my-wordpress-1422364771-s1vfv 8080:80

Now when you visit 127.0.0.1:8080 in your browser, you will notice that you get redirected to your pre-configured WordPress domain with “/wp-signup.php?new=127.0.0.1:8080” appended to the URL. This is actually a sign that the site is running and that the database connection is working. WordPress stores your site URLs in the database; when it receives a request with no matching URL, it sends a redirect to the configured main site domain. To prevent this we would have to replace the URLs in the database with a tool like WP-CLI. For now, we skip this and move forward with putting a Service layer in front to access WordPress via a public IP.
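
For reference, the URL replacement could be sketched like the following, assuming WP-CLI is available inside the WordPress container (it is not part of the image built in this post; --dry-run only previews the changes):

# hypothetical: preview replacing the stored domain with the local address
kubectl exec -it my-wordpress-1422364771-s1vfv -c my-web -- wp search-replace 'www.mydomain.com' '127.0.0.1:8080' --dry-run --allow-root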

Service

Since our container pods can be started or scaled onto any node inside our Kubernetes cluster, we have to create a Service configuration to provide a unified way to access our WordPress installation. A Kubernetes Service provides an abstraction layer on top of the deployments which automatically resolves the current location of our containers inside the cluster. The Service can also act as a load balancer when you scale your containers onto multiple nodes, and it can create a public IP address so anyone can access the site over the Internet.

We create a new file wordpress-service.yml with the following contents.

apiVersion: v1
kind: Service
metadata:
  name: wordpress-service
  labels:
    app: public-webservice
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: my-wordpress

Type

There are different types of services. We are using the “LoadBalancer” type.
On Google Container Engine this creates a Google Network TCP Loadbalancer with a public IP address.

Deploying the service onto the Cluster

# Apply the service configuration
kubectl apply -f wordpress-service.yml
# View the service status
kubectl get services
NAME                CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
kubernetes          10.23.240.1    <none>            443/TCP        2d
wordpress-service   10.23.246.77   104.111.111.111   80:32744/TCP   1m

Note the newly created EXTERNAL-IP address. This address is allocated as a static IP in the Google Cloud Console networking backend. You will get billed for the static IP, so do not forget to release it when you no longer need it.

Now visit your EXTERNAL-IP in the browser. If everything worked correctly, you should again get redirected to your WordPress domain name with the “/wp-signup.php..” suffix. To be able to browse the site, we can temporarily add an entry to the /etc/hosts file with our configured WordPress domain name and the EXTERNAL-IP.

# Add Domain to the local name resolution
sudo sh -c "echo '104.111.111.111 www.mydomain.com' >> /etc/hosts"

On Windows, this file is in another location. You can google it, or perhaps rethink your OS selection.

Now when you refresh your browser, you should see your WordPress site. If not, try clearing your browser’s DNS cache or check the contents of your /etc/hosts file.

Next, you want to create a public DNS record for your domain pointing at the EXTERNAL-IP so anyone can visit your site. There are many DNS services that let you edit the DNS settings for your domain.

Closing

The setup may seem quite complex at first, but if you want to leverage the flexibility and scalability of a Kubernetes cluster in conjunction with the power of Google Cloud, creating only three configuration files (Dockerfile, deployment.yml, service.yml) is not that complex compared to building a similar setup yourself. Using Container Engine, you can also deploy other types of applications next to your WordPress site to build any kind of service.

You can improve and tweak this setup to fit your needs. I personally added another load-balancer layer based on nginx to manage my sites and do SSL termination. I’m also going to add a caching container to store session information and other shared data to improve container scaling.

If you have any questions about the setup, feel free to leave a comment below.

 

Book Review – Delivering Happiness

Link to the book
Website of the book

The book was recommended on a podcast called “The Random Show” hosted by Tim Ferriss and Kevin Rose.

Review

The book is about an American company called Zappos and is written by its CEO, Tony Hsieh. He describes the challenges and victories he encountered while building the company, which was later acquired by Amazon for stock worth 1.2 billion dollars. Zappos is an e-commerce website mainly focused on the shoe business. Tony invested in Zappos with money he earned from selling his first internet company to Microsoft.
The book is structured in three parts.
Profits:
For a company to survive, the cash flow has to be positive over the long term. Tony explains how he first invested in Zappos to allow the company to grow and mature its business model. As the invested money was running out, Zappos looked for more investors, but at the time nobody was willing to invest in them. Tony then invested almost all of his own net worth into the company while working hard to make it profitable. This chapter shows that while your business may be growing, without the support of big investors, being cash-flow positive is a necessity for survival.

Profits and Passion:
This part of the book emphasizes the passion you have to hold for your role and responsibilities in the company. It includes many e-mails from Tony that were sent to the whole company, containing good and sometimes bad news.
Zappos decided to put a lot of effort and emphasis on company culture. They set out to make the workplace a place where people actually look forward to going on a daily basis. The key is to value work/life integration instead of work/life separation.
Zappos took a very different approach to management and leadership. They defined a set of core values that define the company. Those values are:
– Deliver WOW Through Service
– Embrace and Drive Change
– Create Fun and A Little Weirdness
– Be Adventurous, Creative, and Open-Minded
– Pursue Growth and Learning
– Build Open and Honest Relationships With Communication
– Build a Positive Team and Family Spirit
– Do More With Less
– Be Passionate and Determined
– Be Humble
They made sure every employee knows the core values of Zappos and, most importantly, acts according to them. Tony explains the critical process of hiring people who fit the company culture. They developed a pipeline of employee roles which most new hires go through, earning promotions along the way. This pipeline gives employees more control over their success in the company while removing the direct dependency on a single employee for specific tasks: there will always be someone else going through the pipeline who is capable of doing the same tasks.

Profits, Passion and Purpose:
The last part of the book goes into the long-term accomplishments and sustainability of the company. Tony explains that for a company to have success over a long period of time, passion is almost always one of the most important factors. Zappos put an enormous emphasis on customer experience, which in turn resulted in returning customers, allowing the business to stay relevant and gain market share over time.
In the end, Tony explores the pursuit of real happiness and explains its three major components.
Pleasure: Short term high, shortest lasting.
Passion: Also known as flow, peak performance meets peak engagement where time just flies by, second longest lasting.
Higher Purpose: Being part of something bigger than yourself that has meaning to you, the longest lasting.

Finally, Tony draws the parallels between true happiness and a successful business. The only differing element is the short-lasting one: pleasure, which is replaced by profits. The passion and purpose elements are the same and are the key to long-term success.

Rating

My overall rating is 9/10.

Quotes

“Your personal core values define who you are, and a company’s core values ultimately define the company’s character and brand. For individuals, character is destiny. For organizations, culture is destiny.”

“Never outsource your core competency”

“I was now learning that alignment with shareholders and the board of directors was just as important”

Actions

Having the goal of owning a company with a team of employees, I will definitely make the effort to build a good culture while building the team.
I will also adopt the concept of employee pipelines as soon as it starts to make sense for the company.
In my own process of finding true happiness, I will focus on my passions (information technology, health, music) while aspiring to ultimately find my purpose in life.

Recommendation

I recommend this book to anybody who is entrepreneurial and wants to learn more about building a company culture and about the relationship between work and happiness, and moreover to everyone who wants to know more about the story of Zappos.

Book Review – Eat Fat Get Thin

Link to the book

I’ve read a lot of different books about health and nutrition since I started reading regularly.
I came across Dr. Hyman’s book via a podcast from Lewis Howes in which Dr. Hyman was interviewed about nutrition. You can find the podcast here.

Review

The book starts off by reviewing past research about the causes of the so-called metabolic syndrome. The term is often used to describe the following conditions originating from a bad diet and unhealthy habits:
– Obesity
– Elevated blood pressure
– Elevated fasting glucose
– High triglycerides
– Low HDL ( good ) cholesterol levels

In the past, most researchers blamed a high-fat diet, especially one high in dietary cholesterol and saturated fats, as the main cause of metabolic syndrome. Dr. Hyman invalidates that research and shows that the real cause of metabolic syndrome is actually an excess of refined carbohydrates, mostly sugar.

The book shows that eating a diet high in good fats leads to a healthy environment for our bodies which allows for long-term weight loss to occur.
The most recommended foods high in healthy fats are:
– Fatty fish ( Salmon )
– Nuts and Seeds ( Macadamia nuts, flax seeds )
– Oils ( Olive oil, Coconut oil )
– Avocados

It also shows that consuming saturated fats combined with healthy, fiber-containing vegetables does NOT cause metabolic syndrome. The book also mentions that our gut bacteria play a central role in our health and that eating a lot of fiber is important for achieving a healthy body. The main message is that eating the right food is the most powerful medicine.
Dr. Hyman provides a 21-day diet for resetting your gut and improving your metabolism.
At the end, the book also contains a lot of healthy high-fat recipes.

Rating

The book is very well-written and contains a lot of science references. Dr. Hyman presents the information in an understandable way and gives good recommendations.

My overall rating is 8/10.

Quotes

“When you have enough omega-3 fats in your diet, the effect of saturated fat on your cholesterol is either neutral or beneficial.”

Actions

I’ve already been eating a healthy diet consisting of whole foods for some years now. I am also really active, with weight training multiple times a week. I do a caloric restriction period once a year to shed some body fat. I always went very low-fat during those periods, but after hearing about the benefits of a higher fat intake on health and overall metabolism, I’m going to combine a higher-fat diet with caloric restriction to see how it affects my body.

Recommendation

I can highly recommend this book to anyone who wants to learn about the current state of research on the very common diseases of metabolism and obesity, and to anyone who is not happy with his or her current body condition and wants to improve not only how it looks but also how it feels.

New Year Resolutions

Almost everyone makes them, but almost no one can stick to them. Myself included: for the last few years I have consistently made New Year’s resolutions; some have turned out great and some have failed miserably.

The biggest mistake is that people tend to set overblown goals that require a complete change of their habits. Habits acquired over many years are not easy to break just because a new year begins. For example, someone who never exercises resolves to go to the gym 4 times a week. Such a drastic change greatly increases the likelihood of failure. The trick is to lower the bar for success so the goal is achievable, which leads to better adherence in the long term.

My recommendation for creating successful New Year’s resolutions is to start small and modify an already existing habit, like eating when you are hungry: instead of eating a cookie, you eat something healthy like some nuts. Habits are based on a trigger signal. If you can change the action tied to the trigger, you can rewire your brain and therefore change a habit.

There is one big exception to this, and that’s when you are dealing with addiction. I smoked cigarettes consistently for more than 5 years. At my peak, while I was in the military, I smoked more than 20 cigarettes a day. I made it my New Year’s resolution to stop two years in a row, and it never worked out.

2015

In 2015 I again made the effort to stop smoking. Knowing of my past failures, I decided not to stop on the first of January but a month earlier, on the first of December. In the process of quitting I read the book “The Easyway” by Allen Carr. I highly recommend this book to anyone who is struggling with smoking. Since December 1, 2015, I am officially a non-smoker.

2016

In 2016 I made the resolution to read more. I used to hate reading books of any kind. I bought a Kindle and started small, setting the bar for success at 1 hour a week. Now, one year later, I have built the habit of reading every night before I go to sleep, and it has become my favorite way to relax. The books I have read so far have all changed my mindset for the better, and I’m excited to read many more perspective-changing books in the future.

2017

2017 is the year in which I’m writing this post. My main resolution for this year is to write more; that’s also the reason why I started this blog. My English writing needs a lot of improvement, which I’m looking to gain through practice over the coming years of my blogging journey.

Stateless WordPress Docker Container

In this post, I will show you how to build a stateless WordPress setup in a Docker container. The container can later be used in production or as a reproducible development environment.

Motivation

My main motivation for building a stateless Docker container was to be able to deploy and scale the container without having to set up a clustered filesystem.
For this to work, we have to make some modifications and give up some flexibility in favor of much better maintainability and easier scalability.

The following setup can also be used as a local development environment.

All the code can be found on GitHub in the wordpress-stateless repository.

Core Concepts of the Setup

The main difference from a traditional Docker WordPress container setup is that we do not mount Docker volumes into the container, neither for the WordPress installation itself nor for the wp-content directory. We install all our plugins and themes at docker build time via the Dockerfile.

This way the “docker build” command creates a self-contained image with all plugins and themes, which we can tag via “docker tag” to assign it a specific build version. Now we can distribute the container to multiple hosts and don’t have to worry about the filesystem or any kind of volume mounts.
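As a rough sketch, the build-tag-push workflow looks something like this (the registry host below is a placeholder of my own, not part of the repo):

```shell
# Sketch of the build-tag-push workflow. The registry host is a
# placeholder; replace it with your own registry.
VERSION="4.9"
IMAGE="wp-stateless-base"
TAG="${IMAGE}:wp-${VERSION}"
echo "$TAG"

# docker build -t "$TAG" .
# docker tag "$TAG" "registry.example.com/$TAG"
# docker push "registry.example.com/$TAG"
```

Any host that can pull from the registry can then run the exact same image version.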

As mentioned before, we lose some flexibility. The convenience of installing plugins via the WordPress Admin is no longer an option: because we have no Docker volume mounts, a container restart resets the filesystem to the initial state of the tagged Docker image layer.

But how should we handle media uploads, you might ask?
For this, we use a CDN (Content Delivery Network) service in conjunction with a WordPress plugin that supports moving the uploaded files to the CDN and rewriting all media links to point to the CDN.

For my setup, I chose Google Cloud Storage and the WP-Stateless WordPress plugin.

Security

Security with WordPress should always be taken seriously. If an attacker manages to get access to your container and embeds malware in your local files to serve to your users, you can simply restart your container to reset it back to the build state.
I describe how WordPress itself is updated in the Production section later on.

Docker images

I have built multiple Dockerfiles, some of which depend on each other. It may sound complex, but it actually makes things easier when working with the images. Let’s first build them via the build script found in the root of the GitHub repo.

./build-images.sh 4.7.2 

We have to provide a version for the images. I’m using the WordPress core version that is defined in the base image Dockerfile, but you could provide a different version here.

Base container

FROM php:7.1-fpm

# install the PHP extensions we need
RUN apt-get update && apt-get install -y sudo wget unzip vim mysql-client libpng12-dev libjpeg-dev && rm -rf /var/lib/apt/lists/* \
	&& docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr \
	&& docker-php-ext-install gd mysqli opcache


# set recommended PHP.ini settings
# see https://secure.php.net/manual/en/opcache.installation.php
RUN { \
		echo 'opcache.memory_consumption=128'; \
		echo 'opcache.interned_strings_buffer=8'; \
		echo 'opcache.max_accelerated_files=4000'; \
		echo 'opcache.revalidate_freq=60'; \
		echo 'opcache.fast_shutdown=1'; \
		echo 'opcache.enable_cli=1'; \
	} > /usr/local/etc/php/conf.d/opcache-recommended.ini

# wordpress version from : https://github.com/docker-library/wordpress/blob/master/php7.0/fpm/Dockerfile
ENV WORDPRESS_VERSION 4.9
ENV WORDPRESS_SHA1 6127bd2aed7b7c0a2c1789c8f17a2222a9081d6c

# upstream tarballs include ./wordpress/ so this gives us /usr/src/wordpress
RUN curl -o wordpress.tar.gz -SL https://wordpress.org/wordpress-${WORDPRESS_VERSION}.tar.gz \
	&& echo "$WORDPRESS_SHA1 *wordpress.tar.gz" | sha1sum -c - \
	&& tar -xzf wordpress.tar.gz -C /usr/src/ \
	&& rm wordpress.tar.gz \
	&& chown -R www-data:www-data /usr/src/wordpress


##############################################################################################
# WORDPRESS CUSTOM SETUP
##############################################################################################

# extract wordpress on build
RUN tar cf - --one-file-system -C /usr/src/wordpress . | tar xf -

# add custom scripts
ADD vars.sh /vars.sh
ADD entrypoint.sh /entrypoint.sh
ADD plugins.sh /plugins.sh
RUN chmod +x /entrypoint.sh /vars.sh /plugins.sh


# execute custom entrypoint script
CMD ["/entrypoint.sh"]

The base container is largely based on the official WordPress Docker container. The Dockerfile downloads the WordPress installation for the version defined in the Dockerfile and verifies the download with the “sha1sum” command. If you want to install a different version of WordPress, you have to change the two ENV variables WORDPRESS_VERSION and WORDPRESS_SHA1.

I based the container on the latest PHP-FPM 7.x base image.
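When bumping WORDPRESS_VERSION, you can compute the matching checksum yourself; here is a small helper of my own (not part of the repo) that does it:

```shell
# Compute the SHA1 checksum of a stream; the result is what goes into
# the Dockerfile's WORDPRESS_SHA1 variable when bumping the version.
sha1_of() { sha1sum | awk '{print $1}'; }

# For a real release (requires network access):
# curl -sSL "https://wordpress.org/wordpress-4.9.tar.gz" | sha1_of
```

The build then fails fast if the downloaded tarball does not match the pinned checksum.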

CLI Container

FROM wp-stateless-base:wp-4.9

##############################################################################################
# WORDPRESS CLI SETUP
##############################################################################################

# install less for wp-cli support , and xterm for terminal support
RUN apt-get update && apt-get install -y less
ENV TERM=xterm

# install wp-cli
RUN curl -o /usr/local/bin/wpcli https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar \
		&& chmod +x /usr/local/bin/wpcli

# add wpcli wrapper
ADD wpcli.sh /usr/local/bin/wp
RUN chmod +x /usr/local/bin/wp

# add tab completion
ADD wp-completion.bash /wp-completion.bash
RUN echo "source /wp-completion.bash" >> ~/.bashrc

##############################################################################################
# CUSTOM ENTRYPOINT
##############################################################################################
ADD entrypoint.sh /entrypoint_cli.sh
RUN chmod +x /entrypoint_cli.sh

ENTRYPOINT ["/entrypoint_cli.sh"]

As you can see, the CLI container is based on the base container that you just saw. The CLI container adds the WP-CLI command line utility. We can use this container to set up fresh WordPress installations or perform database operations via WP-CLI commands. The image also supports SEARCH/REPLACE in the database via ENV variables. This is useful if you want to download your production database and replace the URLs with a local domain.
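The ENV-driven search/replace could be sketched like this. This is a hypothetical dry-run version of my own: the variable names are illustrative, the real logic lives in the image’s entrypoint script, and this sketch only prints the wp-cli command it would run:

```shell
# Hypothetical sketch of ENV-driven search-replace. Variable names are
# assumptions; this version only prints the wp-cli command it would run.
maybe_search_replace() {
  if [ -n "${SEARCH_DOMAIN:-}" ] && [ -n "${REPLACE_DOMAIN:-}" ]; then
    echo "wp search-replace '${SEARCH_DOMAIN}' '${REPLACE_DOMAIN}' --all-tables"
  else
    echo "no search/replace requested"
  fi
}

SEARCH_DOMAIN="https://www.example.com"
REPLACE_DOMAIN="http://www.mywordpress.local"
maybe_search_replace
```

The underlying wp-cli command, `wp search-replace`, handles serialized PHP data in the database correctly, which is why it is preferable to a plain SQL UPDATE.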

NGINX

FROM wp-stateless-cli:wp-4.9

# install nginx
RUN apt-get update && apt-get install -y nginx && rm -rf /var/lib/apt/lists/*


##############################################################################################
# NGINX SETUP
##############################################################################################
RUN rm -r /etc/nginx/sites-enabled/*
ADD default.conf /etc/nginx/sites-enabled/default.conf
ADD wordpress.conf /etc/nginx/global/wordpress.conf
ADD restrictions.conf /etc/nginx/global/restrictions.conf


##############################################################################################
# CUSTOM ENTRYPOINT
##############################################################################################
ADD entrypoint.sh /entrypoint_nginx.sh
RUN chmod +x /entrypoint_nginx.sh

# reset entrypoint from parent cli
ENTRYPOINT []
CMD ["/entrypoint_nginx.sh"]

Last but not least, I have built an image that includes Nginx as a web server to execute the PHP files via FastCGI. The container depends on the base image, which means that the web server and PHP-FPM run inside the same container. I like this setup because it is easier to deploy. I intentionally didn’t include Nginx in the base image, so you can decide for yourself how to set up your web server. If you want, you can run Nginx in a separate container and call PHP-FPM via sockets.

Local Site Setup

The setup folder in the repo is an example of how to use the CLI image to initialize a new WordPress database.


# Wordpress Stateless Setup

Used to initialize a fresh Database and generate a wp-config.php file.

- Adjust settings in wp-cli.yml
- Run Container with mapped output folder to save wp-config.php
- Execute setup.sh via the CMD parameter

> docker run --name wp_stateless_setup --rm  --interactive \
-v $(pwd)/output:/var/config \
-v $(pwd)/wp-cli.yml:/var/www/html/wp-cli.yml \
-v $(pwd)/setup.sh:/var/www/html/setup.sh \
wp-stateless-cli:wp-4.7.5 /var/www/html/setup.sh

Let’s initialize a new WordPress database together. If you don’t have an existing MySQL instance available, you can start a local database via the docker-compose file in sample/docker-compose.yml.

cd sample
docker-compose up -d db

Now we have to modify the wp-cli.yml config file with our database connection details. Note that since we do not link to the database container, we cannot use “127.0.0.1” as the host address; instead, we use the docker0 bridge IP address to connect to the database. You can find the bridge IP via “ifconfig docker0”. In my case, the IP of the bridge is “172.17.0.1”.
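On systems that ship iproute2 instead of ifconfig, the same address can be found with the “ip” command; here is a small helper of my own (not from the repo) that pulls the address out of its output:

```shell
# Extract the IPv4 address from a line of "ip -4 -o addr show" output.
# Usage on a Docker host:  ip -4 -o addr show docker0 | extract_ip
extract_ip() { awk '{print $4}' | cut -d/ -f1; }
```

The fourth whitespace-separated field of the one-line (-o) output is the CIDR address, so stripping the /prefix leaves the plain IP.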

Let’s execute the setup.sh script inside the CLI container.

cd ../setup
docker run --name wp_stateless_setup --rm --interactive \
-v $(pwd)/output:/var/config \
-v $(pwd)/wp-cli.yml:/var/www/html/wp-cli.yml \
-v $(pwd)/setup.sh:/var/www/html/setup.sh \
wp-stateless-cli:wp-4.7.2 /var/www/html/setup.sh

After the script has finished executing with no errors raised, there should be a wp-config.php file in the output folder.

Now we can stop the Database container.

cd ../sample/
docker-compose down

Sample Dockerfile

First, we have to move the generated wp-config.php file into the sample/wordpress directory.

mv ../setup/output/wp-config.php ./wordpress/

The provided Dockerfile in the sample folder shows you how to use the previously built images and customize them with your own plugins and themes.

FROM wp-stateless-nginx:wp-4.7.5

##############################################################################################
# CUSTOM PHP CONFIG
##############################################################################################
RUN { \
  		echo 'upload_max_filesize=10M'; \
  		echo 'post_max_size=10M'; \
  	} > /usr/local/etc/php/conf.d/upload.ini

##############################################################################################
# WORDPRESS Config
##############################################################################################
ADD ./wordpress/wp-config.php /var/www/html/wp-config.php
# chown wp-config.php to root
RUN chown root:root /var/www/html/wp-config.php

##############################################################################################
# WORDPRESS Plugins Setup
##############################################################################################
RUN mkdir /plugins

# Add all plugin files
ADD ./wordpress/plugins/ /plugins

# Execute each on its own for better caching support
RUN /plugins.sh /plugins/base
RUN /plugins.sh /plugins/security

# Delete Plugins script and folder
RUN rm /plugins.sh && rm /plugins -r

# ADD OWN CUSTOM PLUGINS
ADD ./plugins/my-plugin /var/www/html/wp-content/plugins/my-plugin

##############################################################################################
# WORDPRESS Themes Setup
##############################################################################################
ADD ./themes/my-theme /var/www/html/wp-content/themes/my-theme

We base this image on the WordPress container that includes the Nginx web server.
The interesting part starts at line number 20. We add the sample/wordpress/plugins folder into the container and execute a script called /plugins.sh. This script is provided by the base container that we built before. It parses the file passed as its first argument, downloads the listed plugins from the central WordPress plugin directory, and stores them in wp-content/plugins inside the container.
The syntax of the plugin file is:

# WordPress plugin name ( downloads latest version )
pluginname
# Specify plugin with a specific Version
pluginname version
# Plugin ZIP file via URL Download
pluginfileurl
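A minimal parser for a file in this format could look like the following sketch. This is illustrative only; the actual plugins.sh in the repo also performs the downloads:

```shell
# Sketch of parsing a plugin list file: skip comments and blank lines,
# then split each remaining line into plugin name and optional version.
parse_plugins() {
  while read -r name version; do
    case "$name" in ''|'#'*) continue ;; esac
    if [ -n "$version" ]; then
      echo "install $name at version $version"
    else
      echo "install latest $name"
    fi
  done < "$1"
}
```

Splitting each line with `read -r name version` is what makes the optional version column work: a line with a single token leaves the version empty.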

At the end of the Dockerfile we add our own local plugins and themes. I provided one sample plugin and theme.

Let’s build our final Docker image. We use the docker-compose command to simplify the parameters used for building the image.

# execute in sample folder
docker-compose build

After the image has been built successfully, we can start the full docker-compose setup with the database.

# execute in sample folder
docker-compose up -d
# verify containers running
docker-compose ps

Now we can visit the URL we provided in setup/wp-cli.yml. The sample URL is “www.mywordpress.local”. We can add a local resolution for this domain via the following command.

sudo /bin/su -c "echo '127.0.0.1 www.mywordpress.local' >> /etc/hosts"
cat /etc/hosts

The admin can be found at http://www.mywordpress.local/wp-admin/.
Default login is “root” / “root”.
In the Admin UI > Plugins, our installed plugins are listed and can be activated. The local plugin “My-Plugin” should also be listed there.
The local theme can be enabled in the Network Admin > Themes.

Docker Compose

The sample folder contains a docker-compose.yml file for local development. The file also simplifies the build process of the container.

I suggest that you copy the contents of the sample folder into its own directory and version control it separately.

Plugin development

volumes:
- ./plugins/my-plugin:/var/www/html/wp-content/plugins/my-plugin # Plugin development
- ./themes/my-theme:/var/www/html/wp-content/themes/my-theme # Theme development

I have added a local mount for the plugin “my-plugin” to the docker-compose.yml config. This way you can edit the plugin code and see the changes instantly when you run the setup locally.
The mount overrides the files that were added at build time via the Dockerfile.

Theme development

The same concept applies to Themes. Just mount your local themes when you want to make changes and see them instantly reflected in the Browser.

Production

To run your own image in production, you can start the container and configure it to point to a production database. To change the connection details you can either mount your wp-config.php into the container or you can overwrite some settings via ENV variables.

The base image contains a script vars.sh that filters the wp-config.php on the start of the container to replace some variables.

You can see for yourself which variables are supported and if you need some other variables you can easily modify the script to support them too.

The ENV variable “WORDPRESS_DEV” should either be set to “false” or not provided at all. The variable controls global error output and enables/disables the opcode cache.
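As an illustration of how such filtering could work, here is a hedged sketch of my own; it is not the actual vars.sh from the repo, and the ENV variable name is an assumption:

```shell
# Sketch of replacing a wp-config.php define from an ENV variable, in
# the spirit of vars.sh (names here are assumptions, not the repo's).
apply_db_host() {
  # $1 = path to wp-config.php
  if [ -n "${WORDPRESS_DB_HOST:-}" ]; then
    sed -i "s/define( *'DB_HOST', *'[^']*' *)/define('DB_HOST', '${WORDPRESS_DB_HOST}')/" "$1"
  fi
}
```

Because the substitution only happens when the variable is set, the baked-in wp-config.php values remain the defaults for local development.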

Session Sync

As soon as you deploy your WordPress containers to multiple hosts and start using a load balancer without sticky sessions, you have to set up a session store to sync your users’ session data.

WordPress Core Update

Each WordPress update requires a rebuild of the base Docker image, along with all images depending on it. It is wise to test your site locally with the new version and see if everything is working before you deploy your container to production.

Automatic updates

By default, minor updates (4.5 to 4.5.1, 4.5.1 to 4.5.2, etc.) are automated, as they often contain security fixes. This means that if we do not change the default config, our instances get the security updates too. But if your production container is restarted, the security update is lost. For this reason, it is important to check your mail for available security updates and rebuild the container as soon as possible to avoid any vulnerabilities.

Closing

So there you have it: the special setup that I’m using to run this blog. The way the container is set up, I now have a lot of flexibility in terms of choosing a hosting provider. I deployed this blog on Kubernetes in Google Container Engine, but I could easily switch to other providers like DigitalOcean or Amazon EC2 Container Service.

If you have any questions or improvements, I’m happy to hear from you in the comments below.


Blogception – How this Blog is setup

Welcome to my first Post on my personal Blog.
In this Post, I’m going to outline how I built this Blog and what technologies and hosting setup I’m using.

You may have already guessed which platform this blog is built on, just by looking at the current theme or checking the source of this page.
The Blog is running on WordPress.

WordPress is easy to set up and really extensible. In the past, I’ve tried and even built different blog engines, but I came back to WordPress.
After all, at the time of writing, WordPress is powering 25% of the Internet.

The setup

WordPress is the most popular CMS on the Internet, which also makes it lucrative for hackers to find security vulnerabilities and mass-target WordPress installations to distribute malware.
Therefore, security is a big priority in my setup.

The first step to more security is installing WordPress inside a Docker container. There are many WordPress Dockerfiles available on Docker Hub, ready to download. There is even an official Docker image provided; you can check it out here. I decided to build my own image for reasons I will explain later on. This post simply describes the architecture of my setup, not every detailed configuration.
I will write a follow-up post with a detailed explanation of the configurations and Dockerfiles and post them to my GitHub page.

Requirements

Stateless

An important requirement for me was to have a “stateless setup”. Running the setup in a container enables me to scale and replicate it to multiple nodes. To achieve this, I had to make some adjustments to how the Docker image is set up.

WordPress by itself is heavily stateful. The WordPress code is saved on the filesystem (which also represents a state in some way). For instance, your wp-config.php, which contains important configuration details, is saved on the filesystem. The installed themes and plugins are saved in the wp-content directory, and of course, the database contains all the posts and configuration of your site. Updates of WordPress itself also need to update the PHP code stored on the filesystem.

The database component will never be truly stateless, so we will deploy MySQL as a separate instance in its own container. The scaling of the database itself should also be handled independently.

In a typical WordPress setup, we would install WordPress directly on the filesystem and modify the configuration via a text editor. We would add our own themes directly into the installation. This approach is hard to maintain, and if you want to run your WordPress blog on 2 separate nodes, you have to make a copy of all files. Updating WordPress via the UI would also not work with multiple nodes.

Support for different Environments

I also wanted to be able to set up different environments to develop locally and later deploy to test, staging, and production servers.

Easy way to handle WordPress Updates

WordPress updates should be testable and applicable to production nodes without too much manual work.

Steps to support all our requirements

Eliminating Filesystem state

So first I needed to get rid of the filesystem as the main state representation. There are different ways to solve this problem.

Version Control WordPress

One possibility would be to use Git to version control the WordPress code itself, including your own themes and plugin code. This way, one could write scripts that execute a “git pull” on your production nodes to update all production servers with the newest version of your code. This would also enable us to have different environments. However, any updates or plugin changes made via the WordPress Admin UI would have to be manually checked into Git and distributed across all nodes.

Use Docker to build a WordPress Version

Docker images can be tagged and stored in a central registry. Docker images are built via Dockerfiles. In a Dockerfile, we can download the WordPress Code and make our own modifications to it in a reproducible way. The official WordPress Docker image does exactly that, but it also creates a Docker Volume for the installation itself. When we run the official container, a separate Directory for this Docker Volume is created on our local filesystem. This enables the user to log into the WordPress Admin and install plugins and themes and after restarting the container the data is still available. This makes the Filesystem stateful and we do not want that.

To solve this, I built my own Dockerfile and skipped the creation of a Docker volume for the WordPress installation. I added all my own themes and plugins via the Dockerfile while building the image with “docker build”. This provides a big advantage: the state is now represented in the Dockerfile, and the filesystem is stateless; anytime the Docker container is restarted, the filesystem is reset to the state in the Dockerfile. The drawback is that any manual changes introduced via the WordPress Admin are lost during a restart of the container. But I can accept this drawback in exchange for a stateless filesystem and much better security.

Other ways

I’m sure there are other ways to deploy and scale a WordPress installation, but for this setup, I have used the approach via the Dockerfile.

WordPress Media Files

The media library also creates local state, with all the files stored in the uploads directory. To fix this, we can offload our media files to a CDN provider like Google Cloud Storage or Amazon S3. I have used a WordPress plugin for that.

Multiple Environments

To support multiple environments, we can either add an entrypoint script in the Docker container that replaces WordPress configuration variables, or fully map a wp-config.php file into the container via Docker volume mappings.
This, in conjunction with Docker Compose, enables us to run a local dev environment and use the same Docker image in production.

Frontend Server

This blog runs on the web, which means it is accessible via the HTTP protocol. This means that we need a web server which is able to execute the WordPress PHP code every time an HTTP request is made by the browser. I use Nginx together with PHP-FPM; Nginx executes the PHP code via FastCGI.

I added Nginx directly into the Dockerfile that includes the WordPress installation.

Google Cloud

There are many hosting providers that allow you to deploy a WordPress installation. Some are fully managed and others are more flexible. For my setup, I chose Google Cloud because it enables fine-grained resource control and scaling to multiple nodes. There is also extensive Docker support via a hosted Container Registry and Google Container Engine.

CloudSQL

For the database, I decided to use Google’s managed MySQL service, CloudSQL. I chose the managed service because it frees me from managing yet another piece of software. Monitoring the database, applying updates, and managing backups is a very resource-intensive task. For this reason, I’m willing to pay a little more than I otherwise would if I were running a MySQL instance myself.

Kubernetes

For the actual deployment of the Docker Container that includes WordPress, I’m using the open source Cluster Manager Kubernetes.

Kubernetes is developed by Google engineers together with the open source community. The concepts of Kubernetes are derived from a Google internal Cluster Manager called Borg. Kubernetes makes it easy to deploy Docker containers onto a Cluster of compute instances.

We could set up Kubernetes on any hosting provider, but this again requires maintenance to monitor and update all the services needed to run a Kubernetes cluster. Luckily, Google also provides a service for that, called Google Container Engine. Container Engine offers a fully managed master node, which is automatically updated. The worker nodes show up as normal compute instances running a special image which includes all the services required to run Kubernetes.

Summary

So that’s it for now. To summarize in one sentence: this blog is running on WordPress, versioned in a Docker image, served via Nginx, and deployed via Kubernetes, managed by Google Container Engine running on Google Compute Engine.
More detailed posts will follow and I will open source all of my Dockerfiles needed for this setup.