title: Continuously Deploying Microservices to Kubernetes using Google Container Engine and Google Container Registry
date: 2016-06-07
tags: containers, gke, gcr, google container engine, google container registry
author: Toon Verbeek
authorTwitterUrl:
gravatarhash: d4b19718f9748779d7cf18c6303dc17f
headerBackground: FAFAFA;
headerColor: 3DBDEE;
headerWidth: 30%;

Note: This post is intentionally detailed. You may wish to jump ahead to later sections if you have already set up a Kubernetes cluster or already have an application building on Wercker.

Introduction

Many software companies, ourselves included, have spent the last few years honing and optimising their CI/CD processes: adopting tooling, working out build automation, chaining testing pipelines, and integrating with deploy targets.

The New Challenges

Containers have brought a new set of challenges, and the paradigm of container registries and schedulers presents further challenges still: we now need to think about more than just build and deploy.

How You Want to Do Things Now

We want to build optimal containers, run a series of tests, push them to a container registry such as Google Container Registry or Docker Hub, and then deploy them to staging or production environments on a scheduler such as Google Container Engine (GKE) or Mesos.

Our Solution

At Wercker we've built tooling to enable that build, test, and deploy automation: steps are linked into pipelines, and pipelines are chained into workflows that automate part or all of the process. This gives our customers a way to fully leverage CI/CD in this new container and scheduler paradigm.

Creating a Managed Kubernetes Cluster on GKE

There are many ways of hosting Kubernetes clusters nowadays, and infrastructure providers increasingly support deploying new Kubernetes clusters on their clouds. One provider that shines, however, is Google Cloud Platform. Google offers a managed, hosted Kubernetes cluster (free for up to five worker instances), backed by worker instances provisioned on your Compute Engine account.

Getting Started and Prerequisites

Getting started is simple, and you have a couple of options for deploying your cluster: you can either do it from the Google Cloud Platform Console (web UI) or use the gcloud command provided by the Google Cloud SDK. We're going to demonstrate the latter.

First, install the Google Cloud SDK. Because the install.sh script provided in the archive uses binaries included in the download, it's recommended that you extract the Google Cloud SDK somewhere you will not modify or move it.

After you have installed the SDK, you will need to either re-open your shell or source your bash profile so that the Google Cloud SDK is on your PATH.
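One common way to do both of the above is Google's curl-based interactive installer (check the SDK documentation for the current instructions if this has changed):

$ curl https://sdk.cloud.google.com | bash   # download and run the interactive installer
$ exec -l $SHELL                             # restart your shell so gcloud is on your PATH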

Once that's done, you can install the Kubernetes command line tool (kubectl) using:

$ gcloud components install kubectl

You will also need to authenticate with your Google Cloud account; this will open your web browser with an OAuth consent screen:

$ gcloud auth login

And lastly, ensure you're using the correct project on Google Cloud Platform. Head over to the projects page in the IAM section of the Google Cloud Platform Console and copy the "Project ID" of the project in which you wish to create your new cluster. If you do not already have a project, go ahead and create one.

$ gcloud config set project $PROJECTNAME
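If you'd rather find the Project ID without opening the console, the SDK can also list your projects and their IDs (a convenience, assuming a reasonably recent SDK version):

$ gcloud projects list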

Creating your Kubernetes Cluster

Creating your managed cluster is extremely simple. By default (with no extra options set) you will create a 3-node cluster of n1-standard-1 instances (1 vCPU and 3.75 GB of memory each). You can change the zone to whichever one is closest to you.

$ gcloud config set compute/zone europe-west1-b
$ gcloud container clusters create my-cluster
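If you prefer to be explicit about (or change) those defaults, the same command accepts flags for the node count and machine type; the values below simply mirror the defaults described above:

$ gcloud container clusters create my-cluster \
    --num-nodes=3 \
    --machine-type=n1-standard-1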

This process shouldn't take too long. Once it has completed you will receive output that includes the cluster's master endpoint; after a little while you will be able to open this IP in a web browser (over HTTPS) to see the Kubernetes UI. Your username and password can be found with:

$ gcloud container clusters describe my-cluster

You will need to save the endpoint, clientCertificate, clientKey, and clusterCaCertificate values, as we'll need them later on to deploy our applications. Note: the clientCertificate, clientKey, and clusterCaCertificate values are base64-encoded, so you will need to decode them first (the -D flag below is for macOS; on Linux use base64 -d):

$ echo $clientKey | base64 -D
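If you'd rather not copy the values out of the describe output by hand, gcloud can print individual fields directly; something along these lines (the field path matches the describe output) should print the decoded CA certificate:

$ gcloud container clusters describe my-cluster \
    --format='value(masterAuth.clusterCaCertificate)' | base64 -D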

Without any extra work, kubectl will also be configured and usable on your new cluster. You can verify this by running:

$ kubectl get services

Setting Up a Service Account

Lastly, we need to set up a service account with a key, which we'll use to authenticate to Google Container Registry (GCR).

First list your service accounts, and then create a key for the default App Engine account.

# list accounts
$ gcloud iam service-accounts list
NAME                                    EMAIL
App Engine default service account      <id>@appspot.gserviceaccount.com
Compute Engine default service account  <id>@developer.gserviceaccount.com
# create key
$ gcloud iam service-accounts keys create \
    ~/key.json \
    --iam-account <your_id>@appspot.gserviceaccount.com
created key [4cdcca608f7a5832e8078ab16bae21f64bf0051a] of type [json] as [/Users/<username>/key.json] for [<id>@appspot.gserviceaccount.com]

The key has now been created and stored in your home directory. We'll use it later on to authenticate with GCR.
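As an optional sanity check, you can verify that the key works against GCR by logging in with Docker locally, using the same _json_key username that the push step will use later:

$ docker login -u _json_key -p "$(cat ~/key.json)" https://gcr.io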

We've now set up GKE & GCR! Time to deploy some containers.

Setting up Wercker

The application we're going to deploy consists of two containers: todo-service and todo-api. These two containers communicate over gRPC. The todo-service will not be exposed to the Internet; instead, todo-api will expose an HTTP API and will talk to todo-service to list existing todos and create new ones.

First, fork the two repositories: todo-service and todo-api.

Then, head over to Wercker and create a new project for each of the repositories.

Setting up the service

We'll start by setting up the todo-service. Open up the [wercker.yml](https://github.com/wercker/todo-service/blob/master/wercker.yml) to understand how we're going to build, push, and deploy.

In this YML file we have three defined pipelines: build, push-gcr, and gke-deploy. Each of these pipelines is made up of a series of steps, which can be anything from inline shell commands to pre-packaged steps from our step registry; we will be using a few public steps in our application.

The build pipeline compiles our application and copies the binary to the $WERCKER_OUTPUT_DIR, where it will be made available to the next pipeline.

The push-gcr pipeline will push a minimal Alpine-based container (<10 MB!) to GCR. The most important step is the internal/docker-push step, which produces a Docker container and pushes it to the defined registry (be that GCR, Quay, Docker Hub, or even your own registry).

Because we're pushing to GCR and authenticating with a service account, Google requires a special username, _json_key; the password is the contents of the JSON key file we produced earlier (~/key.json). Lastly, we specify which port this container should expose and which command to run when it starts.

Finally, our gke-deploy pipeline describes how we will deploy our container to GKE. To authenticate, it needs the various certificates you copied earlier when creating the cluster; we will set up the relevant environment variables later on. The final step executes kubectl and applies the todo-kube.yml, which contains our Deployment and Service manifests.
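To make the shape of this more concrete, here is a rough, simplified sketch of how such a wercker.yml might be laid out. It is not the actual file (see the linked wercker.yml for that): the step names (internal/docker-push, kubectl) and environment variable names come from the description above and the settings below, while the box, port, build commands, and certificate file names are illustrative assumptions.

box: golang                                      # assumption: the service is a Go binary
build:
  steps:
    - script:
        name: compile binary
        code: |
          go build -o todo-service .
          cp todo-service $WERCKER_OUTPUT_DIR    # hand the binary to the next pipeline

push-gcr:
  steps:
    - internal/docker-push:
        username: _json_key                      # special username required by GCR
        password: $GCR_JSON_KEY_FILE             # contents of ~/key.json
        repository: gcr.io/$GKE_PROJECT_ID/todo-service
        ports: "8080"                            # illustrative port
        cmd: ./todo-service

gke-deploy:
  steps:
    - script:
        name: write certificates
        code: |
          echo "$GKE_CA_PEM" > ca.pem
          echo "$GKE_ADMIN_PEM" > admin.pem
          echo "$GKE_ADMIN_KEY_PEM" > admin-key.pem
    - kubectl:
        server: $GKE_KUBERNETES_MASTER
        certificate-authority: ca.pem
        client-certificate: admin.pem
        client-key: admin-key.pem
        command: apply -f todo-kube.yml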

Now that we understand how our pipelines work, we need to add them to our Wercker project along with the environment variables. Head over to your todo-service application settings, and navigate to Workflows. On this page, you can add pipelines and chain them together.

Since Wercker automatically creates the build pipeline for you, you only need to add the push-gcr and gke-deploy pipelines. Once you've done that, you can chain them together using the visual Workflows editor at the top of the page. This Workflow is pretty straightforward: build listens for Git changes and will trigger push-gcr once it finishes. gke-deploy will trigger once push-gcr has successfully completed.

Lastly, we need to set up the environment variables. Head over to the environment variables page and add the following variables, using the (base64-decoded) values you copied earlier:

  • GKE_CA_PEM (clusterCaCertificate)
  • GKE_ADMIN_PEM (clientCertificate)
  • GKE_ADMIN_KEY_PEM (clientKey)

And add three more:

  • GKE_KUBERNETES_MASTER (the cluster's master endpoint, with leading https://)
  • GCR_JSON_KEY_FILE (the contents of the ~/key.json file produced earlier)
  • GKE_PROJECT_ID (The Google Cloud Platform project ID)

Now that you've set up the Wercker project, go ahead and trigger a build by pushing a change to your repository. You should see your pipelines execute one by one, and once gke-deploy finishes you can check that the service has been deployed by running kubectl:

$ kubectl get services
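If the service doesn't show up straight away, it can also help to look at the deployments and pods directly (standard kubectl commands, nothing specific to this setup):

$ kubectl get deployments
$ kubectl get pods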

Setting up the API

Since the todo-service is not accessible from the Internet, we now need to deploy our todo-api as a public service.

The wercker.yml is exactly the same for the API, so there is no need to go over it again. Instead, head straight to your application settings page and start adding the pipelines and environment variables (basically repeating the previous steps).

Now push a change to your repository so your Workflow gets triggered. Once it finishes, run kubectl get services and you should see that the todo-api has received an external IP (this might take a while).

If you navigate to http://<api_external_ip>/todos you should see an empty page, which means it works! Verify by POSTing a new todo:

$ curl -H "Content-Type: application/json" -X POST -d '{"Name":"Do something"}' http://<api_external_ip>/new

Reload the page again and you should have your newly inserted todo!
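Equivalently, you can fetch the list from the command line instead of the browser:

$ curl http://<api_external_ip>/todos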

That's All!

You have successfully deployed a multi-service application to GKE using Wercker! From now on you will be able to keep deploying simply by re-running your workflow, or in other words, just committing code.
