The following is an example of how you might use GitHub, Hugo, and Travis CI to build a continuous-deployment blogging platform.

Step 0) Get a domain name from Hover.

Hover is an amazing registrar - you should head over there and get yourself a new domain for your new blog.

Use the promo code JackOfDiamonds to give a shout-out to the guys at Hello Internet, and to get 10% off.

Step 1) Set up a GitHub Account and Organization.

It’s pretty simple - you sign up, create an organization, and then create a new Git repository under that GitHub organization.
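Once the empty repository exists on GitHub, wiring up a local clone looks roughly like this (the org and repo names here are placeholders - swap in your own):

```shell
# Hypothetical org/repo names; replace with your own.
mkdir blog && cd blog
git init
git config user.email "you@example.com"   # identity needed for the commit below
git config user.name "Your Name"
echo "# My Blog" > README.md
git add README.md
git commit -m "Initial commit"
# Point the repo at GitHub (requires your SSH key to be registered there):
git remote add origin git@github.com:your-org/blog.git
# git push -u origin master
```

The push itself is commented out because it only works once your SSH key is set up, which is covered next.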

If this is truly your first time ever using GitHub you’re going to have to create an SSH key pair, which is used to authenticate you so you can push up your code.
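Generating the key pair is a one-liner; the email comment and file name below are placeholders, and the empty passphrase (`-N ''`) is just for brevity:

```shell
# Generate an RSA key pair; email and file name are placeholders.
# -N '' sets an empty passphrase for brevity; use a real one in practice.
ssh-keygen -t rsa -b 4096 -C "you@example.com" -f ./id_rsa_github -N ''
# Print the public half; this is what you paste into GitHub's SSH key settings.
cat ./id_rsa_github.pub
```

The private half (`id_rsa_github`) stays on your machine; only the `.pub` file ever goes to GitHub.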

And if you don’t know Git, you might want to read this book.

Step 2) Start blogging locally with Hugo.

Hugo is a more modern, much faster alternative to Jekyll.

The main advantage I see for using Hugo over Jekyll is that Hugo makes it much easier to build static websites that are not blogs, and you don’t need Ruby to build your website - you simply need a single static Go binary.

Follow this introduction and within two minutes you should be well on your way to setting up a blog.
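The introduction boils down to a few commands - this assumes the hugo binary is already on your PATH, and the site and post names are just examples:

```shell
# Assumes the hugo binary is already installed and on your PATH.
hugo new site myblog         # scaffold the site skeleton
cd myblog
hugo new post/first-post.md  # create a first post under content/post/
hugo server -w               # live-preview at http://localhost:1313
```

Running plain `hugo` (no subcommand) renders the site into the `public/` directory, which is what we’ll sync to S3 later.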

Step 3) Create an AWS Account.

In the very near future I’ll have an explicit blog post on how to get started with AWS.

But for now simply create a new account, and figure out a way to get yourself some admin keys (AWS_ACCESS_KEY and AWS_SECRET_KEY), because you will need these to set up your AWS account to host this blog.

Step 4) Use a Terraform script to set up your AWS account for the blog.

I built out a simple Terraform script which you can use to create everything required to host a website in S3.

Here is the script:

provider "aws" {
  access_key = "AWS_ACCESS_KEY"
  secret_key = "AWS_SECRET_KEY"
  region = "us-west-2"
}

variable "domain_name" {
  default = "domain.com"
}

resource "aws_route53_zone" "primary" {
  name = "${var.domain_name}"
}

resource "aws_s3_bucket" "blog" {
  bucket = "${var.domain_name}"
  region = "us-west-2"
  acl = "public-read"
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AddPerm",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::${var.domain_name}/*"
  }]
}
EOF

  website {
    index_document = "index.html"
    error_document = "error.html"
  }
}

resource "aws_s3_bucket" "wwwblog" {
  bucket = "www.${var.domain_name}"
  region = "us-west-2"
  acl = "public-read"
  website {
    redirect_all_requests_to = "${var.domain_name}"
  }
}


resource "aws_route53_record" "blog" {
  zone_id = "${aws_route53_zone.primary.zone_id}"
  name = "${var.domain_name}"
  type = "A"
  alias {
    name = "${aws_s3_bucket.blog.website_domain}"
    zone_id = "${aws_s3_bucket.blog.hosted_zone_id}"
    evaluate_target_health = true
  }
}

resource "aws_route53_record" "wwwblog" {
  zone_id = "${aws_route53_zone.primary.zone_id}"
  name = "www.${var.domain_name}"
  type = "A"
  alias {
    name = "${aws_s3_bucket.wwwblog.website_domain}"
    zone_id = "${aws_s3_bucket.wwwblog.hosted_zone_id}"
    evaluate_target_health = true
  }
}

resource "aws_iam_user" "blog_deploy" {
  name = "${var.domain_name}_blog_deploy"
  path = "/s3/"
}

resource "aws_iam_access_key" "blog_deploy" {
  user = "${aws_iam_user.blog_deploy.name}"
}

resource "aws_iam_user_policy" "blog_deploy_rw" {
  name = "${var.domain_name}_rw"
  user = "${aws_iam_user.blog_deploy.name}"
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "s3:ListBucket",
      "s3:GetBucketLocation",
      "s3:ListBucketMultipartUploads"
    ],
    "Resource": "arn:aws:s3:::${var.domain_name}",
    "Condition": {}
  }, {
    "Effect": "Allow",
    "Action": [
      "s3:AbortMultipartUpload",
      "s3:DeleteObject",
      "s3:DeleteObjectVersion",
      "s3:GetObject",
      "s3:GetObjectAcl",
      "s3:GetObjectVersion",
      "s3:GetObjectVersionAcl",
      "s3:PutObject",
      "s3:PutObjectAcl",
      "s3:PutObjectVersionAcl"
    ],
    "Resource": "arn:aws:s3:::${var.domain_name}/*",
    "Condition": {}
  }, {
    "Effect": "Allow",
    "Action": "s3:ListAllMyBuckets",
    "Resource": "*",
    "Condition": {}
  }]
}
EOF
}

Before you can use this you’ll need to replace AWS_ACCESS_KEY, AWS_SECRET_KEY, and domain.com with the appropriate values.

If you really are against using Terraform to set up this configuration, you can of course always use the guide from Amazon directly.

Don’t worry, we’ll have more blogs about Terraform and other ‘Cloud DSLs’ in the future.

Step 5) Get a Travis CI account.

You might want to follow this guide to get your feet wet with Travis CI.

Step 6) Add a .travis.yml to your blog repository.

Here is a basic .travis.yml which will publish your blog upon every push to your repository.

language: go
install: go get -v github.com/spf13/hugo
script:
  - hugo
  - python --version
  - sudo pip install s3cmd
  - s3cmd sync --delete-removed -P -M -r public/ s3://continuousfailure.com/
notifications:
  email:
    on_failure: always

Then, as described here, you will want to add the AWS_ACCESS_KEY and AWS_SECRET_KEY values for the S3 deploy user, which was created via the Terraform script, as build environment variables in the Travis CI project.
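You can set these through the Travis CI web UI, or from the repo root with the Travis CI command-line client (the key values shown here are placeholders):

```shell
# Requires the Travis CI CLI: gem install travis
travis env set AWS_ACCESS_KEY "AKIA..."   # value comes from the tfstate file
travis env set AWS_SECRET_KEY "..."
```

By default `travis env set` marks values as hidden, so they won’t be echoed in your build logs.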

The keys can be found in the *.tfstate file created when the Terraform script was applied.
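The tfstate file is plain JSON, so the keys are easy to dig out by hand. Below is a trimmed, fake example of what the `aws_iam_access_key.blog_deploy` entry looks like (the exact layout varies by Terraform version; this matches the early single-module format), plus a quick grep to pull the values:

```shell
# Trimmed, fake example of the relevant portion of terraform.tfstate:
cat > sample.tfstate <<'EOF'
{
  "modules": [{
    "resources": {
      "aws_iam_access_key.blog_deploy": {
        "primary": {
          "attributes": {
            "id": "AKIAEXAMPLEKEY",
            "secret": "exampleSecretValue"
          }
        }
      }
    }
  }]
}
EOF
# The "id" is AWS_ACCESS_KEY and the "secret" is AWS_SECRET_KEY:
grep -E '"(id|secret)":' sample.tfstate
```

Remember that this means your tfstate file contains live credentials - keep it out of your public blog repository.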

Note: we chose to use s3cmd because the provided Travis CI S3 deploy step doesn’t actually ‘sync’. If files are removed from your blog, their deploy step doesn’t remove them from the bucket, and all files are always uploaded, even if they haven’t changed - slowing down the deploy and costing you money.

Step 7) Start blogging.

That’s it - now as you push changes to your blog, Travis CI will pick up the change and automagically deploy it to S3.

If you haven’t figured it out yet, this blog is built upon this exact process and is located here: flyinprogrammer/continuousfailure

All of the coding examples for this blog will be accessible via this GitHub organization.