Dockerizing a static blog

    My blog has been compiled with Acrylamid for about a year and a half now. It's been great and I don't regret switching to it from Wordpress: I can write my posts in Markdown from the command line, they are stored as plain text on GitHub, and my site is way faster than it was on Wordpress. However, deploying a new post has always been a bit of a pain. Writing from anywhere is easy, since all I need is Vim and git, but until now it was complicated to publish a new post or make any changes to the site.

    The problem was that my development environment was set up on my home machine, which runs Arch Linux, so whenever I was away from home I had to reconfigure whatever machine I had access to before I could compile my blog. Being tied to an Arch Linux setup, it wasn't exactly portable.

    After a long time, and having traveled a lot last year, I decided I needed a way to quickly set up my blogging environment anywhere, so that I can post while I am away. Good thing I did, because I've now been away from home for a month.

    The approach I chose was to use Docker to create an image that can run my blog. Docker takes care of all the portability issues, so I can run it on Windows, Mac or Linux machines and it will behave the same way.

    Docker is similar to virtual machines to some extent, but it operates at a layer above them. While each virtual machine runs its own guest OS inside the host OS, Docker runs inside the host OS and virtualizes only the file system, the networking and whatever libraries your application needs. This results in significantly faster startup times and lower RAM usage. On non-Linux operating systems, I believe it uses a VM in the background, but all the containers you start on the same machine share that VM.
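    You can see the startup difference for yourself. As a rough illustration (assuming Docker is installed and the debian:latest image is already pulled, so only the container startup is measured):

    # A container reuses the host kernel, so it starts in well under a
    # second; booting a full VM would take tens of seconds.
    time docker run --rm debian:latest true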

    Setting up the Docker image didn't take too long; I managed to do it in a couple of hours. Still, there were two gotchas on which I spent some time. For one of them I found the correct solution (or at least I realized my initial mistake), while for the other I have a meh solution, which I don't really like, but luckily I don't have to face that problem too often (hopefully, almost never).

    A Docker image is described by a Dockerfile, which contains instructions on how to create that image: what to install into it, what commands to run, how to set up the file system and so on. I'll share my Dockerfile with you and explain it.

    FROM debian:latest
    MAINTAINER rolisz <rolisz@gmail.com>
    
    # Update OS
    RUN apt-get update
    RUN apt-get -y upgrade

    This tells Docker to use the latest Debian image as a base for our image. I chose Debian because it is well known for its stability and long-term support, but you can choose pretty much any image. It also updates the system and says who the maintainer is.
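    One thing worth considering (not something my Dockerfile does): pinning the base image to a specific release makes builds more reproducible, since debian:latest moves over time. The exact tag below is just an example:

    # Pin a concrete Debian release instead of whatever "latest" is today
    FROM debian:8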

    # Install dependencies
    RUN apt-get install -y python-setuptools git libjpeg-dev zlib1g-dev \
        build-essential python-dev rubygems imagemagick
    RUN gem install sass
    RUN easy_install pip
    RUN pip install acrylamid asciimathml Markdown Pillow
    RUN pip install --upgrade acrylamid asciimathml Markdown Pillow

    Now I install all the stuff I need to compile my blog properly. You might start understanding why it wasn't easy before: half of these packages have different names in different Linux distributions, and the setup actually needs three different package managers. :((
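    As an aside, every RUN line adds a layer to the image. A common trick, purely optional and just a sketch using the same packages as above, is to chain the installs into a single RUN:

    # Same dependencies as above, installed in a single layer
    RUN apt-get install -y python-setuptools git libjpeg-dev zlib1g-dev \
            build-essential python-dev rubygems imagemagick \
        && gem install sass \
        && easy_install pip \
        && pip install --upgrade acrylamid asciimathml Markdown Pillow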

    # Make ssh dir
    RUN mkdir /root/.ssh/
    COPY my_ssh_key /root/.ssh/
    
    WORKDIR /rolisz_acrylamid
    
    # Expose default Acrylamid port
    EXPOSE 8000
    
    # Run Acrylamid
    CMD acrylamid autocompile

    Now we do some more interesting things. First, we create the .ssh folder and copy into it the ssh key for the server where the blog is deployed. This is only needed if you want automated passwordless pushes; if you are fine with entering your password each time you post, you can skip this step. Then we define the working directory of our image, which is where all subsequent operations will take place and which will be the default starting directory for any shells that are opened. Finally, we expose port 8000 and start the Acrylamid compilation.
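    If you do go the passwordless route, two small details can trip you up: ssh refuses keys with loose permissions, and the first connection to an unknown host prompts interactively. A hedged sketch of how you might handle both in the Dockerfile (the key file name matches the COPY above; the openssh-client install and the host name are my assumptions, with example.com standing in for the real server):

    # ssh needs the client installed and strict permissions on the key;
    # ssh-keyscan pre-trusts the deploy server so pushes don't prompt
    RUN apt-get install -y openssh-client \
        && chmod 600 /root/.ssh/my_ssh_key \
        && ssh-keyscan example.com >> /root/.ssh/known_hosts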

    My initial mistake was that I wanted to bake the blog into the image. However, this meant that for every new post I would have to rebuild the image. And not just for new posts, but for every tiny update, because changes made to a container's file system are not persisted unless they are explicitly saved.

    The better solution is to mount an external directory onto the working directory. That external directory is persistent but doesn't need to be baked into the image, so unless I change some of the dependencies of my blog, I won't have to update the image.
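    A quick way to convince yourself the mount works (just a throwaway check; the host path matches the run command further down):

    # List the blog sources from inside a short-lived container;
    # the ls argument overrides the image's default acrylamid command
    docker run --rm -v ~/rolisz_acrylamid:/rolisz_acrylamid blog ls /rolisz_acrylamid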

    Building the image can be done with:

    docker build -t blog .

    The first time you run this, it will take a while, because it has to download the latest Debian image, update it, install all the programs and so on. But subsequent builds will be faster, because the intermediate steps are already cached. This creates an image called blog from the Dockerfile in the current folder.
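    Once it finishes, the image should show up in your local image list:

    # Verify the build produced an image tagged "blog"
    docker images blog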

    To run the image and start serving the blog, use the following command:

    docker run -p 8000:8000 -v ~/rolisz_acrylamid:/rolisz_acrylamid blog
    

    This tells Docker to map host port 8000 to container port 8000 and to mount the host folder ~/rolisz_acrylamid onto the container folder /rolisz_acrylamid for the image "blog". You should be able to access the blog at 127.0.0.1:8000. However, if you are on a Mac or Windows, Docker is running inside a VM, so you actually need to access the VM's IP address. You can find that out with:

    docker-machine ls

    This will give you the details of the VM. You will access the blog at that IP, still on port 8000.
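    If your VM has the usual name, docker-machine can also print the address directly (the machine name "default" is an assumption; use whatever docker-machine ls showed):

    # Print just the VM's IP address
    docker-machine ip default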

    To deploy the site using Acrylamid's built-in support for that, assuming you have a deployment task called rsync defined in your conf.py file:

    docker run -p 8000:8000 -v ~/rolisz_acrylamid:/rolisz_acrylamid blog acrylamid deploy rsync

    And voila, your content should be pushed. The above instructions should be easily adaptable to any other static website engine, just by substituting the appropriate dependencies and commands.
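    For reference, the conf.py side of this is just a named shell command in Acrylamid's DEPLOYMENT mapping. A hypothetical entry might look like this (the rsync flags, user and server path are my assumptions, not my real config):

    # conf.py: `acrylamid deploy rsync` runs the command registered here
    DEPLOYMENT = {
        'rsync': 'rsync -avz output/ user@example.com:~/www/',
    }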

    The other gotcha, for which I haven't found a good solution, is what to do with my images. I cannot store them on GitHub, because I have several gigabytes of them (and binary blobs in Git are a no-no anyway). For now I keep them on my server (and on my home computer), and if I want to do something big with them (say, resize all of them), I have a small script that downloads them from there and removes the thumbnails. Not the best solution, but I don't know what else works nicely. I don't do this too often though, so I can live with it for now.
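    The script itself is nothing fancy. A rough sketch of the idea (the host, paths and thumbnail naming scheme are placeholders, not my real setup):

    # Pull the original images down from the server...
    rsync -avz user@example.com:~/blog_images/ ./images/
    # ...and drop the generated thumbnails, keeping only the originals
    find ./images -name '*.thumbnail.*' -delete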

    Now my blog is ready to move to the cloud. The next step would be to compile it on my server, so I can write a post, commit it to GitHub, push the repo to the server (possibly automatically, with some hooks) and have the recompilation happen there. This needs a bit more work, especially to make sure that I don't overload my server, and I also need to find a way to link all my images to my posts. Maybe in one year's time I will do it :)))