3D printing and other software curiosities
by Clinton Freeman

Is adding thermal paste to a heated build platform a good idea?

24 Feb 2015

So I’m about to embark on a decent-sized 3D printing run, and I want to squeeze even more performance out of my heated build platform.

In the past I have explored using an insulator on top of the heated build platform to improve warm-up times by around 10%. But that wasn’t enough; I wanted my heated build platform to reach temperature faster!

So I constructed my build surface from three layers: glass, PCB heatbed, and plywood. I want to get the heat from the PCB into the glass as fast as possible. Putting an insulator below the PCB (the plywood) and on top of the glass (the heated build platform cosy) was a decent start.

The three layers of a 3D printer build platform.

In my PC overclocking days, I wanted to get heat out of the CPU in my computer and into the heat-sink on top as efficiently as possible. I used a thermal grease called ‘Arctic Silver’, which was the bee’s knees at the time. The paste squeezes out the microscopic pockets of air trapped between the heat-sink and CPU, making for better contact and greater conduction of heat. In theory, the same trick should get the heat out of the PCB and into the glass faster.
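
A rough back-of-the-envelope with Fourier’s law shows why those trapped air pockets matter. The heat flow through a thin layer is:

q = k × A × ΔT / d

where k is the layer’s thermal conductivity, A the contact area, ΔT the temperature difference across it, and d its thickness. Still air conducts at roughly 0.03 W/m·K, while a silver-loaded paste is hundreds of times more conductive, so replacing air gaps with paste should (in theory, at least) markedly improve conduction between PCB and glass.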

Applying thermal paste to a 3D printer heated build platform

I grabbed a tube of Arctic Silver, removed the glass from my build platform and started applying. It was much harder to apply the paste evenly over such a large surface. Using a business card, I worked in different areas, building it up until I had the whole PCB covered. I then followed a similar process on the bottom of the glass. Finally, I used clamps to squish the two back together (bulldog clips didn’t apply enough pressure). After a few minutes, I removed the clamps.

Phew. Time to heat this puppy up, and test. You know what? It actually made things worse. I wasn’t happy with the bond between the two surfaces, so I tried again, removing some of the excess paste with a lint-free cloth and clamping again. A little better, but still worse than no paste at all (I used a similar test procedure to the insulator test).

Chart displaying heat changes over time, paste vs no-paste

I’m still a bit confused by this. With the paste it should be at least as good, if not a little better. You are much better off starting with decent insulators above the glass and below the PCB. As for why the paste didn’t improve things further, I have a couple of ideas:

  • Arctic Silver has a 200 hour ‘break-in period’: it takes up to 200 hours for its thermal conductivity to reach maximum. Arctic Silver now also sells a modern alternative, ‘Céramique 2’. It only has a 25 hour break-in time and is also much cheaper (it contains no silver). The break-in times might be longer over such a large surface area, so the Céramique might offer better performance.
  • The thermistor touching the bottom of the glass surface might not be giving an accurate reading of glass temperature. I want to get one of those infrared thermometers to check temperature readings.
  • Maybe I still haven’t got the application of thermal paste correct. Perhaps the layer is too thick, and air bubbles are still trapped inside.
 

Sticker Chart - January and February

19 Feb 2015

I’m not going to lie, it wasn’t long after I embarked on my wacky monastic engineering journey that reality (and terror) hit home. Questions like ‘What have you done?’ and ‘What the HELL are you doing?’ ricocheted around my head alongside images of destitution and bankruptcy.

I also became paranoid that the year would devolve into me lying on the couch binge watching television and movies. No! I want to create more, so I set myself personal KPIs.

I know. Not exactly the first thing that leaps to mind when talking about a monastic journey. Good friends laughed. They suggested that maybe I should take a page from raising toddlers and call it a sticker chart instead.

So here is my sticker chart for January/February:

In hindsight, relocating to Cairns and doing it all in one big bang was a good plan. It wasn’t till we arrived in Cairns that a backlog of fatigue unfurled. I was horribly burnt out. Waiting several weeks for our life to arrive in a container forced a much needed reprieve. I had a long summer break with one guiding goal: 4pm. Pool. When I started the monastic engineering experience on the 12th of January, I was refreshed.

Create.

A photo of a hexagonal shade umbrella.

I created three things this month, but haven’t finished the project I set out to build. I’m still waiting on a few components from China (it has a hardware element) and I’m finishing other elements while I wait. So no sticker for creation this month. Fingers crossed everything arrives and I can finally assemble it all soon. I’m going to need to pipeline these projects better so that raw material shipping times don’t sink me.

However, I created and released two chef cookbooks, ‘credentials’ and ‘docker-images’. These cookbooks, and a couple of supporting articles, helped me level up on managing systems with docker.

I completed a project that featured no software at all. I find building things within the constraints of the physical world an interesting exercise. This month I completed a set of shade umbrellas to stop some palms from getting burnt in the midday sun.

no sticker

I started writing again. Trying to explain concepts to others helps me learn them faster. This month the articles covered:

Read.

Thomas the Tank Engine sticker

Finished reading three books:

Steal like an Artist – Austin Kleon

Despite the link-bait title, this book was not about stealing at all. It was a great, easy-to-read book on unlocking creativity by adoring and understanding those that inspire you the most: study their work, understand who inspired them, reference and love them.

Browsing photos of different workspaces is one of my favourite pastimes; I just lost a couple of hours browsing workspace photos on the internet again. So I was pretty excited when I got to the great little section on workspaces. Austin suggests splitting them into two broad categories, digital and analog, where the analog area is for generating ideas and the digital one is for editing and publishing them. This has some parallels with the use of tangible objects during the ideation phase of design thinking. I got my own digital workspace cranking this month, but the analog one is still a work in progress. I also like Austin’s idea of writing fan letters. Writing down praise for someone should force me into a deeper understanding of their work.

A photo of my digital workspace.


Show your work! – Austin Kleon

It has the same easy-to-read coffee table format as ‘Steal like an Artist’ and is just as good. It follows similar rhetoric to Jeff Atwood’s ‘How to Stop Sucking And Be Awesome Instead’, yet I still found a couple of new pearls. The most liberating was ditching the idea of guilty pleasures, summed up with a neat Dave Grohl quote: ‘I don’t believe in guilty pleasures. If you fucking like something, like it.’ The chapter on funding your work was quite good as well, blasting out of the gates with my favourite quote of both books: ‘Even the Renaissance had to be funded’. Austin goes on to challenge romantic notions of art and money, and explores some standard ways of funding projects (Kickstarter, selling services, selling projects).


Getting things done – David Allen

Several years ago, I stumbled upon ‘Inbox Zero’ as a way of staying on top of email. It helped manage the flood of digital communications bombarding me. Inbox Zero was part of a getting-things-done craze that floated around the software development community, much of it stemming from David Allen’s work, and his book had been sitting in my ‘to read’ pile ever since. This was a liberating read. For the first time in a long while, I feel as though I’m finally ‘on top of things’. The big things I took away from this book were collating and capturing everything (all those ideas, projects, to-dos and to-reads) and then systemising a way of working through all this stuff. The minimalist approach to managing projects was also useful, especially for smaller projects: state a goal, the desired outcome and the next action. It is a lightweight way of defining a project and the actions you need to perform to complete it.

Exercise.

Octonauts sticker

Ran twice a week, 31km in total. Acclimatising to summer in the tropics has been brutal. I think I sweated out 2kg on that first run.

Despite my sticker chart, I still question myself daily with ‘What the HELL are you doing?’ But I’m starting to find it healthy. It pushes away complacency and forces introspection. Is this really how you are going to spend the day?

 

Raspberry Pi comparison chart

12 Feb 2015

Confused by the Raspberry Pi naming scheme and not sure which to buy? This comparison chart outlines the differences between each model.

Raspberry Pi Compute Module
  • Price#: $250 USD##
  • Processor: 700MHz single-core ARM1176JZF-S
  • Memory: 512MB
  • Storage: 4GB eMMC flash (inbuilt)
  • Graphics: 250MHz Broadcom VideoCore IV
  • Connections: HDMI, 1x USB2 port, 46 GPIO pins, 2x MIPI camera connectors
  • Power: 0.8W - 2.0W
  • Size and weight: 6.8cm x 3cm, 7g

Raspberry Pi, Model A+
  • Price#: $20 USD
  • Processor: 700MHz single-core ARM1176JZF-S
  • Memory: 256MB
  • Storage: microSD
  • Graphics: 250MHz Broadcom VideoCore IV
  • Connections: HDMI, 1x USB2 port, 40 GPIO pins, MIPI camera connector
  • Power: 0.5W - 1.2W
  • Size and weight: 6.5cm x 5.7cm, 23g

Raspberry Pi, Model B+
  • Price#: $35 USD
  • Processor: 700MHz single-core ARM1176JZF-S
  • Memory: 512MB
  • Storage: microSD
  • Graphics: 250MHz Broadcom VideoCore IV
  • Connections: HDMI, 4x USB2 ports, 10/100 Ethernet, 40 GPIO pins, MIPI camera connector
  • Power: 1.0W - 1.7W
  • Size and weight: 8.6cm x 5.7cm, 45g

Raspberry Pi 2, Model B
  • Price#: $35 USD
  • Processor: 900MHz quad-core ARM Cortex-A7
  • Memory: 1GB (1024MB)
  • Storage: microSD
  • Graphics: 250MHz Broadcom VideoCore IV
  • Connections: HDMI, 4x USB2 ports, 10/100 Ethernet, 40 GPIO pins, MIPI camera connector
  • Power: 1.2W - 1.8W
  • Size and weight: 8.6cm x 5.7cm, 45g

Available at

2015/02/13 - EDIT: Added section on power usage.

 

Configure a docker host with chef, or how I stopped worrying and started deploying containers.

02 Feb 2015

So you have built a docker image or two on your development machine and want to get them into production. This article will step you through the process of hosting containers in production with chef.

Huh? Chef? I thought I didn’t need it anymore? Well, if you are using a dedicated docker host, you won’t. Your hosting provider does all the work of maintaining the underlying operating system, and even the bigger hosts like DigitalOcean now provide images with docker pre-installed. Still, you can find yourself needing other configuration on the underlying operating system. For those cases, and for provisioning a raw OS, I like to use Chef.

Without a doubt, writing custom chef cookbooks for fiddly application-specific configuration was frustrating. I felt like existing cookbooks were letting me down. Dammit. I just want to define metadata about the server at the node level and be done with it. With docker, I can now do the fiddly application-specific stuff there, leaving just the base generic server configuration to Chef. Perfect.

I have two chef ‘tiers’ defined as roles. A base role, common to all machines in the cluster (running ubuntu 14.04), covers some basic housekeeping and security settings. On top of that, an application tier defines which containers run on which machine. All the fiddly application-specific stuff ships with the container. It is a much cleaner separation and far easier to maintain than a chef-only setup.

My base.rb role looks something like:

name 'base'
description 'base configuration for a server'

default_attributes 'openssh' => { 'server' => { 'permit_root_login' => 'no',
                                                'password_authentication' => 'no' }},
                   'users' => ['<your uname here>'],
                   'firewall' => { 'rules' => ['sshd' =>    {'port' => '22',
                                                             'protocol' => 'tcp'},
                                               'http' =>    {'port' => '80',
                                                             'protocol' => 'tcp'},
                                               'https' =>   {'port' => '443',
                                                             'protocol' => 'tcp'}]},
                   'ntp' => { 'servers' => ['0.north-america.pool.ntp.org',
                                            '1.north-america.pool.ntp.org',
                                            '2.north-america.pool.ntp.org',
                                            '3.north-america.pool.ntp.org']}

run_list 'recipe[credentials]',         
         # passwords encrypted in a databag -- see https://github.com/cfreeman/chef-credentials
         
         'recipe[apt]',
         # Upgrade the apt-gets. 
         
         'recipe[unattended-upgrades]',
         # Yeah, just install security updates even if I am sleeping. 
         
         'recipe[ufw]',
         # For port blocking and source address restrictions.
         
         'recipe[openssh]',
         # Can I haz access.
         
         'recipe[ntp]',
         # Correct time and date please.
         
         'recipe[user::data_bag]',
         # Create a user account (used for deploying container updates)
         
         'recipe[fail2ban]',
         # Go away nasty people
         
         'recipe[docker]',
         # install docker
         
         'recipe[vim]'
         # text editing on the server. Rarely used. But you never know? 

While an application tier role would look something like:

name 'nginx'
description 'configures server with simple app'

default_attributes 'docker-images' => [{'name' => 'nginx', 'port' => '80:80'}]
#Pull the nginx image, create a container named 'nginx' and expose port 80 to port 80.

run_list 'recipe[docker-images]'
#Create containers from images defined in attributes -- see https://github.com/cfreeman/chef-docker-images

If I’m running private images from dockerhub, I create an encrypted data bag where I store my dockerhub username and password as attributes:

knife solo data bag create credentials production --secret-file 'data_bag_key'
knife solo data bag edit credentials production --secret-file 'data_bag_key'

{
    "id": "production",
    "docker_email": "your@email.com",
    "docker_password": "password123",
    "docker_username": "your_username"
}

Then just create a node including the base role and desired application roles:

{
    "run_list": ["role[base]", "role[nginx]"]        
}

Oh, and my Cheffile has the following dependencies:

#!/usr/bin/env ruby
#^syntax detection
site 'http://community.opscode.com/api/v1'

cookbook 'apt'
cookbook 'user', git: 'git://github.com/fnichol/chef-user.git'
cookbook 'credentials', git: 'git://github.com/cfreeman/chef-credentials.git'
cookbook 'openssh'
cookbook 'unattended-upgrades'
cookbook 'ntp'
cookbook 'fail2ban'
cookbook 'ufw'
cookbook 'docker'
cookbook 'docker-images', git: 'git://github.com/cfreeman/chef-docker-images.git'
cookbook 'vim'

And cook:

knife solo cook <ip of your server>

This will go off and do the base configuration of your machine and install docker, while docker-images pulls your images from dockerhub, creates matching containers and performs host integration (creating init scripts so that your containers start when your operating system boots). For the above example, it will leave a default install of nginx running on the machine.
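
The host integration piece deserves a quick look. On ubuntu 14.04 it means an upstart job per container, so the job generated for the nginx container would look something like this sketch (illustrative only, not the cookbook’s exact output):

description "nginx container"

# Start the container once the docker daemon is up, and restart it if it dies.
start on filesystem and started docker
stop on runlevel [!2345]
respawn

script
  # 'docker start -a' starts the existing container and stays attached,
  # giving upstart a process to track and respawn.
  /usr/bin/docker start -a nginx
end script

This is also what makes the ‘sudo service nginx stop/start’ calls in the deploy script below work.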

Deploying Container Updates

Deploying updates to your containers running on a server is pretty easy as well. When I create a new image, I push it to dockerhub with:

docker push nginx

Then I have a simple bash script for performing deploys on the server:

#!/bin/bash

sudo docker pull nginx
# Download the latest version of your application image.

sudo service nginx stop
# Stop the current container.

sudo docker rm nginx
# Remove the old container.

sudo docker run --name=nginx -d -p 80:80 nginx 
# Create a new container from the latest version of the pulled image.

sudo service nginx start
# Make sure upstart/systemd has the latest PID. 

It is a basic deploy process at the moment, with plenty of room for improvement, like zero-downtime deploys and an easier way to upgrade multiple machines at once.
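
As a first pass at the multiple-machines problem, the deploy script could simply be looped over each host with ssh. A minimal sketch (the hostnames, deploy user and script name are placeholders, and it assumes the deploy user has passwordless sudo):

#!/bin/bash

# Run the deploy script on each docker host in turn,
# bailing out at the first host that fails.
for host in app1.example.com app2.example.com; do
  echo "Deploying to ${host}..."
  ssh "deploy@${host}" 'bash -s' < deploy.sh || exit 1
done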

Related documentation on hosting docker containers:

 

Docker OSX Cheat Sheet

30 Jan 2015

Unfortunately, OSX doesn’t have the necessary kernel features to run the docker daemon natively. However, ‘Boot2Docker’ is an all-in-one bundle that installs the docker client for OSX. It also includes VirtualBox and a virtual machine all geared up for running the docker daemon.

Once you have installed the Boot2Docker bundle, starting ‘boot2docker’ will spin up the virtual machine and the docker daemon. Docker is then ready to receive commands from the client (your OSX machine).

Note: Once the boot2docker console has started running, you can minimise it. This terminal is for the virtual machine running the docker daemon. Just think of it as some far-away server that receives commands from your OSX machine.

In a new terminal window, configure the docker client (which does run natively on OSX). First, figure out which IP address your docker daemon is running on:

$ boot2docker ip

Now add the following configuration to your environment:

$ vim .bash_profile

export DOCKER_TLS_VERIFY=1
export DOCKER_HOST=tcp://192.168.59.103:2376 # The IP address from the 'boot2docker ip' command above. 2376 is the default docker TLS port.
export DOCKER_CERT_PATH=/<usr dir>/.boot2docker/certs/boot2docker-vm

$ source .bash_profile

Finally, you are all good to start ‘dockerizing’ some applications:

$ docker info

Other documents related to installing and getting started with docker:

Docker Development lifecycle

Here I was aiming to replace vagrant and chef as a way of setting up development images. I wanted each of my projects separated into its own container.

To build and run your application, create a Dockerfile that uses:

  • 'COPY' commands to copy your source/application from your development machine into the container.
  • 'RUN' commands to install and build your source/application within the container.
  • 'CMD' instructions to fire up your application when a container starts (see the sketch below).
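
As an illustration, here is a minimal sketch of a Dockerfile using all three. It assumes a static site in ./site served by nginx; the names and paths are hypothetical:

FROM ubuntu:14.04

# RUN: install everything the application needs inside the image.
RUN apt-get update && apt-get install -y nginx

# COPY: pull the site source from the development machine into the image.
COPY ./site /usr/share/nginx/html

# CMD: fire up the application whenever a container is started from this image.
CMD ["nginx", "-g", "daemon off;"]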

Then build an image from your Dockerfile (this copies the source from your dev machine into the image and runs the build scripts; the end product is executed later, when a container starts):

$ docker build -t <name of image> .

Once built, an instance of the image (called a container) is created to run your application. The following creates a container locally:

$ docker run --name <name of container> -d -p 80:80 <name of image>

Where,

  • -d runs the container in ‘detached’ mode. Containers running in the background are perfect for long-running stuff like web apps.
  • -p 80:80 publishes the container port (80) to the host port (80). It takes the format hostPort:containerPort.

On an OSX development machine, point your browser at the IP address returned by boot2docker ip to see your running site (i.e. 192.168.59.103 in this example).

Images can be rebuilt by running docker build again. However, you will need to remove the running container before you can use the same name again:

$ docker rm -f <name of container>

Where,

  • -f forces the container to stop before it is removed.

All current containers can be listed with:

$ docker ps -a

All the locally available images can be listed with:

$ docker images

Other documents on developing with docker:

Piecing it all together

  1. Define how to build and run your application in a Dockerfile.
  2. Edit source code as you normally would on your OSX machine.
  3. Build an image from your source code changes (docker build). This is surprisingly fast!
  4. Run your image as a container, and view the results on the boot2docker ip address.
  5. Rinse and repeat (just remember to delete the old container so that you can re-use the name).
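
In terminal form, one lap of that loop boils down to three commands (using a hypothetical image and container both named myapp):

$ docker rm -f myapp
$ docker build -t myapp .
$ docker run --name myapp -d -p 80:80 myapp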

Deploying ‘dockerized’ applications is a little more involved. I will cover this (and how to prepare a server with chef) in a follow-up soon.

 

What are the best SD cards to use in a Beagle Bone Black?

30 Jan 2015

MicroSD cards have a limited life, and the more you read and write to them, the shorter their lifespan. In a BeagleBone Black this makes things a little tricky: the microSD card gets a much tougher workout than it would in something like a digital camera.

A photo of a SanDisk extreme microSD card installed in a BeagleBone Black.

Up until now, I have always just picked up a cheap microSD card to go with my BeagleBone Black. Unfortunately, these microSD cards were often low quality and didn’t last very long, with some failing in as little as a month.

So now I only get ones that feature wear levelling. The cheap microSD cards don’t have any wear levelling, and the BeagleBone Black gets into situations where certain areas of the microSD card are written to over and over again until they wear out and fail. The BeagleBone then comes along, tries to use the same worn-out area again, and promptly chokes. All despite other areas of the microSD card being hardly (or never) used at all!

The more expensive cards with wear levelling won’t just keep pummelling the same spot on the disk over and over again. Instead, they try to spread the wear out over the whole disk. A little like rotating the tyres on a car, wear levelling ensures that each part of the disk decays at about the same rate.

An Illustration showing difference in microSD card deterioration with and without wear levelling

I also get microSD cards with way more space than I need, at least 8GB. This further increases the longevity of the card by increasing the total ‘surface area’ that will eventually wear away. With wear levelling, more free space means a longer-lasting microSD card.
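
Some rough, hypothetical numbers illustrate the point (real endurance varies wildly between cards). Say each block of flash survives around 10,000 erase cycles:

Without wear levelling: one hot 4MB spot x 10,000 cycles ≈ 40GB of writes before failure.
With wear levelling: 4GB of free space x 10,000 cycles ≈ 40TB of writes before failure.

At 1GB of writes per day, that is the difference between a card dying in roughly 40 days and one that should outlast the BeagleBone itself.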

MicroSD cards compatible with the BeagleBone Black:

 

How to store sensitive information (like passwords) in your chef kitchen.

22 Jan 2015

Configuration management often contains information that shouldn’t be floating all over the place in plain text (like your root db credentials).

Within Chef, metadata can be stored in a role, cookbook, attribute or data bag. However, only a data bag can be encrypted, making it perfect for storing sensitive information.

If you are using knife solo you will need the following plugin installed to get started:

gem install knife-solo_data_bag

Next, set an editor environment variable so knife can spin up your text editor of choice when editing encrypted data bags:

vim .bash_profile
export EDITOR=vim

A key (password) locks and unlocks your encrypted data bag. I like to define this in a file named ‘data_bag_key’:

echo 'super secret password' > data_bag_key

Make sure you include your data_bag_key file in your .gitignore. No point in locking everything up if you end up taping the key to the front of the lock.
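
For example:

echo 'data_bag_key' >> .gitignore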

To create an encrypted data bag with the knife command:

knife solo data bag create credentials production --secret-file 'data_bag_key'

Editing an encrypted data bag is also done with the knife command:

knife solo data bag edit credentials production --secret-file 'data_bag_key'

Viewing encrypted information is done with:

knife solo data bag show credentials production --secret-file 'data_bag_key'

Using encrypted data bag information within your cookbooks is a little more involved, so I created a little utility cookbook, ‘credentials’, to make things easier. Place ‘recipe[credentials]’ at the start of your run_list to decrypt the credentials data bag; the resulting metadata is automatically added to the node attributes.

With some sensible data bag naming, cookbook default attributes (like passwords) can be overridden.
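
For example (the attribute names here are hypothetical), a credentials item like the one below would override a cookbook’s default node['mysql']['server_root_password'] attribute once ‘recipe[credentials]’ has run:

{
    "id": "production",
    "mysql": {
        "server_root_password": "correct-horse-battery-staple"
    }
}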

 

Learning to Learn

15 Jan 2015

For a while now I have been trying to learn a little about functional programming. I was lucky. I went to some free courses run by NICTA and Tony Morris. I was also able to hassle Tony about concepts I didn’t understand. It was one of the most confronting experiences of my professional life. The subject material wasn’t easy.

Then last year, a couple of former colleagues launched a ‘MOOC’, The Science of Everyday Thinking.

It was an excellent course that helped me debug the way I thought about the world. Episode five, ‘Learning to Learn’ stood out the most. Actually it made me stop.

Shit. I am an expert beginner.

During my time at university I managed a bit of a fluke. By accident, I developed effective learning techniques (something not everyone does). I didn’t know it at the time, and I certainly didn’t appreciate it.

I found my internal monologue deriding them (‘That ain’t gonna help you.’) whenever I saw people furiously highlighting textbooks and transcribing lectures. By the end of my university degree I had become thoroughly cocksure. I thought I could learn anything. Give me three weeks, the course material and past exams, and I could do better than pass anything you threw at me.

By the end of my degree, I had refined The Study Ritual. Two or three weeks out from the exam period I would withdraw from the world and begin.

It started with a timetable. Two or three practice exams for each of my courses, interleaved over those weeks. Authentic exam conditions. Usually the kitchen table, timed, with only what I could take into the real exam. I can assure you, I ‘failed’ every first practice exam I did during The Study Ritual.

Each day, I had forty minutes of revision allocated to each subject, with a twenty minute break in-between. At the start of The Study Ritual, I revised by creating course summaries. Towards the end, I did basic retrieval practice. This practice was either working through example problems (like the ones I flunked in practice exams) or that classic ‘cover, write and check’ technique for remembering stuff.

It took a fair amount of discipline and work, but by the time an exam rolled around, I was confident. A few butterflies in the belly, but I didn’t seem stressed compared to some of my peers. It worked well, too: my performance at university steadily improved as I refined The Study Ritual.

So what happened? Why was I finding this functional programming stuff so damn hard?

During a professional career, you often optimise for performance, not learning. As a software engineer, I am evaluated on how quickly I can build quality systems, not on how well I understand the material I use to build them.

The problem? I was able to build systems well enough to get a few promotions, but I wasn’t building underlying knowledge fast enough. The way I build knowledge in my professional career is not as effective as The Study Ritual. Functional programming was the first desirable difficulty (the learning equivalent of no pain, no gain) I had faced in five years.

In many ways, I hope my monastic engineering experience will address this problem. Will I be able to develop better learning techniques that are more compatible with the time pressures of professional work? Will the one month projects I build at the end of my engineering experience be better than those at the start?

 

Programming Nirvana

13 Nov 2014

Sunset over Thursday Island, QLD, Australia.

Today I ‘finished’ up at my day job. Why the quotation marks? We’ll get to that. But first, a little context, as this is a day I have been working toward for five years.

I have always been a restless employee. Despite working in great environments and on cool projects, I was constantly chasing the technology dragon. I always had a side-project or two on the go outside of work.

Often I had lofty ambitions for these side-projects; I wanted to create a business. My business. Something I could control and direct, something that would allow me to find a greater purpose.

When pursuing a startup or business, thoughts of fortune, fame or freedom are often lurking in the back of one’s head. During each side-project I would think, ‘This is the one, this is how I will catch that elusive break’. But it never happened - all I appeared to do was notch up an impressive collection of failures and setbacks.

It wasn’t till a couple of years ago that I realised that each new opportunity I picked up was a direct result of a quirky side-project. Irrespective of how I felt I had failed, I would land a new gig and get to work with new technologies alongside even more skilled people. But I was still chasing the technology dragon. It wasn’t enough. There was always ‘the next tech’ to use and another bunch of talented people I could be learning from.

I wasn’t notching up failures. I was slowly ‘grinding’ and levelling up on a strange massively multiplayer online game called professional software development.

With this epiphany, I decided to dig deeper into my own startup dreams. Was having my own business what I wanted? As it turns out, no. I was after the freedom to create.

I hinted a year ago that I have an interest in exploring monastic engineering experiences. I want to bang away on my keyboard and create, just for fun, without any distractions. Not to create a business, not to make products that people want, not to discover something new, not to publish a paper. But to indulgently explore oddities that perhaps only I find interesting.

So I have set myself a goal to build one project a month for twelve months. The only rule? I must be able to build it in a month. If I manage to build twelve projects, only to have someone ask, ‘Clinton why on earth did you … ?’ and my only answer is, ‘Because I could’, I will have achieved what I set out to do.

Armed with my insight and plan, I went to my boss to try and resign. When he heard what I was doing, he looked me in the eye and offered me a job: “Will you stay on and work casual for me?” My boss made it sound like I was doing him a favour, but it was the other way round. He was offering a safety net, to work as much or as little as I needed. Why? Because he is a bloody good boss, and was starting me on my journey by removing a distraction: the fear of losing a somewhat valuable career.

Phew. It has been a ton of work to get here. To get to the beginning: a long summer break before starting my project-a-month pursuit of programming nirvana in the new year. I have no idea what will unfold, but it is going to be fun finding out.

To everyone who helped get me here. Thank you.

 

Copyright Clinton Freeman © 2007-2015