3D printing and other software curiosities
by Clinton Freeman

Image Stabilised Motion Detection

28 Jan 2014

The Gasworks project is an interactive art installation I’ve been involved with, which loosely mimics brain cells as clusters of lights. Webcams are used to detect motion and organically alter lighting sequences of ten different sculptures (or neurones), each suspended on steel cabling above a public amphitheatre.

image

Artist Michael Candy wanted the installation and lighting sequence to look as analog as possible, with the whole thing reacting to the speed of any detected movement. Unfortunately, this ruled out the simple PIR (passive infrared) sensors typically found in security systems: they have a single output pin that is either off (no motion) or on (motion detected), and couldn’t give us any insight into how much activity was associated with the motion they detected.

I eventually settled on using a webcam and a computer vision algorithm called optical flow, a technique often found in optical computer mice. I used the implementation found in OpenCV, which was really easy to integrate into Golang with cgo.

Optical flow returns an array of vectors, one for each pixel captured by the webcam. The magnitude and direction of each vector indicate how that pixel “moves” compared to the previous frame in the video stream.

image

To calculate the magnitude of a detected movement, I summed the movement vectors coming out of the optical flow algorithm, averaged them across the frame, took the overall length (magnitude), and scaled it down so that frames with loads of movement had an ‘energy’ approaching 0.1, while frames with no movement had an ‘energy’ of 0.0.

func calcDeltaEnergy(flow *C.IplImage, config *Configuration) float64 {
    var i C.int
    var dx, dy float64

    // Accumulate the change in flow across all the pixels.
    totalPixels := flow.width * flow.height
    for i = 0; i < totalPixels; i++ {
            value := C.cvGet2D(unsafe.Pointer(flow), i/flow.width, i%flow.width)
            dx += math.Abs(float64(value.val[0]))
            dy += math.Abs(float64(value.val[1]))
    }

    // average out the magnitude of dx and dy across the whole image.
    dx = dx / float64(totalPixels)
    dy = dy / float64(totalPixels)

    // The magnitude of accumulated flow forms our change in energy for the frame.
    deltaE := math.Sqrt((dx * dx) + (dy * dy))
    fmt.Printf("INFO: f[%f] \n", deltaE)

    // Clamp the energy to start at 0 for 'still' frames with little/no motion.
    deltaE = math.Max(0.0, (deltaE - config.MovementThreshold))

    // Scale the flow to be less than 0.1 for 'active' frames with lots of motion.
    deltaE = deltaE / config.OpticalFlowScale

    return deltaE
}
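
The Configuration value passed into calcDeltaEnergy only needs the two tuning knobs used above. A minimal sketch of what it could look like - the field names come straight from the code, while the comments are purely illustrative and the actual values need tuning per installation:

type Configuration struct {
	// MovementThreshold is subtracted from the frame energy, so flow
	// magnitudes below it get treated as a 'still' frame with an energy of 0.0.
	MovementThreshold float64

	// OpticalFlowScale divides whatever energy remains, so that even very
	// 'active' frames end up with an energy of less than 0.1.
	OpticalFlowScale float64
}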

It was here that we ran into a little problem. The sculptures are suspended on steel cable rigging, and sway in the wind. The algorithm was getting confused, and a gentle sway in the wind would be falsely detected as people moving around, thus changing the lighting sequence.

I first tried sticking an accelerometer to the webcam, and using its readings to compensate for the camera swaying in the wind. This turned out to be a “Bad Idea”™, mostly because of the latency between getting the accelerometer readings and matching them with the right frame of video. It also added a considerable amount of complexity, and needless to say everyone was relieved when I worked out a software approach that didn’t need any additional hardware.

I realised that when the webcams and sculptures are still, only parts of the image have vectors indicating detected motion. However, when the sculptures and cameras sway in the wind, the whole image has vectors, and they share a general trend: the direction in which the camera is moving.

image

To work out the general direction in which the camera was moving, and to image stabilise the optical flow algorithm, I calculated the mean movement vector for the frame and subtracted it from each per-pixel movement vector (clamping at zero).

func calcDeltaEnergy(flow *C.IplImage, config *Configuration) float64 {
    var i C.int
    var dx, dy, mx, my float64

    totalPixels := flow.width * flow.height

    // Determine mean movement vector.
    for i = 0; i < totalPixels; i++ {
            value := C.cvGet2D(unsafe.Pointer(flow), i/flow.width, i%flow.width)
            mx += float64(value.val[0])
            my += float64(value.val[1])
    }
    mx = math.Abs(mx / float64(totalPixels))
    my = math.Abs(my / float64(totalPixels))

    // Accumulate the change in flow across all the pixels.
    for i = 0; i < totalPixels; i++ {
            // Remove the mean movement vector to compensate for a sculpture
            // that might be swaying in the wind.
            value := C.cvGet2D(unsafe.Pointer(flow), i/flow.width, i%flow.width)
            dx += math.Max((math.Abs(float64(value.val[0])) - mx), 0.0)
            dy += math.Max((math.Abs(float64(value.val[1])) - my), 0.0)
    }

    // average out the magnitude of dx and dy across the whole image.
    dx = dx / float64(totalPixels)
    dy = dy / float64(totalPixels)

    // The magnitude of accumulated flow forms our change in energy for the frame.
    deltaE := math.Sqrt((dx * dx) + (dy * dy))
    fmt.Printf("INFO: f:%f m:[%f,%f]\n", deltaE, mx, my)

    // Clamp the energy to start at 0 for 'still' frames with little/no motion.
    deltaE = math.Max(0.0, (deltaE - config.MovementThreshold))

    // Scale the flow to be less than 0.1 for 'active' frames with lots of motion.
    deltaE = deltaE / config.OpticalFlowScale

    return deltaE
}

It took a bit of tweaking, but in the end the stabilising approach works great, compensating for all but the most violent gusts of wind. The structural engineers have predicted the sculptures will experience 80cm of lateral movement in 100km/h wind gusts (a 1 in 5 year storm event). I’m actually really keen to see how the sculptures sense and react to a big subtropical storm - I reckon it would be a pretty awesome light show!

 

Using Golang to connect Raspberry PIs and Arduinos over serial

12 Jan 2014

The code running on the Raspberry PIs within the Gasworks project (an art installation that loosely mimics brain cells as clusters of lights) is all written in Golang, while the hardware architecture for each of the neurones has a Raspberry PI sending commands to an Arduino over serial. This communication link was one of the first things I prototyped for the project.

image

The venerable Dave Cheney maintains unofficial ARM builds of Go that are compatible with the Raspberry PI. So the first step is to grab one of those and follow along with the Golang installation instructions.

For serial communication I used the huin fork of the goserial library, mainly because the code had a far more idiomatic Go style than the others I looked at.

Opening up a connection to the Arduino is a case of hunting around for the USB device that is most likely the Arduino:

package main

import (
	"github.com/huin/goserial"
	"io/ioutil"
	"strings"
)

// findArduino looks for the file that represents the Arduino
// serial connection. Returns the fully qualified path to the
// device if we are able to find a likely candidate for an
// Arduino, otherwise an empty string if unable to find
// something that 'looks' like an Arduino device.
func findArduino() string {
	contents, _ := ioutil.ReadDir("/dev")

	// Look for what is most likely the Arduino device
	for _, f := range contents {
		if strings.Contains(f.Name(), "tty.usbserial") ||
			strings.Contains(f.Name(), "ttyUSB") {
			return "/dev/" + f.Name()
		}
	}

	// Have not been able to find a USB device that 'looks'
	// like an Arduino.
	return ""
}

func main() {
	// Find the device that represents the arduino serial
	// connection.
	c := &goserial.Config{Name: findArduino(), Baud: 9600}
	s, err := goserial.OpenPort(c)
	if err != nil {
		log.Fatal(err)
	}
	defer s.Close()
}

The thing that tripped me up when prototyping the communication code was that I wasn’t able to immediately pump data down the serial connection to the Arduino, unless I had the Arduino serial monitor open.

When you make a serial connection to an Arduino, the Arduino automatically resets (unless it is a newer Arduino Leonardo), similar to what happens when you press the reset button. It then takes about a second for the bootloader on the Arduino to do its thing and get into a state where it can accept data over the serial port.

I worked around this in Golang a little inelegantly by sleeping for a second; however, it is possible to disable the Arduino reset on serial connection with a simple hardware hack.

func main() {
	// Find the device that represents the Arduino serial connection.
	c := &goserial.Config{Name: findArduino(), Baud: 9600}
	s, _ := goserial.OpenPort(c)
	
	// When connecting to an older revision Arduino, you need to wait
	// a little while it resets.
	time.Sleep(1 * time.Second)				
}

The communication protocol I used between the Raspberry PI and Arduino was very simple. Each command is five bytes, the first byte being the command identifier, with the four remaining bytes reserved for a single mandatory float argument (that could be ignored if necessary on the Arduino).

Packaging up commands and sending them over the wire was pretty easy with the encoding/binary package bundled into the Golang standard library. It was a case of encoding the argument into a byte buffer, then looping over the bytes of both the command and the argument buffer, writing them to the serial port:

// sendArduinoCommand transmits a new command over the nominated serial
// port to the arduino. Returns an error on failure. Each command is
// identified by a single byte and may take one argument (a float).
func sendArduinoCommand(command byte, argument float32, serialPort io.ReadWriteCloser) error {
	if serialPort == nil {
		return nil
	}

	// Package argument for transmission
	bufOut := new(bytes.Buffer)
	err := binary.Write(bufOut, binary.LittleEndian, argument)
	if err != nil {
		return err
	}

	// Transmit command and argument down the pipe.
	for _, v := range [][]byte{[]byte{command}, bufOut.Bytes()} {
		_, err = serialPort.Write(v)
		if err != nil {
			return err
		}
	}

	return nil
}

Putting it all together within the main function becomes:

func main() {
	// Find the device that represents the arduino serial connection.
	c := &goserial.Config{Name: findArduino(), Baud: 9600}
	s, _ := goserial.OpenPort(c)
	
	// When connecting to an older revision Arduino, you need to wait
	// a little while it resets.
	time.Sleep(1 * time.Second)				
	sendArduinoCommand('a', 1.0, s)
}

Picking this data up on the Arduino side of the serial connection is done by reading the first command byte and then using a union to decode the four argument bytes back into a float (this works because the Go side writes the float out little-endian, matching the Arduino’s byte order):

typedef struct {
  char instruction; // The instruction that arrived by serial connection.
  float argument;   // The argument that came with the instruction.
} Command;

/**
 * ReadCommand sucks down the latest command from the serial port,
 * returns {'*', 0.0} if no new command is available.
 */
Command ReadCommand() {
  // Not enough bytes for a command, return an empty command.
  if (Serial.available() < 5) {
    return (Command) {'*', 0.0};
  }

  union {
    char b[4];
    float f;
  } ufloat;

  // Read the command identifier and argument from the serial port.
  char c = Serial.read();
  Serial.readBytes(ufloat.b, 4);

  return (Command) {c, ufloat.f};
}

Now, just make sure you set the same baud rate on the Arduino side of the connection, and start reading off commands from the serial connection:

/**
 * Arduino initialisation.
 */
void setup() {
  Serial.begin(9600);
}

/**
 * Main Arduino loop.
 */
void loop() {
  Command c = ReadCommand();
  
  // Do something awesome with the command. Like represent the state of a 
  // simulated neurone as a lighting sequence.
}

For a full example of how this all works, you can check out the Raspberry PI code and Arduino code for the Gasworks project on github. Enjoy!

 

Neurones

23 Oct 2013

I’ve been a bit distracted lately, for good reason - I’ve been busy working with BESTCINCO, Michael Candy and Meagan Streader to design and build a series of interactive sculptures that will feature in the redevelopment of the heritage-listed ’Gasworks’ site here in Brisbane.

image

One of the best things about working with a group of creative people is brainstorming and thinking of new ways to stretch technology. “What if we made the lights mimic a brain?” I suggested, “We could get them to ‘fire’ like a neurone in response to activity they detect nearby.” “Can we do that?” was the reply. So here I am, cramming out some code that will turn the Gasworks into a simulated brain that senses and reacts to people using the space.

Suspended within the centre of the Gasworks structure will be ten sculptures, each containing a cluster of lights, a Raspberry PI, a webcam, an Arduino and a host of electronics.

image

Each of these sculptures loosely simulates a brain neurone. The webcams act as the ‘dendrites’, continuously monitoring the space underneath each sculpture for physical activity. Any detected activity is translated into an energy level that drives the lighting sequence for the attached sculpture (neurone). This energy level is sent from the Raspberry PI to the Arduino, where it’s converted into a lighting sequence. Sculptures with low energy levels have a dim, slow, random light sequence, while sculptures with higher energy levels have brighter, faster lighting sequences.

The energy level of each neurone continues to increase as movement is detected, until it reaches a threshold and ‘fires’. When this occurs, the sculpture broadcasts part of its energy to the adjacent sculptures, which in turn brighten and speed up their lighting sequences. Meanwhile the neurone that fired ‘resets’, dropping its energy level back to zero.
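
To make the behaviour concrete, here is a rough sketch of that accumulate, fire and reset cycle in Go. It is not the installation code - the type, the threshold and the fraction of energy shared with neighbours are all made up for illustration - but it mirrors the description above:

// Neurone is a hypothetical model of a single sculpture.
type Neurone struct {
	Energy     float64    // Current energy level, drives the lighting sequence.
	Threshold  float64    // Energy level at which the neurone 'fires'.
	Neighbours []*Neurone // Adjacent sculptures that receive broadcast energy.
}

// Accumulate adds the energy detected by the webcam and fires the neurone
// once it passes the threshold.
func (n *Neurone) Accumulate(deltaE float64) {
	n.Energy += deltaE
	if n.Energy < n.Threshold {
		return
	}

	// Fire: broadcast part of the energy to the adjacent sculptures...
	if len(n.Neighbours) > 0 {
		share := (n.Energy * 0.5) / float64(len(n.Neighbours))
		for _, adjacent := range n.Neighbours {
			adjacent.Energy += share
		}
	}

	// ...and reset the neurone that fired back to zero.
	n.Energy = 0.0
}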

Below is a video of a single sculpture/neurone prototype, demonstrating how it detects movement, transitions from low energy to high, and finally ‘fires’.

With a load of hard work and a bit of luck, I hope we get some emergent behaviour (i.e. effects I have not explicitly coded) arising when people start interacting with the sculptures. The work will be permanently installed at the site in Newstead from the end of the year.

 

API Engine intrigues developers

24 Sep 2013

image

Eleven months ago, a beer and a handshake kickstarted one of Brisbane’s most intriguing startups, API Engine.

Greg Davis, former engineering director of a Brisbane-based software company, was building a fairly hefty API that was a chore to document. His co-founders developed a nifty way to make documenting APIs easier, and over drinks late last year, they reached a gentlemen’s agreement to self-fund and bootstrap the idea into a fully fledged product.

Like most good development tools, API Engine was borne out of a need, in this case API documentation. But it didn’t stop there. Davis and his co-founders soon found they needed to glue together disparate systems, and provide consistent API access – things which are not well-served by the API tools that come out of the box in web development frameworks such as Rails and Play.

The result is a composition utility for software developers that has found a market niche: since December last year more than 2,500 people have signed up to be beta testers and Davis says API Engine frequently receives requests from software development companies to use their system.

API Engine plans to launch the beta offering later this year. When it does, the team will target two market segments: internal APIs found in large enterprises (such as financial institutions), and the plethora of RESTful APIs publicly available on the internet. A small group of local developers, bootstrapping a company that aims to take the legwork out of documenting and combining APIs is impressive. But once you get under the hood of API Engine, the real intrigue begins.

I’m not going to lie: I was a bit of a functional programming skeptic. It sounded costly, both in development effort and in memory and CPU consumption. And having never seen or heard of Haskell, the purely functional programming language, being used in a production environment, I had written it off as another academic curiosity.

I was using this very argument against functional programming when I first learned about API Engine some months ago. It was the first time I’d heard of Haskell being used in production. But since then, my ambivalence towards this form of programming has slowly been replaced by a newfound appreciation of its real-world advantages.

Davis says size was one of the reasons behind the decision to use Haskell: “The code base itself is no larger than 5,000 lines of code. In fact, it is probably still less than 3,000.” If they had developed API Engine in a more traditional language such as Java, it could have been as much as five times larger.

Not only is their Haskell code small and well structured, it is also robust. “When you come into a new system, you wonder which bits are robust and which aren’t. And Haskell was new, fairly new to me… As it turns out, I never go looking at the Haskell, because that is the bit that never breaks,” Davis laughs.

Choosing Haskell to power the back end of API Engine was a decision influenced not only by technical considerations, but deep principles held by the founders and a sense of responsibility to the next generation of software developers. In Davis’ case, a sense of embarrassment: “I have children, and I am embarrassed to teach them things that our industry is doing today.”

This embarrassment stems from fundamental development practices that currently permeate the industry – how problems are analysed and the awkward way in which code is composed by multiple developers. These are things the team at API Engine thinks can be partially solved by functional programming and Haskell.

Along with enabling developers to build great APIs, Davis and his co-founders are aiming to create a workplace with a development culture steeped in functional programming that they would be proud to share with their children.

The ultimate goal is to have a workplace that will have a positive impact on the general software culture in Brisbane, by promoting new approaches to problems, and challenging the status quo.

This is a repost of an article I wrote for The Tech Street Journal.

 

Brisbane’s brain drain - Coen Hyde

24 Sep 2013

Each year, Brisbane generates a surprisingly large, diverse pool of software developers. However, stories about outstanding people who outgrow Brisbane and move onto greener pastures are all too familiar. Some are headhunted, others relocate to start businesses or seek employment elsewhere. While you hear the occasional story of someone who’s returning to Brisbane after a successful stint overseas, on balance it seems the city is losing a large number of experienced developers who would otherwise be pillars of local technology companies. This series explores why all our friends are leaving Brisbane. We profile some of the talented developers who have left on the promise of a better career.

Coen Hyde

I first met Coen a couple of years ago at ActionHack. At the time, Coen was implementing a new layout for the site he co-founded: Wikifashion.

It looked great. I was impressed. Coen casually dropped some very respectable traffic stats (currently pulling in 50,000 uniques a month), but he was nonchalant about future plans: “I dunno, maybe throw up some ads or something?”

Fast forward a few years and the former Blue Dog Frontiers founder and Kondoot engineer now personifies the “California or bust” mentality. At the end of 2012 he packed his bags and moved to San Francisco with no external investment and no incubator or accelerator lined up. He moved simply because he figured that in San Francisco he would get better value for his savings as he burnt through them starting a new business.

What he did have was technical ability, plenty of experience with early stage businesses and a large captive audience with Wikifashion, where advertising supported Popbasic development. Enough to make it work.

In San Francisco, Coen bootstrapped Popbasic, a company that sells limited edition “micro” fashion collections. Popbasic’s service includes a surprise item in each collection, courtesy of one of their partner companies. In return, the partner gets exposure to a fashion-savvy audience and Popbasic’s customers add the latest fashions to their collections. Popbasic is already breaking even — an achievement Coen feels wouldn’t have been possible in Brisbane, where doing business is more difficult. “It’s a cultural thing,” he says. “The biggest benefit is that other companies are willing to work with you, even though you’re a ‘startup’.”

Popbasic has had greater success dealing with US companies to package its surprise items than with Australian counterparts. And it’s not just working with other companies. The US media is easier to deal with too: “Newspapers are also more willing to go out on a limb and cover you first, before someone else: we discovered that first with Wikifashion and again with Popbasic.” This seems to reflect a greater willingness in the US to give new technologies and companies a chance — ironic given Australia is supposedly the land of the fair go. It’s a cultural gulf that, for one talented developer at least, is enough to keep him away from home.

When asked about the type of opportunities that would need to exist in Brisbane for him to consider moving back, Coen is frank: “I don’t think that is possible. Maybe if the opportunities existed in Brisbane to begin with I may not have left, but Silicon Valley is unrivalled in the ecosystem it provides for young internet based companies. I don’t see myself starting a mine anytime soon, so I’ll probably stay over here. Though I’ll definitely come back every once in a while and say hi!”

This is a repost of an article I wrote for The Tech Street Journal.

 

An open letter to women in technology

18 Sep 2013

image

To be honest, I had never really stopped and taken the time to appreciate how difficult things are for women working in technology. I mean, I had always attended co-ed schools and mixed with the opposite gender. But as I got older and more involved with technology, the demographic slowly but steadily changed. It started in high school, when we were gradually empowered to self-select our fields of study. For me, my timetable became increasingly littered with mathematics, physics and chemistry, while fewer and fewer girls shared the same courses.

Initially the change was so subtle, I am embarrassed to say I didn’t even notice. In fact, the gender disparity in the sciences only became apparent on my first day at university. I was crammed into a lecture theatre overflowing with several hundred young minds eager to become mechanical, mechatronic, aeronautical and aerospace engineers. How many women shared these ambitions? So few they could almost be counted on one hand - six.

I’m ashamed to say that a younger me simply accepted the gender stereotype: girls are more interested in the arts, health, and the humanities, while boys are into maths, science and engineering. This was so profoundly and utterly ingrained in my psyche that it took a life changing event for me to really appreciate how flawed my thinking was at the time.

About nine months ago I became the father to a little girl, lovingly nicknamed ‘The Kins’. Almost instantly, society began to bombard her with images and objects, aiming her away from science and technology and towards those more traditional ‘female’ vocations. It takes considerable effort to keep the gaggle of plastic toys at bay: dolls with pink hair brushes, purple vacuum cleaners and pastel-coloured tea sets. Some days I come home and it looks like a drunken unicorn has stumbled in and vomited all over the place.

I realised that our society had somehow managed to create an environment that didn’t do much to encourage women to take an interest in science, technology, engineering or maths. I realised that those six young women in my university class, who had managed to make their way into a lecture theatre filled with other hopeful engineers, were probably the most remarkable of all of us. The guys had it easy: we were actively encouraged by society into these pursuits, and nobody blinked an eye when we walked into that lecture theatre. The women on the other hand? They were so deeply interested in engineering they were able to ignore social norms and doggedly pursue a career that was alien to most of their female peers.

So to all the women working in technology - the programmers, the engineers, the scientists, the mathematicians - all of you. I can’t thank you enough. You are spearheading a modern woman’s liberation movement. Sure, you are not burning bras or adopting the iconography that popularised the movement in the 60’s. Instead, despite the horror stories, you’re out at conferences and actively engaging with male dominated communities. You build amazing things, and are actively creating an environment that will make it just that little bit easier for other women to follow you into the industry.

I understand that The Kins is likely to be fascinated by areas that are completely foreign to me: strange music, strange movies, strange interests. I guess that means as The Kins gets older I am going to need to try and wrap my head around rugby, interpretive dance, Iggle Piggle or whatever else she finds fascinating. But the fact that it will be just that little bit easier for her to consider a career in technology? I can’t thank you enough.

Clinton Freeman

Dad, Maker, Software Engineer.

 

Cowboy Jukebox, Distributed Musical Instruments and FirefoxOS

18 Aug 2013

I spent last weekend at the excellent CampJS, with my good friend Paul Theriault who works for Mozilla securing FirefoxOS. We spent much of the weekend turning FirefoxOS mobile devices into a collaborative, location-based musical instrument: Cowboy Jukebox.

The premise of Cowboy Jukebox is fairly simple: you and your other ‘band members’ are carrying mobile devices, each a small piece of a larger globally distributed ‘musical instrument’. By tracking the location of each device via GPS, the whole instrument produces different sounds based on band members’ locations. All participants share the same auditory experience, and any band member can twist and change what the others can hear.

We still have more work to do to round out the build. But, thanks to the insanely awesome work that the folk at Mozilla have put into FirefoxOS (more on that later), it should work on any mobile device with a modern browser.

Cowboy Jukebox is broken down into two main components:

  • A lightweight server built in Ruby on Rails and running on Heroku. It gathers and distributes the GPS coordinates of all the band members as JSON. You can find the source for the server lurking over on github.
  • A Javascript client that polls the inbuilt location services using the HTML5 geolocation API, sends the device’s location to the server, and grabs the latest locations of all the band members at the same time.

The locations of all the band members are then processed to obtain the total distance moved (along with the changes in latitude and longitude), which is used to synthesize sound with the awesome timbre.js library. The source code for the client is also on github.
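
The actual client does this in Javascript, but to give a feel for the maths, one way to turn a stream of GPS fixes into a ‘distance moved’ figure is to sum the great-circle distance between successive fixes. A small illustrative sketch in Go (the haversine formula and all the names here are my own, not lifted from the project):

package main

import (
	"fmt"
	"math"
)

// haversineMetres returns the great-circle distance in metres between two
// latitude/longitude fixes, using the haversine formula.
func haversineMetres(lat1, lon1, lat2, lon2 float64) float64 {
	const earthRadius = 6371000.0 // Mean radius of the Earth in metres.

	toRad := func(deg float64) float64 { return deg * math.Pi / 180.0 }
	dLat := toRad(lat2 - lat1)
	dLon := toRad(lon2 - lon1)

	a := math.Sin(dLat/2)*math.Sin(dLat/2) +
		math.Cos(toRad(lat1))*math.Cos(toRad(lat2))*math.Sin(dLon/2)*math.Sin(dLon/2)

	return 2 * earthRadius * math.Asin(math.Sqrt(a))
}

func main() {
	// Two GPS fixes a short wander apart (illustrative coordinates only).
	fmt.Printf("moved %.1f metres\n", haversineMetres(-28.2316, 153.2711, -28.2319, 153.2715))
}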

Seeing as FirefoxOS is essentially a web browser, and apps are identical to applications built for the web, all we had to do to allow installation was create a manifest file. This was dropped into the FirefoxOS simulator and we were away, riding the crimson fox to bleeping and blooping freedom.

In the past I have done a reasonable amount of Android development, building a few different site-specific theatrical works using cheap handsets, GPS and a chunk of Java.

Wow. Not only is developing for FirefoxOS a breath of fresh air, but, for handsets in the sub $100 range, FirefoxOS powered devices are simply amazing. The user experience is almost on par with a high-end handset like the Nexus 4 or Apple’s iPhone 5, and completely overshadows a sub-$100 Android phone. I am also a little attached to the brightly coloured cases that I lovingly dubbed ‘borange’.

As with everything else when developing software, it wasn’t all happy days. We ran into a few little stumbling blocks while hacking away on the devices:

  • When ‘pushing’ the app to the device, the icon would often not appear on the homepage along with all the other apps. The app had still been pushed to the device, but we needed to search the phone to find it.
  • We had a few minor jittering issues with buffering sound on the device. We might have been pushing these handsets to the limit, or we may just need to tweak timbre.js a bit.
  • ‘Curling’ or accessing remote JSON sources felt a little clumsy. Considering everything is running in a browser on FirefoxOS, we had to work on the server a bit to get around the XHR same origin policy. While this makes sense for ‘regular’ browsers, I felt it was a little restrictive considering apps running on FirefoxOS are almost never going to be from the same origin.
  • I missed being able to debug directly on the device. Debugging on the simulator is good, as is being able to use the developer tools built into Chrome and Firefox (P.S. the dev tools in Firefox have really caught up to Chrome). Still, for a few things, like the jittery audio, it would have been great to access handset debug info from within the FirefoxOS simulator / dev tools.

Minor quibbles aside, Mozilla should take a bow. They have done really fantastic work on FirefoxOS, and with awesome handsets available for less than $100, I think they are going to be a great option for new media artists working with mobile phones. I can’t wait for them to become more widely available and to work with them again.

PS - No payment was received from Mozilla in writing this post (though I wouldn’t say no to a handset!)

 

Javascript enlightenment at CampJS

14 Aug 2013

image

Not long ago I was pretty jaded about life as a software developer, and my colleagues could often hear me proclaiming at lunch (in between mouthfuls of leftovers from last night’s dinner), “A commune! We need a coding commune!”.

A commune? Sure, Kombi vans and bearded hippies are cool, but what on earth does that have to do with software development?

What I was looking for was something akin to a monastic engineering experience. I wanted to know what would happen if I removed all extrinsic pressures: things like pleasing stakeholders, KPIs, making money, customer discovery and all that blah. I wanted to know what it was like to bang away on my keyboard and create code, just for fun, without any distractions.

And now, having just returned from CampJS, I can say that Tim Oxley, Nigel Rausch and Geoffrey Donaldson have managed to organise the perfect monastic engineering experience.

Last weekend, over one hundred like-minded software developers charted a course to the Gold Coast, many flying in from interstate and some from as far afield as Vancouver, Canada. Then by bus or car, we ascended into the hinterland, winding our way up into Springbrook National Park (and hopefully, closer towards coding enlightenment).

Arriving at Koonjewarre in the early afternoon on Friday, we spent a couple of hours soaking up the views, socialising and meeting new people. Then, something amazing happened - people pulled out their laptops and began coding. Yes, people were coding - for fun! - on a Friday evening, after the end of the work week. I can’t think of another profession where this would happen - can you imagine plumbers at the pub on a Friday afternoon deciding to fix toilets or lay some pipes for fun?

This really set the tone for the weekend - a completely free form exploration of technology, with no pressure to do anything. For those who preferred their explorations a little more regimented, there was a schedule of amazing people presenting talks on everything from functional programming and Angular, to physical computing using Raspberry PI and Arduino.

But there was no pressure to attend sessions, and some people found themselves instead curled up under a tree reading something from the awesome book swap table. Some wanted to knit, play table tennis or throw a frisbee, while others wanted to go bush walking. Some spent the time coding on a project, and some even took up one of the organisers’ suggestions from the outset: to lie in bed and nap.

I saw people enjoying all these activities, but the workshop sessions were still packed, and no matter where you turned there were people coding, reading, learning or relaxing. While the general theme of the camp was around Javascript, that wasn’t really enforced either - some of the talks focused on supporting topics like CSS, Vim and MongoDB, and there were people hacking on projects in languages like Haskell and Elm.

Despite removing all external pressures, and with plenty of people attending talks and taking the opportunity to blow off a bit of steam by doing whatever they wanted, a huge amount of code was written over the weekend.

CampJS culminated with a demo night, with 50 or so project demos. Some people were in it for the prizes, while others were building things just because they could. One attendee demonstrated an amazing personalised news site generated from his Twitter feed, proclaiming, “This is just for my friends, because, fuck startups.”

All the demos were videoed, and I hope they turn up on the interwebs soon, but until then here is a random assortment from the night:

  • A multiplayer Snake game running on a LED matrix. Powered by Node, Raspberry PI and controlled by mobile phones.
  • The best arcade cabinet I have ever seen, assembled out of beer cartons and a Raspberry PI. As an added level of difficulty, it was put together with the only spare keyboard available - one with a Russian layout.
  • A remote sensing project, using an Arduino, an assortment of sensors, Johnny 5 and dashing.js
  • A unicorn fart piano for FirefoxOS, hilariously developed by someone in a unicorn onesie.
  • A really polished, live updating simulation of Brisbane public transport.
  • A website designed as an RSVP to a 30th birthday party, inspired by Nintendo pixel art and featuring sounds and even cheat codes.
  • A hyper scrolling, ball bearing mouse that ‘changed the life’ of the creator.
  • A flying sheep battle game written in Haskell, using functional reactive programming.

Plus so much more awesomeness that is just too hard to cram into a single article.

So what does a monastic Javascript experience look like? Have a trawl through the Google+ group (not to be outdone by hippies, some developers also sport impressive beards). You can also follow @campjsnews for news and updates on the next CampJS, and check out Colin Gourlay’s excellent account of CampJS over here.

As for me? I got to work with Paul Theriault, a friend from university who now works for Mozilla securing FirefoxOS. We turned their mobile devices into collaborative, location based musical instruments. Why? Because we could.

 

Using the new Windows 8.1 3D Printing API? You're gonna have a bad time.

28 Jun 2013

image

Two days ago Microsoft announced native 3D printing support in Windows 8.1. As someone who isn’t exactly a rabid Microsoft fanboy, I do have to tip my hat and congratulate them on integrating 3D printing support. You can watch their announcement in the Build Keynote here (skip to 1:03:00).

The only thing is, in their efforts to simplify 3D printing, Microsoft have rendered their API largely useless.

It turns out they shoehorned 3D printing into their existing 2D XPS printing pipeline, adding a common API layer between the 3D content and existing slicer / printing host software (which now gets bundled into a driver). This is a pretty obvious approach to take from a systems architecture perspective: Microsoft can reuse all the existing 2D printing infrastructure (like spooling and queuing), and push all the ‘hard’ stuff (like slicing) onto the hardware manufacturers and their drivers.

This might sound like great news if you are building an application and want to quickly add direct support for multiple 3D printers.

The only problem? This first version of the API is such a horrendously leaky abstraction, in reality the only thing it’s good for is attracting media attention.

The Windows 8.1 3D printing API abstracts away many of the common 3D printing parameters, leaving just the following four:

  • Job3DQuality: Draft, Medium and High
  • Job3DDensity: 0% (hollow) through to 100% (solid)
  • Job3DSupports: Include, don’t
  • Job3DRaft: Include, don’t

That. Is. It. And while I can appreciate that Microsoft are trying to simplify things for the average user, removing the ability to even select the output material is a massive oversight. And don’t get me started on the effect that colour and other additives have on the melting temperature of thermoplastics - quality prints require temperature control.

I’m not sure why, but it seems as though the corporate end of town are using The Oatmeal’s comic “Why I believe Printers Were Sent from Hell” as an instruction manual on how to piss everyone off. The new Microsoft 3D printing API checks off another two in the list: starting the ecosystem that breeds bundled printer software crap, and poop smears. Got PLA in your printer and the driver defaults to ABS? Bad luck Brian, you just printed a poop smear and probably jammed your printer at the same time.

 

How many generations can a RepRap reproduce before there are serious issues with print quality?

10 Jun 2013

When you get a key copied, the locksmith usually asks if the key is an original or a copy. This is because the process of copying a key is lossy; each time you make a copy from a copy, detail is lost. After a few generations the key will not open the lock, and after many generations the copied key will be blank.

image

Recently on the IRC RepRap channel, someone piped up and asked a similar question about self-replicating 3D printers: how many generations can a RepRap reproduce before there are serious issues with print quality? A couple of people chimed in with answers, which basically boiled down to the fact that with the right calibration you can always make a better printer from your existing printer.

This certainly parallels my own experience - the RepRap parts I am able to print today on my printer are vastly better than the first prints I got from my friend’s printer. It was a bootstrapping process: those initial poor quality parts got assembled into my printer, which I used to create higher quality parts to upgrade both our printers.

So it looks like the process of copying a RepRap printer from itself is not lossy at all, but rather the opposite. I don’t even know what that is called; enhancy? Case closed. Almost.

I have always had one tiny problem with my RepRap that no amount of calibration could resolve: ellipses. Circles would always come out a bit elliptical on my printer. I never noticed till I was printing some large disks for a replica hoverboard a friend was building.

image

The problem came down to the printed pulleys I was using. They were not circular, and despite how much calibration or tweaking I did, I could never print a perfectly circular set. I had been using CNC couplers on my Z-axis for a while, and decided it was time to get some decent machined pulleys to drive the X and Y axes.

The machined pulleys do their job really well - print quality is up and circles now get printed, well, circular.

RepRaps are ‘enhancy’ only if you are using high quality, machined parts for power transmission. Then you will be able to reproduce many, many generations of printers without any serious issues with print quality.

However, RepRaps are ‘lossy’ if you are using printed pulleys for power transmission. In my experience, any defects will result in elliptical pulleys that no amount of calibration can correct. These pulleys get worse and worse with each new generation, introducing alignment issues to the detriment of print quality.

So while it is true that RepRaps are self-replicating, there is a caveat - if you don’t want any ‘genetic defects’, you’re going to need some machined parts.

 
Content Ⓒ Copyright 2013 Clinton Freeman