Using Github Desktop with Bitbucket

I have been using Atlassian SourceTree to manage both my Bitbucket and GitHub repos for a couple of years now, and it has basically gotten the job done. But every once in a while I stumble over GitHub Desktop and wonder if it would work better. While SourceTree advertises that it can work with both GitHub and Bitbucket, GitHub Desktop only ever seems to support – ahem – GitHub (doh!).

More recently I have been spending more and more time in GitHub, so I finally took the bait and installed GitHub Desktop. Oh boy – it easily blew the UX of SourceTree out of the water. The installation, the initial setup, and the follow-up guided tutorial made for one of the best onboarding experiences I have had in a while. Someone really put thought into the DX (= Developer Experience) of this tool.

In addition, GitHub Desktop works just fine with Bitbucket repos. The only inconvenience is that you need to clone the Bitbucket repo to your local machine manually. Once you have done that, simply add the local folder to GitHub Desktop as a new repository, and from there on it works like a charm.
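Since GitHub Desktop has no Bitbucket integration, the clone itself happens on the command line. A minimal sketch of that manual step – the Bitbucket URL in the comment is a hypothetical placeholder, and a local bare repository stands in for the remote so the commands run anywhere:

```shell
# A local bare repo stands in for the Bitbucket remote; in real life the
# clone URL would look like https://bitbucket.org/<user>/<repo>.git
remote=$(mktemp -d)/demo.git
git init -q --bare "$remote"

# Clone it to a local working directory ...
workdir=$(mktemp -d)/demo
git clone -q "$remote" "$workdir"

# ... then point GitHub Desktop at that folder ("Add Local Repository")
ls -d "$workdir/.git"
```

From that point on, GitHub Desktop treats it like any other local repository and pushes/pulls against the Bitbucket remote.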


Move existing, uncommitted changes to a new branch in Git

How often do I find myself starting to code, only to decide – as the changes pile up – that I had better put them on a branch so as not to destabilize master? Thanks to StackOverflow I have now found what is probably the most elegant solution yet:

  1. Save the current changes to a temp stash: $ git stash
  2. Create a new branch based on this stash, and switch to the new branch: $ git stash branch XYZ stash@{0}
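The two steps above can be sketched end to end in a throwaway repo (the repo contents and the branch name feature-xyz are made up for the demo):

```shell
set -e
# Throwaway repo with one commit
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo "base" > file.txt
git add file.txt && git commit -qm "initial commit"

# Some uncommitted work that should move to its own branch
echo "work in progress" >> file.txt

git stash                                # 1. stash the current changes
git stash branch feature-xyz stash@{0}   # 2. branch off and re-apply them
```

Note that `git stash branch` creates the branch from the commit the stash was based on, checks it out, re-applies the stashed changes as uncommitted work, and drops the stash – so nothing is lost and master stays clean.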

I am completely in awe. To save you the trouble of having to dig through message threads, I am (re)publishing the solution here.

Notes from OSCON Europe 2015

Last week I had the chance to attend the OSCON Europe conference in Amsterdam. I would give the conference an overall good rating, but I missed a bit of the excitement and drumbeat (and some of the roadmap discussions) of leading Open Source foundations like Mozilla and Apache, or even Open Source companies like Docker. There was an interesting (photoshopped) teaser in the keynote on the second day by the energetic founders of the Phusion team about ‘The Business of Open Source’. I would have loved to hear more about it – but maybe it was a clever marketing ploy to see if attendees would actually go out and ask the O’Really folks about that book. Well, it worked for me 🙂

The presentations were all over the map and the quality varied greatly. Some of the coding tutorials were interesting but lost me within the first 10 minutes by excessively scrolling back and forth and not providing the code snippets via GitHub. I was impressed by Kubernetes, but asking me to create a GCloud account with a valid credit card as part of the workshop is a blatant attempt at developer trolling which I flat out refuse to follow. Both Google and CoreOS should have known better than to try that – and I am still surprised O’Reilly as conference organizer let it happen. This may be appropriate at a sales event, but not at an Open Source conference (regardless of whether you are a sponsor or not).

In general the first day was much stronger than the second one. But attending the ‘Inner Source’ presentation from PayPal made the second day all worth it. Unfortunately there are no speaker slides for this talk on the conference site, but you can read the gist of it in this InfoQ article. I will certainly follow up on that one.

Ok, so let’s get down to business and my notes. You can find speaker slides – if available – on the OSCON website.

  • Architecture as Code
    • From the developer of (a tech community website)
    • Adding the software developer as major stakeholder of software architecture
    • Drawing pictures vs modeling system
    • ‘Just enough software architecture’ -> there is a model / code gap
    • Only include architectural significant elements -> do not over specify for understanding
      • Analogy: Google Maps -> street view vs. directions
    • Write architecture as code
      • See
        • Render architecture models from code
        • Architecture map(s) allows you to navigate from different level of abstraction down to source code
        • Java specification on Github
    • If the software architecture model is in the code, it can be extracted from the code
  • Chaos Engineering (Netflix)
    • Resilience is a feature
      • Embrace failure rather than avoiding it
        • Add fail-safe patterns
        • Expect the unexpected for fall-back patterns
        • Cost of resilience is accuracy or latency
        • Tell upstream servers to use what you have
        • Use local storage in browser as last fallback
      • CAP theorem applies
        • You can only have 2 out of 3 of consistency, availability, and partition tolerance – in practice you sacrifice availability or consistency
        • Best practices
          • Split out the control plane from the workload
          • When you use a cloud provider like AWS, decide for each part whether it belongs to the product or to the control plane
          • DNS and CDN are your best friends
      • Introducing failure to test resilience
        • Failure Friday (play chaos monkey manually)
        • You cannot fix a single point of failure unless you know about it
        • Outages are unpredictable, failures can be introduced deterministically
    • Prevent propagation to avoid cascading failures
  • Pull Requests, not just for Code anymore
    • Bad culture ruins good people
    • Difference between implicit vs explicit culture
    • You need to build explicit culture
      • Use github pages as repository for artifacts of culture
        • Engineering processes and policies
        • Architecture notes in RFC style
    • Why is it different from a Wiki?
      • Anyone can change a Wiki, but there is no review or approval step
      • Lessons learned
        • Learn git (offer classes and courses)
        • Have a moderator to move things along and force closure
        • Be prepared for an email flood (teach them how to use filters)
        • Encourage action: encourage pull requests over suggestions
        • With merge rights comes merge responsibility (do not leave them hanging)
        • It’s not a democracy
        • Analogy: a bonsai tree – a small team dedicated to growing and pruning it
        • Designate an entropy fighter (to delete outdated assets)
        • Embrace the first-day commit!
        • Even the White House IT policy team is using it
  • Inner Source (PayPal)
    • Moving from blocking/bottleneck to mentoring
    • Scaling is a result of collaboration
      • Creating an environment which is self-healing and self-growing
    • Build engagement through evangelism
      • 10% of the team are trusted committers, not more
      • Focus on the positive of a contribution, not necessarily on whether you agree with it
      • All reviews are publicly documented
      • Require respectful and constructive communication
    • Check out Inner Source Commons

Anything I missed or misrepresented, please give me a shout. I am heading out to DockerCon Europe in two weeks. If you are around and want to compare notes, please ping me via the usual channels. I would love to hear more from you (and networking is part of my job description).

Build your own Docker-ized Reddit clone with Telescope

Continuing on my quest to create my own communication stack using open source – here is my experience creating a Docker-ized version of the Telescope community server.

A little bit of background on why I would want a dedicated link-sharing and commenting site. Our team at the API Academy primarily uses Slack for team communication – not because it is the best, but because it is the most convenient. Our communication patterns seem to fall into two buckets: IRC-style discussions (and yes, I will publish a post on how to build your own Slack clone using IRC next) and link sharing. While there are merits to doing both in one tool, there are drawbacks too. My particular problem with link sharing is Slack’s presentation of content as a linear (chat) stream.

While looking for an adequate solution I stumbled over Telescope. And I am freely going to admit I was blown away by its high-quality look and feel. Unfortunately, getting it to install in Docker wasn’t as straightforward as I had hoped. Telescope is a Meteor application on top of Node.js. If you install it step-by-step (aka RUN command by RUN command) you will very soon end up in installation hell. If you execute the installation steps manually inside a Docker image, everything works just fine – but doing the exact same sequence in a Dockerfile fails.

Without much ado, here is what I ran into (credit to the meteor-talk thread !topic/meteor-talk/_WFeZUZQCqY):

# the root cause is two fold – node’s fs won’t work across partitions so this problem would probably go away with some
# of the techniques in those links if implemented by meteor. But this is only a problem because Docker creates a new
# “image” for each RUN command, and so each step is isolated from the others.

The solution is as simple as combining the ‘sensitive’ shell commands into a single command line (such that they are executed within the same Docker image layer).

# Clone, cd, and meteor update run in a single RUN so they share one
# image layer; <telescope-repo-url> is a placeholder for the Telescope
# git repository URL
RUN git clone <telescope-repo-url> /app/telescope && \
    cd /app/telescope && \
    meteor update

You can find the working Dockerfile in my Templates project. If you like Telescope, please consider contributing back to the Telescope community.

In search of the perfect open-source Kanban project management tool

I am going to admit it: Docker has me hooked. I had been dabbling with Chef and Puppet, but they always seemed too powerful and too complex for my simple need to quickly prototype something. With Docker I finally have the ability to script my setups and (re)create them at will. Of course there are gotchas – and I will write about them as I go along my merry way.

Most of my public Docker stuff can be found in my docker-template project on GitHub.

The idea behind the docker-template repo has been to provide a simple way to evaluate some of the Open Source tools I am using or would like to evaluate. I am certainly not going to start a rant about why – in my opinion – open source is where user-led innovation is happening. Others have expressed that way more eloquently than I can. But what I can do is make it dead simple to run some of the tools I like, and hopefully learn a thing or two in the process.

The last couple of days I was hunting for a good project management tool, Kanban style. I first came across Taiga, and the first two statements in its tag line – “Free. Open Source. Powerful.” – caught my attention. Well, it turns out that installing Taiga was far from easy, and it quickly became messy with build errors left and right. I ultimately got a working image when I stumbled over the “shutit” project. Ian maintains an impressive list of working Docker configurations for popular stacks – you should definitely check whether yours is listed. Unfortunately I ran into some breakage using shutit’s trunk, but Ian was incredibly helpful and responsive in getting it fixed and pointing me in the right direction (heck, he even pointed out a bug in my Dockerfile :)). After I got a local install of Taiga up and running, I realized that the emphasis should have been on ‘Powerful’. It just did not do it for me – and it tried to do way more than I wanted.

So back to the drawing board (aka GitHub search), and I came across Kanboard. The first thing to notice is that the author starts with an introduction to Kanban (which is more than just the board). But what really caught my attention was his second bullet: ‘Limit your work in progress to be more efficient’.

You see, Kanban is a methodology that guides you towards single-piece flow. While it should be immediately obvious that focusing on one thing leads to faster execution, less waste, and higher quality, the reality is (obvious to everyone who has done Value Stream Mapping) that most of our time is spent context switching, being blocked, and working on a multitude of things in parallel without proper focus on the right thing to do from start to finish. Now multiply that by n if you are a team without visibility, and you end up in the usual project management mess which delivers the wrong thing, over budget, with inferior quality, and late.

Kanban proposes to tackle this problem by limiting the number of tasks which can be ‘in process’ at any given point in time. That means if you want to start a new task but already have x tasks in process, you first need to finish one of the ongoing tasks before moving the next one from ‘planned’ to ‘in process’. Add to this that the Kanban board is visible to everyone in the team, and you can start pushing for alignment across the project – and start having the really necessary discussions about ‘the definition of done’ and ‘what is the right thing to do next’. (Just as a side note: for the latter I am intrigued by the Cost of Delay metric.)

So Kanboard hit all the right buttons and seemed to have just enough of the functionality I wanted. Plus, it was super easy to set up and to navigate. You can check it out by either using my Vagrantfile or building it yourself with my Dockerfile, both in my docker-template project.

If you like it, please consider contributing, such that we can have more ‘really useful’ software like this.

Happy hacking from my wintry mountain retreat in the Thuringian Mountains!

Using Atlassian’s Sourcetree with Github 2-Factor Authentication on MacOS

If you happen to have a smattering of repos on both Bitbucket and GitHub, and like the convenience of one graphical tool “to rule them all”, you are probably going to end up using Atlassian’s excellent Sourcetree app. If not – don’t bother reading further. On the other hand, if you do use it and are blessed with a forgetful mind like I am, you might find the following useful.

I recently tried to check in some updated presentations to our GitHub repo when I was prompted for a password. That left me scratching my head and trying both my normal user password and my two-factor authentication code without much success. I should mention that it used to work without a password ever since I installed it – and I had completely forgotten what I had done to set it up. (This is both a blessing and a curse, since it means I often have the pleasure of rediscovering what made things work in the first place.) After a brief search I discovered that other users had complained about disappearing application tokens too (yes, now I remembered that I had to set up an application token on GitHub and use that instead of a password). But instead of having to repeat the process of creating new application tokens over and over again, I found a reference to a git credential helper, ‘credential-osxkeychain’, which stores username and password in the macOS keychain. (I am very fond of the DRY – Don’t Repeat Yourself – principle: you might call it laziness, but I prefer to call it efficiency ;)

Without much further ado, just follow the instructions to install it and you should be on your happy way (and yes, you will have to create a new application token one last time on GitHub – the default scope settings seem to work for me).
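For reference, once the helper binary is in place the setup boils down to a single git config call. A minimal sketch, assuming the osxkeychain helper that ships with git on macOS:

```shell
# Tell git to store HTTPS credentials (username + application token)
# in the macOS keychain instead of prompting every time
git config --global credential.helper osxkeychain

# Verify the setting took effect
git config --global credential.helper
```

On the next push or pull over HTTPS, enter your GitHub username and the application token as the password; git caches them in the keychain from then on.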

Happy hacking!

Notes from APIconUK 2014 in London

I presented on the Future of APIs in the IoT at the APIcon UK Edition in London last month.

In particular, the presentation on the use of altcoins for API monetization was fascinating. While I am not yet convinced that this is the perfect use case for blockchains, it opened my eyes to the Bitcoin infrastructure and its possibilities. It made me more excited about catching up with Mehdi, who is currently touring the conference circuit with a proposal to decentralize authentication using blockchains. He will be presenting at the Nordic API Platform summit this month and at our API Academy API360 summit in London in November.

Here are my notes on some of the presentations:

Delightful API Design (Uri, CTO of Mulesoft)

  • the ultimate buzz for a creative, lazy developer is to build your work on top of someone else’s work
  • imperative for success of API
    • need for speed
    • developer rules
    • simplicity
    • agile dev
      • iterative changes vs the need for API to stay constant
  • Get the core right – steps
    • (1) design, document, mockup + live console => work through use cases and iterate
    • (2) generate client factories and server frameworks
  • API specification
    • perfect testing surface
    • easier lifecycle mgmt: allows concise versioning (breaking vs non-breaking)
    • consistency
    • key for API first development approach
      • tells the API consumer what to expect
      • tells the API implementer what to deliver
    • steps: plan, design and validate, lock-down, implement, deploy, operate, start over
  • UI -> UX = API -> APX (API Experience)
  • RAML
    • steering committee: SOA Software, MuleSoft, PayPal, API Science, Cisco, Intuit, AngularJS, Box
    • very neat feature is the ability to define ‘traits’
      • ability to template query parameters, response codes, etc.
      • e.g. pageable, searchable, etc.
    • API Designer: left side editor, right side resulting API documentation
      • for designing the API
    • API Notebook: moving API Definition from Designer to Notebook
      • creating SDKs and API mockup from RAML design
      • used for validation of API use cases
      • => enables structured TDD for API design
      • publish and share with API developer for validation

Which API description to use (Laura, SOA Software)

  • if you are an enterprise, you deal with 100’s of services
  • until now no good way to concisely describe REST Services
    • enterprises implement Webservices by default
  • Comparison between Swagger, RAML, and Blueprint
    • Blueprint not used in enterprises
  • Swagger: code first, document second (bottom-up)
  • RAML: design first, code second (top-down)
  • Recommendation:
    • Use RAML if API design with LOB and less technical stakeholder
    • Use Swagger if you want to document code
  • Practical experience
    • swagger-node-express:
      • use swagger spec directly in the code
      • documentation in code, stays in sync with code, but results in code changes when documentation changes
    • separate design docs
      • (+) allows (NLS) i18n for product documentation
      • (+) allows changes in documentation without code changes
      • (-) can get out of sync with implementation
    • gen doc from code
      • (-) need to touch code to change documentation
    • doc in code
      • (+) stays in sync with code
      • (-) changes impact code
  • Thoughts on API platforms
    • should support Webservice as well as APIs
    • focus not on system-to-system integration, but on the consuming side

The future of API Monetization

  • using APIcoins (based on Bitcoin) to monetize API access and incentivize use
  • APIcoins are built on top of the Bitcoin ledger
    • leveraging the security infrastructure of Bitcoin
    • uses a technique called ‘coloring’: adding additional metadata to the ledger to make it distinct from Bitcoins
  • Challenges:
    • Bitcoin bloat for transactions: the current limit is 7 transactions per second
    • A new project called Factom runs a hash over the last n transactions and secures those n transactions with one ledger entry
  • APINetwork uses Safenet as its storage backend
    • Safenet = an incentivized, BitTorrent-like network for distributed file storage
  • Security
    • In order to attack Bitcoin you need to take over 51% of the Bitcoin network
    • there is a built-in incentive for new compute nodes to be added to the Bitcoin network
    • at this point, and with the current growth pace, only major state actors have the pockets to finance an attack on the Bitcoin network
    • private actors make more money by adding nodes
  • New approaches like ‘proof of stake’ reduce the computational power required
  • APINetwork incentivizes adding new services through issue of APIcoins

Follow-up floor discussions:

  • Using blockchain algorithm for notaries, public ledgers like land registry, contracts, etc
  • But human interactions are not simply black and white: for instance, one can avoid or mitigate bankruptcy by paying off half of the contract (partial transactions)

API Business Models for Open Data

  • a sustainable and scalable business model for open data
  • competition: any way a customer is addressing the problem
    • direct competition: 4 P’s (Product, Price, Promotion, Place)
    • indirect competition: education, awareness
  • ways to compete with a service on top of open data
    • adding to open data: 1 + 1 > 2
      • annotations
      • go niche
    • curating open data: 1 – 1 > 0
  • transactional monetization gives an incentive to use less
  • alternative: price = # users * 0.001
    • incentive to sign up early to pay less

Hackathons Deliver (Braintree)

  • Hackathon (learn, build, share) vs Battlehack (competition)
  • no cheap prizes, no cheap pizza, no cheap beer
  • communication is king
  • hackathon is NOT a recruiting event
  • if you want to recruit, inspire them
  • a hack is not an app: motivation is to play around and discover
  • (Major League Hacking)
  • hackathon is part of marketing:
    • define ‘what success means’ before you set up the event
    • track and measure
    • be present at the event and talk to participants (send devs, not marketing or sales)
    • share knowledge and experience