Friday, March 09, 2018

Using Docker on a Remote Host: Why so hard?

I have a Windows PC at work and an Ubuntu box which happens to be running dockerd.

I have the docker command installed on the Windows box, but it occurred to me I should be able to run the command using the Ubuntu box as the host.

Challenge accepted.

After a bit of digging I find:
> docker -H xxx
should work.
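
(For reference, the full form looks something like the line below; the host name here is a placeholder, and 2375 is the conventional unencrypted Docker port.)

> docker -H tcp://my-ubuntu-box:2375 ps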

No luck... it fails to connect.

On Ubuntu I run
> sudo netstat -ltnp | grep -i docker
and find that port 2375 is not open... so back to Google.

Turns out you need to tell dockerd to open that socket.
You can do that from the commandline when running dockerd (with the -H flag)
> dockerd -H tcp://:2375

I have no idea where that command line actually lives (dockerd is started as a service), so I keep digging.

Turns out you can also configure this in a config file... daemon.json

I don't need encryption as it's all on my local network.
I add in some hosts...
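
Something along these lines in /etc/docker/daemon.json should do it (a minimal sketch; the address and port here are placeholders):

{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
}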

> sudo systemctl daemon-reload
> sudo systemctl restart docker




Turns out the command line is also adding hosts, and dockerd refuses to start when hosts are specified both as a flag and in daemon.json... so you can't do both.

I decide to add mine to the command line too and remove it from the json.
> sudo vi /lib/systemd/system/docker.service

ExecStart=/usr/bin/dockerd -H fd:// -H tcp:// -H tcp://

That didn't help...
so I added my actual IP address...

and that works. This isn't ideal for a few reasons, my IP address being dynamic being one of them.
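
For anyone following along, the working line ended up looking roughly like this (the IP address is a placeholder for the Ubuntu box's address on my network):

ExecStart=/usr/bin/dockerd -H fd:// -H tcp://192.168.1.50:2375

Then from the Windows box you can either pass -H every time, or set DOCKER_HOST once:

> docker -H tcp://192.168.1.50:2375 ps
> set DOCKER_HOST=tcp://192.168.1.50:2375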

This all took a couple of hours I won't get back, so I thought I would share.

Friday, August 04, 2017

What metrics should we use for our visual management board?

I sent this response to someone asking "What metrics should we use for our visual management board?". I think the advice is generally useful, so I thought I would post it here. 

Metrics are hard, but they should be driven by your goals.

If you are trying to improve a system, first you need to focus on quality, then predictability, and then throughput.

To improve anything, though, you need to be making changes, so the first metric is "Rate of change" ... "How many things (no matter how small) have we improved this week?". Once you have a focus on 'we continually improve', then you need a metric driving the direction.

For Quality I'd measure the bounce rate (how often something is rejected as needing more work).

For Predictability you need to identify variance in your system and remove it. Measure the time from 'committed' to 'done' for each piece of work. Then, based on that, define an SLA: "We deliver 80% of our work within 2 weeks". Then Pareto anything that goes outside that SLA and fix the root cause of those issues.

You have got rid of variance when the rate at which your team delivers work is steady.

Little's Law defines a relationship between the amount of WiP (work in progress) in a system, the TiP (Time in Progress), and the "Throughput" (or Delivery Rate) of the system. In effect, the less WiP the better the throughput. This only applies if your system has very little variance, hence we have to make it stable first, then optimize.
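
As a rough worked example: TiP = WiP / Throughput, so a team carrying 10 items in progress while delivering 2 items a week is averaging about 5 weeks per item; halve the WiP and, all else being equal, the average time in progress halves too.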

For Throughput, you can start by reducing the amount of work in progress (striving for single piece flow).  You can do this by focusing on collaborating on work over starting new work.

The rest of throughput is identifying the point in your process where work piles up. That's probably your bottleneck. Follow the ToC (Theory of Constraints) Focusing Steps to identify and fix bottlenecks, and throughput will improve.

Originally I put 'efficiency' instead of 'throughput', but efficiency is not important; in fact it can be bad. Driving for efficiency typically leads to maximizing 'utilization' so everyone is busy. This reduces the team's ability to deal with fluctuations in the workload and increases the work in progress, which reduces throughput and introduces bottlenecks. It also removes the opportunity for people to reflect and apply improvements, which is how you get better in the first place. As I said... metrics are hard.

Good Luck.

Thursday, December 22, 2016

Re-Discover Agility...

People keep talking about rewriting the Agile software development manifesto. I take this as a sign that people feel Agility has moved on from where it was born.

Several people are taking their own shot at restating agility in new terms.

Thursday, September 10, 2015

Getting TeamCity build statuses in Stash

Here is a quick hack to get build statuses from TeamCity appearing in Stash.

In Chrome, add the "Control Freak" extension (which lets you attach your own JavaScript and CSS to existing sites).

Visit the build for a feature branch in TeamCity and note the url:
For me this is http://your_teamcity_server:9090/viewType.html?buildTypeId=Spectra_SpectraCi&branch_Spectra_SpectraDev=lahm-424-sample-tracking-error-handling&tab=buildTypeStatusDiv
Note the TeamCity server address, 'buildTypeId' and 'branch_xxx' parts of the URL.
Now visit your "Pull Requests" page in Stash.
Open the ControlFreak dialog.
Click "This domain"
In the Libs tab, select "jQuery" (I picked 2.1.0 from the Google CDN).
In the Javascript Tab enter the following code:

// For each branch name listed on the Pull Requests page...
$(".source .name").each(function () {
  var self = $(this);
  // ...append a link to the TeamCity build for that branch,
  // using TeamCity's status icon for the build as the link image.
  self.closest("td.source").append($(
    "<a href='http://your_teamcity_server:9090/viewType.html?buildTypeId=Spectra_SpectraCi&tab=buildTypeStatusDiv&branch_Spectra_SpectraDev=" + self.text() + "&guest=1'>" +
    "<img src='http://your_teamcity_server:9090/app/rest/builds/buildType:(id:Spectra_SpectraCi),branch:" + self.text() + "/statusIcon'/>" +
    "</a>"
  ));
});

Change the above to reflect your
* TeamCity server address and port,
* TeamCity buildTypeId and branch_xxx name.

Note: This works for me because we are using feature branches in Git and TC is building them. Your config may vary.

Monday, June 23, 2014

Git: Keeping a clean master branch

Since moving to Git, I'm loving the power. I feel safe. Once changes are committed, no matter what I do, I won't lose anything. And within reason I can later edit history to tell the story I want it to tell.

In our team, we use the commit messages from our master branch to automatically generate release notes. To achieve this the master commit history needs to stay pretty clean. Ideally one commit per feature.
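
(As a sketch of the idea, not the exact tooling: something like the command below pulls one line per feature out of master; the tag name is a placeholder.)

> git log --pretty=format:"* %s" v1.2..master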

We therefore have been working on feature branches and squash merging them to master once they pass code review.

We use Atlassian Stash for code reviews. When a code review is accepted, Stash merges it to the destination branch. Stash doesn't do a squash merge, or use our commit comment, so we can't have it commit to master automatically. If we don't accept the review, though, all we can do is leave the code review open or mark it as abandoned.

To avoid this we make a review branch from master and review the feature onto that. Stash can then merge it once the review passes... so it's happy... then the developer manually squash merges the review branch to master, so we are happy.
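
The manual part looks roughly like this (the branch and issue names are placeholders):

> git checkout master
> git merge --squash review/PROJ-123-my-feature
> git commit -m "PROJ-123: one line summary for the release notes"
> git push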

This was working fine, until we needed a feature to branch from another feature mid development.

The downside of squash merges is that master doesn't know the changes came from a branch. This means:

  • If you branch from feature A, 
  • then feature A makes more changes to the already changed files 
  • and feature A commits to master first 
  • when you merge to master, it looks like the changes you inherited are conflicting with the committed files. 

This is a real pain.

To get around this we have started to use a common "integration" branch (instead of the review branches):

  • A developer branches from integration, 
  • does changes... 
  • then reviews back to integration. 
  • Once accepted and merged, the developer then squash merges integration to master, 
  • and then merges master back into integration. 

This final merge is easy as the code is the same, and tells Git that these branches align at this point. This means that later "squash merges" onto master keep working without merge conflicts. Also any branches committed to "integration" retain all history, so they merge nicely too.

This looks like this:
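
(A rough command-level sketch; branch names are placeholders, and the Stash review happens before the merge back to integration.)

> git checkout -b PROJ-456-other-feature integration
  ... work, commit, and open the review back to integration in Stash ...
> git checkout master
> git merge --squash integration
> git commit -m "PROJ-456: one line summary for the release notes"
> git checkout integration
> git merge master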

Happy Coding.
