Friday, March 09, 2018

Using Docker on a Remote Host: Why so hard?

I have a Windows PC at work and an Ubuntu box that happens to be running dockerd.

I have the docker command installed on the Windows box, but it occurred to me that I should be able to run it against the Ubuntu box as the host.

Challenge accepted.

After a bit of digging I find
>docker -H xxx
should work.

No luck... it fails to connect.

On Ubuntu I run
> sudo netstat -tlnp | grep -i docker
and find port 2375 is not open... so back to Google.

Turns out you need to tell dockerd to open that socket.
You can do that from the command line when running dockerd (with the -H flag):
>dockerd -H tcp://0.0.0.0:2375

I have no idea where the command line is being run from, so I keep digging.

Turns out you can also configure this in a config file: /etc/docker/daemon.json

I don't need encryption as it's all on my local network.
I add in some hosts...
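For reference, a minimal daemon.json sketch of what "adding some hosts" looks like (the exact entries here are my assumption; 2375 is Docker's conventional unencrypted TCP port, and keeping the unix socket entry means local clients still work):

```json
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
}
```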

> sudo systemctl daemon-reload
> sudo systemctl restart docker

Turns out the systemd unit is also passing hosts on the command line (-H fd://)... and dockerd refuses to take hosts from both the command line and daemon.json at the same time.

I decide to add mine to the command line too and remove it from the json.
> sudo vi /lib/systemd/system/docker.service

ExecStart=/usr/bin/dockerd -H fd:// -H tcp:// -H tcp://

That didn't help...
so I added my actual IP address...

and that works. This isn't ideal for a few reasons, my IP address being dynamic being one of them.
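A cleaner way around the dynamic-IP problem (a sketch, assuming the stock Ubuntu systemd unit) is to bind to all interfaces, and to do it in a drop-in override rather than editing /lib/systemd/system/docker.service directly, so package upgrades don't clobber it:

```ini
# Created with: sudo systemctl edit docker
# (writes /etc/systemd/system/docker.service.d/override.conf)
[Service]
# The empty ExecStart= clears the packaged value before redefining it
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375
```

Then `sudo systemctl daemon-reload && sudo systemctl restart docker`. Binding to 0.0.0.0 listens on every interface, so the changing address no longer matters. It also means anyone on the network can reach the unencrypted daemon, so only do this on a network you trust. On the Windows side you can then set `DOCKER_HOST=tcp://<ubuntu-box>:2375` once instead of passing -H to every docker command.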

This all took a couple of hours I won't get back, so I thought I would share.

Friday, August 04, 2017

What metrics should we use for our visual management board?

I sent this response to someone asking "What metrics should we use for our visual management board?". I think the advice is generally useful, so I thought I would post it here. 

Metrics are hard, but they should be driven by your goals.

If you are trying to improve a system, first you need to focus on quality, then predictability, and then throughput.

To improve anything, though, you need to be making changes, so the first metric is "Rate of change" ... "How many things (no matter how small) have we improved this week?". Once you have a focus on 'we continually improve', then you need a metric driving the direction.

For Quality I'd measure the bounce rate (how often is something rejected as needing more work).

For Predictability you need to identify variance in your system and remove it. Measure the time from 'committed' to 'done' for each piece of work. Then, based on that, define an SLA: "We deliver 80% of our work within 2 weeks". Then Pareto anything that falls outside that SLA and fix the root cause of those issues.
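As a sketch of how you might derive that SLA number, assuming you can export your cycle times (in days, one per line) from your board or tracker, the 80th percentile is a one-liner:

```shell
# Hypothetical cycle times in days, one per line
# (in practice, export these from your board or tracker)
printf '%s\n' 3 5 8 2 13 6 4 9 7 11 > cycle_times.txt

# 80th percentile: sort numerically, take the value 80% of the way through
sort -n cycle_times.txt | awk '{a[NR]=$1} END {print a[int(NR*0.8)]}'   # → 9
```

With these made-up numbers you would quote roughly "We deliver 80% of our work within 9 days".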

You have got rid of variance when the rate at which your team delivers work is steady.

Little's Law defines a relationship between the amount of WiP (work in progress) in a system, the TiP (time in progress), and the throughput (or delivery rate) of the system: WiP = Throughput x TiP. In effect, the less WiP, the faster work flows. This only applies if your system has very little variance, hence we have to make it stable first, then optimize.
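A quick worked example (made-up numbers) of rearranging Little's Law as Throughput = WiP / TiP:

```shell
# Little's Law: WiP = Throughput x TiP, so Throughput = WiP / TiP.
# Hypothetical board: 6 items in progress, averaging 3 days each in progress.
wip=6
tip_days=3
echo "$((wip / tip_days)) items per day"   # → 2 items per day
```

Run it the other way and it says: halve the WiP at the same throughput and average TiP halves too, which is what "stop starting, start finishing" buys you.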

For Throughput, you can start by reducing the amount of work in progress (striving for single piece flow).  You can do this by focusing on collaborating on work over starting new work.

The rest of throughput is identifying the point in your process where work piles up. That's probably your bottleneck. Follow the ToC (Theory of Constraints) Five Focusing Steps to identify and fix bottlenecks, and throughput will improve.

Originally I put 'efficiency' instead of 'throughput', but efficiency is not important; in fact it can be bad. Driving for efficiency typically leads to maximizing 'utilization' so everyone is busy. This reduces the team's ability to deal with fluctuations in workload and increases the work in progress, which reduces throughput and introduces bottlenecks. It also removes the opportunity for people to reflect and apply improvements, which is how you get better. As I said... metrics are hard.

Good Luck.

Thursday, December 22, 2016

Re-Discover Agility...

People keep talking about rewriting the Manifesto for Agile Software Development. I take this as a sign that people feel agility has moved on from where it was born.

Several people are taking their own shot at restating agility in new terms.

Thursday, September 10, 2015

Getting TeamCity build statuses in Stash

Here is a quick hack to get build statuses from TeamCity appearing in Stash.

In Chrome, add the "Control Freak" plugin (which lets you attach your own JavaScript and CSS to existing sites).

Visit the build for a feature branch in TeamCity and note the url:
For me this is http://your_teamcity_server:9090/viewType.html?buildTypeId=Spectra_SpectraCi&branch_Spectra_SpectraDev=lahm-424-sample-tracking-error-handling&tab=buildTypeStatusDiv
Note the TeamCity server address, 'buildTypeId', and 'branch_xxx' parts of the URL.
Now visit your "Pull Requests" page in Stash.
Open the ControlFreak dialog.
Click "This domain"
In the Libs tab, select "jQuery" (I picked 2.1.0, Google CDN).
In the Javascript Tab enter the following code:

$(".source .name").each(function () {
  var self = $(this);
  // The branch name shown in the pull request list
  var branch = self.text();
  // Append a link to the branch's TeamCity status page,
  // with the REST statusIcon image as the link body
  self.closest("td.source").append($(
    "<a href='http://your_teamcity_server:9090/viewType.html?buildTypeId=Spectra_SpectraCi&tab=buildTypeStatusDiv&branch_Spectra_SpectraDev=" + branch + "&guest=1'>" +
    "<img src='http://your_teamcity_server:9090/app/rest/builds/buildType:(id:Spectra_SpectraCi),branch:" + branch + "/statusIcon'/>" +
    "</a>"
  ));
});

Change the above to reflect your
* TeamCity server address and port,
* TeamCity buildTypeId and branch_xxx name.

Note: This works for me because we are using feature branches in Git and TC is building them. Your config may vary.
