Friday, August 04, 2017

What metrics should we use for our visual management board?

I sent this response to someone asking, "What metrics should we use for our visual management board?" I think the advice is generally useful, so I thought I would post it here.

Metrics are hard, but they should be driven by your goals.

If you are trying to improve a system, first you need to focus on quality, then predictability, and then throughput.

To improve anything, though, you need to be making changes, so the first metric is "rate of change": "How many things (no matter how small) have we improved this week?". Once you have a focus on 'we continually improve', you need a metric driving the direction.

For Quality, I'd measure the bounce rate (how often something is rejected as needing more work).
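A minimal sketch of that calculation (the counts below are made up):

```python
# Bounce rate: the share of reviewed work that gets rejected
# ("needs more work") rather than accepted.
items_reviewed = 25
items_bounced = 4

bounce_rate = items_bounced / items_reviewed
print(f"Bounce rate this period: {bounce_rate:.0%}")  # 16%
```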

For Predictability you need to identify variance in your system and remove it. Measure the time from 'committed' to 'done' for each piece of work. Then, based on that, define an SLA: "We deliver 80% of our work within 2 weeks". Then Pareto anything that falls outside that SLA and fix the root cause of those issues.
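One way to derive such an SLA from measured cycle times; the sample data and the `percentile` helper are invented for illustration:

```python
# Cycle times in days from 'committed' to 'done' (made-up sample).
cycle_times = [3, 4, 5, 5, 6, 7, 8, 9, 14, 21]

def percentile(values, pct):
    """Smallest observed value at or below which pct% of samples fall."""
    ordered = sorted(values)
    index = max(0, int(len(ordered) * pct / 100) - 1)
    return ordered[index]

sla = percentile(cycle_times, 80)
print(f"We deliver 80% of our work within {sla} days")

# Pareto the work that fell outside the SLA and fix the root causes.
outliers = [t for t in cycle_times if t > sla]
print(outliers)  # [14, 21]
```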

You have got rid of variance when the rate at which your team delivers work is steady.

Little's Law defines a relationship between the amount of WiP (Work in Progress) in a system, the TiP (Time in Progress), and the "Throughput" (or delivery rate) of your system. In effect, the less WiP, the less time each piece of work spends in progress. This only applies if your system has very little variance, hence we have to make it stable first, then optimize.
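Under that stability assumption, Little's Law can be sketched numerically (all figures are illustrative):

```python
# Little's Law for a stable system: WiP = Throughput * TiP,
# rearranged to give the average time a work item spends in progress.
def time_in_progress(wip, throughput_per_week):
    """Average Time in Progress (weeks) = WiP / Throughput."""
    return wip / throughput_per_week

# Same delivery rate, less WiP -> each item finishes sooner.
print(time_in_progress(8, 2.0))  # 4.0 weeks
print(time_in_progress(4, 2.0))  # 2.0 weeks
```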

For Throughput, you can start by reducing the amount of work in progress (striving for single-piece flow). You can do this by focusing on collaborating on existing work over starting new work.

The rest of throughput is identifying the point in your process where work piles up. That's probably your bottleneck. Follow the ToC Five Focusing Steps (identify the constraint, exploit it, subordinate everything else to it, elevate it, then repeat) to find and fix bottlenecks, and throughput will improve: http://blog.nayima.be/2009/04/16/the-theory-of-constraints-five-focusing-steps-in-action/

Originally I put 'efficiency' instead of 'throughput', but efficiency is not important; in fact it can be harmful. Driving for efficiency typically leads to maximizing 'utilization' so everyone is busy. This reduces a team's ability to deal with fluctuations in workload and increases the work in progress, which reduces throughput and introduces bottlenecks. It also removes the opportunity for people to reflect and apply improvements, which is how you ever get better. As I said... metrics are hard.

Good Luck.

Thursday, December 22, 2016

Re-Discover Agility...

People keep talking about rewriting the Agile Manifesto. I take this as a sign that people feel agility has moved on from where it was born.

Several people are taking their own shot at restating agility in new terms.
See: http://modernagile.org/ http://heartofagile.com/

Thursday, September 10, 2015

Getting TeamCity build statuses in Stash

Here is a quick hack to get build statuses from TeamCity appearing in Stash.

In Chrome, add the "Control Freak" plugin (which lets you attach your own JavaScript and CSS to existing sites).

Visit the build for a feature branch in TeamCity and note the url:
For me this is http://your_teamcity_server:9090/viewType.html?buildTypeId=Spectra_SpectraCi&branch_Spectra_SpectraDev=lahm-424-sample-tracking-error-handling&tab=buildTypeStatusDiv
Note the TeamCity server address, 'buildTypeId' and 'branch_xxx' parts of the URL.
Now visit your "Pull Requests" page in Stash.
Open the ControlFreak dialog.
Click "This domain"
In the Libs tab, select "jQuery" (I picked 2.1.0, Google CDN).
In the Javascript Tab enter the following code:

$(".source .name").each(function () {
    // For each branch name shown on the Pull Requests page...
    var branch = $(this).text();
    var server = "http://your_teamcity_server:9090";
    // ...link to the TeamCity status page for that branch's build...
    var statusPage = server + "/viewType.html?buildTypeId=Spectra_SpectraCi" +
        "&tab=buildTypeStatusDiv&branch_Spectra_SpectraDev=" + branch + "&guest=1";
    // ...showing the status icon from TeamCity's REST API.
    var statusIcon = server + "/app/rest/builds/buildType:(id:Spectra_SpectraCi),branch:" +
        branch + "/statusIcon";
    $(this).closest("td.source").append(
        $("<a href='" + statusPage + "'><img src='" + statusIcon + "'/></a>")
    );
});

Change the above to reflect your
* TeamCity server address and port,
* TeamCity buildTypeId and branch_xxx name.

Note: This works for me because we are using feature branches in Git and TC is building them. Your config may vary.

Monday, June 23, 2014

Git: Keeping a clean master branch

Since moving to Git, I'm loving the power. I feel safe. Once changes are committed, no matter what I do, I won't lose anything. And within reason I can later edit history to tell the story I want it to tell.

In our team, we use the commit messages from our master branch to automatically generate release notes. To achieve this, the master commit history needs to stay pretty clean: ideally one commit per feature.

We therefore have been working on feature branches and squash merging them to master once they pass code review.

We use Atlassian Stash for code reviews. When a code review is accepted, Stash merges to your destination branch. Stash doesn't do a squash merge, or use our commit comment, so we can't have it commit to master automatically. If we don't accept the review, though, all we can do is leave the code review open or mark it as abandoned.

To avoid this we make a review branch from master and review the feature onto that. Stash can then merge it once the review passes... so it's happy... then the developer manually squash merges the review branch to master, so we are happy.

This was working fine, until we needed a feature to branch from another feature mid development.

The downside of squash merges is that master doesn't know the changes came from a branch. This means:

  • If you branch from feature A, 
  • then feature A makes more changes to the already changed files 
  • and feature A commits to master first 
  • when you merge to master, it looks like the changes you inherited are conflicting with the committed files. 

This is a real pain.

To get around this... we have started to use a common "integration" branch (instead of the review branches).

  • A developer branches from integration, 
  • does changes... 
  • then reviews back to integration. 
  • Once accepted and merged, the developer then squash merges integration to master, 
  • and then merges master back into integration. 

This final merge is easy as the code is the same, and tells Git that these branches align at this point. This means that later "squash merges" onto master keep working without merge conflicts. Also any branches committed to "integration" retain all history, so they merge nicely too.
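The steps above can be sketched end-to-end in a throwaway repository; the branch, file, and commit names are illustrative, and `git init -b` assumes Git 2.28 or later:

```python
import os
import subprocess
import tempfile
from pathlib import Path

def git(*args):
    """Run a git command in the current directory and return its stdout."""
    return subprocess.run(["git", *args], check=True,
                          capture_output=True, text=True).stdout

# Set up a scratch repo with a master and an integration branch.
os.chdir(tempfile.mkdtemp())
git("init", "-b", "master")
git("config", "user.email", "dev@example.com")
git("config", "user.name", "Dev")
Path("file.txt").write_text("base\n")
git("add", ".")
git("commit", "-m", "initial commit")
git("branch", "integration")

# A developer branches from integration and makes changes...
git("checkout", "-b", "feature-a", "integration")
Path("file.txt").write_text("base\nfeature A change\n")
git("commit", "-am", "feature A work")

# ...then reviews back to integration (the review merge).
git("checkout", "integration")
git("merge", "--no-ff", "feature-a", "-m", "merge feature A")

# Once accepted, squash merge integration to master: one clean commit.
git("checkout", "master")
git("merge", "--squash", "integration")
git("commit", "-m", "feature A")

# Finally merge master back into integration so the branches align.
git("checkout", "integration")
git("merge", "master", "-m", "align integration with master")

print(git("log", "--oneline", "master"))  # initial commit + one feature commit
```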

This looks like this:

[Diagram: the integration branch workflow (image missing)]
Happy Coding.

Wednesday, April 30, 2014

Making Moq Better...

TL;DR (Too Long; Didn't Read)

I wrote an extension for Moq to assert that no methods are called on a mock object.

Moq is great ...

Moq is my favorite mocking framework for C#, but it's not perfect.

I like it because it supports test spies (to use Martin Fowler's term).

Test spies remember interactions, so you can assert on them at the end of the test instead of setting up your expectations at the beginning. Some people don't like this because the feedback comes after the faulty code has executed, rather than at the point the unexpected interaction occurs, so you don't get the same stack trace. What it does allow, however, is the Arrange, Act, Assert pattern for test construction. This in turn lets you strictly limit the focus of your tests, making them less brittle and more intention revealing.

But ...

If you want to assert that no interactions occurred with an object, the standard way of doing this in Moq is to use a "strict" mock. A strict mock throws an exception when any method you didn't explicitly set up is called.

Strict mode is set at object construction, which typically happens in your setup method, so it can't be changed on a per-test basis. This means every test makes this implicit interaction assertion, even the ones whose focus isn't on those interactions.

Even worse, say you "setup" some default behaviour for your stub object. Now if you want to assert that no methods are called in a specific test, you have to explicitly "verify" each method individually to assert it isn't called. This makes tests brittle: if you add a new method to the object, you have to add that method to every test that was asserting no interactions occurred.

Extension Methods To the Rescue!


To help with this I have written an extension method to verify no methods are called on an object.

This is useful, but it's only a step in the right direction. Ideally you would be able to specify all the interactions you do want to occur, and finish by saying "and nothing else".

https://gist.github.com/NigelThorne/96badd3ba9a18d084cd8
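The gist itself is C# for Moq; as an analogous illustration of the idea, Python's `unittest.mock` records every interaction on a mock, so "no methods were called" collapses to a single assertion:

```python
from unittest import mock

# A Mock records every interaction in mock_calls, so asserting that
# nothing touched it needs no per-method setup or verification.
service = mock.Mock()

# ... exercise the code under test, which should never use `service` ...

assert service.mock_calls == []  # fails if any method was called

# For contrast: a single call is enough to break the assertion.
service.save("record")
assert service.mock_calls != []
```

(Later versions of Moq added a `VerifyNoOtherCalls()` method that covers the "and nothing else" case natively.)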

