Monday, November 19, 2007

NCoverCop trunk... for coverage

If you grab NCoverCop from svn you can check out a couple of improvements.


  • Refactoring allowed. If you delete tested code that is no longer needed, that will unfortunately have the side effect of dropping your coverage percentage. NCoverCop allows for this by checking the number of untested lines: if it has not increased, the build passes, as you didn't make things worse.

  • Coverage differences are displayed when a build fails, to give you an idea of which lines in the file have become uncovered or been added without tests.

  • sectionOfFilePathToCompareRegex lets you specify which section of the file paths in the NCoverResults.xml files to use when comparing them. This lets you compare the build machine's file with your local one even if your trunk paths differ.


<ncoverCop
        coverageFile="${ncover.output.filename}"
        minCoveragePercentage="59"
        previousCoverageFile="${ncover.backup.filename}"
        autoUpdate="${environment::get-machine-name() == debug.buildbox.name}"
        sectionOfFilePathToCompareRegex="trunk.*"
/>


E.g. "trunk.*" (as above) will truncate "C:/something/trunk/somedir/file.cs" to "trunk/somedir/file.cs" when comparing with another results file.
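The comparison effectively keeps only the part of each path from the regex match onward. A quick Ruby illustration of the idea (not NCoverCop's actual code):

```ruby
path = "C:/something/trunk/somedir/file.cs"

# String#[regexp] returns the matched portion of the string,
# i.e. everything from "trunk" onward.
section = path[/trunk.*/]
# section == "trunk/somedir/file.cs"
```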


Let me know if you find NCoverCop useful.


Note: Big, big thanks to Varian Inc for allowing me to do some work on NCoverCop on their dime. They have introduced a policy of using open-source code wherever they can, and contributing back to the community.

Varian (Melbourne) are by far the most agile company I have worked for to date. I highly recommend that anyone keen to be part of a well-functioning agile .Net team get in contact with them. Email me and I'll forward your details if you like.

Unveiling NCoverCop

Duncan Bayne (who I have the pleasure of working with at the moment) had the idea of writing a small exe to parse an NCoverResults file and fail a build if the coverage drops below a set threshold. I loved it... and decided to up the ante.


From that seed I have built NCoverCop.

NCoverCop is a custom NAnt task that not only fails your build if your coverage drops, but also raises the threshold each time a build passes with higher coverage.

It works by keeping a copy of the best coverage file.
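In outline, the ratchet works something like this (an illustrative Ruby sketch of the algorithm, not NCoverCop's actual implementation):

```ruby
# Fail the build if coverage drops below the best result seen so far
# (or the configured minimum); record a new best when coverage improves.
def check_coverage(current_pct, previous_best_pct, min_pct)
  threshold = [min_pct, previous_best_pct || 0].max
  if current_pct < threshold
    [:fail, threshold]               # coverage regressed: break the build
  elsif current_pct > threshold
    [:pass_and_update, current_pct]  # new best: overwrite previousCoverageFile
  else
    [:pass, threshold]
  end
end

check_coverage(70, 65, 60)   # => [:pass_and_update, 70]
check_coverage(60, 65, 50)   # => [:fail, 65]
check_coverage(66, nil, 65)  # => [:pass_and_update, 66]
```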

Here's how I use it..


  
<ncoverCop
        coverageFile="${ncover.output.filename}"
        previousCoverageFile="${ncover.backup.filename}"
        minCoveragePercentage="65"
        autoUpdate="${environment::get-machine-name() == debug.buildbox.name}"
/>


  • coverageFile is the location of the NCoverResults.xml file you want to compare.

  • previousCoverageFile is the file that will be overwritten if a successful build beats the current threshold.

  • minCoveragePercentage is used when you don't have a previousCoverageFile yet, or when the specified percentage is higher than that of the previousCoverageFile.

  • autoUpdate can be turned off so the previousCoverageFile doesn't get updated on a passing build. [Defaults to on.]



Here are a few tips...

We keep the previousCoverageFile in a location that is visible on the network and reference it in the build file through its network address. Coupled with turning autoUpdate off for everyone except the build machine, this means you can run NCoverCop locally and see whether you are going to break the build before you check in.

Friday, November 16, 2007

Get your build fixed faster... broadcast your failures!

We've been talking about setting up a build box light for ages.

The other day my project manager bought a USB Visual Signal Indicator. An hour of hacking later, and we now have a light that's green on a passing build and flashes red on a breaking build.

We are using CruiseControl.Net as our build machine, so I controlled the light using a custom build publisher.

The result we are experiencing is that the build gets fixed faster. I have a few theories as to why.

Here is the code...

using System;
using System.Text;
using Exortech.NetReflector;
using ThoughtWorks.CruiseControl.Core;
using ThoughtWorks.CruiseControl.Remote;

namespace FlashLightPublisher
{
    [ReflectorType("flashlight")]
    public class FlashLightPublisher : ITask
    {
        [ReflectorProperty("goGreenOnPass", Required = false)]
        public bool GoGreenOnPass = true;

        public void Run(IIntegrationResult result)
        {
            StringBuilder deviceName = new StringBuilder(Delcom.MAXDEVICENAMELEN);

            if (Delcom.DelcomGetNthDevice(Delcom.USBDELVI, 0, deviceName) == 0)
            {
                throw new Exception("no usb device");
            }

            uint hUSB = Delcom.DelcomOpenDevice(deviceName, 0);
            switch (result.Status)
            {
                case IntegrationStatus.Success:
                    if (GoGreenOnPass)
                    {
                        Delcom.DelcomLEDControl(hUSB, Delcom.REDLED, Delcom.LEDOFF);
                        Delcom.DelcomLEDControl(hUSB, Delcom.GREENLED, Delcom.LEDON);
                        Delcom.DelcomLEDControl(hUSB, Delcom.BLUELED, Delcom.LEDOFF);
                    }
                    break;
                case IntegrationStatus.Exception:
                case IntegrationStatus.Failure:
                    Delcom.DelcomLEDControl(hUSB, Delcom.GREENLED, Delcom.LEDOFF);
                    Delcom.DelcomLEDControl(hUSB, Delcom.REDLED, Delcom.LEDFLASH);
                    Delcom.DelcomLEDControl(hUSB, Delcom.BLUELED, Delcom.LEDOFF);
                    break;
                case IntegrationStatus.Unknown:
                    Delcom.DelcomLEDControl(hUSB, Delcom.BLUELED, Delcom.LEDON);
                    break;
            }

            Delcom.DelcomCloseDevice(hUSB);
        }
    }
}


Our ccnet.config file contains the following snippet...


...
<publishers>
<flashlight/>
...
</publishers>


Note: We have a few sequential builds set up, so we don't class the build as green until all of them have passed. I therefore let every build turn the light red, but only the last build turn it green.
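With goGreenOnPass (the property exposed by the publisher code above), the config for that scheme looks something like this sketch, where only the final build in the sequence is allowed to turn the light green:

```xml
<!-- intermediate builds: may flash the light red, never turn it green -->
<publishers>
  <flashlight goGreenOnPass="false"/>
</publishers>

<!-- final build in the sequence: also allowed to turn it green -->
<publishers>
  <flashlight/>
</publishers>
```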

Wednesday, November 14, 2007

Conduit Design Pattern

I love dependency injection. Specifically I like constructor dependency injection.

Sometimes, however, you have two objects that need to talk to each other. To use constructor injection you would need to create each one before the other, which is impossible.

What I have been doing instead is creating a class I call a conduit.

The Conduit class implements one of the interfaces and delegates to another instance of that interface. It also provides a setter injection method to allow you to initialize the conduit.

To use it you do the following:
1/ Create an instance of IXConduit
2/ Create an instance of class Y, passing in the IXConduit as an instance of IX
3/ Create an instance of class X, passing in Y
4/ Call the setter injection method on the conduit, passing X in.

When Y calls a method on the IX it has (the conduit), the conduit passes the call on to the IX it holds (the real instance of X), and X can call methods on Y directly.

This means the complication of instantiation is not part of class X or Y, so neither gets polluted by this design issue.
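A minimal sketch of the pattern in Ruby (in C# the conduit would implement the IX interface explicitly; all names here are illustrative):

```ruby
# The conduit stands in for an X that doesn't exist yet and forwards
# every call to the real instance once it has been injected.
class XConduit
  def target=(real_x)        # the setter injection method (step 4)
    @real_x = real_x
  end

  def some_method(arg)       # part of the IX "interface"
    @real_x.some_method(arg) # delegate to the real X
  end
end

class Y
  def initialize(x)          # constructor injection: Y receives an IX
    @x = x
  end

  def poke
    @x.some_method("hello from Y")
  end
end

class X
  def initialize(y)          # constructor injection: X receives Y
    @y = y
  end

  def some_method(arg)
    "X got: #{arg}"
  end
end

conduit = XConduit.new   # 1/ create the conduit
y = Y.new(conduit)       # 2/ Y gets the conduit as its IX
x = X.new(y)             # 3/ X gets Y directly
conduit.target = x       # 4/ initialize the conduit with the real X

y.poke  # => "X got: hello from Y"
```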

MockingTestFixture makes NMock tests simpler

I just noticed that not everyone is using a helper class when using NMock2. It seriously makes your code cleaner. Here is the one I use.

using System;
using NMock2;
using NUnit.Framework;


namespace Tests.Utilities
{
    public abstract class MockingTestFixture
    {
        private Mockery mockery;

        protected IDisposable Ordered
        {
            get { return mockery.Ordered; }
        }

        [SetUp]
        public void MockSetUp()
        {
            mockery = new Mockery();
            SetUp();
        }

        protected abstract void SetUp();

        [TearDown]
        public virtual void MockTearDown()
        {
            TearDown();
            VerifyExpectations();
        }

        protected virtual void TearDown()
        {
        }

        public T NewMock<T>()
        {
            return mockery.NewMock<T>();
        }

        public void VerifyExpectations()
        {
            mockery.VerifyAllExpectationsHaveBeenMet();
        }

        protected static void IgnoreReturnValue(object ignored)
        {
        }
    }
}

Derive your test fixture from this class and override the abstract SetUp method... there you go. No more need to worry about the Mockery.

Thursday, September 13, 2007

Tip#3: Extend your mocking framework

Currently I am using NMock2 in my C# development. I have, however, made a few extensions to allow me to test some edge cases. Due to the clean design of NMock, extending it is really easy.

IAction


A normal use of NMock looks like this:

Expect.Once.On(someObject).Method("methodName").WithAnyArguments().Will(Return.Value("value"));


In this statement the 'Return.Value' method returns an object that implements the IAction interface. The job of this object is to perform some action at the point the expectation matches. In this example the effect is to set the return value of the method to a known string.

However... any class implementing the IAction interface can be passed to the 'Will' method, so if you have some special need to execute code at the point the method is matched, then here is a great place to put it.

Here is an example...
[Test]
public void Move_Sleeps_UntilUpdatedWithTrue()
{
    Expect.Once.On(sleeper).Method("Sleep").Will(Execute.Delegate(delegate { mono.Update(false); }));
    Expect.Once.On(sleeper).Method("Sleep").Will(Execute.Delegate(delegate { mono.Update(false); }));
    Expect.Once.On(sleeper).Method("Sleep").Will(Execute.Delegate(delegate { mono.Update(true); }));
    mono.Move(120);
}


Here I have used an IAction class that invokes a delegate at the point your expectation is met. In this case I am using it to test a piece of code that repeatedly sleeps (using a sleeper service) until the object is updated with true. I am intercepting the call to sleep and instead taking the opportunity to update the object.

Here is the code that makes that possible.

using System.IO;
using NMock2;
using NMock2.Monitoring;

public class Execute : IAction
{
    public delegate void DelegateAction();

    private readonly DelegateAction action;

    public static IAction Delegate(DelegateAction action)
    {
        return new Execute(action);
    }

    private Execute(DelegateAction action)
    {
        this.action = action;
    }

    public void Invoke(Invocation invocation)
    {
        action();
    }

    public void DescribeTo(TextWriter writer)
    {
    }
}
So, by way of explanation: the Invoke method is what gets called when the expectation is met. It just invokes the delegate the Execute object was constructed with. The rest is just static sugar to allow you to say Execute.Delegate(...) rather than new Execute(...).

Next post I'll explain how I test firing events using another extension to NMock.

Wednesday, September 05, 2007

Installing simple_helpful without edge Rails

DHH has helpfully moved the simply_helpful Ruby on Rails plugin.
If you are not running edge you can still install it; you just have to use:

script/plugin install http://dev.rubyonrails.org/svn/rails/plugins/legacy/simply_helpful/

Tuesday, August 14, 2007

Tip:#2 Smell: Duplicate Tests Indicate a Missing Class

Smell: A class with two public methods, where both perform the same functionality, or one is a subset of the other. The existence of a private method is a good indication of this.

To fully test these public methods you have to repeat a bunch of tests. There has to be a lazier, simpler solution.

Recently I have been getting this smell a lot using MVC in a .Net winforms app.

.Net forms are hard to unit test. It is therefore helpful to keep your Views as thin as possible. Just use them to expose the form fields as a bunch of properties and to catch events and call the corresponding method on the controller. This moves the logic to the Controller where it is far more testable.

This can however lead to methods on the controller like "OnStartButtonClicked" and "OnStartMenuItemSelected". Both are going to perform the same actions. Both are going to need the same set of tests.

The solution is simple. Use the 'Extract Class' refactoring to pull the private method out to another class and use Dependency Injection to pass an instance of this new class back to the original class.

Following the advice above, you end up with another object. In my experience this split makes a lot of sense. I call the new class a Service, and I rename the now-thinner Controller to Presenter, which better reflects its remaining responsibilities.
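The resulting split can be sketched as follows (Ruby for brevity, though the original context is C#; all names are illustrative):

```ruby
# The Service owns the behaviour, so it gets the full set of tests once.
class StartService
  attr_reader :started

  def start
    @started = true  # the real work would live here
  end
end

# The Presenter just translates UI events into service calls.
class Presenter
  def initialize(service)  # dependency injection
    @service = service
  end

  # Two UI events, one behaviour: both delegate to the service.
  def on_start_button_clicked
    @service.start
  end

  def on_start_menu_item_selected
    @service.start
  end
end

service = StartService.new
presenter = Presenter.new(service)
presenter.on_start_button_clicked
service.started  # => true
```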

(Note: I've read a few things on the differences between MVC and MVP, but I don't really get it. What I have here may be what is meant by the naming. Either way I like this design better.)

I'm really liking the new code. The view is really thin. The presenter translates UI events to service calls, and knows which views to update when the domain changes.

A piece of advice Steve Hayes often gives is "Design your UI layer so you could replace it with a command line and everything would keep working". I have struggled to do this with .Net, even with MVC. Having this new split however I can see keeping the Service and Model layers and replacing the UI would be easy.

So... look out for duplicate tests... get lazy and write better code!

Friday, July 06, 2007

Tip:#1 Testing Events in C#

If you have a class that fires a .Net event, then when writing a test for the event you can add a handler that sets a flag, and then assert on that flag. Using normal delegates, the variable would need to be a class member initialized in SetUp. Anonymous delegates clean this up nicely.

[Test]
public void TriggerEvent_CausesEventThatFiresToFire()
{
    Customer customer = new Customer("Ben");
    string changedPropertyName = null;

    customer.PropertyChanged += delegate(object sender, PropertyChangedEventArgs args)
    {
        changedPropertyName = args.PropertyName;
    };

    customer.Name = "Kate";

    Assert.AreEqual("Kate", changedPropertyName);
}

Thursday, June 28, 2007

Design for unit testing

Object oriented languages allow lots of ways to solve the same problem. As you get better at design you see some designs as good [loose coupling, high cohesion]. You can view your design skills as a set of filters you use to choose which design to pick.

Doing TDD leads you to learn some new filters. If a design is not testable, it is not a valid design. I want to write an article on this... but for now check this out: http://www.codeproject.com/useritems/DesignPatternsForUnitTest.asp

Monday, May 21, 2007

SVN vs ClearCase

At a current client we just swapped over from using ClearCase and ClearQuest to using SVN, with clear check-in comments within our team and a daily update of the ClearCase repository.

The change in productivity is striking. One team member mentioned today that he felt he was saving around 2 hours a day. If that's accurate then on a team of 8 people it's equivalent to freeing up 2 people every day.

It's not just the time spent waiting for version control, but also those pauses people make to 'quickly' grab a drink or check their email, so the pair isn't there when the tool does finish. Not only that, everyone is having more fun too. No one likes waiting for hourglasses. Everyone likes feeling more productive.

Are any steps in your processes disrupting the flow of your work? Can a change of tools help?

The key is to work out which features of the current process are actually required and which are just nice to have.

Tuesday, April 24, 2007

Ping Pong Programming


Aim: To get your pair to do the hard work.

Setup: The game begins with a failing test. Players take turns...

Turns: You have 3 options

1
a/ Make the failing test pass
b/ Do any refactoring you can see
c/ Write or Uncomment a new failing test

2
a/ Comment out the failing test
b/ Write a simpler failing test

3
a/ Comment out the failing test
b/ Refactor to simplify the code
c/ Write or Uncomment a failing test

Hints: If your turn is longer than 3 minutes try option 2 or 3

Notes: Give this a go and notice how frequently you change drivers. Great when you have a pair who hogs the keyboard.

Sunday, April 22, 2007

Freelancing on Rails

If you are freelancing as a software developer, and especially if you are focused on Rails, you should check out Craig Ambrose's podcast series.

I have had the pleasure to work with Craig at both Torus Games and Open Windows, and always find him insightful, pragmatic, and a pleasure to work with.

Check it out at http://www.craigambrose.com/podcasts

Friday, April 06, 2007

Hobo on Rails

Hobo looks great! It's a web framework built on top of Rails. It's structured as a plugin, so you can use as much or as little of it as you want.

I recommend checking out the screencasts to get a good overview of how this all works. I must say, I'm impressed.

I love the way it simplifies updating several parts of a page with a single ajax call. Their solution seems to be to tag parts of the page, then in the ajax call pass a list of the parts that will need to be refreshed. The page is therefore telling the controller what parts to spit back, so the controller doesn't need to know the 'current page'. I'll have to look into exactly how this works.

I was also very impressed by DRYML, their templating system. It's just so simple! You define custom tags, which is the equivalent of defining functions, so they have a list of parameters and a body. When calling the tag you pass in the details as the parameters. There is also a nice concept of a context object for evaluation of any part of the DRYML, which simplifies the code in the tags too.

[I still find it amusing when people proposing their framework as the fastest way to develop a web app use PHP for their own site. What's with that?]

Friday, March 23, 2007

Updating your Textmate Bundles (UTF-8) issue

Assuming you got your bundles from svn... (you did, didn't you?)... then you can update them in the following way.
cd /Library/Application\ Support/TextMate/Bundles
svn up *.tmbundle
Doing this I got an error though
subversion/libsvn_subr/utf.c:466: (apr_err=22)
svn: Can't convert string from 'UTF-8' to native encoding:
subversion/libsvn_subr/utf.c:464: (apr_err=22)
svn: Ruby.tmbundle/Preferences/Completion: ENV[?\226?\128?\166] variables.tmPreferences
This is to do with having a locale setting that doesn't allow UTF-8.
export LC_CTYPE="en_US.UTF-8"
export LANG="en_US.UTF-8"
will fix it. Add them to your ~/.bash_profile to make the change permanent.

Update Mysql Gem problem

I just tried to update gems and got a problem with mysql (on my MacBook).
I got the following errors:

checking for mysql_query() in -lmysqlclient... no
checking for main() in -lm... yes
checking for mysql_query() in -lmysqlclient... no
checking for main() in -lz... yes
checking for mysql_query() in -lmysqlclient... no
checking for main() in -lsocket... no
checking for mysql_query() in -lmysqlclient... no
checking for main() in -lnsl... no
checking for mysql_query() in -lmysqlclient... no

Googling around I found a post with the answer..
$ sudo gem install mysql -- --with-mysql-config 
This worked great.

I have no idea what the error messages meant, or why this fixes it, so I thought I should share it or risk forgetting it :)

Anyone care to enlighten me?

Thursday, March 08, 2007

...you keep treating it like we are in maintenance mode! [All Development is Maintenance Development]

One of my team said recently "This is a new product. You keep treating it like we are in maintenance mode!". He was right.

Eventually every product gets released. Once it is released, you need to be in maintenance mode. Agile development focuses on getting this feedback from running software as soon as possible by releasing early and often.

As soon as the first release is being used by real people (typically within weeks of starting the project) you are in maintenance mode, whether you like it or not. So all development is done in maintenance mode.

Because of this... You have valuable data you need to migrate with each release. You have users that need to be informed of changes to UI or behavior. You will have to prioritize between bug fixes and new features. You also gain valuable feedback from real users about the usability of your solution.

Here are some rules I have found useful for maintenance development:-

Every developer has their own database instance.


This sandboxes your changes from everyone else's. You wouldn't share the same copy of the code on a shared drive for development, would you?

Every change to the schema is done through scripts.


This allows you to share your db changes with the rest of the team (allowing you to make them more frequently) and to handle migration of existing data. Like code, others don't get your changes until they choose to. You must test these scripts to ensure the data remains intact. I used to have copies of a few big customer databases which I used to verify this. [I'll be releasing a .Net tool to help with this process soon]

Every schema change should have a number.

Every database needs a 'sysinfo' table with one row in it that tells you which version the database is at. Every script is numbered in the order the scripts are to be run. Running a script updates the database version number. This makes working out which script to run next simple, allows you to skip steps that don't need running, and, unlike scripts that check whether a change is needed before doing it, runs fast and allows you to re-do changes you undid in a previous version.
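The numbering scheme can be sketched like this (illustrative Ruby; in reality the scripts would be SQL files run against your db instance, and the version number would live in the sysinfo table):

```ruby
# Apply numbered schema scripts in order, tracking progress in a
# one-row version store (standing in for the 'sysinfo' table).
class Migrator
  def initialize(db)
    @db = db  # anything responding to #version, #version= and #run
  end

  def migrate(scripts)  # scripts: { number => script }
    scripts.sort.each do |number, script|
      next if number <= @db.version  # already applied: skip it
      @db.run(script)
      @db.version = number           # record progress after each script
    end
  end
end

# A fake in-memory db to demonstrate the idea.
FakeDb = Struct.new(:version, :applied) do
  def run(script)
    applied << script
  end
end

db = FakeDb.new(1, [])  # this database is already at version 1
Migrator.new(db).migrate(3 => "add_index.sql", 1 => "create_tables.sql", 2 => "add_column.sql")
db.applied  # => ["add_column.sql", "add_index.sql"]
db.version  # => 3
```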

Stored Procedures are code and should be versioned along with the codebase.

Your process of getting latest should be: 1/ get latest; 2/ run the database schema migration scripts on your personal db instance; 3/ update any changed stored procedures.

Every check in should be atomic.

In other words, every check in should contain all the code, stored procedure and schema changes needed to leave you in a state where you could get latest, build from scratch, generate a database from the scripts, run all the tests, and everything would work.

Have no more than one branch (the version in production) and a trunk (the version in development).

Branches mean duplicated effort. If you release every week, you don't need one at all.

If you work out a good way to seamlessly keep the user trained on the latest version, let me know. I have a feeling the correct solution will involve the app automatically training people on how to use it, which would work for new users too. See Basecamp for an example of this.

Thursday, March 01, 2007

Freelancing on Rails

As of next week I am finally taking the plunge and becoming a full-time 'Freelance Rails Developer'.

I have a couple of "warm leads" already so things are looking good.

I've been using Rails for the last couple of years while commercially working with .Net web development. The frustration of being able to achieve more in a weekend with Rails than I could during a whole week with ASP.Net finally got too much.

I'll be blogging about my experiences as I go, so stay tuned as I find my feet with freelancing, dealing with clients, quoting on work, marketing my skills, and working from home.

So... as a plug: if you know of anyone that needs some web development done, please let me know! You can contact me through freelancing@nigelthorne.com

Tuesday, February 27, 2007

How to Update Gems from Behind a Proxy

I have been trying to work out how to install and update Rails from behind a proxy that requires authentication. The issue: web browsers prompt you for your proxy login details; the command prompt on Windows doesn't.

I finally got it working but it wasn't quite how the Gem FAQ stipulates, so I thought I would share it.

The Simple Solution

Execute the following command from the command prompt.

set HTTP_PROXY=http://[username]:[password]@[proxyserver]:[port]

Your gem commands should now work fine.

The Complex Solution

  • Get hold of the 'NTLM authorization Proxy Server' currently on source forge as ntlmaps.
  • Configure it so it knows your login for the real proxy server. (You edit server.cfg; see the Gems FAQ page for details.)
  • Set HTTP_PROXY as above but pointing to your new proxy server
  • If it still doesn't work, try passing your login credentials to the new proxy server.

Note: I tried the gem commands using the -p switch but most commands don't work properly. I was getting "ERROR: While executing gem ... (Errno::EBADF)"

Using the environment variable HTTP_PROXY instead fixed this.

Thursday, February 01, 2007

EnABLE your interfaces

Interfaces exist to explain the abilities of a class.

Good: Enumerable (you can enumerate it), Sortable (you can sort it).
Bad: IDrawing, IGeometricObject, ITransaction

Question: IList... good or bad?


powered by performancing firefox

Saturday, January 13, 2007

'Where is the MVC Split?' for web apps.. or Client Side Partials

MVC is a great way to apply separation of concerns to an application's architecture. Just how far should it go?

(Quick introduction to MVC)
MVC stands for Model, View, Controller. It defines a split in responsibilities between the Model (your representation of the business problem domain), the View of the data, and the Controller, which knows how to orchestrate the various actions and queries you can perform on the data Model.

The Controller works like a DVD player. When it is told to play, the DVD player queries the DVD and sends the data to the TV. The TV (being the view) then decides how to present the data so you can see it.

In Rails the TV would be the view templates plus the bit of Rails that knows how to render those as HTML, the DVD player would be one of the controller classes, and the DVD is represented by a model class.

The advantage of this design is the separation of concerns. If you swap your TV for a cinema overhead projector, you don't have to change your DVDs or player. In the same way, if you decide to render your data as XML, or provide a mobile phone version of your site, you don't have to change your controllers or models. This separation means the graphic designers are able to experiment with the look and feel of your site without having to get code changes made.

Notice that to control the volume or the brightness of the picture, you (being the user) tell the TV, not the DVD player or the DVD. These are view-related actions. In Rails these would be implemented using javascript (like the shrinking and expanding of menus or areas of the screen).

(MVC intro. ends)

I had the following situation: I have a screen full of tree_elements, and a sidebar that lists all root tree_elements. So one list shows a subset of the data the other shows. If you create a new root element, both lists need to be updated.

The sidebar is implemented as a 'partial' that is rendered by the application layout, and hence exists on all my pages. There is a 'quick_add' form in the sidebar to allow you to quickly add new root items. The quick_add form uses ajax and RJS templates to tell the page to add items to the lists.

The problem is that if you are on the screen that lists all items and you add a new root item, there are two lists to update with the item. If you are on another page, there is only one list to update. As a result the RJS template has to contain some conditional javascript logic that says 'update the sidebar list of items, and if the list of all items is on the page, then also update that one'.

My concern is this... The RJS for the 'add a new tree item' has to know about all the possible places the tree item will appear, so it can tell the view (whichever one the user happens to be on) to update.

This is wrong! If I add a new screen that presents this list in another way, I would have to extend the RJS for the 'quick_form_create' action to handle yet another special case. The code smell here is 'shotgun changes' [you have to make changes all over the place unrelated to the newly added thing]. Really the RJS is sending events to the browser. The page should take the event and decide what needs updating.

I am trying to find a nice way to do this. The problem I have at the moment is that when the event causes the page to want to render a new item on the list, the template for it is in the partial (server side). The page could request this in a separate call, but I want to minimize chatter between the page and the controller. Another solution is for the session to hold onto the current view so the controller knows what to send, but this again blurs the boundary between view and controller: the controller knows too much. What I'll look into next is having the page hold onto a DOM version of the partial, so it can copy it, populate it, and add it to the DOM in the right place. This seems the best solution to me, but it would need some nice wrapper code in Ruby to make it easy to set up.

... [watch this space] ...

the story continues... :)


The GWT way to Define UATs

Aslak Hellesøy (of RSpec fame) was interviewed on the Ruby on Rails podcast recently (10/09/06). One thing that sparked some interest for me was what he called GWT.

GWT is a verbal framework for helping customers express their requirements in a form that can easily be translated into code. Simply put, requirements need to conform to the following sentence structure: Given __, When __, Then __.

E.g. GIVEN I am on the search page, WHEN I enter Thorne and click Search THEN Nigel Thorne's website should be displayed in the results.

This certainly seems a lot more accessible than "what are the pre- and post-conditions for the feature?" :)
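The sentence structure maps directly onto the body of a test. An illustrative Ruby sketch (SearchPage is made up for the example; this is not RSpec's actual API):

```ruby
# A stand-in page object so the example runs on its own.
class SearchPage
  def initialize
    @results = []
  end

  def search(term)
    @results = ["Nigel Thorne's website"] if term == "Thorne"
  end

  attr_reader :results
end

# GIVEN I am on the search page
page = SearchPage.new
# WHEN I enter Thorne and click Search
page.search("Thorne")
# THEN Nigel Thorne's website should be displayed in the results
page.results.include?("Nigel Thorne's website")  # => true
```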

