Wednesday, December 21, 2005

www.greedyme.com

Nice site. A free universal wish list site.

Check out my list here

Friday, December 16, 2005

Selenium 'AndWait' timeouts

We are using Selenium at work for our UATs. Our continuous integration server is running our UATs after each commit to verify the app still works as expected.

A recurring problem has been buttons on pages that only sometimes cause postbacks. The issue is that when a postback occurs you have to use the command 'clickAndWait'; when no postback occurs you just use 'click'. So... if you use 'clickAndWait' and there is no postback, Selenium sits there waiting for a page refresh that will never come. In effect the test runner hangs.
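For context, a Selenese test step is just an HTML table row, and the command in the first column decides whether Selenium waits for a page load after acting (the locators here are made up for illustration):

```html
<!-- causes a postback: Selenium waits for the new page to load -->
<tr><td>clickAndWait</td><td>btnSave</td><td></td></tr>
<!-- no postback: Selenium carries straight on to the next command -->
<tr><td>click</td><td>chkOption</td><td></td></tr>
```

Pick the wrong command for a button that only sometimes posts back and you get either a premature next step or the hang described above.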

Our build process has a timeout set, so eventually the NAnt task running the UATs times out. This timeout has to be set pretty long, as the UATs currently take 20+ minutes. As a result we are not getting fast enough feedback.

My Solution...

I hacked away at the selenium code a bit to introduce a timeout for all 'AndWait' commands.

Here are the code changes:

In selenium-executionloop.js I replaced

/**
 * Busy wait for waitForCondition() to become true, and then continue
 * command execution.
 */
this.pollUntilConditionIsTrue = function () {
    if (this.waitForCondition()) {
        this.waitForCondition = null;
        this.continueCommandExecutionWithDelay();
    } else {
        window.setTimeout("testLoop.pollUntilConditionIsTrue()", 10);
    }
};

with

/**
 * Busy wait for waitForCondition() to become true, and then continue
 * command execution. Gives up after a fixed number of polls.
 */
this.pollUntilConditionIsTrue = function () {
    LOG.info("Polling:");
    this.pollUntilConditionIsTrueWithTimeout(2000); // 2000 polls * 10ms = ~20 second timeout
};

this.pollUntilConditionIsTrueWithTimeout = function (timeoutChances) {
    if (this.waitForCondition()) {
        this.waitForCondition = null;
        this.continueCommandExecutionWithDelay();
    } else if (timeoutChances <= 0) {
        // Out of chances: fail the command and end the test run
        // instead of hanging forever.
        var message = "'AndWait' command timed out.";
        LOG.error(message);
        this.commandError(message);
        this.testComplete();
    } else {
        window.setTimeout("testLoop.pollUntilConditionIsTrueWithTimeout(" + (timeoutChances - 1) + ")", 10);
    }
};



It is not tested on anything other than IE, and there are no unit tests... but it seems to work for us.

Friday, December 02, 2005

Custom Radio Station

I am totally impressed with Pandora, a site that allows you to specify songs or artists that you like and, by doing so, create a customized radio station.

Nice solution to time sheet problems

I believe that as a professional I must be accountable for how I spend my working hours. I should be able to answer questions like "How long did feature X take to implement?", "Why did Y take so long?" and "What have you been working on for the last week?". My memory is not always up to this challenge.

To solve this I wrote a very simple 'proactive time tracker'. It regularly pops up a message box to ask "What have you been doing for the last X minutes?" and logs the answers. This allows me to automatically record what I got up to in a day.
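A minimal sketch of the idea in JavaScript (the class and method names are invented; my actual tool used a message box rather than the console):

```javascript
// Sketch of a 'proactive time tracker': every interval, ask what you
// were doing and append the answer to a log. All names are invented.
class TimeLog {
  constructor() {
    this.entries = [];
  }

  // Record an answer to "What have you been doing for the last X minutes?"
  record(minutes, activity) {
    this.entries.push({ when: new Date().toISOString(), minutes, activity });
  }

  // A day's log, one line per answer.
  summary() {
    return this.entries
      .map((e) => `${e.minutes} min: ${e.activity}`)
      .join("\n");
  }
}

// In the real tool a timer drives the prompt, something like:
//   setInterval(() => log.record(30, promptUser()), 30 * 60 * 1000);
```

The interesting part is not the prompt but the log: at the end of the week you have a timestamped record to answer "what did feature X cost?" from.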

Another project trying to solve this same problem is: TimeSnapper

It takes regular snapshots of your screen (at a configurable interval and resolution) and allows you to later play this back as a movie :) or jump to any point in time.

I reckon it's a nice solution. I am looking forward to replaying my week :)

Tuesday, November 29, 2005

"10 rules for web startups" from the co-founder of Blogger

http://evhead.com/2005/11/ten-rules-for-web-startups.asp

I like reading this sort of content. It gives me hopes for the future of web applications. I strongly suspect that what he says is correct, however I feel the need to prove it to myself.

Another quote I read somewhere. "Set yourself a 10 day deadline and publish no matter how little functionality is complete".

How to make the ultimate paper airplane

I have to try this too at some point.

Free book from Dilbert's Author

This has been available for ages, but I only just noticed it. http://www.andrewsmcmeel.com/godsdebris/

Free from the guy that writes Dilbert... it claims to be a thought experiment.

I have been accused several times of not fully qualifying as a geek. This is mainly due to having inadvertently failed to read several Neal Stephenson books. I am currently attempting to make amends for this oversight by reading "The Diamond Age". So far... I am loving it. It's all nano-tech and programmery. Definitely my sort of thing.

"Snow Crash" is actually the next on my list.

Any recommendations for must read geek books?

Thursday, November 17, 2005

Round corners on divs using css and javascript

I saw this recently so I thought I would pass it on.

MochiKit - A suite of lightweight javascript libraries.
I love the 'rounded corners' feature. (see http://www.mochikit.com/examples/rounded_corners/index.html)

The round corners code is based on the Rico library ( see http://openrico.org/rico/home.page )

.. which in turn based its implementation on "Nifty Corners: rounded corners without images " see http://pro.html.it/esempio/nifty/

Thursday, October 27, 2005

So what's next... is getting closer

I posted a while ago about how I felt that developing in text files is such a bad idea, and that code should be a structure in memory. This would enable features like being able to instantly evaluate tests as you change the code: the ultimate in timely feedback.

Well I just came across this project today, which is getting a long way towards the future I predict.

Check out the movie demo..

Then imagine duplicating the state each time you set up a test so you can see all tests in your state at the same time. I'd love to try TDD with this tool.

Friday, October 21, 2005

Ajax in action

The best site I have seen (to demo Ajax) is script.aculo.us



It really gives you the feeling that web app clients are starting to get a little fatter again.

Thursday, October 20, 2005

And... we're back!

I had some problems recently with getting my blog working. I was convinced it was a Blogger problem. It turns out I had run out of space on my server, so it was my bad.

Big thanks to Christine at Blogger Support for all her help.

Note: the Atom feed should be working again now too. :)

Tuesday, October 04, 2005

Linus says no to specs

I just read this in slashdot...

Linus Torvalds raised a few eyebrows (and furrowed even more in confusion) by saying "A 'spec' is close to useless. I have _never_ seen a spec that was both big enough to be useful _and_ accurate. And I have seen _lots_ of total crap work that was based on specs. It's _the_ single worst way to write software, because it by definition means that the software was written to match theory, not reality."

I have got to say, I have to agree with the chap :)

see slashdot for more info...

Friday, September 09, 2005

Selenium as part of the build

We are using Selenium as our automated user acceptance test runner at work. I wanted to include it in our continuous integration script. I couldn't find any information online as to how best to do this, so here is my solution:

We are using CruiseControl.Net with NAnt on our continuous integration server.

Selenium is an automated user acceptance test (UAT) framework written in JavaScript. It interprets tests (in the form of HTML) into commands that it uses to drive and validate a website that it has open in an iframe. To run Selenium you open up a browser pointing at your Selenium test runner page. Our URL is something like this (but on one line, obviously)...

http://localhost/selenium/TestRunner.html?
test=http://localhost/acceptanceTests/TestSuite.aspx&
auto=true&
resultsUrl=http://localhost/acceptanceTests/PostResults.aspx

Parameters
  • 'test' - specifies the test suite to run
  • 'auto=true' - run as soon as the page loads
  • 'resultsUrl' - tells selenium a page it can post the results to
The problems I faced were as follows:
  • Run the selenium test from NAnt
  • Detect when the tests have finished running
  • Cause the build to fail if any tests failed
I chose to write a custom NAnt task to help achieve this. My NAnt task fires up Internet Explorer and tells it to navigate to the aforementioned URL. It then listens to the 'load complete' events in the browser until it gets told the final 'PostResults.aspx' page has been loaded.
The PostResults.aspx is a simple ASP.Net page that saves the passed in test results out to a file in a well known location. The NAnt task then checks this file to see if any tests failed. If they did then it throws a Build Failure exception. If not it just completes successfully.

This worked really well.

Here is an excerpt from our NAnt script:


<target name="--runSelenium">
    <trycatch>
        <try>
            <selenium
                seleniumUrl="http://localhost/selenium/TestRunner.html"
                testsuiteUrl="http://localhost/acceptanceTests/TestSuite.aspx"
                resultsUrl="http://localhost/acceptanceTests/PostResults.aspx"
                errorLogFile="C:\temp\Log\error-results.html" />
        </try>
        <finally>
            <exec program="taskkill" commandline='/FI "windowtitle eq Selenium*"' failonerror="false" />
        </finally>
    </trycatch>
</target>



I have been thinking of making the 'PostResults.aspx' page take the 'errorlogfile' URI as a parameter; currently this filename is duplicated. I would also like to allow the NAnt file to specify a timeout for the Selenium process. I could also move the 'taskkill' logic into the NAnt task.

Here, you have a go:


  • Download the project from here .. (Update: Sorry this file was lost a couple of server moves ago. If anyone fancies giving writing it a go, let me know so I can reference it. It wasn't hard to write.)

  • Drop the Selenium.Tasks.dll into your NAnt/bin folder.

  • Put the above NAnt code in your NAnt script..

  • Write yourself a PostResults page (check out the selenium site for details)
  • Away you go.
When I get time I'll post about how we are generating our Selenium tests from C#, which is allowing us to remove duplication and make really succinct tests.

Wednesday, September 07, 2005

Wrong Problem

I saw some code the other day that totally impressed and disheartened me.

If the code could be likened to a car, it would be a car with the best suspension system ever invented, able to give the driver a smooth, comfortable ride. The car would also be sporting a lovely set of square wheels.

Quite some time and effort had been put into solving how to handle the bumpy ride, but they missed the real problem.

It is far too easy to fall into this trap.

Sunday, August 07, 2005

So what's next?

Code is a document. A document is written in a language and describes something of meaning. A language is a bunch of terms and a grammar that give the author a framework to present their meaning. In programming, the 'document' is intended to describe a set of instructions that the computer can understand. In what form does the computer want these instructions? Well, as a document in another language. So a compiler just translates the code from one language to another, right?

Translation involves two stages: transforming the original document into an 'understanding' or model of the document's meaning, then expressing this again in another language.

When dealing with a human translator you can say anything. Where there is ambiguity the translator can then ask questions to ensure he has captured the meaning correctly. He can then express the meaning in the new language. The process is interactive.

Your IDE/Editor should be just as interactive.

The future of computing is in making the interaction as simple as possible. The smarter the IDE, the simpler the interaction. When the translator knows what you mean you should be able to stop typing. When the translator is confused it should tell you. Working with an IDE should be a conversation.

Unit testing is good! Unit testing is the gateway to example driven programming. 'When I do X and Y, Z should happen'. This is a form of requirement specification.

Talking of which... It is easy to see that any 'requirements' document that completely explains the requirements of the system is code. The code is just in a language that is much harder to create a software based translator for, so we use humans to achieve it. To me this is the least rewarding form of programming.

With OO software development the codebase breaks down into the model, business rules and user interface. There is also persistence, networking etc., but let's ignore those for now.

The model is just a way of extending the terms in the language you are using, to allow you to more concisely explain the system behavior (business rules) you wish to explain to the computer.

If you are extending the terms in the language, why not the grammar too? You are then describing your own Domain Specific Languages. JetBrains are doing a great job in this area. They intend to create a development environment that allows you to quickly create a programming language that represents the problem domain. They also want to provide you with an intelligent 'editor' for your language, capable of intellisense, refactoring etc. This is very exciting. The other approach is to use a language like Ruby or Smalltalk, where the language lends itself to easily being extended to include your new concepts.

It would be possible to have the IDE aware of unit tests and evaluate them against the current codebase. It could then give you realtime feedback on what code is covered by tests. It could remember the code coverage for each test. Any change you make to the codebase from that point on that affects an existing test forces that test to be re-evaluated. If the test then fails, the user is notified: 'Your change just invalidated one of your tests'. The faster you get feedback, the fewer the bugs.

IDEs should be more intelligent. I don't just mean smart. I mean AI intelligent. A mixture of automatic refactorings, 'good' design heuristics and Genetic Algorithms could lead to a system that automatically generates the simplest code solution that meets all the requirements (unit tests). Programming would then be the process of coming up with examples where the existing solution fails to meet the requirements.

Friday, July 08, 2005

Prevayler for .Net

Prevayler is a persistence engine that is a cool alternative to a database. It boasts performance of about 3000 x faster than MySQL (if I remember correctly).

The basic premise is that RAM is cheap (and if not today, it will be tomorrow), so keep all your business objects alive in RAM all of the time. To handle server crashes you regularly take snapshots of the server. You employ a command pattern for all actions that cause changes to the state of the system. The commands are serializable, so they get logged before they are executed. If the server goes down, you load the last snapshot and then replay the commands logged since the last save, and there you have it: the server is back to the same running state.

You don't even need to stop the server to take the snapshot. You just run another copy of Prevayler (loaded from the last snapshot), get it to run up to speed using the log, then stop processing the log and save again.

Since everything is live in memory all the time, asking a customer for all his orders is as simple as accessing the Customer.Orders property.
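A toy sketch of the prevalence pattern in JavaScript (all names are invented; real Prevayler is Java): commands are logged before execution, so replaying the log over a snapshot rebuilds the live state.

```javascript
// Toy system prevalence: state lives in RAM; every state-changing
// command is appended to a log before it runs. Names are invented.
class Prevayler {
  constructor(system) {
    this.system = system;   // the live business objects
    this.log = [];          // commands executed since the last snapshot
  }

  // Log the command, then execute it against the live system.
  execute(command) {
    this.log.push(command);
    command.executeOn(this.system);
  }

  // Take a snapshot of the state and clear the log.
  snapshot() {
    const copy = JSON.parse(JSON.stringify(this.system));
    this.log = [];
    return copy;
  }

  // Recovery: start from a snapshot and replay the logged commands.
  static recover(snapshot, log) {
    const system = JSON.parse(JSON.stringify(snapshot));
    for (const command of log) command.executeOn(system);
    return system;
  }
}

// Example command: add an order to the system.
class AddOrder {
  constructor(order) { this.order = order; }
  executeOn(system) { system.orders.push(this.order); }
}
```

Because commands are deterministic and logged before they run, replaying them over the snapshot is guaranteed to land on the same state the live server had.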

Check out the Skeptics FAQ for more discussion.

Check out http://bbooprevalence.sourceforge.net/ for a .Net implementation.

New Ruby on Rails release!

A new version of ruby on rails just came out :) with Ajax support ...check out http://script.aculo.us/demos/shop for a demo

Also check out rails-013-225-featuresfixes-in-75-days
for full details. They have made many changes.

Wednesday, June 29, 2005

NMock 2 - new syntax, new features, same problem

NMock is working on a new version of their popular mocking framework.
You can now mock Properties and Events, and it has a totally new syntax.

Expect.AtLeastOnce.On(joker).Method("Ha");

You are still limited to using "strings" for method names, so you still have the issue of changing your tests manually when you rename methods. (see EasyMock as an alternative)

Thanks to Roy Osherove for highlighting NMock's latest efforts.

Monday, June 27, 2005

Comments are deodorant for code

"Code smell" is the rather evocative term used to describe an aspect of a piece of code that indicates a fundamental problem with the current design. Comments are frequently used to explain what is going on in complex code. Therefore they act like deodorant on smelly code.

Most comments I read in code are totally pointless. 'print "hello"; // this line prints hello' or worse 'print "goodbye"; // This line prints hello'. Another common use is as follows...
void someMethod()
{
    // add a new user
    ... code to add the new user ...

    // give the user a new password
    ... code to give the user a new password ...

    ...
}

What you have here is a whole bunch of methods all tied together. My money is that if you look elsewhere in the code you will find another method that also implements the 'add new user' code. Methods like this make duplication much harder to spot.
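The refactoring this implies looks something like the following (sketched in JavaScript with invented names): each commented section becomes a named method, and the comments become redundant.

```javascript
// Before: one long method with comments marking each section.
// After: each section is extracted into a method whose name says *what*
// it does, leaving the calling method as an easy-to-read list of steps.
class UserService {
  constructor() {
    this.users = [];
  }

  // Reads as the list of steps the comments used to describe.
  registerUser(name) {
    const user = this.addNewUser(name);
    this.giveUserNewPassword(user);
    return user;
  }

  addNewUser(name) {
    const user = { name, password: null };
    this.users.push(user);
    return user;
  }

  giveUserNewPassword(user) {
    user.password = "pw-" + Math.random().toString(36).slice(2);
  }
}
```

Now `addNewUser` is a named thing that can be spotted, reused, and compared against any duplicate elsewhere in the codebase.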

Code should be self explanatory. If you have to put a comment in the code to explain some aspect of its behavior then the code is not clear enough.

There are three things you are trying to get across to the reader of your code:
1/ What is it doing?
2/ How is it doing it?
3/ Why is it doing it?

Here is how to address each of these concerns.

What is it doing?
The name of the method answers this question.

How is it doing it?
The body of the method should be an easy to read list of steps.

Why?
Very occasionally, reading the 'How' of a method will leave you wondering 'Why is it done this way?'. Usually this means it's time to refactor. Sometimes, however, this is not the case. Maybe the operations are order dependent, or apparently superfluous steps are required for some API to work correctly. Explaining these situations is what comments are for. It is also a good idea to have unit tests to verify these conditions remain true.

Situations where comments are required should be rare!

So... next time you are adding a comment to explain some code, ask yourself the question "Is this comment explaining What, How or Why?" If it's not "Why?" then it shouldn't be a comment!

Wednesday, June 08, 2005

Tuesday, June 07, 2005

Viral Marketing

I was reminded the other day about viral marketing. There is a great FREE book on the subject at http://www.sethgodin.com/ideavirus/01-getit.html

The basic concept is that ideas act like viruses. Some ideas (the successful ones) catch on and spread. The book is about how to design and present ideas so they can be easily spread, improving the chances of them taking off.

A good recent example of this is beer commercials. Manufacturers have started creating adverts for release on the internet. If you wrap your message in something funny, people will email it to each other. Jokes on the internet move very fast indeed. Adverts spread in this manner reach millions in no time, and cost the company practically nothing.

Tuesday, May 31, 2005

Musical Office

I frequently work with teams that like to listen to music while developing. I hate headphones. I find they are worse than putting each developer in their own office for team communication. Better to have some music you all agree upon listening to and have an Office Jukebox.

Currently we are using WinAmp with WinAmp Web Interface. This allows anyone on the team to change track, change volume, etc.

Mocking actors and critics

Talking with Craig the other day he mentioned the names 'Actor' and 'Critic' when talking about the roles of a mock object.

Looking at the EasyMock API more this week, it seems there may be a way to use the SetDefaultValue() methods to achieve what I want. We shall see.

Saturday, May 28, 2005

EasyMockNET vs NMock

Recently I have been convinced to try EasyMock instead of NMock (my mocking framework of preference). Here is how it went...

I started using NMock when all that was available was a dll you could download from sourceforge. There was absolutely no documentation and no source code. I ended up using Reflector (great tool) to work out how to use it. NMock soon became an indispensable tool.

There is a choice of mocking frameworks now (a good list is available on testdriven.net). Most are basically the same. I have been happy with NMock, but it does have one problem. Here is an example of an NMock test.

[Test]
public void LoadingScriptWithSingleKnownAction()
{
    DynamicMock mockActionLoader = new DynamicMock(typeof(IDbActionLoader));
    IDbActionLoader actionLoader = (IDbActionLoader)mockActionLoader.MockInstance;

    DynamicMock mockAction = new DynamicMock(typeof(IDbAction));
    IDbAction action = (IDbAction)mockAction.MockInstance;

    DBScriptLoader loader = new DBScriptLoader();
    loader.RegisterActionLoader(actionLoader);

    string scriptTxt = @"CreateTable
Name: newTable";

    mockActionLoader.ExpectAndReturn("HasName", true, "CreateTable");
    mockActionLoader.ExpectAndReturn("LoadAction", action, scriptTxt);

    IDbScript script = loader.LoadScript(scriptTxt);
    Assert.AreEqual(1, script.Length);
    Assert.AreEqual(action, script[0]);
    mockActionLoader.Verify();
}


Notice the "ExpectAndReturn" calls require you to specify the name of the method you are expecting as a string. The problem with this is that refactoring tools (e.g. Resharper) don't know these are method names. If you rename one of these methods you will have to manually rename the string to match. This sort of test maintenance increases the cost of having tests. Anything that increases the cost of tests is bad, as it pushes you towards writing fewer tests, and that is where the map is marked 'here be monsters'.

EasyMock has another way of writing tests. The mock object that is created has two modes. It starts in recording mode: to set expectations you just call methods on the object. When you have set all your expectations, you switch all the objects to replay mode. To set the return value of a method call you have to call 'SetReturnValue' just after the method has been called. This is ugly, but luckily there is some syntactic sugar to wrap this up in most cases. Here is the same test again in EasyMock (using the ExpectAndReturn syntax).

[Test]
public void LoadingScriptWithSingleKnownAction()
{
    MockControl mockActionLoader = MockControl.CreateStrictControl(typeof(IDbActionLoader));
    IDbActionLoader actionLoader = (IDbActionLoader)mockActionLoader.GetMock();

    MockControl mockAction = MockControl.CreateStrictControl(typeof(IDbAction));
    IDbAction action = (IDbAction)mockAction.GetMock();

    DBScriptLoader loader = new DBScriptLoader();
    loader.RegisterActionLoader(actionLoader);

    string scriptTxt = @"CreateTable
Name: newTable";

    mockActionLoader.ExpectAndReturn(actionLoader.HasName("CreateTable"), true);
    mockActionLoader.ExpectAndReturn(actionLoader.LoadAction(scriptTxt), action);

    mockActionLoader.Replay();
    mockAction.Replay();

    IDbScript script = loader.LoadScript(scriptTxt);
    Assert.AreEqual(1, script.Length);
    Assert.AreEqual(action, script[0]);

    mockActionLoader.Verify();
    mockAction.Verify();
}


ExpectAndReturn in EasyMock is the syntactic sugar that makes the interface look like NMock's. The following are equivalent...

mockActionLoader.ExpectAndReturn(actionLoader.HasName("CreateTable"),true);

or

actionLoader.HasName("CreateTable");
mockActionLoader.SetReturnValue(true);


In the ExpectAndReturn methods the first parameter is ignored. It is just a nice place to put your method call to the mock object.

I started using EasyMockNET on a new project and quickly found that a simple helper class makes using EasyMock even easier.

public class MockManager
{
    private ArrayList mocks = new ArrayList();

    public MockControl CreateMock(Type typeToMock)
    {
        MockControl strictControl = MockControl.CreateStrictControl(typeToMock);
        mocks.Add(strictControl);
        return strictControl;
    }

    public void StopRecordingAndStartPlayback()
    {
        foreach (MockControl mock in mocks)
        {
            mock.Replay();
        }
    }

    public void VerifyAllExpectations()
    {
        foreach (MockControl mock in mocks)
        {
            mock.Verify();
        }
    }

    public void ResetAllExpectations()
    {
        foreach (MockControl mock in mocks)
        {
            mock.Reset();
        }
    }
}


Creating an instance of this class in the SetUp method of the test fixture, and using CreateMock instead of MockControl.CreateStrictControl, allows you to verify, reset or replay all the mocks in one go.

One problem I have with mock objects (in general) is how easy it is to make your test dependent on the implementation details of the class under test. A good unit test should verify the external behavior of the class, not the methods it uses. The very nature of mock objects forces you to set expectations on how a class is used. I don't care if the calling class uses my "HasItems" method, as long as it doesn't try to access the items when there aren't any. If the calling class decides to call this method two or three times I don't really care either. So... I want to be able to say "method, don't expect to be called, but if you are called with these parameters then return this value". I have added such a method to my local version of NMock. I guess I will have to add something similar to EasyMockNET.
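The behavior I want, sketched in JavaScript (a hypothetical helper, not part of the NMock or EasyMockNET APIs): a stub that never demands to be called, but returns a canned value whenever it is.

```javascript
// A 'nice' stub: it never *demands* to be called, but when it is
// called with matching arguments it returns the canned value.
// Illustrative only; not an NMock or EasyMockNET API.
function stub(cannedCalls) {
  return new Proxy({}, {
    get(_, methodName) {
      return (...args) => {
        const match = cannedCalls.find(
          (c) => c.method === methodName &&
                 JSON.stringify(c.args) === JSON.stringify(args)
        );
        if (match) return match.returns;
        throw new Error("Unexpected call: " + String(methodName));
      };
    },
  });
}

// Usage: zero, one, or many calls all pass; only an unknown call fails.
const store = stub([{ method: "hasItems", args: [], returns: false }]);
```

Verification never fails because of call counts; only a call the stub knows nothing about is an error.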

When I am writing a test with mock objects I find that I use them in three ways. One is to set up data so I can put the class under test into a known state before I begin my real test. Another is as a simple data object that I verify is passed around correctly (no expectations are set). The third is the actual expectations I wish to validate for the test. The first set of expectations I don't actually want to verify in this test (they should be tested elsewhere). Thoughtfully, EasyMock provides a Reset method which throws away all the expectations set so far. You can therefore set up mocks to return values, call the init methods on your class under test so its state is set, then call Reset on all mocks. You are now ready to set your real expectations.

So... My conclusion? I'm using EasyMockNET on all my projects from now on. The fact that refactorings tools can keep my tests up to date is a big win for me. If I change my mind later, I'll let you know ;)

Wednesday, May 18, 2005

How to change your life

I have been reading a great book recently "It's your life. What are you going to do with it?" by Anthony Grant PhD and Jane Greene.

It is a book about how to be your own life coach. Every piece of advice is practical and based on research. I recommend giving it a read if you have any unfulfilled goals.

One gem I have taken away from it so far (and it is typical of this book that the advice seems obvious once you hear it) is...

If you want to make a change in your life, there are several phases that you go through.
* Precontemplation - before you start thinking of making a change
* Contemplation - weighing up whether to make the change
* Preparation - planning the change
* Action - start to take steps
* Maintenance - conscious effort required to follow the new behavior (leading to relapse or termination)
* Relapse - fall back to old habits (leading back to Contemplation)
* Termination - the new behavior is now part of your personality

So... you can help make changes by facing each of these phases full on.
When it occurs to you that you should change something, evaluate the pros and cons on paper. This will help you decide if you really want to make the change, and will lead you quickly to the Preparation stage.

Plan what changes you want to make to achieve your goal. Make them attainable and measurable. This readies you for Action.

Put your plans in place. Measure your progress in a way that lets you visibly see how you are getting on. By measuring how you are progressing you can use feedback to adjust your goals to ensure they stay attainable. This will help you enter the Termination stage. Even if you don't attain your original goal, from this new position it may now be within reach.

Relapse is expected and is not failure. Don't beat yourself up about it. Return to Contemplation and reevaluate your original reasons for wanting to make the change. This is where having defined a good list of pros and cons at the beginning will help. Review them. If they are still correct then this should help get you quickly back to Preparation. Take the time to reevaluate the plans you previously made. Adjust them if they are not measurable and attainable.

and repeat.

Skype on PDAs

Can someone do me a favor?

If you have a Windows CE PDA with a speaker, mic and wireless... install Skype on it!
Being able to send and receive free calls with friends while you are in range of a network sounds cool.

Anyone tried it? I'd love to hear how it went.

Cheers

Refactoring Thumbnails

It is a cool way to view refactorings.
http://refactoring.be/thumbnails/ec-proxy.html

Tuesday, May 17, 2005

How OO are you?

I once introduced a very smart C coder to C++. I had the pleasure of working with the guy, and what impressed me most was how he followed a coding standard of his own invention.

Namely:
* All methods take a structure as their first parameter.
* Only methods related to a structure live in the same .h and .cpp files.

Basically the guy was doing OO and didn't know it.

So... What is the opposite of OO?

I recently had to interview a guy that claimed to know C++, Java, C# and more. In the interview it turned out that he didn't like objects and preferred to just use functions and global variables. In Java, for example, his code would consist of a single class containing all methods and variables.

It seems that procedural and OO are different ends of some spectrum, which makes me think: 'Whereabouts on this spectrum does my code live?' I hope it's up the OO end. That's my aim. I have found that TDD has helped me move much closer to the OO end than I thought possible. I am going to make a conscious effort to see if I can get any further away from procedural code.

Basically I am going to try to make all method bodies as small as possible (try to get to one line) and use objects not basic types (try to pass at most one parameter to each method). Also I'll make sure no methods have any side effects. :)
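A toy example of the style I'm aiming for (all names invented): small bodies, at most one object parameter, no side effects.

```javascript
// Value-object style: methods are one-liners, take at most one
// argument, and return new objects instead of mutating state.
class Money {
  constructor(cents) { this.cents = cents; }
  add(other) { return new Money(this.cents + other.cents); } // no mutation
  isMoreThan(other) { return this.cents > other.cents; }
}
```

Because `add` returns a fresh `Money` rather than changing either operand, every method stays one line and side-effect free.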

We'll see how I get on.

Friday, May 13, 2005

Hungarian Notation is good? Since when?!?

Wow! It seems that the completely pointless naming standard I was taught years ago called 'Hungarian Notation' was a misinterpretation of a slightly more useful coding standard.

Check out JoelOnSoftware for the details : Making Wrong Code Look Wrong
http://www.joelonsoftware.com/articles/Wrong.html

Thursday, May 05, 2005

Refactoring is optimization

Refactoring is a form of Optimization. You are optimizing the code for maintainability and clarity.

The rule about premature optimization is that you don't optimize until you need to. This prevents you spending time optimizing code that doesn't need it. You measure where you need to optimize before optimizing. In reality the optimizations needed are never where you would have guessed. So the question is... in the case of refactoring, what is premature optimization?

In my opinion, premature optimization is when you design the code before you write it. You are guessing at what concepts you are going to need to hold in the code, and not being driven by the actual needs of the system. You end up over-designing a system for which a simple solution would work just as well (see YAGNI).

Hence the order of Red, Green, Refactor.
* Red - Add an idea to the requirements.
* Green - Add the concept to the code.
* Refactor - Distill the new concepts for clarity.

Note that refactor is after the idea is in the system, not before!

These thoughts came out of a chat with Simon Harris the other week.

What the hell was MSWord thinking?

Right
Resharper makes an icon appear to say 'I know how to do some stuff with the thing you have highlighted'. The user is not interrupted. The code doesn't change without being asked (well except in Visual studio's VB editor, but that doesn't count).

Wrong
MSWord changes stuff while you type without asking. It then highlights the changes to give you the chance to undo them. I spend more time undoing the mistakes it makes with its guesses than any possible benefit I get from its changes.

Editors should be intelligent, but they should offer assistance when asked, NOT when they feel like it.

You do that a lot! Want to automate it?

Imagine an IDE that watched your key presses/edit actions. It would look for duplication in the stream of actions and say "Hey, are you doing that thing you did before? Do you want me to do the rest for you?" humm... Sounds horribly like "you seem to be writing a letter..." arrgh.

ok.. Better... You activate an 'I've done this a couple of times, can you do it for me?' function. It suggests a list of things you keep doing. You get to store away auto-macros (as I am now calling them) so you can keep them if they are useful.

The list could be ordered by recent, or popular.
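A crude sketch of the duplication spotting, assuming edit actions arrive as a simple list of strings (the action names are invented):

```python
def repeated_tail(actions, max_len=20):
    """Return the longest action sequence that occurs twice in a row at
    the end of the stream, or None if the user isn't repeating themselves."""
    for size in range(min(max_len, len(actions) // 2), 0, -1):
        if actions[-size:] == actions[-2 * size:-size]:
            return actions[-size:]
    return None

# The user has prefixed two lines with "- "; the editor could now offer
# to replay the repeated tail as an auto-macro.
stream = ["down", "home", "type:- ", "down", "home", "type:- "]
print(repeated_tail(stream))  # ['down', 'home', 'type:- ']
```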

Just a thought

Rapid feedback

Ok.. This is a flight of fancy I had a few months ago, but it's still in my head, so I thought I would let it free on the web.

Imagine an IDE where as you type in new code you not only get told if the code compiles, but also if the unit tests pass with this new code.

IDEs now hold a parsed version of your code so they can highlight compiler errors as you type. It's not that big a step to interpret the tests over the in-memory parse tree to see how the code will execute. The IDE could easily enough keep track of which lines of code are executed by which test. Then it would only need to re-interpret the tests potentially affected by each edit.

This would be the ultimate in rapid feedback :)

Wednesday, May 04, 2005

.Net on Rails?

Castle.Net is a bunch of libraries for everything from IoC to 'Castle on Rails'. There are a couple of tutorials on Code Project (http://www.codeproject.com/csharp/IntroducingCastleII.asp).

I intend to have a play with it, so expect to see more posts on Castle in the future. :)

Atom Feed

I have an atom feed on this site, but I just noticed that you can't get at the link easily due to the way the URLs are hidden on my site.

Here is a link you can use until the issue is fixed.

http://web.aanet.com.au/nigelthorne/blog/atom.xml

From the guys that wrote Ruby On Rails

Ruby on Rails was an offshoot of a product these guys made for project management. Today they released a site aimed at life management.

http://www.backpackit.com/

It's free to join... and has so much potential!

Check it out.

Tuesday, March 29, 2005

Version the file system not the files

Most version control systems miss the point. They version the files fine; most fail to sufficiently version the file system.

When you are working on a codebase you want to be able to write new classes, rename them, move them to other assemblies, etc. A single class (throughout its life) may have lived in various different directories, in variously named files.

SourceSafe allows you to rename files or directories, but once they are renamed it is as if they have always had that name. If you ask for an old version, it still has the new name. Not what you want.

Subversion is better. It remembers the file's filename and content, so getting a version as of a past date works correctly. However, Subversion still requires you to use Subversion to rename the file, otherwise it can't track the change. What if you are using a refactoring tool or IDE that does the file renaming for you?

What is needed is a version control system that watches the changes you are making to the filesystem as a whole and tracks these changes for later. Renaming a file on the hard drive should be enough for the version control software to know the file was renamed.
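One way a watcher could infer a rename from raw filesystem events is to pair a delete with a later create that has identical content. A hypothetical sketch (real watchers, like .Net's FileSystemWatcher, can also report renames directly):

```python
import hashlib

def infer_renames(events):
    """events: (kind, path, content) tuples in arrival order. Return
    (old_path, new_path) pairs for deletes later matched by a create
    with identical content. Unmatched events are ignored in this sketch."""
    pending = {}   # content digest -> deleted path
    renames = []
    for kind, path, content in events:
        digest = hashlib.sha1(content).hexdigest()
        if kind == "deleted":
            pending[digest] = path
        elif kind == "created" and digest in pending:
            renames.append((pending.pop(digest), path))
    return renames

events = [("deleted", "Old.cs", b"class A {}"),
          ("created", "New.cs", b"class A {}")]
print(infer_renames(events))  # [('Old.cs', 'New.cs')]
```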

Last weekend I started a small application that attempts to achieve this goal: SVNMirror.

SVNMirror mirrors all changes made to a directory (and all subdirectories) in subversion. Here is how it works...

SVNMirror watches a source directory for changes (using the .Net FileSystemWatcher classes). SVNMirror is also given a versioned directory (which would usually be your working directory if you were using Subversion in the standard way). When a change occurs, the change gets mirrored in the versioned directory.

For example:

* You create a new file in your source directory. SVNMirror copies the file to your versioned directory and tells subversion to add the file to version control.
* You modify a file in your source directory. SVNMirror copies the change to your versioned file.
* You delete a file in your source directory. SVNMirror marks the file as deleted in subversion.
...
etc.
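The mapping above can be sketched as a pure function (Python here for brevity; the real SVNMirror is .Net, and these svn commands are only assembled, never run):

```python
def svn_command(event, path, new_path=None):
    """Translate one watcher event into the svn operation that records it."""
    if event == "created":
        return ["svn", "add", path]       # after copying the file across
    if event == "deleted":
        return ["svn", "delete", path]
    if event == "renamed":
        return ["svn", "move", path, new_path]
    if event == "changed":
        return None  # copying the new content into the working copy suffices
    raise ValueError("unknown event: " + event)

print(svn_command("renamed", "Old.cs", "New.cs"))  # ['svn', 'move', 'Old.cs', 'New.cs']
```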

There are still a few issues with renaming files that have not yet been committed to the repository, but other than that the proof of concept works fine. :)

I'll post a link to the page once I make a site for the project.

Tuesday, March 22, 2005

What exactly are you estimating?

"How long will it take to get to Sydney from Melbourne?"

To answer this you need to know three things:
* How fast will you be going?
* What route will you take?
* How accurate are your estimates for speed and distance?

Whenever you estimate a task in terms of time you are combining three independent variables: the productivity of the team, the magnitude of the task, and the accuracy of your estimates.

* To drive direct to Sydney is about 1000 km.
Assuming you don't need to take any detours or get lost.

* The average velocity for the trip is around 100km/hour.
Assuming your car doesn't break down, or you have an accident, or the kids need the toilet every 20 mins ... etc.

So... 1000 km at 100 km/h is 10 hours of driving; allowing some slack for the assumptions above, I could quote my estimate as 13 hours.

If I had done the trip a few times, and it always took between 10 and 13 hours, then maybe I would be much happier to make that quote.

But... what if I was gambling $100,000 on being able to get someone to Sydney in that time? I wouldn't be happy with the estimate any more. I would want to allocate extra time in case the unforeseen happened.
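Carrying the three variables as ranges rather than point values makes that risk explicit. A quick sketch, reusing the trip figures (the spreads themselves are made up):

```python
# Route (magnitude) and speed (productivity), each as a best/worst range;
# the width of each range reflects the accuracy of that estimate.
dist_km   = (1000, 1100)   # direct route vs. detours and getting lost
speed_kmh = (80, 100)      # breakdowns, toilet stops, traffic...

best  = dist_km[0] / speed_kmh[1]   # everything goes right
worst = dist_km[1] / speed_kmh[0]   # everything goes wrong
print(f"{best:.1f} to {worst:.1f} hours")  # 10.0 to 13.8 hours
```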

What level of risk is your customer happy with? Have you asked?

Thursday, February 17, 2005

How often do customers want new software?

Prompted by a thread on extremeprogramming@yahoogroups.com (How often do customers want new software?)

Frequent releases are good for development. The more often you can get the software into the hands of the users, the quicker you get feedback; and the sooner you get feedback, the easier it is to incorporate your new learning into the software.

Users sometimes shy away from any form of update or upgrade, and traditionally they have good cause. New releases frequently represent a gamble: "Will the new bugs be worse than the ones it fixes?" This is, however, not true for all software.

Virus checkers by their very nature support risk-free updates. Users trust that each update will only make the software better. AVG virus checker, for example, lets you specify how often it checks for updates. On finding an update it downloads it and prompts you to accept the change. If you turn off auto-update, the application still warns you when the virus data it is using is considered old/out of date, thereby prompting users to update.

Another product I trust in the same way is ReSharper from Jetbrains. Through their 'early access program' (EAP) they frequently release new builds. By subscribing to watch the download page I get emailed when a new build is available, and I usually download and install it that day. I trust Jetbrains to fully regression test each release (and haven't been let down once yet), so I consider the risk of each install very small. I accept the occasional bug in a new feature as this is the nature of the EAP. The risk is further reduced by the ability to uninstall the current version in favor of an older build; most of the time I just avoid using the broken feature until the next build (a couple of weeks at most). I am naturally an early adopter, and so gain value from having the latest version at my fingertips.

Jetbrains also caters for more careful customers by updating their main site each time a new badged version is released. The EAP builds tend to focus initially on new features, then on bug fixes until the new features are working in everyone's environment. They rely on the EAP community to feed them bug reports to gain this vital feedback, so the bug-reporting mechanism needs to be as pain free as possible to maximize the feedback from the process. Once the important bugs are fixed, a badged version is released to the wider community.

Within any user base there are early adopters, hungry for any and every new feature. These people are the key to making frequent releases work. If you can build trust with them you gain access to the valuable, timely feedback you require as product developers. This trust can only be built by maintaining a high level of quality for each release and being responsive to all feedback. Users stop giving feedback if it's ignored, so always respond quickly and with a human voice. Not everyone is an early adopter though, so this process needs to be 'opt in'.

What can you do to make more people early adopters?

* Minimize the cost of change: a one-click or no-click upgrade process.
* Make the benefits obvious: provide access to a list of changes represented by the update.
* Minimize the risk: guarantee the quality of each release, and provide a mechanism to roll back an upgrade if required.
* Make giving feedback painless and valued: let people know the feedback was received and provide a way for users to be kept updated with the progress of the issue.


Thursday, February 10, 2005

Find a file in Solution Explorer [updated]

Solution Explorer keeps jumping around every time you get latest. Refreshing the tree takes almost as long as getting the files. So... turn off the synchronisation. Get latest now works faster.

Sometimes you want to find the current file in the tree though, so...

Here is a macro to find the active file in solution explorer.
[updated for vs2005 with changes from Simon R.]
Sub FindFileInSolutionExplorer()
    ' Strip the directory and the ".sln" extension to get the solution's display name.
    Dim sln As String = DTE.Solution.FullName
    sln = sln.Substring(sln.LastIndexOf("\") + 1)
    sln = sln.Substring(0, sln.Length - 4)
    Dim pi As Object = DTE.ActiveDocument.ProjectItem
    Dim endString As String = "Microsoft Visual Studio"

    ' Walk up from the active document to build its path in the hierarchy.
    Dim path As String
    While (Not endString = pi.Name)
        path = "\" & pi.Name() & path
        pi = pi.Collection.Parent
    End While
    path = sln & path

    Dim UIH As UIHierarchy = _
        DTE.Windows.Item(Constants.vsWindowKindSolutionExplorer).Object
    Dim anItem As UIHierarchyItem = UIH.GetItem(path)
    anItem.Select(vsUISelectionType.vsUISelectionTypeSelect)
End Sub
Coupled with the ever useful "Collapse all the open project folders in the Solution Explorer".
Sub HideAllProjects()
    DTE.SuppressUI = True

    Dim UIH As UIHierarchy = _
        DTE.Windows.Item(Constants.vsWindowKindSolutionExplorer).Object
    Dim aSolution As UIHierarchyItem = UIH.UIHierarchyItems.Item(1)
    Dim aProject As UIHierarchyItem

    ' Collapse every top-level folder in each project.
    For Each aProject In aSolution.UIHierarchyItems
        Dim aFolder As UIHierarchyItem
        For Each aFolder In aProject.UIHierarchyItems
            aFolder.Collection.Expanded = False
        Next
    Next
    DTE.SuppressUI = False
End Sub
I map these to
[Ctrl]+, = Collapse all
[Ctrl]+. = Select currently active file

Very handy.
