Sunday, November 17, 2013

Handling a Compressed Response in Node.js

So I ran across this problem when trying to consume a RESTful service. I couldn't figure out why I was seeing nice response bodies in Advanced REST Client but a bunch of noise in the Node debugger. Eventually I saw this in the response headers:

'content-encoding': 'gzip'

That's when it dawned on me what was happening. This response header was telling me that the server was sending back compressed data. So I spent some time chasing down how to gunzip in Node. Interestingly, my requests never asked for a compressed response. A brief overview of http compression can be found here.
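
Incidentally, asking for compression is just a request header. Here is a minimal sketch of an options object that advertises gzip support - the host and path are placeholders, not a real endpoint - and the example below assumes an options object along these lines:

var options = {
    host: 'api.example.com',  // placeholder host
    path: '/some/route',      // placeholder route
    method: 'GET',
    headers: {
        // tell the server we can handle a gzipped response body
        'accept-encoding': 'gzip'
    }
};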

The example below removes all dependencies on third party libraries and focuses on the solution, free from distraction. It uses Node's Http and Zlib modules, which are part of Node (v0.10.22 at the time of writing this). If you are using Express or some other module to handle your Http requests, you'll need to get your data into Zlib in a slightly different way, but the approach should be similar.

var zlib = require('zlib');
var http = require('http');

// options: host, path, method, headers, etc. for the endpoint being called
var req = http.request(options, function (res) {
    var chunks = [];

    // the body can arrive in multiple chunks, so buffer them all
    // and gunzip once the response has ended.
    res.on('data', function (data) {
        chunks.push(data);
    });

    res.on('end', function () {
        // for some reason the response from this route is gzipped...
        // see the header 'content-encoding': 'gzip'
        // so unzip it.
        zlib.unzip(Buffer.concat(chunks), function (err, gunzipped) {
            if (err) {
                // log the error with your logging utility
                // respond with an appropriate error.
                return;
            }

            var message = JSON.parse(gunzipped);
            // do something here with your gunzipped message body
            // send a response
        });
    });
});

req.on('error', function (e) {
    // log the error
    // send an appropriate response.
});

req.write(JSON.stringify({ /* some request body here */ }));
req.end();

The trick lies in the zlib.unzip method. It expects the complete response body - the chunks collected from the http.request's data events, concatenated on the end event - to be passed in. Once gunzipped, we can do what we want with it. In my case I am expecting JSON bodies, so I parse the result into an object I can use.

In the rest of the example, req.write, req.end, and req.on('error') are the standard approach to using the Http object to make a request to some end point.

It might be a good idea to check the 'content-encoding' header before gunzipping, even when you expect it. The service provider could change the header underneath you and cause a problem. The service provider shouldn't make such a change, but that discussion is another matter entirely.
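
A minimal sketch of that check, reusing the res and chunks from the example above - if the header is absent, assume the body is plain and skip Zlib:

res.on('end', function () {
    var body = Buffer.concat(chunks);

    if (res.headers['content-encoding'] === 'gzip') {
        // compressed - run it through zlib before parsing
        zlib.unzip(body, function (err, gunzipped) {
            if (err) { return; /* log and send an error response */ }
            var message = JSON.parse(gunzipped);
            // use the message...
        });
    }
    else {
        // no compression advertised - parse the body directly
        var message = JSON.parse(body);
        // use the message...
    }
});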

Saturday, October 26, 2013

Installing Java on Ubuntu 12.x+

I've seen a couple of ways to do this on the internet, but the easiest is through apt-get and a PPA.

The catch with doing anything in Java on Ubuntu is always the same - Ubuntu doesn't ship with Oracle's Java installed. Instead, Ubuntu ships with OpenJDK, an open source implementation of Java. Some may argue that you can develop with, and probably run, most things Java on OpenJDK. That may be the case, but I find Oracle's Java more trustworthy for production environments, so I will be using the Oracle bits. This is simply a matter of opinion.

Apt-get needs to be configured to point to a PPA that takes care of the heavy lifting; then you install like anything else via apt.

Open a terminal and execute the following:

sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer

Verify that Oracle Java 8 is the default java version; the '*' character marks the selected default:

sudo update-alternatives --config java
There are 2 choices for the alternative java (providing /usr/bin/java).

  Selection    Path                                            Priority   Status
------------------------------------------------------------
* 0            /usr/lib/jvm/java-8-oracle/jre/bin/java          1072      auto mode
  1            /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java   1071      manual mode
  2            /usr/lib/jvm/java-8-oracle/jre/bin/java          1072      manual mode

Press enter to keep the current choice[*], or type selection number:

That's it. All done.

Thursday, August 29, 2013

Configuring ssh for Jenkins for Git on Ubuntu

I ran into some problems configuring ssh for my Jenkins box. It turns out that when you install Jenkins on Ubuntu via the package manager, Jenkins is set up to run as its own user. So Jenkins did not have access to my .ssh directory. Of course this made perfect sense once I stopped and thought about it.

Anyhow, I could not find a clear answer in one place, so I am posting one here. This assumes that you have an ssh key defined for your Git service of choice. I used the same key for myself and my Jenkins user; you can of course create a separate ssh key for the Jenkins user.

1. Setup the Git settings for the Jenkins account - assume the identity of the Jenkins user and configure:

sudo su jenkins
jenkins@lubuntu:~$ git config --global user.email "some email address"
jenkins@lubuntu:~$ git config --global user.name "some name you associate with jenkins"
exit

2. cd to your user home directory and copy the .ssh key info into the Jenkins account.

cd /home/<your username>/.ssh
sudo cp id_rsa /var/lib/jenkins/.ssh/
sudo cp id_rsa.pub /var/lib/jenkins/.ssh/
sudo chown jenkins:nogroup /var/lib/jenkins/.ssh/id_rsa
sudo chown jenkins:nogroup /var/lib/jenkins/.ssh/id_rsa.pub

3. Assume the identity of Jenkins again, cd to the Jenkins home directory, create a temporary directory, clone your repository from your Git service, fix any issues this reveals, then delete the temporary directory:

sudo su jenkins
cd /var/lib/jenkins
mkdir junk
cd junk
git clone git@bitbucket.org:<bitbucket username>/<repository name>.git
cd ..
rm -rf junk

4. Kick off your Jenkins build and verify that the Source Code Management plugin works.

Thursday, August 22, 2013

Multi-Part File Upload with Node JS and Express

Several examples exist on the internet that detail how to use Node.js to handle a multi-part form upload. The scenario I ran into was that I needed to expose upload functionality in my RESTful service, but then needed to pass that upload along to another service. So I had to construct the request object in code. Typical request objects are not so hard to construct in Node.js, but the multi-part form upload was tripping me up. Here's what I eventually came up with.

This example uses Express, a minimal web application framework; you can find it here: http://expressjs.com/. Express exposes the collection of uploaded files via the req.files object. Note that Express is going to use the field name assigned by the request creator - so part of the contract may require the name to be a specific value; otherwise there is no way to glean what the user may have named the file. At least, I have not found one so far.

We have to tell Express to use the body parser. Do some googling on the bodyParser; it has properties for specifying the temp directory for the upload, among other things. Adding this line will cause Express to grab and expose the files as part of the request:
app.use(express.bodyParser());
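
For example - and this is just a sketch based on my understanding that bodyParser forwards these options to the underlying multipart/formidable parser, so verify against your Express version - you can control where the temp files land:

app.use(express.bodyParser({
    uploadDir: '/tmp/my-uploads',  // temp directory for the uploaded files
    keepExtensions: true           // keep the original file extensions
}));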

And here is the POST handler:

var fs = require('fs');           // read the uploaded temp file from disk
var request = require('request'); // assumes the 'request' module, given the request(options, cb) usage

app.post('/me/avatar', function(req, res){
   logger.debug('handling POST /me/avatar');
   logger.debug('files :' + JSON.stringify(req.files));

   // ToDo: this is a problem. It looks like we have to force people to use 'upload' as the form name for the file.
   // Cannot find a way around this after hours of searching. Probably something simple,
   // just can't find it (node.js noob).
   if(!req.files.upload){
      res.statusCode = 400;
      res.send("File not found in request.");
      return;
   }

   var subServiceUrl = "http://some_service_i_need_to_upload_to/my/file";

   // read the file from disk. It has already been uploaded to a temp directory
   var filePath = req.files.upload.path;
   logger.debug('reading file: ' + filePath);
   fs.readFile(filePath, function (err, fileData) {
      if (err){
         logger.debug("error loading file: " + filePath);
         logger.debug(err);
         res.statusCode = 500;
         res.send('');
         return;
      }

      // the options setup is the tricky part for the multi-part upload
      var options = {
        uri: subServiceUrl,
        method: 'POST',
        headers: {
          'content-type': 'multipart/form-data'
        },
        multipart: [{
           'Content-Disposition': 'form-data; name="upload"; filename="' + req.files.upload.name + '"',
           'Content-Type': req.files.upload.type,
           body: fileData
        }]
      };

      request(options, function(error, response, body){
         if(error){
            logger.debug(error);
            res.statusCode = 500;
            res.send('');
            return;
         }

         logger.debug('statusCode : ' + response.statusCode);

         if((response.statusCode != 201) && (response.statusCode != 200)){
            sendError(body, res, response);
         }
         else{
            res.statusCode = response.statusCode;
            res.send(response.statusCode, body);
         }
      });
   });
});
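
As for the ToDo in the handler above - one possible way around hard-coding 'upload' as the form name, offered only as an untested sketch, is to take whatever file key happens to be present on req.files:

   // grab the first file on the request, whatever the form named it
   var fieldName = Object.keys(req.files)[0];
   var file = fieldName ? req.files[fieldName] : null;

   if(!file){
      res.statusCode = 400;
      res.send("File not found in request.");
      return;
   }
   // ...then use file.path, file.name and file.type as before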

Saturday, August 10, 2013

Are Static Methods an Anti-Pattern?

Is using the Static keyword on class members and methods an anti-pattern? I think so, and hopefully this post will get you thinking about it. Primarily I see Static abused when programmers do not want to go through the work of instantiating a class, in circumstances in which instantiation is not strictly necessary. The problem with Static members is that they get in the way of approaches such as Inversion of Control and Test Driven Development.

First, let's take a look at an example that illustrates what I am talking about concerning using Static on class members. For the sake of discussion I will be using Java, but the same is true in C#. Here is the main object of discussion, a Sentinel. This implementation uses an object with Static methods as a dependency.


public class StaticSentinel {
    private AiState aiState = AiState.Scanning;
    private long posX;
    private long posY;
    private long posZ;

    public void live() {
        int threatLevel = 0;

        // unit testing here is problematic because of the dependency upon StaticMutantScanner. Since
        // StaticMutantScanner's implementation is static, it cannot be overridden. Therefore, this.live() cannot
        // be unit tested in isolation of the dependency upon StaticMutantScanner. A unit test here is no longer
        // a unit test but is an integration test - testing both StaticSentinel and StaticMutantScanner components.
        Mutant mutant = StaticMutantScanner.detectMutant(this.posX, this.posY, this.posZ);

        if (mutant != null) {
            threatLevel = StaticMutantScanner.assessThreatLevel(mutant);
            // do something here based on the threat level.
            if (threatLevel > 0) {
                this.aiState = AiState.Hunting;
            }
        }

        // some other ai related activities, move, sleep, attack, guard, etc.
        doAi();
    }

    private void doAi() {
        // do something here.
    }

    public AiState getAiState() {
        return this.aiState;
    }
}

And here is the class with the Static methods: our scanner. It looks for Mutants and assesses their threat level.

public class StaticMutantScanner {
    public static Mutant detectMutant(long posx, long posy, long posz) {
        // create a mutant here via factory pattern and return it if nearby
        // instead of newing-up this mutant. newing-up is used here just for illustration.
        return new Mutant();
    }

    public static int assessThreatLevel(Mutant mutant) {
        // do something complicated here
        // and return threat level of the mutant
        return 0;
    }
}

What I've outlined above in the Sentinel class is not an uncommon thing to see in legacy code bases. In the case of the StaticMutantScanner, it is an object that does not require instantiation, so the developer creates some kind of Static implementation. It makes sense - why bother instantiating it and doing all the irritating work associated with the task? This thinking works - until we try to test it.

If we don't care about good software quality and highly testable software, we don't really need to care that we have a Static member. Some organizations out there still test by releasing, or through manual and labor-intensive processes. For the rest of us, though, we want to know quickly and reliably if and when we break our software by making inevitable changes. So we test. For us, Unit Testing is not an academic exercise.

Here is the test of the StaticSentinel:

    @Test
    public void liveTest(){
        StaticSentinel sentinel = new StaticSentinel();
        sentinel.live();
        Assert.assertTrue(sentinel.getAiState() == AiState.Scanning);
    }

Unit Testing is where Static methods and members make things go wrong. If I try to Unit Test Sentinel.live() in its current state, I cannot do so without exercising the code that lives in StaticMutantScanner.detectMutant() and StaticMutantScanner.assessThreatLevel(). That is, I test live() by setting a Sentinel's coordinates and then, based on whether or not a Mutant is nearby, asserting against some AiState property value. This means I have to guess or know in advance the values that would make StaticMutantScanner behave the way we need it to in order to cover both branches of the "if (threatLevel > 0) {" statement. Some of you at this point are saying, 'yes, that is Unit Testing.'

No it isn't. It is an Integration Test of the Sentinel and MutantScanner objects and how they work in tandem. A Unit Test of Sentinel.live() should not actually descend into the logic exposed by sub-components; it should 'test the smallest unit of code' - hence the name. Integration Testing is perfectly valid and should be done alongside Unit Testing, but it is probably the job of a QA person, not a Dev. Regardless of the 'who', both Integration and Unit tests should exist, and they serve distinctly different purposes.

In this scenario, Dependency Injection (DI), a form of Inversion of Control (IoC), becomes extremely important, and this is one reason why DI has emerged as a best practice. In the case of Unit Testing, DI lets us inject an implementation of a dependent child object that explicitly causes a state to exist in the child component, which in turn makes a specific branch of code behave in an expected manner.

For this discussion I implement DI through an overloaded constructor and without any kind of IoC container. I have created a new Sentinel object named TestableSentinel. This object has an overloaded constructor and now acts more like a true composite object with an instantiated MutantScanner member. Note that the calls to the MutantScanner interface are now made through an instantiated object, not a class with Static members.

public class TestableSentinel {
    private MutantScanner mutantScanner;
    private AiState aiState = AiState.Scanning;
    private long posX;
    private long posY;
    private long posZ;

    /**
     * Overloaded constructor. Ultimately for use with Inversion of Control/Dependency Injection.
     * We can inject an implementation of MutantScanner at run time to properly Unit Test
     * TestableSentinel
     *
     * @param mutantScanner This is an implementation of an interface.
     */
    public TestableSentinel(MutantScanner mutantScanner){
        this.mutantScanner = mutantScanner;
    }

    public void live() {
        int threatLevel = 0;
        // The problem with the dependency upon mutantScanner's implementation is solved here by passing in
        // a new implementation during testing that returns the expected result for each test. This ensures that
        // TestableSentinel.live() is being tested in isolation of the mutantScanner object.
        Mutant mutant = this.mutantScanner.detectMutant(this.posX, this.posY, this.posZ);

        if (mutant != null) {
            threatLevel = this.mutantScanner.assessThreatLevel(mutant);
            // do something here based on the threat level.
            if (threatLevel > 0) {
                this.aiState = AiState.Hunting;
            }
        }
        // some other ai related activities, move, sleep, attack, guard, etc.
        doAi();
    }

    private void doAi() {
        // do something here.
    }

    public AiState getAiState() {
        return this.aiState;
    }
}

Here is the MutantScanner Interface:

public interface MutantScanner {
    Mutant detectMutant(long posx, long posy, long posz);
    int assessThreatLevel(Mutant mutant);
}

Note that we don't know what the implementation of MutantScanner does. We just know that if it returns a Mutant and a threatLevel > 0, the Sentinel will start Hunting. This is all we need to know - and all we should know - for the purpose of Unit Testing TestableSentinel.live().

The tests are a bit more interesting. I’ve deliberately not supplied an implementation of MutantScanner in the main branch of the source. This is to illustrate that one of the purposes behind the DI in the overloaded constructor of TestableSentinel is that we can supply an implementation specific to our test’s needs.

Here is the first test:

    @Test
    public void liveTest_ExpectThreatLevelHunting(){
        int threatLevel = 1;
        MutantScanner mutantScanner = new TestMutantScanner(threatLevel);
        TestableSentinel sentinel = new TestableSentinel(mutantScanner);
        sentinel.live();
        Assert.assertTrue(sentinel.getAiState() == AiState.Hunting);
    }

Here we set the threatLevel as a parameter in the TestMutantScanner's constructor. TestMutantScanner is an implementation of MutantScanner created solely for the purpose of Unit Testing the Sentinel's live() method. In this implementation, assessThreatLevel() simply returns the value supplied to the constructor when the TestMutantScanner is instantiated. When the test is run, the branch of code is executed that sets the sentinel's AiState to Hunting, and the test verifies this.

public class TestMutantScanner implements MutantScanner {
    int threatLevel;

    /**
     * For the purpose of discussion here and the associated test,
     * this implementation simply returns the threatLevel that is set in the constructor.
     * This is to give a point in which I can control the branch of code to unit test in
     * TestableSentinel via this implementation.
     * @param threatLevel
     */
    public TestMutantScanner(int threatLevel){
        this.threatLevel = threatLevel;
    }
    public Mutant detectMutant(long posx, long posy, long posz){
        return new Mutant();
    }

    public int assessThreatLevel(Mutant mutant){
        return threatLevel;
    }
}

We test the other branch - the branch that leaves the Sentinel's AiState at Scanning - just by changing the parameter passed to TestMutantScanner to 0. This gives complete coverage of both branches of code associated with the if (threatLevel > 0) line. See the second test:

    @Test
    public void liveTest_ExpectThreatLevelScanning(){
        int threatLevel = 0;
        MutantScanner mutantScanner = new TestMutantScanner(threatLevel);
        TestableSentinel sentinel = new TestableSentinel(mutantScanner);
        sentinel.live();
        Assert.assertTrue(sentinel.getAiState() == AiState.Scanning);
    }

That's how we get great Unit Test coverage. Unit Testing takes work, but it reduces delivered bugs. It also reveals flaws in design, as in the case of Static methods. Some tools and frameworks exist to make this easier - Moq (http://code.google.com/p/moq/) on the .Net side of things and Mockito (http://code.google.com/p/mockito/) on the Java side are two examples. The topic of mocking gets pretty big. Essentially, I created a manual mock when I implemented MutantScanner for the purpose of testing.

To reiterate - the problem with Static members lies in the inability to test around them. We can't swap out the implementation. This creates untestable software when the Static members live in a class that is used by another. None of the testing strategies outlined above work for Static methods.

Other strategies besides DI/IoC exist for dealing with Static methods, such as wrapper classes. But if we can, we should refactor and eliminate these legacy artifacts. We certainly should not inject Static methods into new code bases without compelling reasons to do so. Using Test Driven Development will reveal the pain associated with Static methods and the need to avoid them.
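
To make the wrapper idea concrete, here is a minimal sketch using the classes from this post: wrap the Static calls behind the MutantScanner interface so callers can be handed either the wrapper or a test double:

public class StaticMutantScannerWrapper implements MutantScanner {
    // delegate each interface method to the legacy Static implementation
    public Mutant detectMutant(long posx, long posy, long posz) {
        return StaticMutantScanner.detectMutant(posx, posy, posz);
    }

    public int assessThreatLevel(Mutant mutant) {
        return StaticMutantScanner.assessThreatLevel(mutant);
    }
}

Production code injects StaticMutantScannerWrapper; tests inject something like TestMutantScanner. The legacy Static class itself never has to change.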

Tuesday, June 4, 2013

Why Unit Testing Constructors and Properties is Necessary

I have come across developers in the past who see little value in Unit Testing things like constructors and property getters/setters. Some devs see these as fruitless exercises just to bump code coverage numbers. Their point is that setters/getters don't really do anything and that complex code does not belong in constructors. I agree that this should be the case, but it is not always true. I frequently run across abuse of constructors and properties. We also cannot guarantee that ongoing maintenance from new developers or other teams (in big software houses) will follow best practices. Of course the same argument can be made about maintaining Unit Tests, but if you have made it this far I am assuming you are an advocate of TDD and I do not need to defend it.

So, I offer a couple of examples of why Unit Testing properties and constructors is necessary. These come from real-world scenarios and clients but have been renamed to protect the innocent. Both examples come from production code which is serving up millions of hits a day.

Unit Testing Constructors

public MyObject(string property1) :
   this(property1, null){}

public MyObject(string property1,
   SomeCustomObject property2)
{
   if(object.ReferenceEquals(null, property1))
      throw new ArgumentNullException("property1");

   if(object.ReferenceEquals(null, property2))
      throw new ArgumentNullException("property2");

   this.property1 = property1;
   this.property2 = property2;
}

Take a close look at the error handling in MyObject(string, SomeCustomObject) and the values MyObject(string) passes into it. MyObject(string) will always throw an exception. I've seen this in production code in a public library. If the original dev had unit tested the constructors, this issue would have been caught before it was published. As it is now, consumers of a public API can unknowingly use a constructor that will always fail.
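
For illustration, a test along these lines would have caught it - a sketch assuming MS Test conventions:

[TestMethod]
public void Constructor_SingleArgument_Succeeds()
{
   // fails today: the single-argument constructor forwards null
   // for property2, and the null check in the two-argument
   // constructor then throws an ArgumentNullException.
   var obj = new MyObject("some value");
   Assert.IsNotNull(obj);
}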

Unit Testing Properties

public bool MyProperty
{
   get { return myProperty; }
   set
   {
      this.myProperty = value;

      if(myProperty)
         this.myOtherProperty = "Some Value";
      else
         this.myOtherProperty = "Some Other Value";
   }
}

First of all, the code around myOtherProperty should not be here. It violates the Single Responsibility Principle of good object oriented programming and should not pass a code review. But here it is, and this kind of thing happens often. Unit Testing this won't throw up a failed test or show some flaw in logic. However, Unit Testing this should throw some red flags - needing to validate another value when testing a setter or getter is a Code Smell.

Now, the argument against both examples is that they are examples of bad code - not bad Unit Testing. Agreed, they are examples of bad code. However, TDD will call these issues out before they are checked in.

These are pretty trivial examples, and much more horrible examples do exist. Unit Testing helps us identify flawed logic and violations such as the Single Responsibility Principle, which is commonly violated in my clients' code bases. Unit Testing is not reserved only for complex methods but should be used for the mundane as well.

Best rule of thumb for Unit Testing I learned from a mentor:  If it can be Unit Tested - Unit Test it.

Saturday, April 6, 2013

Installing Graphite - Error - The SECRET_KEY setting must not be empty

ImproperlyConfigured("The SECRET_KEY setting must not be empty.")

This is a case of the wrong version of Django. Graphite is looking for 1.4. So, use pip to get the right version:

pip install Django==1.4


On Ubuntu 12.10, this got the correct version for me. If not, you may have to use wget to get the correct version from djangoproject.com.

References
https://answers.launchpad.net/graphite/+question/224789

Tuesday, March 5, 2013

Publishing Java to Heroku Cloud Service

Heroku is a cloud service with strong jvm support. It offers scaling applications and many of the features you expect to see in the cloud. The big thing for me is that it is simple: simple setup, simple to understand, simple billing and, lastly, simple to deploy to. Since I am very big on minimalism, Heroku is a huge win in the jvm-based cloud arena.

You publish to the Heroku cloud using git, with just a handful of commands once you get the initial setup out of the way. Since the modern jvm culture loves git, using git to publish feels very natural for jvm development.

Prerequisites


  1. Your project must be built via mvn. This can be done outside your IDE of choice, so it won't impact any notion of project files your IDE may have.
  2. You need git. git is a version control system and has versions for all the major operating systems. If you are running linux you probably already have it. If you need to install it, go here: http://git-scm.com/downloads
  3. You need a heroku account. http://heroku.com



Setup

  1. Install the Heroku Toolbelt to get the Heroku command line client (the Toolbelt also bundles Foreman and git).
    1. https://toolbelt.heroku.com/
  2. Create a Procfile.
    1. This file contains the command you want Heroku to execute immediately after the deploy is complete - your java -jar command. Be sure to include any required parameters. This file has a single line:
      1. web: java -jar target/sebsApi-1.0.jar server configuration.yaml
    2. Heroku uses the Celadon Cedar stack to manage jvm polyglot apps; the 'web:' prefix declares the process type. Find more info here: https://devcenter.heroku.com/articles/cedar
    3. This file should be in your application's root, alongside the pom.
  3. Create a system.properties file.
    1. This file is a typical java properties file that describes unique config information for your app. This is where you can specify the java version:
      1. java.runtime.version=1.7
    2. This file should be in your application's root, alongside the pom.
  4. Store the app locally in Git (if not done already).
    1. git init
    2. git add .
    3. git commit -m "some message"
  5. Add the Git remote for Heroku
    1. If you do not already have a heroku app, you need to create it.
      1. heroku create
      2. git remote -v
    2. If you already have the app created (available in the Heroku app dashboard), you need to point your local Git repository at the Heroku remote:
      1. heroku git:remote -a <app name>

Publish


  1. git push heroku master
  2. If you are new to git, be sure to commit before pushing. Any changes to local files must be committed to the local git repository before they can be pushed to the remote.

Verify

  1. Scale up a single web process (Dyno). Heroku calls its processes Dynos; you must have at least one running for your app to be available:
    1. heroku ps:scale web=1
  2. Check the status of the running Dyno:
    1. heroku ps
  3. View the website in a browser:
    1. heroku open

Other useful info

Viewing the logs: heroku logs

Heroku assigns a port to the app. I don't know the logic behind how it selects the port, but it is likely to be different every time you deploy. This makes sense when you think about scaling and multiple Dynos supporting your app. So you will need to pass Heroku's PORT environment variable in as a parameter in the Procfile, if your app supports it:

In my case, for DropWizard:
web: java $JAVA_OPTS -Ddw.http.port=$PORT -Ddw.http.adminPort=$PORT -jar target/sebsApi-1.0.jar server configuration.yaml

References

https://devcenter.heroku.com/articles/java#deploy-your-application-to-heroku
https://devcenter.heroku.com/articles/cedar
http://gary-rowe.com/agilestack/2012/10/09/how-to-deploy-a-dropwizard-project-to-heroku/

Sunday, January 6, 2013

Code Quality - Code Metrics for C# using Sonar

What is Sonar - www.sonarsource.org?

Sonar is an open source tool for analysing, tracking and communicating software metrics for software projects. Sonar is cross-platform and capable of reporting on a number of languages, including Java, C#, C++, JavaScript, Groovy, etc. The reporting tool is a web-driven application with configurable dashboards. The dashboards communicate code coverage, rules violations and code complexity, and can be extended via a large library of free plugins.

Tracking of metrics is accomplished via a relational database; Sonar supports the major SQL database management systems without much effort. Sonar can easily be integrated into Jenkins/Hudson as part of your Continuous Integration process.

A demo of some projects analysed with Sonar can be found here: http://nemo.sonarsource.org/

More information on sonar can be found here: www.sonarsource.org

Software Metrics Background - Unit Testing in C# and Visual Studio

Software Metrics are good for managing code quality over time. One of the metrics we hear about a lot is Code Coverage - the lines or units of code covered by unit tests. Many more metrics exist that can detect overly complex code. We can also analyze code and use software to enforce code conventions and best practices and to detect code smells - coding practices that are not necessarily errors but are error prone. These all present opportunities for refactoring and increased code quality.

For the .Net world, a number of options already exist and have for some time. FxCop, for example, can do the rules analysis for code conventions and some code smells. Visual Studio 2010 Pro and up ships with code analysis tools for detecting overly complex code (Cyclomatic Complexity) and reporting on unit test coverage. Other options exist for Visual Studio integration via plugins - not to mention what 2012 ships with. Keep in mind also that Team Foundation Server can integrate all of this as part of a Continuous Integration process. TFS is clearly documented, very easy to use and tightly integrated with Visual Studio.

So before you jump too deep into Sonar - Visual Studio already has most of what you need; the exception is web-based reporting. Sonar is the perfect tool if leadership or upper management needs to see this information. You need to weigh the time investment in Sonar to get its reporting capabilities vs. using Visual Studio, which, combined with FxCop, will get you all the metrics you need to start monitoring your code quality.

The catch with Visual Studio, of course, is that to use its canned options you may not use NUnit (if you know a way, do let me know). If you are writing new code - do not use NUnit. Reason number one: you get tight and free integration using Visual Studio's unit testing libraries and simple, no-config debugging. Reason number two: code coverage. Getting code coverage for .Net projects using NUnit is not cheap.

For a team this means third party licenses for something like NCover, or spending the hours to chase down and learn a free command line tool like OpenCover. Your options are few. Since VS2010 introduced such clean and tight integration of code metrics, third party development of unit test platforms and test runners for Visual Studio has mostly dropped off the radar. Sure, there are some products, but they are no longer the 'normal' development path for new dev projects in .Net. dotCover from JetBrains is a good one if you are looking for a runner for NUnit. I still advise you to bite the bullet and convert the NUnit tests to MS Test.

If you are already using NUnit you still have some thinking to do. You need to weigh the cost of converting your unit tests to MS Test, and you need to weigh the cost of future productivity. If you aren't doing Continuous Integration (why not?) and not doing any static code analysis, it probably makes sense to convert your unit tests and use the offerings from Microsoft. If you already have a bunch of time invested in Continuous Integration of your NUnit tests - maybe not.

Now having said all that, here’s the nuts and bolts.


Setting up Sonar for C#

First you’ll need some prerequisites.

1. A recent JRE. 
2. MySQL (or some other SQL server). The docs for Sonar clearly map out how to use MySQL, so it is the path of least resistance.
3. Visual Studio 2010+ or a recent Windows SDK 7. You need access to msbuild.
4. A Code Coverage tool - OpenCover, dotCover or NCover. I have not had good luck with OpenCover, though others have.
5. Gallio - a test runner. http://www.gallio.org/


Installing Sonar

For use with C#, Sonar comes in three pieces: Sonar itself - the web-based dashboards; sonar-runner - a command-line client that launches the analysis; and the C# Ecosystem Plugin - several plugins specific to the C# world, used for configuring .Net and Silverlight requirements, test runner locations and coverage analysis locations.

Sonar

  1. Download Sonar: http://www.sonarsource.org/downloads/
  2. Extract the contents to a location in which Sonar will live and execute from.
  3. Set up the database parameters: locate and edit the file conf/sonar.properties, modifying the database connectivity parameters below:
    1. sonar.jdbc.url: the URL of the database
    2. sonar.jdbc.driver: the class of the driver
    3. sonar.jdbc.user: the username
    4. sonar.jdbc.password: the password
  4. Create the schema for your DB. I used MySQL. https://github.com/SonarSource/sonar/tree/master/sonar-application/src/main/assembly/extras/database/mysql
  5. Starting Sonar on Windows: launch bin/windows-x86-32/StartSonar.bat. You can also set up Sonar to run as a service and start automatically using the scripts in the bin directory.
  6. Point a browser to http://localhost:9000/


C# Plugin Ecosystem

  1. The site for this plugin includes the download: http://docs.codehaus.org/display/SONAR/C%23+Plugins+Ecosystem
  2. Download and extract the plugins to SONAR_HOME/extensions/plugins
  3. Restart sonar. (Point a browser to sonar and login to access the menu to do this.)
  4. At a minimum - configure the Gallio Plugin. This is done by logging into the Sonar website.

Sonar-runner

There are two ways to kick off Sonar analysis: via Maven and via the sonar-runner client. Since we are dealing with the .Net world, it doesn't necessarily make sense to deal with Maven. The biggest trick to setting up sonar-runner is the use of environment variables.


  1. Sonar-runner can be found here: http://docs.codehaus.org/display/SONAR/Installing+and+Configuring+Sonar+Runner
  2. Extract the file to a permanent location.
  3. Open and edit the configuration file $SONAR_RUNNER_HOME/conf/sonar-runner.properties, where $SONAR_RUNNER_HOME is the path to your sonar-runner directory.
  4. The key elements to change are the database driver and, if your structure is different, the source directories.
  5. Create an environment variable named SONAR_RUNNER_HOME and point it to $SONAR_RUNNER_HOME
  6. Add $SONAR_RUNNER_HOME/bin to your path Environment variable.
  7. Test sonar-runner by opening a command line interface (Run -> cmd) and typing sonar-runner <enter>. You should see some tips for usage.

Sonar-Project Properties File

Sonar-runner requires the use of a sonar-project.properties file. This file should live in the same directory as the Visual Studio Solution file you want to analyze. Here’s a minimal properties file:

# required metadata
sonar.projectKey=my:project
sonar.projectName=My project
sonar.projectVersion=1.0

# The value of the property must be the key of the language.
sonar.language=c#


Kicking off Analysis

Analysing a project is pretty simple: navigate to the directory and execute the command sonar-runner.

You'll want to fire this off at the end of a build on a build server. Jenkins/Hudson has plugins to assist in this, though I had no luck in actually using them; I used PowerShell to launch it. Those of you using TFS will be able to launch it as well, just with the groovy TFS mechanisms.

Enjoy :)



References

http://sonarsource.org
http://nemo.sonarsource.org/
http://docs.codehaus.org/display/SONAR/Installation+and+Upgrade
http://docs.codehaus.org/display/SONAR/C%23+Plugins+Ecosystem
http://docs.codehaus.org/display/SONAR/Installing+and+Configuring+Sonar+Runner