Published: 25 Aug 2009
The Stack Overflow Inspired Knowledge Exchange Series
Check out the project homepage of this series to follow our journey through the creation of a Stack Overflow inspired website.
In the last article we created a version control repository on CodePlex to support our source code versioning throughout the remainder of this article series. We then discussed setting up a file and folder structure capable of supporting a large project. Next we created our initial solution and related ASP.NET MVC 2 web application. We then got the TortoiseSVN client installed and configured to interact with our version control project on CodePlex. Finally we were able to perform the initial check-in of our solution and make our Stack Overflow inspired knowledge exchange project public for all to see.
In this article we will take an additional step in the completion of our local development environment by adding various forms of build automation. For build automation I use NAnt as it is well supported, easy to customize (as we will see), and works with many other tools that we may need to integrate with later. We will specifically tackle first time environment initialization, the compilation of all the projects in our solution, unit testing our code, database integration, and environment re-initialization.
What is build automation and why do I care?
There are two ways to explain this concept. On one hand I could jump directly into what build automation is and what it does. Or we can attack this topic from the other direction by bringing up pain points in a software developer's day-to-day workflow and seeing how build automation directly addresses those issues. I prefer the second approach, as you will more readily see how build automation applies to issues you feel on a frequent basis.
Complex build processes
Periodically you will work on a solution that has many projects, external resources, etc. Running a build is not just a matter of clicking "build" from a menu. Instead you need specific orchestration of which project is built first, where certain resources are pulled from, where the result of the build is pushed to, etc. Build automation, regardless of the tool you use to implement it, is perfect for such orchestrations, with endless customization possibilities.
Unit testing as part of the build process
If you are a developer that lives and dies by your unit tests, then you are probably aware of a two-step process in your current workflow (or already have a workaround in place to address it). Do you build your code, then run your unit tests? Fix an error, modify a test, etc., and repeat? Or run a test and see an error on something you thought you fixed, only to remember that you didn't compile your code before running the test? Tying your unit testing into your build process can reduce some of these pains.
Code analysis as part of the build process
Do you use NDepend or NCover or some other code analysis tool to get metrics out of your code? Do you have multiple TODO comments in your code waiting for someone to address them, but you're not sure how many? Does your team have the notion of acceptable code debt but no good way to keep track of it or deal with it down the road? Code analysis is one of those things that is very nice to have but not something that many teams spend much time thinking about. If this is a topic that is near and dear to your heart, then sticking it into the build process is a great way to give your developers access to this much needed data without introducing too much overhead to their already lengthy workflow.
Need build capabilities outside of the chosen IDE
Clicking "build" is not an option for people that don't have the appropriate IDE (in our case Visual Studio). In some larger companies the software developer may not actually be the person that manages all aspects of the software development lifecycle. In this case someone else may need the ability to get the latest version of the code, perform a build, and do some other task related to their job. And if a company has a build team, the first step in its normal process is to pull down all the code to a central server and run a build to see whether there are any dependencies back on one of the developers' desktop environments.
The concept of CRISP is a nice thing to keep in mind when thinking about build automation. CRISP stands for Complete, Repeatable, Informative, Schedulable, and Portable.
While there are many frameworks out there to perform builds with, NAnt, a .NET port of the Java-based Ant product, is the one that I have been using for quite some time. This is a very easy-to-use tool that plays well with just about all the other tools in this space. Let's get NAnt up and running so that we can see what it is capable of.
To get the latest version of NAnt go to nant.sourceforge.net. Take a look at their releases section and pick the latest stable build. In this case I will be working with 0.85. Download the zip nant-0.85-bin.zip to the binaries directory in the trunk of this project (we only need the binaries, not the full working source code). You can then extract the contents of the zip into the binaries directory. NAnt is now ready to use!
NAnt is a configuration based utility that maps configuration items to task classes. It does this in the form of a .build file where all the possible tasks are defined. You then pass the build file to NAnt.exe along with the target that you want it to execute. NAnt then processes the specified target one task at a time until all of the tasks in that target are completed.
We will create a .build file that looks similar to any other XML-style document, and we will walk through its creation step by step as we move through this article. For now, go ahead and create a KnowledgeExchange.build file in the build directory under trunk. Then open your file in your preferred text editor and enter the following into it.
Listing 1: trunk/build/KnowledgeExchange.build
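The listing itself is tiny. A minimal sketch of it, assuming the NAnt 0.85 schema namespace (the echo body is just a placeholder until we add real targets), looks like this:

```xml
<?xml version="1.0" encoding="utf-8"?>
<project name="KnowledgeExchange" default="build"
         xmlns="http://nant.sf.net/release/0.85/nant.xsd">

  <!-- The default target; for now it only reports that it ran. -->
  <target name="build">
    <echo message="Building the Knowledge Exchange solution..." />
  </target>

</project>
```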
The .build file is simple XML. As such it starts with the standard <?xml version="1.0" encoding="utf-8"?> line specifying the version and encoding of the document. This is then followed by the <project> root element, which specifies the schema of the document, the name of the project, and the default target, which in this case will be build. The default target specifies which target NAnt should execute if we pass this build file to it without also specifying a target.
Compiling our code with MSBuild
The first and most important thing that our build process should be able to do is build our code. In order to do this we will make a call out to MSBuild. While we could add the commands that build our code directly to our "build" target, we need to think of good coding habits when creating our .build file. For this reason we will try to create distinct chunks of logic for each task.
Let's create a new task called "compile". This task will use the "exec" command to call out to MSBuild. The "exec" command has several parameters, of which we will use two: "program" and "commandline". The "program" parameter requires the full path to the executable that we want to run. The "commandline" parameter allows us to pass command line options to the executable. In this case our command line options will consist of the path to the solution file, the target (/t) we want to execute, and the verbosity (/v) level of the compiler output. Our target will be Rebuild, which will clean our solution and then build it. And our verbosity level will be set to quiet mode.
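Put together, the compile target looks something like the following. The MSBuild path assumes .NET 3.5 in the default Windows location, and the solution path is an assumption relative to the build directory; adjust both to your machine:

```xml
<target name="compile">
  <!-- /t:Rebuild cleans then builds the whole solution; /v:q keeps the output quiet. -->
  <exec program="C:\WINDOWS\Microsoft.NET\Framework\v3.5\MSBuild.exe"
        commandline="..\src\KnowledgeExchange.sln /t:Rebuild /v:q" />
</target>
```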
Running our first build
We now have enough to run our first build! There are a couple of ways to do this. We could open a command prompt, navigate with DOS commands to the directory where NAnt lives, pass in the path to the .build file, pass in the target we want to execute, and then run the build. However, doing this every time we want to build our code would be very painful. Pain to a developer means I am never going to do it again!
For this reason we will go straight to the second path, which is to create a batch file to do this heavy lifting for us. We will start by creating a build.bat file in the build directory. This file will house the initial commands that NAnt needs to know about for this project. We need to first locate the NAnt.exe file, then tell NAnt (via the -buildfile argument) about the build file that we want to work with, and then allow additional commands to be passed in later with the %* option. Lastly, this is followed by a pause command on the next line so that the command window doesn't perform its work and then go away without letting us read the output.
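A sketch of that build.bat (paths are relative to the build directory, where the file lives):

```bat
@echo off
rem Run NAnt against our build file; any extra arguments (such as a target
rem name) are passed straight through via %*.
..\binaries\nant-0.85\bin\NAnt.exe -buildfile:KnowledgeExchange.build %*
pause
```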
With this complete you can now double click on the build.bat file to see a command window open up and ask NAnt to perform your build with the default target of “build”.
Making the build more approachable
While this works great, it still assumes way too much of our developers. Currently there is only one build.bat script, but it is buried in the build directory, which requires the developer to hunt down the build file each time they need to build the solution. If done right, the average developer on the team shouldn't need to go hunting for a tool to get their job done. Also, as the project grows we will most likely need to build some additional scripts to handle other tasks. To smooth out the process a bit more, let's create a user interface for our build scripts.
To do this we will create a “clickToBuild.bat” file in the root of the trunk directory. This should be the only file in this particular location (usually) which makes it stand out pretty well in what will end up being hundreds if not thousands of other files in a complex solution.
In this clickToBuild.bat file we will enter this code:
Listing 2: trunk/clickToBuild.bat
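A sketch of this menu script; the option names for 2 and 3 are placeholders for targets we wire up later in the article:

```bat
@echo off
echo Here are the following types of builds you can perform:
echo   1 - build
echo   2 - integrate
echo   3 - initDatabase
set /p buildType=Please choose one: 

rem Anything other than an approved option falls through to the error message.
if "%buildType%"=="1" goto runBuild
if "%buildType%"=="2" goto runIntegrate
if "%buildType%"=="3" goto runInitDatabase
echo "%buildType%" is not a valid option.
goto end

:runBuild
call build\build.bat build
goto end

:runIntegrate
call build\build.bat integrate
goto end

:runInitDatabase
call build\build.bat initDatabase
goto end

:end
```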
This script basically says, “Here are the following types of builds you can perform, please choose one.” There is also a little bit of error handling in there in that it won’t allow you to enter anything other than one of the approved options. We only support the “build” target at the moment but I put two other options in there to show you how that might work down the road.
Next we need to modify the build.bat file in the build directory so that it can be called by the clickToBuild.bat script. Update your build.bat code to look like the following (I had to modify the paths to work from the trunk directory).
Listing 3: trunk/build/build.bat
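With the paths adjusted to be relative to trunk (where clickToBuild.bat runs from), build.bat becomes something like:

```bat
@echo off
rem Paths are now relative to trunk since clickToBuild.bat calls us from there.
binaries\nant-0.85\bin\NAnt.exe -buildfile:build\KnowledgeExchange.build %*
pause
```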
Running the clickToBuild.bat file now presents you with some options and allows the user to interact with the build process in a more knowledgeable manner.
Running NUnit from NAnt
Now that we have a functioning NAnt build we need to extend its power a bit more. The logical next step for a local build would be to run unit tests. To do this we need to get a copy of NUnit. Again there are many unit testing frameworks out there with NUnit being a very popular one. Take a look here for Charlie Poole’s top ten reasons to try the latest version of NUnit.
To get NUnit go to nunit.org and download their latest build. I will be using the latest stable 2.x release. We need to get both the binaries and the MSI installer. Download the appropriate files into your binaries directory. Then extract the contents of the bin download into your binaries directory. Then run the installer and install NUnit.
Why do I need both the binary and the installer? You don't really need both. The installer will be fine if you want to go that route. It will install NUnit on your system, and you can point your builds and your IDE at the assemblies located in C:\Program Files\NUnit. But I prefer to have a collection of binaries in my binaries directory; this way there is no confusion about which assemblies the project uses.
Now immediately go to your NAnt folder in binaries and locate the NAnt.exe.config file. We need to tell NAnt which NUnit assembly to use. To do this, locate the runtime section at the bottom of the config file. Make your runtime section look like my runtime section here:
Listing 4: trunk/binaries/nant-0.85/bin/nant.exe.config
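A sketch of that runtime section follows. The version numbers below are illustrative; match oldVersion and newVersion to the nunit.framework.dll you actually extracted (the public key token shown is NUnit's standard one):

```xml
<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <dependentAssembly>
      <assemblyIdentity name="nunit.framework"
                        publicKeyToken="96d09a1eb7f44a77" />
      <!-- Point any older NUnit reference at the copy in our binaries directory. -->
      <bindingRedirect oldVersion="2.0.0.0-2.4.9.0" newVersion="2.5.0.0" />
    </dependentAssembly>
  </assemblyBinding>
</runtime>
```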
Notice that in the above I pointed the older versions of NUnit to the latest version of NUnit. This way if NAnt tries to run an older version of NUnit it should instead locate our newer version.
Running NUnit locally
Now that we have NUnit downloaded and installed, let's open up our existing solution and take a look at what we need to do to get NUnit to run in our local environment. To start, add a reference to our KnowledgeExchangeWeb.Tests project, pointing it to \trunk\binaries\NUnit-x.x\bin\net-2.0\nunit.framework.dll (substitute the folder name of the NUnit version you downloaded).
Next, we need to convert the default tests provided by the MVC wizard from the Microsoft testing framework to the NUnit testing framework. To do this open up your Controllers directory and then open the AccountControllerTests and HomeControllerTests classes. In each of these files comment out the using line that points to the Microsoft unit testing framework and add using NUnit.Framework; in its place. Then change the TestClass attributes to TestFixture attributes and the TestMethod attributes to Test attributes. Now build your solution.
Now we are ready to test our tests. Navigate to the NUnit folder in the binaries directory and locate nunit.exe in trunk\binaries\NUnit-x.x\bin\net-2.0. Open that program. Then go to the File menu and open a project. Browse to \trunk\src\KnowledgeExchangeWeb.Tests\bin\Debug, locate AndrewSiemer.KnowledgeExchange.Web.Tests.dll, and click Open. You should now see something like this.
Then click the run button to see your tests run. Once all the tests run you should see something like this.
If everything turned green for you (signifying success) you can move on to the next step which is integrating your testing into the build process.
Add a new NUnit task to our NAnt build script
Now that we have successfully running unit tests we can move to the next step which is the integration of NAnt and NUnit. It is important that you know that you have functioning tests before taking this step as it will simplify your debugging if issues arise during our integration!
To get started open up your KnowledgeExchange.build file. We are going to add a new target under our first target that we can use to define the testing of our current web test project. Where you place targets in a build file doesn’t really matter as we reference them by name. This target will be named “test.project.KnowledgeExchangeWeb”.
Listing 5: trunk/build/KnowledgeExchange.build
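At this stage the new target is just an empty shell:

```xml
<target name="test.project.KnowledgeExchangeWeb">
  <!-- The NUnit exec task will go here. -->
</target>
```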
Before we define the internals of our new target, let's quickly discuss what this task will be doing. Our primary reason for this target to exist is to call out to NUnit and run the tests in a specified assembly. We would of course also like to know a little bit about the outcome of those tests, more than just whether they passed or not! So we will also want to log the results to a file. In our case we will have a folder under the build directory called results, and inside results a folder called KnowledgeExchangeWebTests (go ahead and make those two folders real quick).
The first part we will discuss is a new exec task. For this task we will define a few attributes: program, workingdir, and commandline. The program attribute will point to the fixed path of nunit-console.exe, which is in the NUnit directory we downloaded earlier. The workingdir attribute will specify where we want to run NUnit; in this case we want to run it where our test project outputs its built assemblies. And for the commandline attribute we need to specify some parameters so that NUnit knows how to operate. In this case we will pass in the assembly that we want NUnit to process, and we will tell it to log its output in XML format to a specific location.
This task should look something like the following.
Listing 6: trunk/build/KnowledgeExchange.build
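Something like the following; substitute your actual NUnit folder name for NUnit-x.x, and note that the /xml path is relative to workingdir, walking back up to build/results:

```xml
<target name="test.project.KnowledgeExchangeWeb">
  <!-- Run nunit-console against the test assembly and log the results as XML. -->
  <exec program="..\binaries\NUnit-x.x\bin\net-2.0\nunit-console.exe"
        workingdir="..\src\KnowledgeExchangeWeb.Tests\bin\Debug"
        commandline="AndrewSiemer.KnowledgeExchange.Web.Tests.dll /xml:..\..\..\..\build\results\KnowledgeExchangeWebTests\results.xml" />
</target>
```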
Now, I was going to bring this up earlier when we created our first target, but wow…doesn't that look really busy to you? It also doesn't follow the common programming expression of "don't repeat yourself", or keeping it DRY. The example above is a perfect introduction to the use of properties in your build file. Properties are very similar to a configuration file or key/value pairs in any language: they allow you to define a name with a value. This way you can use the property all over the place in your build file but still have only one place to modify when you need to.
Since the build file is XML, you must make sure that all your attributes are quoted and that the property node is closed appropriately. Property names don't really have too many rules around them. I usually use a camel casing syntax. You can also use a period to separate words of importance, such as .dir for directories. No spaces in the names though. You can then reference a property almost anywhere in your build file using the ${propertyName} syntax.
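A property definition and a reference to it look like this (the name and value here are just examples):

```xml
<!-- Definition: a simple name/value pair near the top of the build file. -->
<property name="nant.dir" value="..\binaries\nant-0.85\bin" />

<!-- Reference: the ${...} syntax expands to the property's value. -->
<exec program="${nant.dir}\NAnt.exe" />
```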
A quick refactor of our whole build file and I end up with a much cleaner and easier to read script. You will notice that all the paths and dependencies (such as executables and files) have been relocated to the top of the build file and placed in a property. This will help us manage our build file as it grows!
Listing 7: trunk/build/KnowledgeExchange.build
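A sketch of the refactored file; the property names and paths are my assumptions, but the shape is what matters:

```xml
<?xml version="1.0" encoding="utf-8"?>
<project name="KnowledgeExchange" default="build"
         xmlns="http://nant.sf.net/release/0.85/nant.xsd">

  <!-- All paths and dependencies live up here in properties. -->
  <property name="msbuild.exe" value="C:\WINDOWS\Microsoft.NET\Framework\v3.5\MSBuild.exe" />
  <property name="solution.file" value="..\src\KnowledgeExchange.sln" />
  <property name="nunit-console.exe" value="..\binaries\NUnit-x.x\bin\net-2.0\nunit-console.exe" />
  <property name="test.working.dir" value="..\src\KnowledgeExchangeWeb.Tests\bin\Debug" />
  <property name="test.assembly" value="AndrewSiemer.KnowledgeExchange.Web.Tests.dll" />
  <property name="test.results.file" value="..\..\..\..\build\results\KnowledgeExchangeWebTests\results.xml" />

  <target name="build" depends="compile, test.project.KnowledgeExchangeWeb" />

  <target name="compile">
    <exec program="${msbuild.exe}" commandline="${solution.file} /t:Rebuild /v:q" />
  </target>

  <target name="test.project.KnowledgeExchangeWeb">
    <exec program="${nunit-console.exe}"
          workingdir="${test.working.dir}"
          commandline="${test.assembly} /xml:${test.results.file}" />
  </target>

</project>
```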
Why aren't we using the nunit2 task? For those that are familiar with NUnit and NAnt integrations, you may be wondering why we are not using the nunit2 task instead of the exec task. For those that know nothing about the nunit2 task, let me say that it was provided when NUnit 2.0 came out, as the tasks before it were a bit clunky. However, the nunit2 task, while more intuitive and direct to use, also adds a great deal of complexity as the versions of NAnt and NUnit progress over time. For this reason I have chosen to stick with the exec task and the nunit-console.exe version of NUnit, as this command line interface generally stays true throughout new releases of the NUnit software. This is a less fragile, though more manual, implementation of NUnit via NAnt.
With this addition to our NAnt build file we can give it a test by running our ClickToBuild.bat file. Do this by navigating to your trunk directory and running the ClickToBuild.bat. Choose the 1 option to run the build process. Then you should see something like the following.
I am a big believer in providing a new developer with the ability to be able to sit down on a project, get latest from the code repository, run a few configuration tasks via the ClickToBuild script, and get them off and running with performing their job. So far we have provided them with the ability to get latest, build their code in the same way as the rest of the team, and run unit tests on the various projects in our solution. What we need to do next is extend our build script to do a couple database related tasks.
The first task is to provide the ability to install and configure a local database for the project. Next we will want the ability to build up the database based on the various SQL scripts that are checked in over time as people work on the application. And then we will want to create a few processes in our build to make running this sort of logic easy.
Executing SQL in the build
There are a couple of ways to interact with a database from your build process. The easiest way I have found is through the use of one of the command line utilities osql.exe or sqlcmd.exe. osql is an older version and should probably not be used any longer (although it is still included with SQL Server 2008). For that reason we will take a look at how to use sqlcmd in our build.
Let’s start this discussion by first adding a database to our project (I am using SQL Server 2008 Express locally). Open up your SQL Server Management Studio. Right click on databases and select new database. In the new database dialog name your database “KnowledgeExchange”. Then scroll to the right and change the file location for your database (for both files). Place your database files in a localdb folder under trunk/db. Then click OK.
That gives us our working copy database. Now let's create a backup of this fresh database so that we always have a clean slate to roll back to and start from. (Before we do this, create a versioneddb directory under trunk/db.) Do this by right clicking on your new database, then Tasks, then Back Up. In the Back Up Database dialog make sure that the database is KnowledgeExchange. Set the name of the backup set to KnowledgeExchange. Then remove the default destination. Next click Add to add a new destination that points to our trunk/db/versioneddb directory. Then name the file KnowledgeExchangeBaselineDb.bak. Don't forget to specify the .bak extension, as for whatever reason the wizard won't add a file extension for you! Then click OK.
From here we can use the "sa" or Windows account to play with our database. The first couple of tasks we will look at are the creation of a login and a user for our database. These will be created as their own targets so that each can be used where needed rather than as one lump task. Here are both of those tasks.
Listing 8: trunk/build/KnowledgeExchange.build
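A sketch of those two targets; the target names, login name, and password are my own choices, and the sqlcmd path assumes a default SQL Server 2008 install:

```xml
<property name="sqlcmd.exe"
          value="C:\Program Files\Microsoft SQL Server\100\Tools\Binn\sqlcmd.exe" />
<property name="db.server" value=".\SQLEXPRESS" />

<target name="db.createLogin">
  <!-- failonerror stops the build if the statement fails; -E uses Windows auth. -->
  <exec program="${sqlcmd.exe}" failonerror="true" verbose="true"
        commandline="-S ${db.server} -E -Q &quot;CREATE LOGIN KnowledgeExchange_dev WITH PASSWORD = 'p@ssword1', DEFAULT_DATABASE = KnowledgeExchange&quot;" />
</target>

<target name="db.createUser">
  <exec program="${sqlcmd.exe}" failonerror="true" verbose="true"
        commandline="-S ${db.server} -E -d KnowledgeExchange -Q &quot;CREATE USER KnowledgeExchange_dev FOR LOGIN KnowledgeExchange_dev&quot;" />
</target>
```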
As you can see, each target is identical in its use of the exec command. Both use a property that points to the path of the sqlcmd executable. Both will cause the build to fail if they do not succeed in their execution. And both provide verbose feedback to the build. The commandline options are also identical with the exception of the query that is performed. The queries are standard T-SQL commands for SQL Server 2008. Do be aware that the -Q (command line query) option has to wrap the query itself in escaped quotes (&quot;), since the commandline attribute value is already quoted.
I will then create a new target to call into from our batch scripts called initDatabase. This target will then call the create login and create user targets.
Listing 9: trunk/build/KnowledgeExchange.build
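The grouping target simply chains the two together via its depends list (assuming the create login and create user targets are named db.createLogin and db.createUser):

```xml
<target name="initDatabase" depends="db.createLogin, db.createUser" />
```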
As you may recall, we already have an initDatabase handler in our ClickToBuild.bat file. Running this option (option 3) for the first time will create our login and the user as expected using my Windows account. Running it a second time will cause the build to fail, as the user and login both already exist.
Notice in the images below that initially there is neither a login nor user for our KnowledgeExchange database.
Then we can run the initDatabase build task.
Now you can see back in the Management Studio that our login and user were indeed successfully created for us.
Now let's take a look at utilizing the backup that we created at the beginning of this section. We want to have a target in the build that allows us to restore from our baseline when no database exists, essentially installing the database for us on our local machine. This way, when a developer gets latest for the very first time, they can run the initDatabase script and the database will be installed, a user will be created, and a login will be created for that user. Once we have this we will create a simpler script that restores over the top of an existing database to take us back to a clean state each time.
This target is again an exec task. I am not going to bother telling you all the details of the SQL query itself as I didn't write it per se. I will tell you how I came to it though, so you can repeat the steps. To start, I deleted the KnowledgeExchange database in Management Studio. I then went up to the Databases node and chose to restore a database. In the "To database" field I entered KnowledgeExchange. I set the restore to occur from a device and then pointed the wizard to my baseline backup file in the versioneddb directory. I selected this file as the one to restore from. I then clicked into the Options page to verify that the restore-as paths pointed to my localdb directory. And here is the magic: I then went to the Script drop down and chose to generate a script for all of the actions I had just performed!
Where did all these build variables come from? We have spent a fair amount of time on NAnt up to this point, so from here on out I am going to show you a cleaned-up version of the build file using properties instead of full paths and the like. You can take a look at the final .build file to see the nitty gritty if you need it!
Listing 10: trunk/build/KnowledgeExchange.build
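A sketch of that target. The RESTORE statement is the kind of script the wizard generates; ${versioneddb.dir} and ${localdb.dir} stand for properties holding the absolute paths to trunk/db/versioneddb and trunk/db/localdb, and the logical file names are assumptions:

```xml
<target name="db.createFromBaseline">
  <!-- Restore the baseline backup, moving the data and log files into localdb. -->
  <exec program="${sqlcmd.exe}" failonerror="true"
        commandline="-S ${db.server} -E -Q &quot;RESTORE DATABASE [KnowledgeExchange] FROM DISK = N'${versioneddb.dir}\KnowledgeExchangeBaselineDb.bak' WITH FILE = 1, MOVE N'KnowledgeExchange' TO N'${localdb.dir}\KnowledgeExchange.mdf', MOVE N'KnowledgeExchange_log' TO N'${localdb.dir}\KnowledgeExchange_log.ldf', STATS = 10&quot;" />
</target>
```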
Lastly we need to add the db.createFromBaseline target to the start of our depends list for the initDatabase target
Listing 11: trunk/build/KnowledgeExchange.build
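With the baseline restore added at the front, the depends list becomes (target names here are my assumed names):

```xml
<target name="initDatabase"
        depends="db.createFromBaseline, db.createLogin, db.createUser" />
```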
With this complete we should now be able to delete our database from Management Studio. We also need to delete the login for our KnowledgeExchange_dev user (to simulate a 100% clean environment). Then run option 3 to initialize our database which should add the database to our localdb folder, create a user, and create a login for that user.
This whole process only needs to be run when the developer first gets their code. It sets up the database, login, and user. What about when a developer already has their environment initialized but needs to roll back their database, say, after performing integration testing? This is a little bit different. In this case we need to restore the database from the baseline and create a new user that is associated with an existing login. To do this we can use one additional restore target, which is a more straightforward restore script.
Listing 12: trunk/build/KnowledgeExchange.build
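A sketch of the simpler restore; WITH REPLACE lets it restore over the top of the existing database:

```xml
<target name="db.restoreFromBaseline">
  <!-- Overwrite the current database with the clean baseline. -->
  <exec program="${sqlcmd.exe}" failonerror="true"
        commandline="-S ${db.server} -E -Q &quot;RESTORE DATABASE [KnowledgeExchange] FROM DISK = N'${versioneddb.dir}\KnowledgeExchangeBaselineDb.bak' WITH REPLACE&quot;" />
</target>
```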
To integrate this new target we need to add another new target to group some of these tasks. This group can then be called at the end of our build depends list.
Listing 13: trunk/build/KnowledgeExchange.build
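A sketch of the grouping target (I call it revertDatabase here; the name is an assumption) and of the build target's extended depends list. Since restoring wipes out the database's users, the group recreates the user against the existing login afterwards:

```xml
<target name="revertDatabase" depends="db.restoreFromBaseline, db.createUser" />

<target name="build"
        depends="compile, test.project.KnowledgeExchangeWeb, revertDatabase" />
```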
Now when we run our build task we are actually compiling our code, running unit tests and/or integration tests, and rolling our database back to a clean state. This is all very good progress so far. But I am sure you are now asking yourself “Ok, I can roll the database back to its clean…aka empty…original state. But won’t that wipe out my entire database schema that I will be adding as I develop?” Yes. Now we need to integrate the database building process into our build.
Custom NAnt task for database integration
In the last section we discussed how to initialize our database when the developer first gets latest. We also discussed how to create a new login and associate a new user to that login. And most importantly all of this allowed us to quickly revert our database to a clean state. The problem with this clean state concept is that any work that was created in the database will be lost when we revert the database. In order for us to not lose our changes we will need to manage them outside of the database. What? No no…this is a good idea. This means that we can version our database artifacts in the same way that we can version our code artifacts.
What this directly translates to is that when you create a table, add columns, or create a new stored procedure, a view, or a function, you will also create a .sql file to store those additions and modifications on the file system. This enables us to keep history about those database modifications, which gives us the power of rolling back or undoing in the same way we can with our other code. This also means that when it is time to push different iterations of a product to a customer, we have the database scripted in a manner that will allow us to create an installer for them and push changes upstream. If you don't like the idea of putting all of your changes into a .sql file on a per change basis, you can also do a diff between the current project baseline and your current database (using something like Red Gate's SQL Compare) and store all of the changes in one file.
Now that we are in agreement to store all of our SQL code in separate .sql files because it will make our lives as developers easier, how do we get those scripts into our empty baseline database? To do that we will need to create a custom NAnt task. Inside this NAnt task we will launch a sqlcmd process to execute a .sql file. But instead of just running one file or passing a list of files to an exec task, we will have our custom task process an entire directory of .sql files, one at a time. While this one at a time approach may not be the most performant way to do this, we will get file by file information about which script failed and which script passed.
This custom task will need a few properties so that it knows how to do its job well. We will need to pass it a connection string or data source name so that it knows what database to connect to. We need to tell it what directory of files to process. And we will need to tell it what server to interact with. The database that we want to perform our operations on will probably help too! At a very low level we are still interacting with the sqlcmd utility. We are just putting a glossier wrapper around it to perform many tasks under the veil of one task.
To create a custom NAnt task we need to add a new project to our Knowledge Exchange solution. Add a new class library to your solution and name it CustomBuildTasks. Then add a reference to NAnt.Core (in the binaries directory inside the NAnt directory we created earlier) and log4net (which is also in the NAnt directory). We will be inheriting from the Task class provided by the NAnt.Core assembly. We will be using the log4net assembly to perform some logging for us. Once that is complete you can enter the following code (or get it from the repository).
Listing 14: trunk/src/CustomBuildTasks/ExecuteSqlFiles.cs
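A sketch of the task is below. The task name, attribute names, and error handling are my assumptions, and where the article mentions passing a DSN I connect with Windows authentication through the server and database attributes instead, since we are shelling out to sqlcmd:

```csharp
using System.Diagnostics;
using System.IO;
using NAnt.Core;
using NAnt.Core.Attributes;

namespace CustomBuildTasks
{
    // Usable from a .build file as <executeSqlFiles ... /> once loaded via <loadtasks>.
    [TaskName("executeSqlFiles")]
    public class ExecuteSqlFiles : Task
    {
        [TaskAttribute("server", Required = true)]
        [StringValidator(AllowEmpty = false)]
        public string Server { get; set; }

        [TaskAttribute("database", Required = true)]
        [StringValidator(AllowEmpty = false)]
        public string Database { get; set; }

        [TaskAttribute("directory", Required = true)]
        [StringValidator(AllowEmpty = false)]
        public string ScriptDirectory { get; set; }

        protected override void ExecuteTask()
        {
            // Sort so numbered scripts (0001_..., 0002_...) run in order, then
            // process them one at a time so we get per-file pass/fail feedback.
            string[] files = Directory.GetFiles(ScriptDirectory, "*.sql");
            System.Array.Sort(files);

            foreach (string file in files)
            {
                Log(Level.Info, "Executing {0}...", Path.GetFileName(file));

                ProcessStartInfo info = new ProcessStartInfo("sqlcmd.exe",
                    string.Format("-S {0} -d {1} -E -b -i \"{2}\"", Server, Database, file));
                info.UseShellExecute = false;
                info.RedirectStandardOutput = true;

                using (Process process = Process.Start(info))
                {
                    string output = process.StandardOutput.ReadToEnd();
                    process.WaitForExit();

                    // -b makes sqlcmd return a non-zero exit code on script failure.
                    if (process.ExitCode != 0)
                    {
                        throw new BuildException(string.Format(
                            "Script {0} failed: {1}", file, output));
                    }
                    Log(Level.Info, output);
                }
            }
        }
    }
}
```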
For the most part the code above is very simple C#. The parts that may need some explanation are the NAnt-based attributes. You will see items such as TaskName, TaskAttribute, and StringValidator. These tell NAnt how to work with the task you build, what is required, and how the build script maps into the code. While the properties are not required, the ExecuteTask method is. This is the method that NAnt will use to call into your code and perform your task.
ExecuteTask in our case is getting a list of .sql files to process from the passed in directory. It then initializes a Process which is used to communicate with the sqlcmd utility. It then iterates through each file trying to execute the contents of that file against the specified server. Other than that there is logging provided to tell NAnt about the performance of this task. This is mostly to make clean build output.
Do keep in mind that for this task to work we will need a data source name or DSN to be present on the developers system. We can quickly add this to our initDatabase task so that the developer doesn’t have to deal with it. And then we can add this target to the end of the list of depends for our initDatabase target.
Listing 15: trunk/build/KnowledgeExchange.build
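A system DSN can be created from the build with the odbcconf utility; a sketch follows (the DSN name and server are assumptions), along with the initDatabase target extended to call it last:

```xml
<target name="db.createDsn">
  <!-- CONFIGSYSDSN creates a system DSN; the attributes are pipe-separated. -->
  <exec program="C:\WINDOWS\system32\odbcconf.exe"
        commandline='/a {CONFIGSYSDSN "SQL Server" "DSN=KnowledgeExchange|Server=.\SQLEXPRESS|Trusted_Connection=Yes|Database=KnowledgeExchange"}' />
</target>

<target name="initDatabase"
        depends="db.createFromBaseline, db.createLogin, db.createUser, db.createDsn" />
```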
In order to use our new custom NAnt task we need to tell NAnt about it. Specifically, we need to tell our .build file how to get to the task. Let's first compile the CustomBuildTasks project. Then we can take the assembly that is created by that project, along with its dependencies, and put them into a new CustomBuildTasks folder in our dependencies directory.
Once this is complete we will need to add a line to our KnowledgeExchange.build file telling NAnt where to look for our new task.
Listing 16: trunk/build/KnowledgeExchange.build
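The loadtasks task does this; the path assumes the CustomBuildTasks folder we just created under dependencies, relative to the build directory:

```xml
<!-- Register our custom tasks with NAnt before any target uses them. -->
<loadtasks assembly="..\dependencies\CustomBuildTasks\CustomBuildTasks.dll" />
```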
Now we can actually plug in our new task and see if it totally bombs on us or not! We will do this by creating yet another new target and we will name it executeSqlFiles. And we will add this to the end of the depends list for our build task.
Listing 17: trunk/build/KnowledgeExchange.build
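A sketch of the target. The executeSqlFiles element name and its attributes must match whatever TaskName and TaskAttribute values are declared in the custom task; the names and the update scripts path here are assumptions:

```xml
<target name="executeSqlFiles">
  <!-- Run every .sql file in the update scripts directory against the local database. -->
  <executeSqlFiles server=".\SQLEXPRESS"
                   database="KnowledgeExchange"
                   directory="..\db\updateScripts" />
</target>
```

This target then gets appended to the end of the build target's depends list as described above.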
Running our ClickToBuild.bat file and choosing 1 to build runs successfully! Now let's really put it to the test and toss a sample .sql file into our currently empty update scripts directory. I am going to create a file named 0001_TestingBuildProcess.sql, put select 1 into it, and then run the build again.
As you can see in the image above the build not only succeeded but it also succeeded in executing our (totally bogus) 0001_TestingBuildProcess.sql file.
We now have a fully functioning local build process that we can code, test, and work in. Great! As time goes by our developers will add code, refactor code, leave //todo comments here and there to come back to and fix later, and so on. Also, as time progresses we will have more and more code patched onto other code.
How do we look into all this code? How do we analyze it in a way that is useful in real time? How many //todo's have been left in there? Does the new guy's code conform to everyone else's style of coding? Bigger questions: what is the cyclomatic complexity of our code? Do we have enough tests for the amount of code in our solution?
We will now cover comment parsing. Then we will integrate NDepend to generate a more thorough analysis of our code from a complexity and style point of view. And we will integrate NCover to ensure that we are not falling behind on test creation for our code base.
Something that I like to have an agile team be aware of is the concept of how much code debt has accrued in the system. Code debt is a chunk of code that has been created with some compromise. This might be code that was created under a tight deadline so some corners were cut. Or perhaps a feature was created in a certain way to meet a requirement knowing that the way it was created won’t be totally compatible with another upcoming feature.
We call this code debt: code that is less than perfect and needs to be improved whenever possible. To earmark code debt in code I rely upon a specific comment format.
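The exact format isn't reproduced in the text, but based on the fields the task parses later (initials, time estimate, description), a comment along these lines is what it expects; the colon-delimited layout is an assumption.

```csharp
// Hypothetical example of the code debt comment format (assumed layout).
// Fields: author initials, rough time estimate to fix, description.
//codedebt:JD:2h:Hard-coded connection string; move to configuration.
```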
I am not going to take a long time to discuss this custom task; I will just explain what this code does for us in pseudo-code terms. The task takes in RootPathToParse, which is the base directory that the spider will start at. It also takes in a list of directories that should be ignored so that we don't end up looking at .svn or ReSharper-style directories. And lastly it takes in the extension of the file type that we wish to process; in our case we will look at .cs files.
With this information the task will start a Stopwatch to keep track of how long the parsing takes. We need to know if this code scanner takes too long so that we can address performance issues down the road, or remove it from the build target and make it a separate task. After the processing of files is complete, the elapsed time will be displayed.
We will then recursively process all the folders in our root directory. As we look at each folder we will process all of the appropriate files in the current directory. The processing of a file is a simple file reader: it iterates through all the lines in a given file looking for lines that start with //codedebt. When a code debt comment is found, the task further parses that line to break out the initials, time estimate, and description. This information, along with the file it was found in and the line number it was found on, is reported back to the build process.
Here is the code for this task.
Listing 18: trunk/src/CustomBuildTasks/ParseCodeDebt.cs
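Since the listing isn't reproduced in the text, here is a condensed sketch of what a task like this could look like. It follows the structure described above (a Stopwatch, recursive directory walking, line-by-line parsing), but the attribute names, the skipped-directory logic, and the colon-delimited comment format are all assumptions, not the original code.

```csharp
// Sketch only: a condensed version of what the ParseCodeDebt task could
// look like. Attribute names and comment format are assumptions.
using System.Diagnostics;
using System.IO;
using NAnt.Core;
using NAnt.Core.Attributes;

namespace CustomBuildTasks
{
    [TaskName("parseCodeDebt")]
    public class ParseCodeDebt : Task
    {
        [TaskAttribute("rootPathToParse", Required = true)]
        [StringValidator(AllowEmpty = false)]
        public string RootPathToParse { get; set; }

        [TaskAttribute("fileExtension", Required = true)]
        public string FileExtension { get; set; }

        protected override void ExecuteTask()
        {
            // Time the scan so we know if it ever gets too slow for the build.
            Stopwatch watch = Stopwatch.StartNew();
            ProcessDirectory(RootPathToParse);
            watch.Stop();
            Log(Level.Info, "Code debt scan took {0} ms.", watch.ElapsedMilliseconds);
        }

        private void ProcessDirectory(string path)
        {
            string name = new DirectoryInfo(path).Name;
            if (name == ".svn" || name.StartsWith("_ReSharper")) return; // skip noise

            foreach (string file in Directory.GetFiles(path, "*" + FileExtension))
                ProcessFile(file);

            foreach (string child in Directory.GetDirectories(path))
                ProcessDirectory(child);
        }

        private void ProcessFile(string file)
        {
            string[] lines = File.ReadAllLines(file);
            for (int i = 0; i < lines.Length; i++)
            {
                string line = lines[i].TrimStart();
                if (!line.StartsWith("//codedebt")) continue;

                // Assumed format: //codedebt:initials:estimate:description
                string[] parts = line.Split(new[] { ':' }, 4);
                if (parts.Length == 4)
                    Log(Level.Info, "Code debt in {0} line {1}: [{2}, {3}] {4}",
                        file, i + 1, parts[1], parts[2], parts[3]);
            }
        }
    }
}
```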
Now we need to build the task and place the new assembly into the dependencies directory for our custom build tasks. Then we can add this new task to our build file.
Listing 19: trunk/build/KnowledgeExchange.build
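The listing isn't shown in the text, but the target is a thin wrapper around the custom task; the task name, attribute names, and property name below are assumptions that should match your own task and build file.

```xml
<!-- Sketch only: task name, attribute names, and property name are assumptions. -->
<target name="parseCodeDebt">
  <parseCodeDebt rootPathToParse="${src.dir}" fileExtension=".cs" />
</target>
```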
Then add the parseCodeDebt target to the dependency list for our build target. Next I am going to add a test code debt comment to one of our projects to make sure that the new task picks it up as expected. Then we can run the build to see what happens.
As you can see in the image above my code debt comment was found and it took less than a second to process all of the files in our solution!
NDepend is probably one of the better .NET code analysis tools. You can get a copy as a trial, as a student, as an open source developer (that's you when you are working on this project), or as a professional. This tool is pretty simple to use: you point it at your solution and it takes it from there. It can tell you a whole slew of things about your code, either in a report-style fashion or through SQL-style queries over your code. Very powerful! Too powerful to be covered fully here.
What I will show you is how to integrate this into your local build:
1. Grab a copy from ndepend.com (or get latest) and figure out which license works for you.
2. Download it into your binaries directory and extract it into its own folder (as we have done with other software).
3. Open the VisualNDepend editor and point a new NDepend project at your solution file (in the source directory).
4. Click on the Analysis tab and change the name of the project to KnowledgeExchange.
5. Click on the Report tab and select the option to use your own XSL style sheet to build the report. Browse to the NDepend directory in the binaries folder, go into the CruiseControl folder, and locate their XSL file (this streamlines your report and outputs better data for build purposes). Then click save.
6. Create a new folder called analysis inside of trunk, and inside the analysis directory create an NDepend directory. Then save your NDepend project to that directory.
Feel free to run the report and get used to the output of this tool. There is a lot of information provided here.
Once we have the KnowledgeExchange.xml file in the trunk/analysis/NDepend directory we are ready to add this to our build script. This is actually a fairly painless target to create. Take a look.
Listing 20: trunk/build/KnowledgeExchange.build
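Again, the listing itself isn't shown, but the target amounts to shelling out to the NDepend console with our saved project file; the executable path and project path below are assumptions based on the folder layout described above.

```xml
<!-- Sketch: both paths are assumptions based on the folder layout above. -->
<target name="ndepend">
  <exec program="../binaries/NDepend/NDepend.Console.exe">
    <arg value="../analysis/NDepend/KnowledgeExchange.xml" />
  </exec>
</target>
```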
Now you can add the ndepend target to the end of your build target's depends list. Running the build will now run NDepend as well. It won't, however, show anything about the output of the NDepend report! So we will add a line to our ndepend target to echo the path of the report.
With this done you can run the build and see NDepend do its thing. At the end of the build process we have an NDependReport.html created for us at trunk/analysis/ndepend/ndependout/ndependreport.html. I can’t show all of the results here as they are verbose. Instead head on over to hanselman.com to take a look at an article Scott did on “Exiting the zone of pain – static analysis with NDepend”. Also, NDepend.com keeps a great listing on the various metrics that their product offers here. And if you are a visual person and like a quick overview of things go back to hanselman.com to get a one page print out of ndepend metrics (I carry this with me everywhere in printed form. It stays in my laptop along with a few other required pages).
NCover is specifically designed to show the test coverage of your application. This is only useful if you are a big believer in thoroughly testing your code; if you don't believe in unit tests and integration tests, feel free to skip this section.
NCover is no longer free, but if you look hard enough you can locate the 2.1 version that was free. It can be found in various open source projects on CodePlex or other source hosting sites, and I will also include a copy in our binaries directory. I, however, am going to use the latest version of NCover. You can grab a free trial for your local system if you like.
The quickest way to get NCover going with our build is to use the NCover NAnt tasks included with your download. In order to do this you will need to place the NCover NAnt tasks into your dependencies folder. Then add a loadtasks node to your build file.
Once that is done we can go into our test.project.KnowledgeExchangeWeb target and replace the existing exec task, where we currently run our NUnit tests, with a call to the NCover NAnt task. I comment the old task out and put in the new one. It looks like this.
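The replacement isn't reproduced in the text, but it is roughly a matter of wrapping the existing NUnit invocation in NCover's NAnt task. The attribute names vary between NCover versions and are assumptions here, as are the paths and the test assembly name; check them against the task documentation shipped with your download.

```xml
<!-- Sketch only: attribute names, paths, and the test assembly name are
     assumptions; verify against the NCover NAnt task docs for your version. -->
<target name="test.project.KnowledgeExchangeWeb">
  <ncover program="../binaries/NCover/NCover.Console.exe"
          commandLineExe="../binaries/NUnit/nunit-console.exe"
          commandLineArgs="KnowledgeExchangeWeb.Tests.dll" />
</target>
```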
Running this upgraded target will now give us coverage metrics as well as running our unit tests. We still get the same NUnit output, but with the added metrics. Take a look at all the metrics that NCover offers to see what will benefit your project.
In this article we took our local development environment an additional step toward completion by adding various forms of build automation. We discussed the use of NAnt, NUnit, NDepend, and NCover, as well as building some custom NAnt tasks to parse our code and interact with our database. We went over the process of initializing our environment, building our code, testing our code, rolling back our database, and building up the database from a SQL script repository on the file system. We also looked at how we could use some batch programming to further simplify the process of running the build.
In the next article we will take a look at build automation on the server side of our development environment. We will specifically look at the concept of continuous integration and how to manage this complexity with the use of CruiseControl.NET. Topics such as remote testing, integration, one-click deployments, and code rollbacks will be covered there. That article will complete our discussion of the development environment.
I am a 33 year old, ex-Army Ranger, father of 6, geeky software engineer that loves to code, teach, and write. In my spare time (ha!) I like playing with my 6 kids, horses, and various other animals.
This author has published 29 articles on DotNetSlackers. View other articles or the complete profile here.