Published: 10 Sep 2009
By: Andrew Siemer

In this article we will take a look at build automation on the server side of our development environment.


The Stack Overflow Inspired Knowledge Exchange Series

  • Check out the project homepage of this series to follow our journey through the creation of our StackOverflow-inspired website.
  Introduction

    In the last article we took our development environment an additional step towards its completion by adding various forms of build automation. We discussed the use of NAnt, NUnit, NDepend, and NCover, as well as building some custom NAnt tasks to parse our code and interact with our database. We went over the process of initializing our environment, building our code, testing our code, rolling back our database, and building up the database from a SQL script repository on the file system. We also looked at how we could use some batch programming to further simplify the process of making the build work.

    In this article we will take a look at build automation on the server side of our development environment. We will specifically look at the concept of continuous integration and how to manage this complexity with the use of CruiseControl.net. Topics such as NAnt integration, remote testing, integration testing, email notifications, web site precompilation, deployment packages, deploying, and one click roll backs will be covered here. This article will complete our discussion of the development environment.

    What is continuous integration?

    Continuous integration (CI) is a process where each developer on a team of developers is able to commit their code into the main branch of a project's code repository and have it verified in various ways to ensure that it is compatible with the current code base. The check for compatibility can encompass many things, but the most important part of continuous integration is the immediate feedback that is gained from this process. If a developer checks in bad code or forgets to check in a dependency, the entire team will be notified of the “broken build” that has occurred. This then allows the team to address the bad integration and get things back on track.

    A properly set up CI environment allows a developer to commit and integrate often. The more frequently code is integrated the better - no less than once a day but preferably even more often than that. This allows the team to work on totally separate code paths but still stay informed with how their individual projects work together as a whole.

    Continuous integration generally provides a few basic concepts to a team of developers. When a developer checks in their code, the CI process is kicked off. The CI process can either manage the build process itself or rely on the build results of another program such as NAnt. The CI build has many things in common with the local build process in that it needs to build the code, run unit and integration tests, set up and configure an environment, etc. It can then also provide additional features such as notifying the team about the build's result, creating installers, deploying to various environments, rolling back to a stable state, etc.

    CruiseControl.NET

    While there are many different CI platforms out there, this article will focus on the .NET port of the CruiseControl platform. This software is tightly integrated with NAnt and Subversion, which will allow us to repurpose our existing work thus far. It also runs on our Windows server with little to no overhead and with a very straightforward installation and configuration.

    In addition to being easy to work with it also comes with a couple of very nifty tools. CruiseControl.NET has a web dashboard that anyone on the team can check for the current status of all of their projects. You can also initiate various build tasks from this dashboard such as a rollback or deployment of a project. There is also a tray tool called CCTray which installs on the desktop of those that require immediate notification regarding the build status. This tool shows an icon in your system tray alerting you to the status of your projects and also provides you with the same push button control over each project as the web dashboard does.

    Installing CruiseControl.net

    The first task in setting up continuous integration is getting the software. The 1.5 version is still in CTP but I am going to grab it for this article as it has some pretty significant features such as integration with several new source control providers and better security support. Get the latest version of 1.5 here: http://ccnetlive.thoughtworks.com/CCNet-builds/1.5.0/. I am going to be working with 1.5.0.6184 in this article.

    Once you have the installer on the server you want to run your CI process from, go ahead and run through the install process. Leave all of the default options checked on all screens.

    Web dashboard installation

    Note that if you want to use the web dashboard for CI monitoring and interaction, the installer doesn’t allow you to choose where to install it directly. So if you are installing this on a server with a handful of sites already configured on port 80 you will have to go hunting to figure out where the application was installed. Then move it to where you want it after the fact.

    Once your installation is complete, let's first browse to the web dashboard as that is the easiest way to see how things are going. You will probably see an error of some form or another. This is because by default the service is not yet started. To address this, navigate to Administrative Tools > Services and set the CruiseControl.NET service to start automatically (and start it now).

    Refreshing your page should take that error away. An empty page basically means that we don’t yet have any projects configured for our CI installation. Let's change that.

    Before configuring CruiseControl.net

    Before we get started futzing around with CruiseControl.net we first need to make sure that a few things are done on our build box. Ensure that the server on which you are installing CruiseControl.net has the following things installed and/or configured.

    • TortoiseSVN: We will need to have TortoiseSVN installed on the build server so that we can get our initial build environment set up and interact with our code repository.
    • Projects directory: We will need a project directory that the build server can get the latest code into (same as on your dev box). The build process, unit testing, etc will all happen out of this directory.
    • Get latest into projects directory: Once you have a projects directory set up, create a KnowledgeExchange/trunk directory and get latest from our project on CodePlex into the trunk in the same way we did in the previous article.

    Configuring our first project

    Once all of this is complete we can then configure our first project in CruiseControl’s configuration file, ccnet.config. Let’s locate the ccnet.config file on your build box. This is generally found in the server folder of the CruiseControl.NET installation directory. Open that file up. It will most likely be pretty much empty. We will configure this file one step at a time and see it progress over time.

    The first thing that we need to do is to insert a <project> node. This node has a name attribute which we will specify as “Knowledge Exchange” (which is what will show up in the web dashboard and the CCTray client).
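
    A minimal sketch of what that looks like in ccnet.config (the surrounding <cruisecontrol> root element is shown for context):

    <cruisecontrol>
      <project name="Knowledge Exchange">
        <!-- the labeller, webURL, sourcecontrol, triggers, tasks, and publishers
             blocks described below all nest inside this project node -->
      </project>
    </cruisecontrol>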

    The next node to go into our initial configuration is a labeller block. The labeller block tells CruiseControl how to tag each build. There are many types of labellers provided (see more here: http://confluence.public.thoughtworks.org/display/CCNET/Labeller+Blocks). We will use the assembly version labeller as it will tell us the version of each release based on the source control's current change number. With each successful integration it will increment the build number for us! This block will look like this:
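
    Roughly like this, assuming the assemblyVersionLabeller type (the major and minor values here are placeholders; the revision portion is taken from the source control change number):

    <labeller type="assemblyVersionLabeller">
      <major>1</major>
      <minor>0</minor>
      <incrementOnFailure>false</incrementOnFailure>
    </labeller>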

    Next we will add a webURL block which tells CruiseControl where the web dashboard is. CruiseControl will include a link to the dashboard in our email notifications (once we configure them). Here is what that node looks like.
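
    Something along these lines; the host name is a placeholder for your own build server, and you can point it at the dashboard root or at the project report page:

    <webURL>http://buildserver/ccnet/</webURL>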

    Now that the basics are set up let’s tell CruiseControl how to actually do something important!

    Connecting the project to our source control repository

    The first and most important part of our build process on a remote server is for it to be able to get the latest checked in code! To do this we will open up our ccnet.config file and add a <sourcecontrol> node. A tag reference for the <sourcecontrol> node can be found here; it will help you immensely in determining what your options are.

    http://confluence.public.thoughtworks.org/display/CCNET/Subversion+Source+Control+Block

    Our <sourcecontrol> block will look like this.
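
    A sketch of the block; because we want the inclusionFilters element, the svn provider is wrapped in a filtered source control block (the paths, URL, and credentials below are placeholders for your own values):

    <sourcecontrol type="filtered">
      <sourceControlProvider type="svn">
        <executable>E:\data\projects\KnowledgeExchange\trunk\binaries\subversion\svn.exe</executable>
        <trunkUrl>https://yourproject.svn.codeplex.com/svn</trunkUrl>
        <workingDirectory>E:\data\projects\KnowledgeExchange\trunk</workingDirectory>
        <username>yourCodePlexUser</username>
        <password>yourCodePlexPassword</password>
      </sourceControlProvider>
      <inclusionFilters>
        <pathFilter>
          <!-- any modification anywhere in the repository triggers a build -->
          <pattern>/**/*.*</pattern>
        </pathFilter>
      </inclusionFilters>
    </sourcecontrol>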

    Now we can take a look at the above configuration element by element.

    executable: This node expects the path on the server to the svn.exe executable. This will be in our projects/KnowledgeExchange/trunk/binaries/subversion directory where we will manually get latest with TortoiseSVN initially.

    trunkUrl: This node expects the URL of your Subversion repository. In our case this is the path to our project on CodePlex.

    workingDirectory: The workingDirectory is the path to the projects folder that we created earlier. This is where we checked out our local copy of the source code repository. CruiseControl will get latest into this directory and perform its work there.

    username: This is the username used to access CodePlex.

    password: This is the password used to access CodePlex.

    inclusionFilters: There are two types of filters to know about, inclusion and exclusion. The names are obvious but their purpose may not be. In our inclusion filter we are saying that any modification to our code base will trigger a build. We don’t yet need to specify an exclusion filter, but if we did it would basically state that if a file that conforms to a specified exclusion filter is modified, don’t trigger a build! There is more about this at: http://confluence.public.thoughtworks.org/display/CCNET/Filtered+Source+Control+Block

    The completed project so far simply nests the labeller, webURL, and sourcecontrol blocks shown above inside the <project> node.

    Now you can go to your web dashboard and “force” a build to occur. This will make CruiseControl execute your project, which currently just says to get latest. Not very useful just yet, but it is important to make sure that this step at least works!

    Once you have forced a build you will either see that all is OK or you will see a BUILD EXCEPTION. If you saw a BUILD EXCEPTION that says something along the lines of “Server certificate verification failed: issuer is not trusted…” then you will have to manually accept the certificate with the account that the CruiseControl service runs under. The reason that we get this error is that we are interacting with CodePlex over an SSL connection and the svn client doesn’t yet trust the certificate issuer!

    This issue is easy to rectify by manually accepting the certificate for the CruiseControl service account. I generally prefer to create an account for CruiseControl to run under (instead of the LocalSystem account that is used by default) named something like ccnet or buildmaster. If you operate in a Domain then you may want to create a user in the Domain so that CruiseControl can touch and interact with many different computers on the Domain. If you are just running everything on one server then a ccnet account on the local computer will suffice (or whatever you want to call it!).

    Once you have created a ccnet account (or assigned an account under your control to the CruiseControl service) then you can log in as that user on your build server. Then you can manually accept the certificate by interacting with svn directly via the command prompt. Do this by executing the line in the previous BUILD EXCEPTION that says “Process command: ….”

    Make sure that you remove the two arguments at the end of this command --non-interactive and --no-auth-cache as those two things are the exact opposite of what we want to do – interact with svn and cache our authorization!

    Open up a command window and navigate to the trunk/binaries/subversion directory (CD {path}). Then take the long “Process command” line from the BUILD EXCEPTION, copy and paste it into your command window (minus the two arguments mentioned above), and run it. It will basically say (in this case) that Microsoft is not a trusted certificate issuer and subversion needs to know if we can trust them or not. Enter “p” to permanently accept the certificate!

    With this completed you can now log back in as yourself. Then restart your CruiseControl service. Then navigate to your web dashboard and force the build. If all of your paths are correct in your configuration then you should be able to successfully run your project and get latest!

    CruiseControl.NET Configuration Validation tool

    If you are having issues with your configuration there is a handy tool called the “CruiseControl.net Configuration Validation” tool. You can open your ccnet.config file in it and it will tell you whether everything looks correct or not!

    Restart CruiseControl service with each change to the configuration!

    As an FYI, you must restart your service each time you change your configuration. This is the only way that CruiseControl will know to load your new changes. First timers are generally not aware of this and chase their tails wondering why on earth their configuration changes refuse to load!? Don’t forget!

    While on the subject of creating an account to run the CruiseControl service under we also need to discuss permissions. Coming up here pretty soon we will start to integrate our NAnt build process. Keep in mind that the build needs to be able to interact with the file system, interact with SQL Server, etc. If your build works when you run it manually but fails for some strange reason via the CruiseControl service, keep permissions in mind! Go ahead and add your ccnet account to SQL Server now (create a Windows login for ccnet and give it full control for now) as we know that we need to be able to connect to and perform jobs against SQL Server.

    Automatically trigger the integration

    So far we have a working project under CruiseControl that is very capable of getting latest for us. However, it only does this when we tell it to. That is not automated by any means! We have to configure our project to automatically detect changes to our source control so that CruiseControl will kick off our project for us. We do this by adding a triggers block to our configuration.

    There are many forms of triggers but we are mostly interested in two of them at the moment. We will use the intervalTrigger which tells CruiseControl to periodically poll our source control to see if there are any changes that it needs to be aware of. If it finds changes CruiseControl will execute the project. The other one we are interested in is the scheduleTrigger. This trigger can be used to kick off nightly builds. We can tell CruiseControl to force our build at a specific time on a specific day or days. Here is that configuration block.
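
    A sketch of the triggers block; the polling interval and the nightly schedule below are placeholder values to adjust for your own team:

    <triggers>
      <!-- poll source control every 60 seconds; build only when something changed -->
      <intervalTrigger name="continuous" seconds="60" buildCondition="IfModificationExists" />
      <!-- force a nightly build at 11pm regardless of changes -->
      <scheduleTrigger name="nightly" time="23:00" buildCondition="ForceBuild" />
    </triggers>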

    The important thing to note here is that the build condition for the interval trigger is set to IfModificationExists and the schedule trigger’s build condition is set to ForceBuild. You can read more about your options for triggers here: http://confluence.public.thoughtworks.org/display/CCNET/Trigger+Blocks.

    Sending email notifications

    Now that we have some automation and are able to successfully “get latest” when the code changes we need to let the team know about it! CruiseControl can communicate with the team via the email block. The email block is pretty easy to grasp so here it is:
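
    One way this can look; the email publisher lives inside the project's <publishers> block, the server and addresses below are placeholders, and the exact group notification syntax differs slightly between CCNet versions, so check the Email Publisher page linked below:

    <publishers>
      <xmllogger />
      <email from="ccnet@yourdomain.com" mailhost="mail.yourdomain.com" includeDetails="true">
        <users>
          <user name="Andrew" group="buildmasters" address="andrew@yourdomain.com" />
          <user name="Developer1" group="developers" address="dev1@yourdomain.com" />
        </users>
        <groups>
          <!-- buildmasters hear about every build; developers only hear about failures -->
          <group name="buildmasters" notification="always" />
          <group name="developers" notification="failed" />
        </groups>
      </email>
    </publishers>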

    The only part of this that might require some additional explanation is the groups section. In this section we are able to define custom group names and set how frequently the members of those groups are sent an email. Not every person on the team will want to receive an email every time a build occurs. An example of this is the people that use the CCTray client (which sits in the system tray). They rely on the tray icon turning yellow, orange, green, or red. For those people a notification when things fail is usually enough. Read here if you have questions regarding the email block:

    http://confluence.public.thoughtworks.org/display/CCNET/Email+Publisher

    Utilizing our NAnt build file on the server

    Now that we have a functioning project that is able to get the latest source code and that emails us with each build, let's take a look at actually doing something useful on the build server! We will largely be able to re-purpose our existing NAnt build file. If you think about it, we still want to build, run tests, build up the database, run some analytics, etc. What we do with a successful build is really the only difference from our local build, in that we might want to deploy the code out to a development server for further testing. Let's first get the build to run under CruiseControl.

    First things first. We need to make sure that all the programs that we want to use will run on the server in the same way that they do on our dev box. In order to do this we need to make sure that license files are where they need to be (NDepend), that everything is installed as required (NUnit, NCover), etc.

    Once that is done we need to address another big problem. On our local dev box (at least on mine) I have mapped everything to an external hard drive which is labeled as my P drive. On my server I have a C and E drive. If things were as simple as assigning a drive letter to a folder and mounting a virtual drive then we could consider using the Visual Subst program (found here: http://www.techmixer.com/how-to-mount-windows-folders-as-virtual-drives-using-visual-subst/). But in my case in particular things are more difficult. On my local dev box I manage article projects in a path such as this: P:\Projects\DotNetSlackers\Articles\KnowledgeExchange\trunk\. On the build server my path might look like this: E:\data\projects\KnowledgeExchange\trunk. Nothing alike! So Visual Subst isn’t going to cut it.

    Instead we need to take a look at swapping property values in our NAnt build file based on what environment we are running our build on. To do this we will move the trunk.dir property out of the directories section of our build and put it under an Environment Specific section. Then we can define a trunk.dir property for each of the environments that we want to support. In my case I am going to have my local dev trunk.dir be the default value for this property. I will then specify entries for devserver, qaserver, and prodserver. If you have developers with radically different paths on their local box you can take care of this issue here as well. Let's take a look.
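
    A sketch of the idea; the server paths are placeholders, and this version defaults the environment property to “local” (rather than testing for a missing property) to keep the NAnt expressions simple:

    <!-- Environment Specific -->
    <!-- "environment" is normally passed on the command line (-D:environment=devserver);
         overwrite="false" means a command line value wins over this default -->
    <property name="environment" value="local" overwrite="false" />

    <property name="trunk.dir" value="P:\Projects\DotNetSlackers\Articles\KnowledgeExchange\trunk" />
    <property name="trunk.dir" value="E:\data\projects\KnowledgeExchange\trunk" if="${environment == 'devserver'}" />
    <property name="trunk.dir" value="E:\qa\projects\KnowledgeExchange\trunk" if="${environment == 'qaserver'}" />
    <property name="trunk.dir" value="E:\prod\projects\KnowledgeExchange\trunk" if="${environment == 'prodserver'}" />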

    Notice that the key to this trickery is that the new trunk.dir properties have an if attribute added to them that checks a property called environment, which is not hard-coded in the build file; it is passed in via the command line (NAnt’s -D:environment=devserver syntax). We can now alter our ClickToBuild file to pass in the appropriate parameter as well, and also add a “cruise” option and an additional “initialize database” option to our list of possible build functions. Here is how the ClickToBuild script looks now:

    Notice that I added the environment property to the end of our various target options. Now when we run the build script we have control to do certain things based on each of the environments.

    Now that we have this control we can address another issue that, like our directory paths, is environment specific: server naming. In this case we run SQL Server Express on our local boxes but SQL Server on our actual servers. These generally will have different names, IP addresses, etc. For this reason we will need similar environment level control so that we can reassign specific values. Let’s move the database.server property up under our Environment Specific section.
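
    For example (the server names here are placeholders for your own machines):

    <!-- Environment Specific: database server -->
    <property name="database.server" value=".\SQLEXPRESS" />
    <property name="database.server" value="DEVSQL01" if="${environment == 'devserver'}" />
    <property name="database.server" value="QASQL01" if="${environment == 'qaserver'}" />
    <property name="database.server" value="PRODSQL01" if="${environment == 'prodserver'}" />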

    We are finally to a point where we can plug NAnt into our ccnet.config file. We do this by first adding a <tasks> node after our <sourcecontrol> node (still inside the <project> node). Inside the <tasks> node we can add a <nant> node. The important parts of the nant configuration are:

    1. executable: The path to the NAnt executable
    2. baseDirectory: The path to the source directory where NAnt will work
    3. buildArgs: Any arguments you want to pass through to NAnt
    4. buildFile: The path to the build file
    5. targetList: The target (or targets) that you want NAnt to execute
    6. buildTimeoutSeconds: The number of seconds before the build times out

    There are of course many other options which you can find here: http://confluence.public.thoughtworks.org/display/CCNET/NAnt+Task

    Here is what our <nant> section looks like:
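
    A sketch of that section; the paths and the timeout value are placeholders for your own server:

    <tasks>
      <nant>
        <executable>E:\data\projects\KnowledgeExchange\trunk\binaries\nant\nant.exe</executable>
        <baseDirectory>E:\data\projects\KnowledgeExchange\trunk\build</baseDirectory>
        <buildArgs>-D:environment=devserver</buildArgs>
        <buildFile>KnowledgeExchange.build</buildFile>
        <targetList>
          <target>cruise</target>
        </targetList>
        <buildTimeoutSeconds>1200</buildTimeoutSeconds>
      </nant>
    </tasks>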

    The final step to this is to add a new “cruise” target to our KnowledgeExchange.build file so that our ccnet.config has something to communicate with! This target will be the same as the “build” target to start us off. Why do we want to create a copy of the “build” target? This provides us with some flexibility for down the road. We want to be able to extend our local build separately from our server build. Here is the cruise target.
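
    A minimal sketch of the new target; the depends list below is hypothetical, so copy whatever your existing “build” target currently depends on:

    <!-- the depends list is a placeholder; mirror your local "build" target -->
    <target name="cruise"
            depends="init, compile, runTests, rollbackDatabase, buildDatabase, codeDebt, analyze"
            description="Entry point executed by CruiseControl.NET; starts out as a copy of the local build target." />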

    You should now be able to force a build in CruiseControl and have it compile the code, run the tests, roll back the database, rebuild the database from the scripts, parse the code debt, and analyze the code using NDepend.

    Pre-compiling your website

    Most of us are aware that we can publish our site as is onto our web server. We can copy over all the .aspx and .cs files we like. The first person to request a page in our site will also be the person that has to wait for that page to get compiled by the server before it can be served. Also, I am sure that most of you at one time or another have logged into the web server to make a “quick change” to one of the pages on the site. You open up a file in notepad, make the change, and save the file. The change is immediately seen on the server and you are off and running.

    Why is this bad? I can think of two reasons. Why would you want your clients to wait around while a web page compiles? Also, why would you want the security risk of having your source code deployed off in the wild where anyone with access to the server can make changes to your web site? And, just because you can make quick changes to your site on the server is that really a good idea? Did you check that quick change into your source control? Probably not.

    Deployment should be a one way process. All the code we deploy should be compiled. And deployed code should be obfuscated and hidden away to the best of our ability. For that reason once our integration is complete we want to immediately create a pre-compiled version of our site. By pre-compile we simply mean that we are doing the compiling before deployment so that IIS doesn’t have to. The easiest way to figure out how to pre-compile your site (for some reason the aspnet_compiler is very finicky about what you feed it) is to grab Rick Strahl’s ASP.NET 2.0 Compiler Utility from here: http://www.west-wind.com/tools/aspnetcompiler.asp. It is a visual compiler configuration tool that allows you to more easily generate the command line options.

    The gist of precompilation is that you feed your web site source code files into the compiler and it spits out a compiled website. You can even take it further by using the aspnet_merge tool to cram all of your web pages into a single assembly. This makes packaging and deploying your site considerably easier, and in the future provides you with roll back capabilities.

    First, create a webDeployment folder inside your build folder. Then open up your NAnt build file so that we can add a new target named “precompileWeb”. This will go under the coding tasks section. In pseudo code terms we want to delete our webDeployment directory if it exists (so that we start from scratch), create a new webDeployment folder, run the aspnet_compiler to generate the first compilation, and then run the aspnet_merge tool to take all of the output and stuff it into an assembly. Here is how we do that.
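
    A sketch of the target; the web project folder name, the framework path, and the aspnet_merge location are assumptions you will need to adjust for your own machine:

    <property name="webDeployment.dir" value="${trunk.dir}\build\webDeployment" />

    <target name="precompileWeb" description="Pre-compiles the web site into the webDeployment folder.">
      <!-- start from scratch every time -->
      <delete dir="${webDeployment.dir}" if="${directory::exists(webDeployment.dir)}" />
      <mkdir dir="${webDeployment.dir}" />

      <!-- first compilation pass with aspnet_compiler (ships with the .NET Framework) -->
      <exec program="C:\Windows\Microsoft.NET\Framework\v2.0.50727\aspnet_compiler.exe">
        <arg value="-v" />
        <arg value="/" />
        <arg value="-p" />
        <arg value="${trunk.dir}\KnowledgeExchange.Web" />
        <arg value="-f" />
        <arg value="${webDeployment.dir}" />
      </exec>

      <!-- merge the page assemblies into a single assembly (aspnet_merge ships with the Windows SDK) -->
      <exec program="C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin\aspnet_merge.exe">
        <arg value="${webDeployment.dir}" />
        <arg value="-o" />
        <arg value="KnowledgeExchange.Web.Merged" />
      </exec>
    </target>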

    Do be sure to use the utility I mentioned earlier, as the compiler will fail if you don’t get the paths exactly right. Things to watch out for are trailing slashes, proper usage of spaces, and correct switch placement. The compiler is very finicky! Once you have this working you can run it. Then go into the webDeployment directory to see what you have left. You will see placeholders for your files, but once you open one you will quickly realize that things are different. Your aspx files now contain…

    ... and nothing else! Also notice that you have a new assembly in the bin directory (if you chose to use the merge option) which contains all the guts of your web pages. We are now ready to use this output to deploy our site to our servers or to create installation packages.

    Things to keep in mind

    Keep in mind that this precompilation is only for the .net side of things. There are other things that you might consider down the road in and around this step. You might want to “minify” your javascript. Or you might want to obfuscate it for security. You may consider minifying your style sheets. Etc. Anything that you would want to do prior to deploying should be done around this same time.

    Go ahead and add the precompileWeb target to the end of your cruise depends list. Make sure that your CruiseControl build still works before moving to the next section.

    More permissions issues?

    I am building this site and writing this series on a new server to make sure that I catch all the gotchas and uh ohs possible. Like everything else you might find that your build account has issues writing to the Temporary ASP.NET Files folder. I had to give my ccnet account rights to write to that directory to get this part of my build functioning!

    Creating deployment packages

    Now that we are at a point where we are successfully pre-compiling our web site on the server side, let’s take a quick look at deployment packages. Having deployment packages is important for two reasons. The first and most noticeable is being able to deploy our site easily when the time comes. But even more important is the ability to perform a roll back on our site if need be. If we create a package for every build that we perform and save it into an archive directory we can easily push up or take down code as need be.

    A deployment package in a web site's case can be as simple as creating a separate folder for each iteration of code with a unique name (usually a name that corresponds to the tag number in the source control and the date it was created). A slightly more complex package comes in the form of an archive (.zip or .rar) that contains all of the compiled web files. And an even more complex method comes in the form of creating an actual executable file used to install a web site on a server via a graphical wizard based installer.

    Regardless of the environment we are performing our builds in, I prefer not to manage packages as loose files scattered about a folder structure. In our case, since we will be performing all of our deployments and roll backs via NAnt, we will stick to working with single file archives for each package. We won’t need the added complexity of creating a true installer.

    This step will be introduced into our build process after our precompilation step. We will create an archive of all the compiled files named LatestBuild.rar. We will also create an archive stamped with the latest revision number from the source control. And if a LatestBuild file exists when we start this process and the revision number has changed, then we will also create a PreviousBuild file. The PreviousBuild file will be a copy of the last LatestBuild archive. We will use this to roll back to if we need to.

    In order to get this task started we need to first make sure that WinRAR is installed on the build server (get this from rarlab.com). Next we need to set up some variables in our build file to hold some data that we will get from our SVN repository. We need an svninfo variable to hold the results of an info call to the repository. Next we need a variable to hold our current repository revision and our last changed revision. These will come in the form of svncurrentrevision and svnlastchangedrevision. Add the following empty properties (they will be populated dynamically later) to the variables section.
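
    Something like this (empty values, filled in later by the svninfo target):

    <!-- populated dynamically by the svninfo target -->
    <property name="svninfo" value="" />
    <property name="svncurrentrevision" value="" />
    <property name="svnlastchangedrevision" value="" />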

    Next we need to create a new target called “svninfo”. This target will be responsible for communicating with our SVN repository to get the latest statistics. Once we have these statistics in hand we can then use regex to read the needed data into the variables that we just created. Let’s start by creating an exec task to communicate with svn and output the results of an info call to a local file.
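
    A sketch of that task; svn.executable is an assumed property name pointing at the svn.exe in our binaries folder, and the svn.info output file is written into the build folder:

    <!-- inside the new "svninfo" target -->
    <exec program="${svn.executable}" workingdir="${trunk.dir}" output="${trunk.dir}\build\svn.info">
      <arg value="info" />
    </exec>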

    Notice that we make a call to the svn executable in the exec task. This is a new property that points to the path of the svn executable in the binaries directory. We then specify that we want to create an svn.info file to store the output in. We then pass an info argument to svn to get the general info about the state of the repository.

    Here are the contents of the svn.info file after running this task.
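
    The output looks roughly like this (all of the values below are made up for illustration; yours will show your own repository URL and revisions):

    Path: .
    URL: https://yourproject.svn.codeplex.com/svn
    Repository Root: https://yourproject.svn.codeplex.com/svn
    Repository UUID: 00000000-0000-0000-0000-000000000000
    Revision: 25301
    Node Kind: directory
    Last Changed Author: asiemer
    Last Changed Rev: 25297
    Last Changed Date: 2009-09-01 10:15:32 -0700 (Tue, 01 Sep 2009)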

    Now that we are able to get the latest information about our repository we have something to build on. Next we want to read this file's contents into a variable so that we can parse out the value that we need. We do this with a loadfile task like so.
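
    A sketch, assuming the same svn.info path used above:

    <!-- read the captured svn output into the svninfo property -->
    <loadfile file="${trunk.dir}\build\svn.info" property="svninfo" />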

    Notice that we are loading the file that we just created. We then store the data in the svninfo variable. Now we can attempt to get the latest and last revision. We will do this with a regex task. Two regex tasks specifically.
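
    A sketch of the two tasks:

    <!-- pull the current and last changed revision numbers out of the svn info text -->
    <regex pattern="Revision: (?'svncurrentrevision'\d+)" input="${svninfo}" />
    <regex pattern="Last Changed Rev: (?'svnlastchangedrevision'\d+)" input="${svninfo}" />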

    Notice that we specify a pattern that looks for a specific bit of text. Inside a normal regex pattern you will also see (?’{variablename}’\d+) syntax. This essentially says to put the value that is found at this point in the pattern into the specified variable name. In this case we are sticking the revision number into our svncurrentrevision variable and the last changed revision into the svnlastchangedrevision variable.

    We can now add the svninfo target to the depends list of the cruise target. This means that after the precompileWeb target runs we will load our variables regarding our revision data. If you intend to use these variables prior to the call to svninfo then you will need to move this target up the chain where appropriate.

    With these tasks complete we can now create a createPackages target. This target will be responsible for creating our deployment packages. We need to create a LatestBuild package, a package stamped with the current revision, a SqlScripts package, and possibly a PreviousBuild package.

    The PreviousBuild package is only created when the current revision is different from the last revision. It is also only created if a LatestBuild package already exists. The reason for this is that a build can be run for lots of reasons other than the repository receiving updates. Also, if there is no LatestBuild then we don’t yet need a PreviousBuild! In order to achieve this slight complexity we will incorporate a few simple tasks.
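
    A sketch of those tasks inside the createPackages target; packages.dir is an assumed property name pointing at the archive folder:

    <!-- only archive a PreviousBuild when the revision has actually changed... -->
    <if test="${not file::exists(path::combine(packages.dir, 'KnowledgeExchange_' + svncurrentrevision + '.rar'))}">
      <!-- ...and only when there is a LatestBuild to preserve -->
      <if test="${file::exists(path::combine(packages.dir, 'LatestBuild.rar'))}">
        <copy file="${packages.dir}\LatestBuild.rar" tofile="${packages.dir}\PreviousBuild.rar" overwrite="true" />
      </if>
    </if>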

    In the if statement we are checking to see if a file doesn’t exist. Specifically we are checking to see if there is an archive already created for the current revision of code. If there is then we don’t need to create a latest build as that most likely already exists. Next we attempt to create a copy of the LatestBuild.rar archive as a PreviousBuild.rar archive. We can only do this though if the LatestBuild.rar archive already exists (as we can’t copy a file that doesn’t exist!).

    The next step is to create a SqlScripts.rar archive. We want to do this because in order to truly be able to deploy or roll back we need to be able to have the state of both the web site and the database for a given revision. The only way to do this is to include the state of the sql scripts in the archive with the precompiled web site. The easiest way for us to include this in the LatestBuild archive is to create a SqlScripts.rar file directly in the web deployment directory where we created our precompiled web site. Then when we create our LatestBuild.rar archive the SqlScripts.rar file will automatically be included.
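
    A sketch of the exec task; rar.executable and sqlscripts.dir are assumed property names pointing at WinRAR's command line rar.exe and at the SQL script repository folder:

    <!-- a = add to archive, -ed = skip empty folders, -m5 = best compression -->
    <exec program="${rar.executable}" workingdir="${webDeployment.dir}">
      <arg value="a" />
      <arg value="-ed" />
      <arg value="-m5" />
      <arg value="${webDeployment.dir}\SqlScripts.rar" />
      <arg value="${sqlscripts.dir}\*.sql" />
    </exec>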

    Here we are making a call to the WinRAR program. This might trip you up if you don’t pay close attention to the fact that we are referencing the rar.exe file instead of the WinRar.exe file; rar.exe is the command line utility for WinRAR. In our exec task we point to the rar.exe file. Then we specify that the workingdir is our web deployment directory. Next come the command line arguments for the rar utility. Here we specify the “a” command, which means that we want to create an archive. The -ed flag states that we won’t put any empty folders into the archive. And the -m5 flag states to use the best compression method available when creating the archive. We then specify where we want the SqlScripts.rar file to be created, which in this case is in the web deployment directory inside the build directory.

    With this out of the way we can then create the LatestBuild.rar archive. Technically we don’t need to do this every time…but it won’t hurt us if we do. We only need to do this when the revisions change! Here is the target for this task.
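
    A sketch of this task, again using the assumed rar.executable and packages.dir properties; the -r switch recurses into subfolders so the bin directory makes it into the archive:

    <!-- archive the whole webDeployment folder (SqlScripts.rar included) as LatestBuild.rar -->
    <exec program="${rar.executable}" workingdir="${trunk.dir}\build">
      <arg value="a" />
      <arg value="-ed" />
      <arg value="-m5" />
      <arg value="-r" />
      <arg value="${packages.dir}\LatestBuild.rar" />
      <arg value="webDeployment\*" />
    </exec>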

    Notice that the workingdir has changed. Also the name of the archive and the location that it is saved to are different. Other than that the use of the rar.exe itself is the same in that we are creating an archive with the same options. Notice that inside the LatestBuild.rar we should also have a SqlScripts.rar archive.

    With the LatestBuild.rar archive created we can then move to the next step which is to create a tagged archive. We will do this by creating a copy of the LatestBuild.rar archive which is then renamed with the current revision number. This is another task that can be performed every time – but doesn’t really need to be. We use the ${svncurrentrevision} variable that we created and populated earlier to rename the copied archive.
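
    A sketch of that copy task:

    <!-- stamp a copy of the latest package with the revision it was built from -->
    <copy file="${packages.dir}\LatestBuild.rar"
          tofile="${packages.dir}\KnowledgeExchange_${svncurrentrevision}.rar"
          overwrite="true" />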

    The file property specifies which file to copy. The tofile property specifies the new file name (and path if you choose to relocate it).

    Deploying after a successful build

    Now that we have a working archival process we can move on to an automatic deployment after a successful build. This may sound a bit complex but realistically it is just a matter of extracting the precompiled files from the LatestBuild.rar archive into the root of the web site directory. Given that the contents of the website may change over time we also want to delete and recreate the web site directory.
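
    A sketch of the pieces, assuming a hypothetical devwebsite.dir path and the rar.executable and packages.dir properties from earlier:

    <property name="devwebsite.dir" value="E:\websites\KnowledgeExchangeDev" />

    <target name="deployToDev" description="Deploys the latest package to the development web site.">
      <!-- start with a clean web site folder -->
      <delete dir="${devwebsite.dir}" if="${directory::exists(devwebsite.dir)}" />
      <mkdir dir="${devwebsite.dir}" />

      <!-- x = extract with full paths, -y = answer yes to any prompts -->
      <exec program="${rar.executable}" workingdir="${devwebsite.dir}">
        <arg value="x" />
        <arg value="-y" />
        <arg value="${packages.dir}\LatestBuild.rar" />
      </exec>

      <!-- the database was already rebuilt from scripts earlier in the build, so the script archive isn't needed on the site -->
      <delete file="${devwebsite.dir}\SqlScripts.rar" />
    </target>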

    As you can see above we first defined a devwebsite.dir property that points to the base of our development web site. Then we created a new target called deployToDev. Inside this new target we delete the web directory (to make sure we are installing to a clean site). Then we recreate it. Next we shell out to the rar.exe utility to extract all the contents of the LatestBuild.rar archive with directories intact. We also specify the -y flag to automatically answer yes to any queries the rar utility may ask us about (such as overwriting files). Then we delete our SqlScripts.rar archive as we created our database from scripts in a previous target.

    One click roll backs

    Now that all of this fancy automation is in place to get latest, compile, test, build up the database, etc. we now need some way to roll back the changes just as quickly. We can do this by adding a new project to our CruiseControl.net configuration. And inside that project we will only add a call to our NAnt build configuration.
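
    A sketch of the second project; it reuses the same build file and simply points at a different target (the paths are the same placeholders used earlier):

    <project name="Knowledge Exchange Rollback">
      <webURL>http://buildserver/ccnet/</webURL>
      <tasks>
        <nant>
          <executable>E:\data\projects\KnowledgeExchange\trunk\binaries\nant\nant.exe</executable>
          <baseDirectory>E:\data\projects\KnowledgeExchange\trunk\build</baseDirectory>
          <buildArgs>-D:environment=devserver</buildArgs>
          <buildFile>KnowledgeExchange.build</buildFile>
          <targetList>
            <target>rollbackToLastRevision</target>
          </targetList>
        </nant>
      </tasks>
    </project>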

    Notice that the target that was specified is rollbackToLastRevision. This target will live in our NAnt build file and is in charge of performing almost identical tasks to our deployToDev target, up to a point. The first big exception is that it has two database related dependencies; in order for us to truly roll back we also need to roll back the database! Another exception is that this target will first check to see if a PreviousBuild.rar file exists, as there is the (rare) possibility that it does not. If we do have that file then we delete our devwebsite.dir directory to ensure our website starts out clean. Then we recreate it. Then we extract the contents of the PreviousBuild.rar archive, including our SqlScripts.rar archive. Next we create a sql folder in our devwebsite.dir folder so that we have somewhere to extract our sql scripts to. Then we extract our sql scripts to the sql folder. Then we use our custom ExecuteSqlFiles task to run the previous build's sql files. And finally we have to clean up after ourselves by deleting the sql folder and the SqlScripts.rar archive.
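
    A sketch of that target; the two database dependency names are hypothetical stand-ins for the database targets from the previous article, and the ExecuteSqlFiles step is only indicated with a comment since its attributes were defined there:

    <target name="rollbackToLastRevision"
            depends="rollbackDatabase, buildDatabase"
            description="Restores the previous package and its matching database scripts.">
      <!-- we can only roll back if a previous package was ever created -->
      <if test="${file::exists(path::combine(packages.dir, 'PreviousBuild.rar'))}">
        <!-- start with a clean web site folder -->
        <delete dir="${devwebsite.dir}" if="${directory::exists(devwebsite.dir)}" />
        <mkdir dir="${devwebsite.dir}" />

        <!-- restore the previous web site, which includes its SqlScripts.rar -->
        <exec program="${rar.executable}" workingdir="${devwebsite.dir}">
          <arg value="x" />
          <arg value="-y" />
          <arg value="${packages.dir}\PreviousBuild.rar" />
        </exec>

        <!-- extract the archived scripts into a temporary sql folder -->
        <mkdir dir="${devwebsite.dir}\sql" />
        <exec program="${rar.executable}" workingdir="${devwebsite.dir}\sql">
          <arg value="x" />
          <arg value="-y" />
          <arg value="${devwebsite.dir}\SqlScripts.rar" />
        </exec>

        <!-- run the previous build's scripts here with the custom ExecuteSqlFiles task from the last article -->

        <!-- clean up after ourselves -->
        <delete dir="${devwebsite.dir}\sql" />
        <delete file="${devwebsite.dir}\SqlScripts.rar" />
      </if>
    </target>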

    Once all of this is complete and functioning we can take one additional step and add this new target to our ClickToBuild.bat file, as another option alongside the existing build targets.

    Summary

    In this article we took a look at what continuous integration is and how to achieve it using CruiseControl.net. We created a CruiseControl.net project and connected that project to our SVN code repository. We then took a look at how to trigger our project and how to send out email notifications each time a build occurred. Next we took a look at how we could utilize our NAnt build file on the server. We then added web site precompilation to our build configuration. With that complete we took a look at how to create deployment packages in such a way that we would have versioned packages as well as a current and previous package. Following this we added automated deployment to our process. We then created a new project to handle the concept of a one click roll back. Finally we added the one click roll back to our ClickToBuild.bat file to make this functionality even more accessible.

    In the next article we will take a look at the new SketchFlow tool released in the latest version of Microsoft Expression. With this tool we will attempt to create a sitemap and functional mock up of our site. This will tell us in an interactive way how each page looks and feels as well as how they link to one another. At the end of the next article we should have a fully functioning demo of the site that we will create in the remainder of this series.


    About Andrew Siemer

    I am a 33 year old, ex-Army Ranger, father of 6, geeky software engineer that loves to code, teach, and write. In my spare time (ha!) I like playing with my 6 kids, horses, and various other animals.

    This author has published 29 articles on DotNetSlackers. View other articles or the complete profile here.
