Published: 18 Nov 2009
By: Andrew Siemer

In this article we are going to look at how distributable our current code base is. We will find that even with all the refactoring and modifications that we have done we are still pretty married to a fairly hardwired infrastructure. If one piece of our code requires more resources than any other we can’t simply scale out that bit.


The Stack Overflow Inspired Knowledge Exchange Series

  • TOC: Check out the project homepage of this series to follow our journey in creating a site inspired by the famous StackOverflow website.
  • Introduction

    In the last article we discussed how we can enhance the power of the Dependency Injection pattern by implementing an Inversion of Control (IoC) container. We discussed what an IoC container can be used for and how it works, and then implemented one in our code, specifically StructureMap. Once StructureMap was plugged in and working we refactored our code to take advantage of the new features and functionality that StructureMap provides. With that refactoring complete we addressed the controller issue we found in the last article by creating a custom ControllerFactory that not only locates and returns a controller, but also instantiates its dependencies in an automated fashion using StructureMap. Finally, we wrote unit tests around our HomeController to demonstrate that our controller is just as flexible as the rest of our code base.

    In this article we are going to look at how distributable our current code base is. We will find that even with all the refactoring and modifications we have done, we are still pretty married to a fairly hardwired infrastructure: if one piece of our code requires more resources than any other, we can't simply scale out that bit. We will then implement a WCF client and service layer that allows us to break up our Core tier in a distributed manner. The nice part, as you will see, is that our code has become considerably more flexible than when we first got started: these changes can be implemented without impacting the rest of our existing code, by making the client side conform to the same interface (IPostService) that our current PostService does.

    Why should I aim to build a distributable application?

    If you think about a standard website, where all of the files sit in one root directory operating under one process, there are not a whole lot of options for scaling the application. The first place we usually look is standing up many copies of the application, each running under its own process on its own web server. As we all know this makes the design a bit touchy in that keeping session state in-process is no longer feasible; we then need to centralize our session into a single state server or into a database. What happens, though, when the capabilities of your state server are outgrown? Or when a SQL Server that you have to connect to and read from on every request is no longer a good approach? The next obvious step is to use a product such as Memcached (or Memcached Win32 for Windows environments) or the new Microsoft Velocity. These platforms give your session and caching the same "farming" treatment that pushing a single-server web site out to multiple web servers gives the site itself. Usually this is enough! But what happens when it isn't enough?

    Take a complex order processing pipeline or a large insurance claims application. These types of applications might have a large amount of their processing happening in the business tier or the data access tier. Another approach to scaling out your application is to push each physical tier onto its own platform as well. That puts you in a situation where the presentation tier lives in a farm, the business logic tier lives in a farm, and the data access tier lives in a farm. To me this approach tends to add quite a bit of complexity without the overall reward that a slightly different approach might bring: when you push a whole tier out onto its own server or servers, you give an entire segment of your application room to scale when only a sliver of that segment may have actually needed it.

    For that reason we will keep our physical tiers intact conceptually, but build into our application a mechanism that allows us to distribute slivers of it where we need to. This lets us run the application on one server and, when we need to, push out a specific portion of our code. In this case we will assume that the PostService we have been working with throughout this series is eating a lot of CPU cycles while the rest of our code is performing normally. If we can push the PostService out to its own server, or a farm of servers, the rest of our application can carry on without issue.

    What do we need to add to make this happen?

    If you think about our current application, we are already set up for pluggable components. By that I mean that since we use interfaces to define our dependencies, and StructureMap to inject them, we can create another IPostService implementation that handles the distributed scenario and simply plugs in. And since we are working against a predefined interface, there is not much more to discuss on that front. That leaves us with one consideration: the implementation of our disconnected environment.

    Whereas our PostService is a single class that implements the methods defined by the IPostService interface, our disconnected scenario will need to be a bit more complex. We are going to need a server side service, or set of services, that exposes our business logic. We will also need a client that knows how to interact with those services. The client portion is the code that implements the IPostService interface and is exposed to our controller (in this scenario). The service installed on the remote server will use our current PostService implementation to interact with our data access layer, etc.

    The technology that we will use to create this communication channel between the client application (our ASP.NET MVC site) and the server application (the services exposed on the remote server) will be WCF (Windows Communication Foundation). This will allow us to quickly and easily stand up this communication channel. Let's see it.

    Creating the service on the server

    The first thing we need to do is create a new WCF project to hold our services. Right click on the solution, add a new project of type WCF Service Application, and call it RemoteServices. Then navigate to the project's properties so that we can change the default namespace and assembly name; we will set both to AndrewSiemer.KnowledgeExchange.RemoteServices (at least I will!). Next, delete the Service1 service that is generated as part of the project and add a new WCF service called PostRemoteService.

    This new PostRemoteService will have the same methods as the PostService does. In this case that means we need a GetAllPosts method. To return Post objects we will need to add a reference to the Domain project. Then we can write a quick little GetAllPosts() method that returns an array of Post objects (an array rather than a generic list, since a list doesn't travel over the wire as predictably as an array does).
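
    Jumping ahead slightly to the shape we land on once StructureMap is wired into WCF below, a sketch of the service looks like this (the constructor injected IPostService is what the next few sections make possible, and the namespaces are assumptions following our AndrewSiemer.KnowledgeExchange.{Project} convention):

        using System.Linq;
        using AndrewSiemer.KnowledgeExchange.Domain;   // Post lives here
        // IPostService lives in the Core project; its namespace is assumed to
        // follow the same AndrewSiemer.KnowledgeExchange.{Project} convention.

        public class PostRemoteService : IPostRemoteService
        {
            private readonly IPostService _postService;

            // StructureMap will supply this dependency once we wire it into WCF below.
            public PostRemoteService(IPostService postService)
            {
                _postService = postService;
            }

            public Post[] GetAllPosts()
            {
                // Hand the call off to our existing business logic and convert the
                // result into an array for the trip over the wire.
                return _postService.GetAllPosts().ToArray();
            }
        }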

    And once we have this method in place we also need to update our IPostRemoteService interface (might have done that first...but oh well).
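
    The contract itself is tiny; something along these lines will do (Post comes from the Domain project, and WCF's DataContractSerializer can handle it as long as the type is serializable):

        using System.ServiceModel;
        using AndrewSiemer.KnowledgeExchange.Domain;   // Post (namespace assumed)

        [ServiceContract]
        public interface IPostRemoteService
        {
            [OperationContract]
            Post[] GetAllPosts();
        }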

    Now we are ready to add the guts to our service. We now have a moral dilemma. We could just spin up a new instance of our PostService and our PostRepository and feed out the data from there. But after having gone through all this work to disconnect all the moving parts of our application it doesn't seem right to give up now. For that reason let's take a look at what it takes to get StructureMap to work with a WCF service.

    Making StructureMap and WCF play nice

    Now that we have decided to hook up StructureMap instead of going straight to the implementation (good for you!) we can take a look at what it takes to get things working. To some degree I am going to work off of a post by Jimmy Bogard entitled "Integrating StructureMap with WCF". However, towards the end, where we need to wire in StructureMap's default types, we will side step his post a bit to follow the method we have used to wire up StructureMap in other sections of this project.

    In order to get StructureMap and WCF working together we need to create a custom instantiation behavior using IInstanceProvider, whose GetInstance and ReleaseInstance methods let us control how WCF creates and releases service instances. In order to plug that in, though, we also need to create a custom implementation of IServiceBehavior first.
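
    Here is a sketch of that behavior, along the lines of Jimmy Bogard's post (the class names StructureMapServiceBehavior and StructureMapInstanceProvider are my own):

        using System.Collections.ObjectModel;
        using System.ServiceModel;
        using System.ServiceModel.Channels;
        using System.ServiceModel.Description;
        using System.ServiceModel.Dispatcher;

        public class StructureMapServiceBehavior : IServiceBehavior
        {
            public void ApplyDispatchBehavior(ServiceDescription serviceDescription,
                                              ServiceHostBase serviceHostBase)
            {
                // Replace WCF's default instance provider with one that asks
                // StructureMap to build the service (and its dependencies).
                foreach (ChannelDispatcherBase cdb in serviceHostBase.ChannelDispatchers)
                {
                    var channelDispatcher = cdb as ChannelDispatcher;
                    if (channelDispatcher == null) continue;

                    foreach (EndpointDispatcher endpointDispatcher in channelDispatcher.Endpoints)
                    {
                        endpointDispatcher.DispatchRuntime.InstanceProvider =
                            new StructureMapInstanceProvider(serviceDescription.ServiceType);
                    }
                }
            }

            public void AddBindingParameters(ServiceDescription serviceDescription,
                                             ServiceHostBase serviceHostBase,
                                             Collection<ServiceEndpoint> endpoints,
                                             BindingParameterCollection bindingParameters)
            {
                // Nothing to do here.
            }

            public void Validate(ServiceDescription serviceDescription,
                                 ServiceHostBase serviceHostBase)
            {
                // Nothing to do here.
            }
        }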

    Next we need to create a custom implementation of IInstanceProvider which the IServiceBehavior references.
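
    And a sketch of the instance provider itself (ObjectFactory is StructureMap's static facade, so anything the service's constructor asks for gets resolved for us):

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Channels;
        using System.ServiceModel.Dispatcher;
        using StructureMap;

        public class StructureMapInstanceProvider : IInstanceProvider
        {
            private readonly Type _serviceType;

            public StructureMapInstanceProvider(Type serviceType)
            {
                _serviceType = serviceType;
            }

            public object GetInstance(InstanceContext instanceContext)
            {
                return GetInstance(instanceContext, null);
            }

            public object GetInstance(InstanceContext instanceContext, Message message)
            {
                // Let StructureMap build the service, injecting IPostService,
                // IPostRepository, and anything else the constructor asks for.
                return ObjectFactory.GetInstance(_serviceType);
            }

            public void ReleaseInstance(InstanceContext instanceContext, object instance)
            {
                // Nothing special to clean up for our per-call instances.
            }
        }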

    With these classes created we then need to plug them into WCF. We can do this via attributes, a custom service host, or configuration. If we took the attribute route we would have to decorate every service that we wanted to use this implementation with. If we used configuration we would still have to specify each service's configuration. Instead we will use the custom service host option.
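
    The custom service host only needs to attach our behavior as it opens; something like this will do (again, the class name is my own):

        using System;
        using System.ServiceModel;

        public class StructureMapServiceHost : ServiceHost
        {
            public StructureMapServiceHost(Type serviceType, params Uri[] baseAddresses)
                : base(serviceType, baseAddresses)
            {
            }

            protected override void OnOpening()
            {
                // Add the StructureMap behavior so every endpoint gets our
                // custom instance provider.
                Description.Behaviors.Add(new StructureMapServiceBehavior());
                base.OnOpening();
            }
        }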

    With a custom service host created we then need to create a host factory. If you are following along with Jimmy Bogard's post, this is where we change course. In his implementation StructureMap's configuration is performed directly in the StructureMapServiceHostFactory(); we are going to change that part. If you have read through the previous articles in this series you will be familiar with creating custom {NameSpaceName}Registry and Register{NameSpaceName} classes.

    Let's get started by creating a Structure folder in our RemoteServices project. In there you will need to create a new class named RegisterRemoteServices which will take care of wiring up the appropriate default types for StructureMap to use.
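
    Sticking with the registry/registration convention from the earlier articles, a sketch might look like the following. The exact registrations will depend on what your Core and data access registries already set up, and newer StructureMap versions spell the DSL For<T>().Use<T>() instead:

        using StructureMap;
        using StructureMap.Configuration.DSL;
        // IPostService and PostService come from the Core project.

        public class RemoteServicesRegistry : Registry
        {
            public RemoteServicesRegistry()
            {
                // On the server side the remote service hands off to our existing
                // business logic, so we register the same defaults the web site used.
                ForRequestedType<IPostService>().TheDefaultIsConcreteType<PostService>();
            }
        }

        public class RegisterRemoteServices
        {
            public RegisterRemoteServices()
            {
                ObjectFactory.Configure(x => x.AddRegistry(new RemoteServicesRegistry()));
            }
        }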

    Then we can create a StructureMapServiceHostFactory which will look like this.
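
    Here is a sketch of the factory; rather than configuring StructureMap inline the way Jimmy's post does, it simply news up our RegisterRemoteServices class:

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Activation;

        public class StructureMapServiceHostFactory : ServiceHostFactory
        {
            public StructureMapServiceHostFactory()
            {
                // Wire up StructureMap's default types the same way the rest of
                // the application does, via our registration class.
                new RegisterRemoteServices();
            }

            protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
            {
                return new StructureMapServiceHost(serviceType, baseAddresses);
            }
        }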

    From here you should be able to build your project to ensure that everything is working. Finally we need to tell our service to use this new custom StructureMapServiceHostFactory. We will do this in the .svc file for our service.
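
    That amounts to adding a Factory attribute to the ServiceHost directive in PostRemoteService.svc (the namespaces shown here are assumptions; adjust them to match where you placed the factory):

        <%@ ServiceHost Language="C#" Debug="true"
            Service="AndrewSiemer.KnowledgeExchange.RemoteServices.PostRemoteService"
            CodeBehind="PostRemoteService.svc.cs"
            Factory="AndrewSiemer.KnowledgeExchange.RemoteServices.Structure.StructureMapServiceHostFactory" %>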

    Creating a service client to consume the WCF service

    Now that we have our WCF service defined, giving us a way to get all posts, and StructureMap is plugging in our dependencies in the same manner as the rest of our code base, we can start to look at creating a client to consume the PostRemoteService. To do this we will need to add another new class library project (don't worry... this will be the last one, short of some test projects). I named my project RemoteServiceClient. Of course we need to update the namespace and assembly name in the same manner as our other projects; I renamed mine to AndrewSiemer.KnowledgeExchange.RemoteServiceClient.

    Next we need to rename the class that was added for us to PostServiceClient. This is the code that will conform to our IPostService interface, so we need to make this class implement IPostService. In order to do that we need to add references from this project to the Core and Domain projects. Then we can implement the GetAllPosts method that the IPostService interface defines.

    With this complete we can add a service reference to the RemoteServiceClient project that points it at our RemoteServices project. Do this by right clicking on the RemoteServiceClient project and selecting Add Service Reference. This will open a configuration window. Enter the local URL of the service in the address box, select PostRemoteService, change the namespace to PostRemoteService, and hit OK.

    Once this reference is in place we can access the proxy that was created for us, ironically named PostRemoteServiceClient. From an instance of that we can then get a list of posts.
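
    Putting that together, PostServiceClient ends up looking something like the sketch below. I am assuming that IPostService.GetAllPosts returns a List<Post> and that the service reference reused our Domain Post type (the default when the assembly is referenced):

        using System.Collections.Generic;
        using System.Linq;
        using System.ServiceModel;

        public class PostServiceClient : IPostService
        {
            public List<Post> GetAllPosts()
            {
                // PostRemoteServiceClient is the proxy generated by Add Service Reference.
                var proxy = new PostRemoteService.PostRemoteServiceClient();
                try
                {
                    return proxy.GetAllPosts().ToList();
                }
                finally
                {
                    // Close the channel politely; a faulted channel has to be aborted instead.
                    if (proxy.State == CommunicationState.Faulted)
                        proxy.Abort();
                    else
                        proxy.Close();
                }
            }
        }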

    Plugging your client into the web site

    Now that we have a RemoteServiceClient that conforms to IPostService, we should be able to simply plug it into our application, right? Well, sort of. We need to add a reference to StructureMap and then create a couple of classes: the {Type}Registry and Register{Type} classes for the client.
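
    The registry simply points IPostService at the new client; a sketch, following our existing naming convention:

        using StructureMap;
        using StructureMap.Configuration.DSL;
        // IPostService comes from the Core project.

        public class RemoteServiceClientRegistry : Registry
        {
            public RemoteServiceClientRegistry()
            {
                // Anyone asking for IPostService now gets the WCF backed client.
                ForRequestedType<IPostService>().TheDefaultIsConcreteType<PostServiceClient>();
            }
        }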

    And...
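
    ...the matching registration class wraps that registry up in the same style as our other Register{Type} classes (again a sketch):

        using StructureMap;

        public class RegisterRemoteServiceClient
        {
            public RegisterRemoteServiceClient()
            {
                ObjectFactory.Configure(x => x.AddRegistry(new RemoteServiceClientRegistry()));
            }
        }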

    Now that we have all the plumbing in place we should be able to register this class in our web application (via our Register{Type} class) and add the appropriate configuration to our web site. Let's see what that takes.

    First off we need to add a reference to our RemoteServiceClient. Then we need to open up our Global.asax file and instantiate our RegisterRemoteServiceClient class which will tell StructureMap what the default type is for a given request.
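
    In Global.asax that is a single extra line in Application_Start; the other startup calls shown as comments below are the ones we already had in place:

        public class Global : System.Web.HttpApplication
        {
            protected void Application_Start()
            {
                // ...existing route registration, controller factory setup,
                // AutoMapper bootstrapping, and RegisterCore() style calls...

                // Tell StructureMap that IPostService should now resolve to the
                // WCF backed PostServiceClient.
                new RegisterRemoteServiceClient();
            }
        }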

    What happens when I specify two defaults for a given type?

    Simple rule - last one configured wins!

    Then we need to copy our service endpoint definition from our app.config in the RemoteServiceClient project into our web site's web.config.
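
    The block you are copying lives under <system.serviceModel> and will look roughly like this; the address, binding, and names are whatever Add Service Reference generated for you, and the matching <bindings> section needs to come across as well:

        <system.serviceModel>
          <client>
            <endpoint address="http://localhost:1234/PostRemoteService.svc"
                      binding="basicHttpBinding"
                      bindingConfiguration="BasicHttpBinding_IPostRemoteService"
                      contract="PostRemoteService.IPostRemoteService"
                      name="BasicHttpBinding_IPostRemoteService" />
          </client>
        </system.serviceModel>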

    And then we need to copy our connection string from our web site's web.config to our WCF service project's web.config.

    Lastly, we need to ensure that our WCF service project initializes our application in the same way that the website does. Currently we are missing the AutoMapper initialization. To handle that we will need to add a Global.asax file to our RemoteServices project. Inside of the Application_Start method we need to add our AutoMapperBootStrapper.Initialize() call.
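
    The new Global.asax for the RemoteServices project only needs the one call (AutoMapperBootStrapper is the same bootstrapper class the web site already uses at startup):

        using System;

        public class Global : System.Web.HttpApplication
        {
            protected void Application_Start(object sender, EventArgs e)
            {
                // Configure AutoMapper for the service host the same way the
                // web site configures it for the front end.
                AutoMapperBootStrapper.Initialize();
            }
        }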

    OMG - that is a lot of work!

    It may seem like a lot of work for little return. However, if you were to list all of this work out in short form you would find that it is really just a lot of small moving pieces that need to be wired up. Let's discuss what this sort of functionality gives you now that we have it in our application.

    Imagine that you are working on your local box, developing as usual. You can quickly plug in your local PostService implementation by changing your Global.asax file: instead of calling new RegisterRemoteServiceClient() you call new RegisterCore(). That's it. When you deploy to production you can leave it configured that way. Then, down the road, when you find that this sliver of code is causing a real performance bottleneck in your application, you can simply flip the switch in your Global.asax to point back to your remote services, which immediately offloads the stress of that code to a different box.

    Is it really that simple?

    No. No it is not really that simple. Obviously there is more to this option than meets the eye! If you created a service for every path through your business layer you would have a whole new tier of complexity to manage with every new feature you add. And since you probably don't want to hand edit your Global.asax file all the time, you would also need to update your automated build process to take care of this as the code is deployed through different environments.

    The key point of this article is that our code did indeed have one more level of enhancement available to it. It also demonstrated that the previous article's code base was immediately flexible enough to allow the concept of distributable components to be plugged in with very little code added to our existing code base. Think about it: other than the new projects and the new code in those projects, all we had to do was add some references to existing projects, add some configuration values here and there, and add a line to our Global.asax file. Adding the references was probably the most intrusive thing that happened. More importantly, this demonstrates that if you develop all of your code in the manner illustrated in the last article, it would take you only a handful of minutes to write the code needed to distribute an ailing service to its own server (plus some time to set up the new server, install the services, and so on; there is more to this than meets the eye). Not bad though.

    Analysis

    As you can see in the dependency graph, we have added a considerable amount of complexity by introducing WCF and distributed concepts into the application. But at the same time this distributed model cleanly severs the tie between the front end and the back end. The only reason we even have a reference to the Core project is that that is where the IPostService interface lives. If we moved all of the interfaces up into the Domain project, all ties to the back end could go away.

    Pros

    Your code is fully pluggable and any and all aspects of your middle tier can be configured as a distributable service.

    Cons

    This does add a large amount of complexity to your system. However, if you need distributed computing this approach can't be beat!

    Comparison Chart

    Table 1: Comparison Chart

    Coding Concepts

    Yes

    Sorta

    No

    Fast/Easy to develop: Can we generate the end product quickly?

    X

    Testable: Can we write tests around the majority of the projects code?

    X

    Flexible for refactoring: Can we easily refactor the code base to add new concepts?

    X

    Well abstracted: Do the users of your code only know what they need to?

    X

    Well encapsulated: Can you change the internals of code without impacting the users of that code?

    X

    Separation of concerns? Is your code well compartmentalized and only doing what it needs to?

    X

    DRY? Does your code follow the "Don't Repeat Yourself" motto?

    X

    SOLID? Does this code comply with the SOLID principles?

    S: Single Responsibility Principle - there should never be more than one reason for a class to change

    O: Open Closed Principle - should be open for extension but closed for modification

    L: Liskov Substitution Principle - functions that use pointers or references to base classes must be able to use objects of derived classes without knowing it

    I: Interface Segregation Principle - clients should not be forced to depend upon interfaces that they do not use

    D: Dependency Inversion Principle - high level modules should not depend upon low level modules. Both should depend upon abstractions. Abstractions should not depend upon details. Details should depend upon abstractions.

    X

    X

    X

    X

    X

    X

    Swappable? Can you swap out an entire layer down the road?

    X

    Distributable? Can you push complex processing of logical areas to a separate server easily to offload computing cycles and scale?

    X

    Summary

    In this article we discussed what distributed computing is and its importance to an application's scalability. We then discussed what it would take to bring our application into a distributable form. Then we made our application distributable by implementing a WCF based service as well as a client that could consume that service. We then discussed the pros and cons of such an approach.

    References

    Jimmy Bogard, "Integrating StructureMap with WCF"


    About Andrew Siemer

    I am a 33 year old, ex-Army Ranger, father of 6, geeky software engineer that loves to code, teach, and write. In my spare time (ha!) I like playing with my 6 kids, horses, and various other animals.

    This author has published 29 articles on DotNetSlackers.
