Published: 31 Oct 2008
By: Patrik Löwendahl

Patrik Löwendahl talks about modeling domains with the Entity Data Model.

With Visual Studio 2008 SP1, Microsoft released its long-awaited Entity Framework and Entity Data Model. The Entity Data Model lets developers create an object-oriented model of the data their applications need, and object-oriented programming becomes a first-class citizen in the Microsoft data access technology stack.

The Entity Data Model (EDM) is a powerful modeling tool, and as usual, with great power comes great responsibility. We as developers need some initial guidelines and best practices to lead our way into this new object-oriented approach to data access modeling.

I, for one, look a lot to Eric Evans' Domain-Driven Design [Evans] and the ideas he has put forward on how to model a domain effectively. How to create models based on Evans' ideas is what this article is about. We will examine a couple of Domain-Driven Design concepts, look at how the EDM supports (or doesn't support) them, and try to derive some EDM modeling best practices from these conclusions.

Model boundaries – the aggregate

One of the most frequently asked questions I see when I browse the forums on EDM topics is: "my model is slow when I drag my 150 tables into the designer". For these developers I would like to share a first piece of best-practice advice: avoid huge models in the first place.

Huge models have a serious disadvantage: it's really hard to tell where the scope of a transaction starts and ends. With interconnected relationships and lots of classes in a hierarchy, how can you be certain that you have actually set the right boundaries for the current transaction? How can you be sure that you aren't touching more than you need?

When you create a model, what you should strive for is to capture a well-defined and limited concept that you know will fit into a business transaction of the case you are currently trying to solve. In modeling terms, what you want is an Aggregate [Evans] with a root entity and a boundary. What I've found is that an EDM should never be larger than one, or maybe two closely related, such aggregates if it is to stay maintainable. So instead of one huge EDM, go for several smaller ones that each capture a specific concept of your application.

Figure 1: An aggregate


An added benefit is that the model will be easier to grasp, and you'll be able to understand and focus on single concepts much more quickly.

Figure 2: Small conceptual EDMs in a solution tree

Another common mistake I see developers make is to assume that every sub-part of a project looks at a model the same way, but the fact of the matter is that most don't. The shipping subsystem of your application might use and need an Order, but more often than not it's a slightly different Order than the same concept in the order management part of your application. It's the same data, agreed, but it's often looked upon in a different way.

The shipping subsystem might need the order address and the products it should ship, but it's not necessarily interested in billing or financial information. Re-using the order model in both subsystems clearly violates the "Interface Segregation Principle", which says: "Clients should not be forced to depend upon interfaces that they do not use". Evans also recognizes this, and in the "Strategic design" part of his book he talks extensively about the "Bounded context".

A bounded context is a boundary within which a model applies for a certain context or concept. So when building our models we should not chase re-use to the extreme. If one concept looks at a piece of data differently than another, chances are you will be better off creating two separate models than trying to squeeze as much of both concepts as possible into a single model.
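As a sketch of what this can look like in EDM terms, each bounded context can define its own Order entity in its own conceptual model, carrying only the properties that context cares about. The entity and property names below are illustrative, not taken from a real schema:

```xml
<!-- OrderManagement model: the Order as order management sees it -->
<EntityType Name="Order">
  <Key><PropertyRef Name="OrderId" /></Key>
  <Property Name="OrderId" Type="Int32" Nullable="false" />
  <Property Name="BillingAddress" Type="String" Nullable="false" />
  <Property Name="TotalAmount" Type="Decimal" Nullable="false" />
</EntityType>

<!-- Shipping model: the same underlying data, but only what shipping needs -->
<EntityType Name="Order">
  <Key><PropertyRef Name="OrderId" /></Key>
  <Property Name="OrderId" Type="Int32" Nullable="false" />
  <Property Name="ShippingAddress" Type="String" Nullable="false" />
</EntityType>
```

Both conceptual models can map to the same underlying Orders table; the point is that neither context is forced to depend on properties it never uses.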

Normalizing the model

When we work with databases, we usually follow normalization rules closely; they exist, among other reasons, to minimize structural and data duplication. When we create object-oriented models we should try to normalize for maximum reuse as well. The current version of the EDM supports two such modeling techniques - value objects and inheritance - but unfortunately the designer doesn't.

So to get to the more advanced modeling, we need to fire up the good old XML editor and write the EDM conceptual (CSDL), storage (SSDL), and mapping (MSL) files by hand.

Value objects

One way to normalize an object-oriented model is to re-use classes. In the domain-driven world this is called a "Value object", and in the Entity Framework it is called a "Complex type". The idea is simple: you have a concept that doesn't have an identity of its own and that is shared across multiple models; a classic example is an Address. Address should probably be handled, as a concept, in the same manner for an Order's shipping address as for a Customer's billing address. The basics are the same: you have a street, a zip code and a country, and validation is probably the same as well. So why spend time writing several address representations? Why not just reuse the one you already have?

As mentioned, in the Entity Framework you utilize this with the notion of a "Complex type", as in the following example:

Listing 1: A complex type mapping
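A minimal sketch of such a declaration in the conceptual (CSDL) part of an EDM, with illustrative type and property names:

```xml
<!-- CSDL: Address declared once as a complex type (no key of its own) -->
<ComplexType Name="Address">
  <Property Name="Street" Type="String" MaxLength="100" Nullable="false" />
  <Property Name="ZipCode" Type="String" MaxLength="10" Nullable="false" />
  <Property Name="Country" Type="String" MaxLength="50" Nullable="false" />
</ComplexType>

<!-- The same complex type reused on two different entities -->
<EntityType Name="Customer">
  <Key><PropertyRef Name="CustomerId" /></Key>
  <Property Name="CustomerId" Type="Int32" Nullable="false" />
  <Property Name="BillingAddress" Type="Self.Address" Nullable="false" />
</EntityType>
<EntityType Name="Order">
  <Key><PropertyRef Name="OrderId" /></Key>
  <Property Name="OrderId" Type="Int32" Nullable="false" />
  <Property Name="ShippingAddress" Type="Self.Address" Nullable="false" />
</EntityType>
```

Note that the complex type has no `<Key>` element - it has no identity of its own, which is exactly what makes it a value object rather than an entity.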

Which results in something like:

Figure 3: Reusing a Value Type


What we see in figure 3 is the possibility to have one conceptual Address and reuse it in multiple aggregates. The benefit is that I can now reuse my validation logic for all addresses in my application.
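The other normalization technique mentioned above, inheritance, is expressed in a similarly hand-edited way. As a sketch (again with illustrative names), a derived entity type points at its base via the `BaseType` attribute in CSDL:

```xml
<!-- CSDL: PremiumCustomer inherits the key and properties of Customer -->
<EntityType Name="Customer">
  <Key><PropertyRef Name="CustomerId" /></Key>
  <Property Name="CustomerId" Type="Int32" Nullable="false" />
  <Property Name="Name" Type="String" Nullable="false" />
</EntityType>
<EntityType Name="PremiumCustomer" BaseType="Self.Customer">
  <Property Name="DiscountRate" Type="Decimal" Nullable="false" />
</EntityType>
```

The derived type declares no key of its own; it shares the identity of its base, just as a subclass would in ordinary object-oriented code.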


What we've seen is that the conceptual model in the Entity Framework really is, and should be, a separate model from the storage model. It enables advanced object-oriented design and lets you model your application code in the way that best suits you.


About Patrik Löwendahl

Patrik works as an Architect at Sogeti in Sweden. He spends his days helping teams, projects and developers excel in the .NET space with a focus on architecture, code design, and the backend bits of the .NET world like WCF, NHibernate, LINQ to *. He is a MVP, member of Microsoft Extended Experts Tea...

