Dane Morgridge

Programmer, Geek, ASPInsider
A blog about code and data access




Wednesday, September 26, 2012 #

It’s been something I’ve wanted to do for quite some time, and I decided it was finally time. Yesterday, I launched a new web comic, “Code Monkey Kung Fu”. After being a programmer for more than ten years, I’ve come across quite a few hilarious situations and will be drawing on them for inspiration. I also have four kids, so they will probably produce a lot of material as well. My plan is to release on Tuesdays, with additional comics mixed in on occasion.

I hope you enjoy!


Wednesday, February 23, 2011 #

As some may have noticed, I have taken quite a liking to Ruby (and Rails in particular) recently. This last weekend I spoke at the NYC Code Camp on a comparison of ASP.NET and Rails, as well as giving an intro to Entity Framework talk.  I am speaking at RubyNation in April, have submitted to other Ruby conferences around the area, and am also doing a Rails and MongoDB talk at the Philly Code Camp in April. Before you start to think this is my "I'm leaving .NET" post: it isn't, so let me clarify.

I am not abandoning .NET, nor do I plan to any time in the near future.  I am simply branching out into another community based on a development technology that I very much enjoy.  If you look at my Twitter bio, you will see that I am into Entity Framework, Ruby on Rails, C++ and ASP.NET MVC, and not necessarily in that order.  I know you're probably thinking to yourself that I am crazy, which is probably true on several levels (especially the C++ part). I was actually crazy enough at the NYC Code Camp to show up wearing a Linux t-shirt, presenting with my MacBook Pro on Entity Framework, ASP.NET MVC and Rails. (I did get pelted in the head with candy by Rachel Appel for it, though.)

At all of the code camps I am submitting to this year, I will likely be submitting sessions on all four topics, and some sessions will be a combination of two or more.  For example, my "ASP.NET MVC: A Gateway To Rails?" talk touches ASP.NET MVC, Entity Framework Code First and Rails.

Simply put (and I talk about this in my MVC & Rails talk): learning and using Rails has made me a better ASP.NET MVC developer.

Just one example of this is helper methods.  When I started working with ASP.NET MVC, I didn't really want to use helpers and preferred to just use standard HTML tags, especially where links were concerned.  It was just me being stubborn and not really seeing all of the benefit of the helpers.  In my defense, coming from WebForms, I wanted to be as bare metal as possible, and at first a lot of the helpers seemed like an unnecessary abstraction.

I took my first look at Rails back in v1 and didn't spend very much time with it, so I dismissed it and went on my merry ASP.NET WebForms way.  Then I picked up ASP.NET MVC and grasped the MVC pattern itself much better. After that, I took another look at Rails and everything made sense, so I decided to learn it. (I think it is important for developers to learn new languages and platforms regularly, so it was a natural progression for me.)

I wanted to learn it the right way, and when I dug into code, everyone used helpers everywhere for pretty much everything possible. I took some time to dig in and found out how helpful they were, and subsequently realized how awesome they are in ASP.NET MVC as well, and started using them.
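To illustrate the kind of helper I used to avoid (a hypothetical sketch, not from either talk), compare a hand-written link with the equivalent Html.ActionLink call in an ASPX view:

```aspx
<%-- Hand-written link: breaks silently if the route or controller changes --%>
<a href="/Home/Index">Home</a>

<%-- Helper: the URL is generated from the routing table at render time --%>
<%: Html.ActionLink("Home", "Index", "Home") %>
```

The helper version keeps working when routes change, which is most of the benefit I was stubbornly refusing.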

In short, I love Rails (and Ruby in general).  I also love ASP.NET MVC and Entity Framework, and yes, I still love C++.  I have varying degrees of love for each of them at any given moment, and it is likely to shift based on the current project I am working on.  I know you're thinking it, so before you ask the question "Which do I use when?", I'm going to give the standard developer answer: it depends.  There are a lot of factors, which I am not going to go into, that would go into that decision.  The most basic question I would ask, though, is: does this project depend on .NET?  If it does, then ASP.NET MVC is probably going to be the more logical choice, and I am going to leave it at that.  I am working on projects right now in both technologies and I don't see that changing anytime soon (one project even uses both).

With all that being said, you'll find me at code camps, conferences and user groups presenting on .NET, Ruby or both, writing about .NET and Ruby and I will likely be blogging on both in the future.  I know of others that have successfully branched out to other communities and with any luck I'll be successful at it too.

On a (sorta) side note, I read a post by Justin Etheredge the other day that pretty much sums up my feelings about Ruby as a language.  I highly recommend checking it out: What Is So Great About Ruby?

Monday, January 24, 2011 #

I have been working on a Refcard for DZone over the past few months and it is finally published. The Refcard gives you an intro to the basics of Entity Framework 4 and can be used as a quick reference.

Please take a moment to download it and check it out.  You can download it at: http://refcardz.dzone.com/refcardz/adonet-entity-framework

Monday, December 20, 2010 #

There are several attributes available when using the Entity Framework 4 CTP5 Code First option.  When working with strings you can use [MaxLength(length)] to control the length, and [Required] works on all properties.  But there are a few things missing. By default, all strings will be created as Unicode, so you will get nvarchar instead of varchar.  You can change this using the fluent API, or you can create an attribute to make the change.  If you have a lot of properties, the attribute will be much easier and require less code.

You will need to add two classes to your project to create the attribute itself:

   public class UnicodeAttribute : Attribute
   {
       bool _isUnicode;

       public UnicodeAttribute(bool isUnicode)
       {
           _isUnicode = isUnicode;
       }

       public bool IsUnicode { get { return _isUnicode; } }
   }

   public class UnicodeAttributeConvention : AttributeConfigurationConvention<PropertyInfo, StringPropertyConfiguration, UnicodeAttribute>
   {
       public override void Apply(PropertyInfo memberInfo, StringPropertyConfiguration configuration, UnicodeAttribute attribute)
       {
           configuration.IsUnicode = attribute.IsUnicode;
       }
   }

The UnicodeAttribute class gives you a [Unicode] attribute that you can use on your properties, and the UnicodeAttributeConvention tells EF how to handle the attribute.

You will need to add a line to the OnModelCreating method inside your context for EF to recognize the attribute:

   protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
   {
       modelBuilder.Conventions.Add(new UnicodeAttributeConvention());
       base.OnModelCreating(modelBuilder);
   }

Once you have this done, you can use the attribute in your classes to make sure that you get database types of varchar instead of nvarchar:

   [Unicode(false)]
   public string Name { get; set; }


Another option that is missing is the ability to set the precision and scale of a decimal.  By default, decimals get created as (18,0).  If you need decimals to be something like (9,2), then you can once again use the fluent API or create a custom attribute.  As with the Unicode attribute, you will need to add two classes to your project:

   public class DecimalPrecisionAttribute : Attribute
   {
       private int _precision;
       private int _scale;

       public DecimalPrecisionAttribute(int precision, int scale)
       {
           _precision = precision;
           _scale = scale;
       }

       public int Precision { get { return _precision; } }
       public int Scale { get { return _scale; } }
   }

   public class DecimalPrecisionAttributeConvention : AttributeConfigurationConvention<PropertyInfo, DecimalPropertyConfiguration, DecimalPrecisionAttribute>
   {
       public override void Apply(PropertyInfo memberInfo, DecimalPropertyConfiguration configuration, DecimalPrecisionAttribute attribute)
       {
           configuration.Precision = Convert.ToByte(attribute.Precision);
           configuration.Scale = Convert.ToByte(attribute.Scale);
       }
   }

Add your line to the OnModelCreating:

   protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
   {
       modelBuilder.Conventions.Add(new UnicodeAttributeConvention());
       modelBuilder.Conventions.Add(new DecimalPrecisionAttributeConvention());
       base.OnModelCreating(modelBuilder);
   }

Now you can use the following on your properties:

   [DecimalPrecision(9,2)]
   public decimal Cost { get; set; }

Both of these options use the same concepts, so if there are other attributes that you want to use, you can create them quite simply.  The key to it all is the PropertyConfiguration classes.  If there is a configuration class for the datatype, then you should be able to write an attribute to set almost everything you need.  You could also create a single attribute to encapsulate all of the possible string combinations instead of having multiple attributes on each property.
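A combined string attribute might look something like the following.  This is a sketch of the idea, not generated code: StringOptions is a made-up name, and it assumes CTP5's StringPropertyConfiguration exposes MaxLength and IsNullable alongside the IsUnicode property used above.

```csharp
// Hypothetical attribute bundling length, nullability and Unicode in one place.
public class StringOptionsAttribute : Attribute
{
    public StringOptionsAttribute(int maxLength, bool isRequired, bool isUnicode)
    {
        MaxLength = maxLength;
        IsRequired = isRequired;
        IsUnicode = isUnicode;
    }

    public int MaxLength { get; private set; }
    public bool IsRequired { get; private set; }
    public bool IsUnicode { get; private set; }
}

public class StringOptionsAttributeConvention : AttributeConfigurationConvention<PropertyInfo, StringPropertyConfiguration, StringOptionsAttribute>
{
    public override void Apply(PropertyInfo memberInfo, StringPropertyConfiguration configuration, StringOptionsAttribute attribute)
    {
        // Map every option in one pass instead of three separate attributes.
        configuration.MaxLength = attribute.MaxLength;
        configuration.IsNullable = !attribute.IsRequired;
        configuration.IsUnicode = attribute.IsUnicode;
    }
}
```

With the convention registered in OnModelCreating, `[StringOptions(50, true, false)]` on a property would then aim to produce a non-null varchar(50) column.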

All in all, I am loving code first, and having attributes to control database generation instead of using the fluent API is huge and saves me a great deal of time.

Friday, December 17, 2010 #

I have been using EF4 CTP5 with code first and I really like it.  One issue I was having, however, was that cascading deletes are on by default.  This may come as a surprise, since with anything but code first, Entity Framework does not behave this way.  I ran into an exception with some one-to-many relationships I had:

Introducing FOREIGN KEY constraint 'ProjectAuthorization_UserProfile' on table 'ProjectAuthorizations' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints.
Could not create constraint. See previous errors.

To get around this, you can use the fluent API and put some code in the OnModelCreating:

   protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
   {
       modelBuilder.Entity<UserProfile>()
           .HasMany(u => u.ProjectAuthorizations)
           .WithRequired(a => a.UserProfile)
           .WillCascadeOnDelete(false);
   }

This works to remove the cascading delete, but I have to use the fluent API, and it has to be done for every one-to-many relationship that causes the problem.

I am personally not a fan of cascading deletes in general (for several reasons), and I’m not a huge fan of fluent APIs.  However, there is a way to do this without using the fluent API: in OnModelCreating, you can remove the convention that creates the cascading deletes altogether.

   protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
   {
       modelBuilder.Conventions.Remove<OneToManyCascadeDeleteConvention>();
   }

Thanks to Jeff Derstadt from Microsoft for the info on removing the convention altogether.  There is a way to build a custom attribute to remove it on a case-by-case basis, and I’ll have a post on how to do that in the near future.

Monday, June 28, 2010 #

I have moved this project from CodePlex to GitHub at https://github.com/danemorgridge/efrepo

I have posted a project on CodePlex at http://efrepository.codeplex.com.  It is a T4 template that gives you a data layer following the Repository and Unit of Work patterns, and that is also ready for Dependency Injection (DI).  DI frameworks allow you to build code that is more testable and allow for a greater separation of concerns (SoC).  This is not their only use, but it is a big one and what they are commonly popular for.  You don’t have to use DI to get a good separation of concerns, but it certainly can help.  Separation of concerns is one of the most important things developers can do to make their code more maintainable, but it is unfortunately also one of the most commonly overlooked.

Most applications these days work with data at some point, and a large number of them use relational databases.  Many developers use Object Relational Mapping (ORM) tools to make that process easier. Entity Framework is Microsoft’s ORM and data access strategy moving forward.  LINQ to SQL also exists, but Entity Framework is where the new innovation will happen.  As such, it is easy to use the ORM tool as your data access layer and not just as a data access technology.  While this works, it doesn’t lend itself to an application with a good separation of concerns.

When I build an application, I normally try to separate the code into logical pieces by assemblies.  If you don’t separate by assemblies, at the very least separate by namespaces; this will give you the ability to separate by assemblies later if you need to.  One common thing I like to separate is the data access layer.  There are several reasons for this, but one of the biggest is simply code reuse.  I rarely write an application that doesn’t have multiple pieces that need to access data, so having it separated reduces duplication and thus lends itself to a more maintainable and sustainable application.

So what does this have to do with my T4 template project on CodePlex?  The template generates the files necessary to give you a persistence-ignorant, testable data layer without having to write a great deal of code.  I am a huge fan of code generation where it makes sense, and this is one area where I think it does.  I also feel that you should know what code generation is doing; it makes debugging much easier.  I am going to focus in this post on what files are generated and how to use what is there to build your data layer.  I will have additional samples available in the future with different DI frameworks and more advanced scenarios.

The template looks for an .edmx file in the same directory and will generate several files based on your model.  Most of these files will be re-generated when the T4 template is executed, but there are a few that will not.  These allow you to put custom logic in those files without having to create your own partial classes.


IRepository.cs : Interface for the Repository portion.  This is one of two interfaces that are generated.  It is structured so that when mocking for testing, you only need to mock the two interfaces and everything will fall into place properly.

   public interface IRepository<T>
   {
       IUnitOfWork UnitOfWork { get; set; }
       IQueryable<T> All();
       IQueryable<T> Find(Func<T, bool> expression);
       void Add(T entity);
       void Delete(T entity);
       void Save();
   }

You can see there is a property for an IUnitOfWork which will hold the actual context.  Basic CRUD functions are exposed here as well.  This will likely be expanded as the project grows and matures.  Feel free to give me feedback on anything you would like to see added here.

IUnitOfWork.cs : Interface for the Unit of Work

   public interface IUnitOfWork
   {
       ObjectContext Context { get; set; }
       void Save();
       bool LazyLoadingEnabled { get; set; }
       bool ProxyCreationEnabled { get; set; }
       string ConnectionString { get; set; }
   }

The ObjectContext in the UnitOfWork is the only dependency on Entity Framework in the project, and it is set to the real concrete ObjectContext in EFUnitOfWork.  The other properties are used to set properties on the object context without having to access the context directly.  Since this interface is the gateway into Entity Framework, it is the one you will want to mock when building tests.

EFRepository.cs : Concrete class for actually working with Entity Framework classes

   public class EFRepository<T> : IRepository<T> where T : class
   {
       public IUnitOfWork UnitOfWork { get; set; }

       private IObjectSet<T> _objectset;
       private IObjectSet<T> ObjectSet
       {
           get
           {
               if (_objectset == null)
               {
                   _objectset = UnitOfWork.Context.CreateObjectSet<T>();
               }
               return _objectset;
           }
       }

       public virtual IQueryable<T> All()
       {
           return ObjectSet.AsQueryable();
       }

       public IQueryable<T> Find(Func<T, bool> expression)
       {
           return ObjectSet.Where(expression).AsQueryable();
       }

       public void Add(T entity)
       {
           ObjectSet.AddObject(entity);
       }

       public void Delete(T entity)
       {
           ObjectSet.DeleteObject(entity);
       }

       public void Save()
       {
           UnitOfWork.Save();
       }
   }

There is a lot going on here, but the key thing to note in this file is IObjectSet<T>.  This is how the ObjectContext contained inside the Unit of Work knows which classes to work with.  This class is the basis for all of our actual data layer classes, as you will see shortly.  Using a combination of the methods exposed here, you should be able to do anything you need to query and work with entities.

EFUnitOfWork.cs : Mostly properties that interface with the context.

   public partial class EFUnitOfWork : IUnitOfWork
   {
       public ObjectContext Context { get; set; }

       public EFUnitOfWork()
       {
           Context = new DataModelContainer();
       }

       public void Save()
       {
           Context.SaveChanges();
       }

       ...
   }

In the constructor, the actual concrete ObjectContext is set.  This is added directly by the T4 template and is extracted from the .edmx file. The ObjectContext is exposed as a public property so you can access it directly if need be.

There are two more classes that get generated for each entity: {Entity}Repository.cs and {Entity}Repository.generated.cs.  The {Entity}Repository.generated.cs file contains the code that interfaces with IRepository<T> and IUnitOfWork.  The {Entity}Repository.cs classes are where you put your own custom data layer logic.  This is where the template really helps.  If we have a Person entity, we will have PersonRepository.cs and PersonRepository.generated.cs.

PersonRepository.generated.cs : The glue that makes it all work

   public partial class PersonRepository
   {
       private IRepository<Person> _repository;
       public IRepository<Person> Repository
       {
           get { return _repository; }
           set { _repository = value; }
       }

       public PersonRepository(IRepository<Person> repository, IUnitOfWork unitOfWork)
       {
           Repository = repository;
           Repository.UnitOfWork = unitOfWork;
       }

       public IQueryable<Person> All()
       {
           return Repository.All();
       }

       public void Add(Person entity)
       {
           Repository.Add(entity);
       }

       public void Delete(Person entity)
       {
           Repository.Delete(entity);
       }

       public void Save()
       {
           Repository.Save();
       }
   }

The first thing you will likely notice is that the class doesn’t inherit from anything. This is by design, as I would still have to implement all of the methods if I inherited directly from IRepository<Person>.  I have some plans for a base class that will reduce the total amount of code; I’m just not finished with it yet.  Look for that in the coming weeks.  The PersonRepository class has access to all of the basic methods from IRepository<T> because they are re-implemented here, and the Repository property is what gives you that access.

We have here basic CRUD operations: Add (Create), Find (Read), Save (Update) and Delete (Delete).  All are fairly self-explanatory, with the possible exception of Find.  The Find method takes a lambda expression just like the Where clause of an IQueryable; in fact, it passes it directly to the Where clause of an IQueryable.  While the methods in this class are great, they are not really meant to be used directly, but rather through data access methods.  You certainly can use them directly, but I would recommend another approach.

The PersonRepository.cs file won’t get overwritten when the template runs, so that is where you would put your logic.  Let’s start by creating a method to find a person by primary key.  We will simply call it GetPerson(int personId).  We will then call Repository.Find() and query on the personId:

   public Person GetPerson(int personId)
   {
       return Repository.Find(p => p.PersonId == personId).FirstOrDefault();
   }

We use the Repository property from the generated file to gain access to Entity Framework.  We are doing a basic query here to find the person based on the associated personId.  We call FirstOrDefault() to return the first entity in the result set; if there are none, it will return null.  We could just use First(), but it would throw an exception when nothing matched, and I would personally rather do a null check.  Now that we have a method to use, let’s use it.

The first thing you need to do is create a PersonRepository instance so you can access the newly created method.  The PersonRepository constructor takes a couple of parameters:

   public PersonRepository(IRepository<Person> repository, IUnitOfWork unitOfWork)

It looks for an IRepository<Person> and an IUnitOfWork.  These can be satisfied by an EFRepository<Person> and an EFUnitOfWork. To create our instance we will use the following:

   var personRepository = new PersonRepository(new EFRepository<Person>(), new EFUnitOfWork());

We can now execute our method:

   int personId = 1;
   var person = personRepository.GetPerson(personId);

If person comes back null, we know it wasn’t found.  Otherwise, we have our entity to start working with.  It’s as easy as that.

Now, if you noticed, I created a new EFUnitOfWork inline.  This is fine if you are only using one repository class. If you need to actively use more than one, you will need to create a separate unit of work and pass it in.  Say we also have an AddressRepository; it would look like this:

   var unitOfWork = new EFUnitOfWork();
   var personRepository = new PersonRepository(new EFRepository<Person>(), unitOfWork);
   var addressRepository = new AddressRepository(new EFRepository<Address>(), unitOfWork);

   // Do some stuff requiring saving

   unitOfWork.Save();

Here is where the unit of work pattern comes into play. You can perform multiple actions across multiple repositories on the same unit of work and then call Save once.  The Save() call in turn calls the SaveChanges() method on the ObjectContext, which by default wraps the entire unit of work in a transaction.  If anything fails, the whole operation is rolled back.

Now, these methods aren’t as easily testable as they could be.  In the next post we will look at how this works with Dependency Injection, specifically StructureMap. StructureMap makes the methods more testable and at the same time makes the whole process easier without adding unnecessary complexity.

Go to efrepository.codeplex.com to download!

Sunday, June 27, 2010 #

I attended my first CodeStock this year, and in short it was awesome.  Like 100 billion hot dogs awesome.  The travel there was crazy to say the least, but I met lots of new people, had a session go well and recorded three podcasts.  That is the short version.  If you are on Twitter and either follow me or followed the #codestock hash tag, you probably saw my airline craziness in Philly.

It all started on Monday.  We had a client deliverable on Wednesday morning, and due to some things that happened, I only got about 4 hours of sleep between Monday morning and Wednesday afternoon.  This included one 32-hour work day.  I left work at noon on Wednesday and was able to get a few hours of sleep, but not a lot.  I came in Thursday for a couple of hours to finish up a couple of things and have a quick meeting.  One of my developers was going to take me to the airport to make my 3:40pm flight to Knoxville.  About 10:30, I got an email that the flight was delayed an hour.  I wasn’t really worried, since this would give me a little more time to make it through security and such.  We left work about 11:30 and got some lunch before going to the airport.  I hadn’t flown in 13 years, since I left South Korea when I was in the Air Force, and a lot has changed.  I wanted to get there a little early since I didn’t know what to expect; several people at work told me it would take about an hour to get through security.

I got through security really fast.  It literally took me longer to take my shoes off than it did to get through the process, so there I got lucky.  However, that is where my luck ended.  I got to the gate, got out my laptop, got set up with wifi, and got an email that my flight had been delayed again, this time to 5:18.  Then almost instantly, I got another saying 5:40.  While this wasn’t ideal, it was only a two-hour flight at most, so I would still make it to the before party.  About half an hour later, I got another email that it was delayed until 6:03. Finally, about 4, the flight was cancelled. There was some bad weather, and I am assuming that was the cause.

I got to the gate check-in where they told us to go, and they put me on another flight leaving at 8:30.  I had pretty much given up on making the before party at this point, but at least I would get there early enough to still catch up with some people before having to go to bed.  As my new boarding pass was being handed to me, I got an email that the new flight had been delayed by a little more than an hour.  I got three more delay emails as the night progressed; the last time given was 11:54pm.  Needless to say, I wasn’t really happy.  I managed to find a seat in the gate area that had power so I could keep my laptop charged.  This was probably the single thing that kept me sane the whole time.

Finally we found out that the flight we were waiting for was actually coming, and when it finally showed up at the gate, everyone started cheering.  The plane was pretty small, but at least I was on my way.  Still, I was getting into Knoxville at 2am, with a podcast session at 9:50 in the morning.  Combined with little to no sleep the rest of the week, I was getting a bit worried that I wouldn’t be able to pull it off.  Andrew was prepared to do the podcast solo if necessary, but I really wanted to make it on time.  I got to the hotel just before 3 and got some sleep, but not what I needed.

I made it to CodeStock a few minutes into the first session, and the second session was the Community Megaphone Podcast session Andrew and I were doing.  We did a “Speaker Horror Stories” panel, and it went extremely well.  We had a good group of speakers sharing craziness.  We will have the show up in a couple of weeks, and it is one that you will not want to miss.  I had my “Getting Started with Entity Framework 4” session at 12:30, so I got some lunch and found my room.

Presenting on very little sleep usually has one of two outcomes: great success, or complete and total failure.  I was also giving my presentation on my new MacBook Pro, which I had not presented on before.  I had my slides set up in Office for Mac and was running Win7 and VS2010 in a virtual machine.  It’s how I do my development on a day-to-day basis, but I hadn’t presented on it since I had only had the new laptop for about a week or so.  It was risky, but I had gone over everything about 50 billion times in the Philly airport, so I was prepared for whatever might come. As long as I didn’t fall asleep, I was going to be fine.  I had about 50 people show up, and the session went really well.  I only got through about half of my material, since I tend to go into more detail about some things than I probably should when I’m tired.  I got some good feedback, so I am calling it a success.  Thanks to all who came out; it was a blast.

I got out of my session and began looking for people I had been following on Twitter but hadn’t had a chance to meet.  Since I am in Philly, I know a lot of people on the east coast, but not many beyond it, so it was fun.  Since I had tweeted all night about the airport the night before and hash tagged it, several people knew who I was just from that.

Andrew and I then went back to the hotel, and in the lobby we recorded a podcast episode with Alan Stevens, which went well except for some of the noise in the lobby.  It’s kinda cool to have the ambience of where you are, but when Starbucks starts making frappuccinos, it can get pretty loud.  We will have that episode up in a month or so.  Alan talked about going independent, and it will be well worth a listen for anyone who is thinking about doing that.

The keynote was Friday evening in a really cool old theater, complete with the “muppet” box seats on the side, the kind where Statler and Waldorf would sit and heckle.  Michael Neel tweeted that the muppet seats were for speakers on a first-come-first-serve basis, so I skipped supper to make sure I got one.  My friend Rachel Appel did the keynote and did a wonderful job.  Having the keynote at the end of the day was really nice, since there weren’t as many time constraints.  One of the highlights was when Andrew Duthie got on stage during the keynote and took Rachel in his arms like he was going to give her a big kiss, and Joel Cochran stood up from the front row and shouted: “SHE’S MINE!!”.  Of course, it was all staged and part of a joke that started on Twitter quite a while ago. Most of the people in the audience had no clue what was going on, so if you were one of those, let me explain:

First you must know that the very first rule of a code camp or any developer conference is:

Never, ever under any circumstances, leave your laptop or phone unlocked when not using it.

Someone (like Rachel) will come along and use your laptop to tweet for you and such.  This happened to Joel on multiple occasions, and in some cases he professed his undying love for Rachel via Twitter (along with various rashes and such). Rachel has done this enough that whenever it happens to someone, we call it “Getting Rappeled”. Since Joel had publicly professed his love for Rachel, that was the source of the joke.  So now you know.  Joel also hasn’t seemed to learn his lesson, as he has been “Rappeled” multiple times, and yes, it was actually done by Rachel.

So if you ever come across Joel’s laptop, you not only have permission but an obligation to tweet something from it.

After the keynote, I had supper with Bethany Vananda and Jeremy Likness from Wintellect, both of whom I met at Devscovery earlier this year.  That was a blast, and I will forever have some great inside jokes from that conversation.  I spent the rest of the night hanging out in various locations before going to get some sleep.

I got some much needed sleep (finally) and actually slept through the first two sessions on Saturday. Saturday was another day of going around and chatting with people.  I tend to do more networking at conferences than going to sessions, as I usually get more out of it that way.  I spent a little bit of time in Open Spaces and went to a session or two before recording another podcast with Michael Neel, the founder and all-around awesome guy behind CodeStock.  Afterwards, I went and got some food and headed to PostStock, the after party at Alan Stevens’ house.  There I found some people I hadn’t had a chance to chat with and spent a few hours doing just that.  No crazy stories from Saturday, just a really good day of connecting with people.  The trip home went well; I even got to hang out with Gary Short and Rachel Hawley at the airport.

In summary, CodeStock 2010 was truly awesome.  Like 100 billion hot dogs awesome. (If you google or bing that, you will find the reference.)  Michael and his crew put on an excellent event and it is one that I would recommend to anyone.  The sessions I went to were some of the best I have attended.  CodeStock isn’t a free event, but it’s worth every penny.  I will definitely be going back next year.

Tuesday, May 4, 2010 #

If you are doing any work with Entity Framework and custom WCF services in EFv1, everything works great.  As soon as you jump to EFv4, you may find yourself getting odd errors that you can’t seem to catch.  The problem almost always has something to do with the new lazy loading feature in Entity Framework 4.  Entity Framework 1 didn’t have lazy loading, so this problem didn’t surface.

Assume I have a Person entity and an Address entity with a one-to-many relationship between them (Person has many Addresses). In Entity Framework 1 (or in EFv4 with lazy loading turned off), I would have to load the Address data by hand, using either the Include or the Load method:

var people = context.People.Include("Addresses");
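Include eagerly loads the related data in a single query, as shown above. The Load alternative mentioned above fetches the collection explicitly, on demand. A minimal sketch (the `PersonId` property name is an assumption for this example model):

```csharp
// Explicit loading with the EntityCollection API: fetch one person's
// addresses on demand rather than eagerly with Include.
var person = context.People.First(p => p.PersonId == 1);  // PersonId is assumed

if (!person.Addresses.IsLoaded)
{
    // Issues a separate query for just this person's Address rows.
    person.Addresses.Load();
}
```

Include is usually the better choice when you know up front that you need the related data, since it avoids a query per entity.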



With lazy loading, the Address data is fetched the first time the Person.Addresses collection is accessed:

var people = context.People.ToList();

// only Person data is currently in memory
foreach (var person in people)
{
    // EF determines that no Address data has been loaded and lazy loads it
    int count = person.Addresses.Count();
}

Lazy loading has the useful (and sometimes not useful) feature of fetching data when requested.  It can make your life easier or it can make it a big pain. 

So what does this have to do with WCF?  One word: Serialization.

When you need to pass data over the wire with WCF, the data contract is serialized into either XML or binary depending on the binding you are using.  If I am using lazy loading, then while the Person entity is being serialized, the Addresses collection is accessed, and the Address data is lazy loaded.  Then each Address is serialized, its Person property is accessed, that Person is serialized, and its Addresses collection is accessed again.  The second time through, lazy loading doesn’t kick in, but you can see the infinite loop this process causes.  This is a problem with any serialization, but I personally ran into it using WCF.

The fix is simply to turn off lazy loading.  This can be done on a per-call basis using the context options:
context.ContextOptions.LazyLoadingEnabled = false;

Turning lazy loading off will now allow your classes to be serialized properly.  Note, this is if you are using the standard Entity Framework classes.  If you are using POCO,  you will have to do something slightly different. 

With POCO, the Entity Framework creates proxy classes by default so that features like lazy loading work with POCO objects.  The proxy is a full Entity Framework object that sits between the context and the POCO object.  When using POCO with WCF (or any serialization), just turning off lazy loading doesn’t cut it.  You also have to turn off proxy creation to ensure that your classes will serialize properly:

context.ContextOptions.ProxyCreationEnabled = false;

The nice thing is that you can do this on a call-by-call basis.  If you use a new context for each set of operations (which you should), then you can turn lazy loading or proxy creation on and off as needed.
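Putting the pieces together, a per-operation context inside a WCF service method might look like the sketch below. The service class, contract, context name (`PeopleEntities`), and the choice to eager load with Include are assumptions for illustration, not code from the post:

```csharp
public class PersonService : IPersonService
{
    public List<Person> GetPeople()
    {
        // New context per operation; lazy loading and proxy creation are
        // both turned off so the returned object graph serializes cleanly.
        using (var context = new PeopleEntities())
        {
            context.ContextOptions.LazyLoadingEnabled = false;
            context.ContextOptions.ProxyCreationEnabled = false;

            // Eager load the related data the client actually needs.
            return context.People.Include("Addresses").ToList();
        }
    }
}
```

With lazy loading off, anything you don’t Include simply won’t be in the serialized result, so be deliberate about which related data each service method loads.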

Monday, May 3, 2010 #

In almost every talk I have done on Entity Framework, I get questions on how to customize the SSDL or SQL when using model first development.  Quite a few of these questions have required custom changes to the SSDL, which of course can be a problem if it is being auto generated.  Luckily, there is a tool that can help.

In the Visual Studio Gallery on MSDN, there is the Entity Designer Database Generation Power Pack. You have the ability to select different generation strategies and it also allows you to inject custom T4 Templates into the generation workflow so that you can customize the SSDL and SQL generation. 

When you select to generate a database from a model the dialog is replaced by one with more options:


You can clone the individual workflow for either the current project or the current machine.  The templates are installed at “C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\Extensions\Microsoft\Entity Framework Tools\DBGen” on my machine, and you can make a copy of any template there.  If you clone the strategy and open it up, you will get the following workflow:


Each item in the sequence defines the execution of a T4 template.  The XAML for the workflow is listed below so you can see where the T4 files are defined.  You can simply make a copy of an existing template and make whatever changes you need.

<Activity x:Class="GenerateDatabaseScriptWorkflow" ... >
  <x:Members>
    <x:Property Name="Csdl" Type="InArgument(sde:EdmItemCollection)" />
    <x:Property Name="ExistingSsdl" Type="InArgument(s:String)" />
    <x:Property Name="ExistingMsl" Type="InArgument(s:String)" />
    <x:Property Name="Ssdl" Type="OutArgument(s:String)" />
    <x:Property Name="Msl" Type="OutArgument(s:String)" />
    <x:Property Name="Ddl" Type="OutArgument(s:String)" />
    <x:Property Name="SmoSsdl" Type="OutArgument(ss:SsdlServer)" />
  </x:Members>
  <Sequence>
    <dbtk:ProgressBarStartActivity />
    <dbtk:CsdlToSsdlTemplateActivity SsdlOutput="[Ssdl]" TemplatePath="$(VSEFTools)\DBGen\CSDLToSSDL_TPT.tt" />
    <dbtk:CsdlToMslTemplateActivity MslOutput="[Msl]" TemplatePath="$(VSEFTools)\DBGen\CSDLToMSL_TPT.tt" />
    <ded:SsdlToDdlActivity ExistingSsdlInput="[ExistingSsdl]" SsdlInput="[Ssdl]" DdlOutput="[Ddl]" />
    <dbtk:GenerateAlterSqlActivity DdlInputOutput="[Ddl]" DeployToScript="True" DeployToDatabase="False" />
    <dbtk:ProgressBarEndActivity ClosePopup="true" />
  </Sequence>
</Activity>


So as you can see, this tool enables you to make some pretty heavy customizations to how the SSDL and SQL get generated.  You can get more info and the tool can be downloaded from: http://visualstudiogallery.msdn.microsoft.com/en-us/df3541c3-d833-4b65-b942-989e7ec74c87.  There is a comments section on the site so make sure you let the team know what you like and what you don’t like.  Enjoy!

Friday, April 2, 2010 #

For those who don't already know, yesterday I received my first Microsoft MVP Award, in Data Platform Development.  With fewer than 5,000 MVPs in the world overall and about 20 in the Data Platform category, saying I am honored would be an understatement.  From the first time I spoke at a code camp, I was totally hooked, and I have had a blast travelling around the east coast speaking at code camps and user groups.

I'd like to take the time to thank Dani Diaz (@danidiaz) for the nomination and everyone who supported me, especially my wife Lisa for letting me travel and speak as much as I have and for putting up with the late nights and such.  Roska Digital, my employer, also deserves a shout out for supporting me and giving me the time off needed to get to speaking engagements.  With any luck, the next year will be at least as fun as the last one, if not more.  I hope to see you at a code camp or user group meeting soon!

I would also like to congratulate the other new Philly area MVPs: John Angelini & Ned Ames (@nedames)

You can find out more about the Microsoft MVP Award at https://mvp.support.microsoft.com/