Choose your ORM: Runtime, code generation or build provider?

Selecting the right object-relational mapper is a tricky decision with many factors to weigh up.

One of the most basic decisions is whether to go with a dynamic run-time (“black-box”) mapper or a code generator.

I’m not a fan of the run-time approach. Discovering your schema at run-time hurts performance, as it typically relies on reflection (or, failing that, post-compilation byte-code modification), and it robs you of compile-time checking and IntelliSense support against your database objects whilst potentially introducing deployment and licensing issues. In effect, it’s not that much better than a typed dataset.
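To see why, here’s a minimal sketch of the kind of per-row reflection a black-box mapper ends up doing. The Product class and the match-columns-to-properties rule are hypothetical, not any particular ORM’s implementation:

```csharp
using System;
using System.Data;
using System.Reflection;

// A hypothetical entity; a run-time mapper has no generated code,
// so it discovers this shape via reflection on every mapping call.
public class Product
{
    private int productId;
    private string name;

    public int ProductId
    {
        get { return productId; }
        set { productId = value; }
    }

    public string Name
    {
        get { return name; }
        set { name = value; }
    }
}

public static class ReflectionMapper
{
    // Hydrates an entity by matching column names to property names.
    // GetProperty and SetValue run for every column of every row,
    // which is exactly the overhead generated code avoids.
    public static T Map<T>(DataRow row) where T : new()
    {
        T entity = new T();
        foreach (DataColumn column in row.Table.Columns)
        {
            PropertyInfo property = typeof(T).GetProperty(column.ColumnName);
            if (property != null && row[column] != DBNull.Value)
            {
                property.SetValue(entity, row[column], null);
            }
        }
        return entity;
    }
}
```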

Code generation provides much finer granularity, letting you tweak the templates for the performance and features you need whilst also giving you full compile-time checking and IntelliSense support.

Tools such as CodeSmith (my personal favorite) and MyGeneration (free) do a good job of letting you write these templates and create the necessary ORM code, but they need to be re-run every time you change the schema. During the early phases of a project that could be quite often, which goes against the whole concept of RAD.

So step in SubSonic and its build provider approach.

The idea here is that you modify your .config file to include the SubSonic build provider and its connection string, drop in a simple text file that lists which tables to work with, and you’re done.
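If memory serves, the wiring looks roughly like this; the section, provider and build-provider type names (and the .abp extension) are as I remember them from the 1.x bits, so check them against your version:

```xml
<configuration>
  <configSections>
    <!-- Section and type names as recalled; verify against your SubSonic release -->
    <section name="SubSonicService" type="SubSonic.SubSonicSection, SubSonic" />
  </configSections>

  <connectionStrings>
    <add name="Northwind"
         connectionString="Data Source=.;Initial Catalog=Northwind;Integrated Security=True" />
  </connectionStrings>

  <SubSonicService defaultProvider="Northwind">
    <providers>
      <add name="Northwind" type="SubSonic.SqlDataProvider, SubSonic"
           connectionStringName="Northwind" generatedNamespace="Northwind" />
    </providers>
  </SubSonicService>

  <system.web>
    <compilation>
      <buildProviders>
        <!-- Hands files with the registered extension to SubSonic for code generation -->
        <add extension=".abp" type="SubSonic.BuildProvider, SubSonic" />
      </buildProviders>
    </compilation>
  </system.web>
</configuration>
```

The text file itself is then just one table name per line (or a * for everything), saved with whatever extension the build provider is registered for and dropped into App_Code.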

SubSonic now goes off to your database via that connection and generates all the code for the tables you need, and it’s magically there to be used like any other class. Check out the demo to see just how easy it is.
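Once generated, using a table is as simple as this. The Products table and its columns are hypothetical, and the exact ActiveRecord method signatures are from memory, so they may vary between releases:

```csharp
// Load an existing row by primary key, change it and save it back.
Product product = new Product(5);
product.UnitPrice = 9.99m;
product.Save();

// Creating a new row looks the same, minus the key.
Product newProduct = new Product();
newProduct.ProductName = "Chai";
newProduct.Save();
```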

SubSonic supports a large number of databases, has support for Enterprise Library, is open source, and also provides simple “scaffold” pages that let you throw up a basic web add/edit/update/delete table maintenance page just by putting a table name attribute onto an empty page’s form element.

The only downside at this point is that it uses the ActiveRecord pattern for the ORM. If I manage to get some time to spend with it and can knock up a Domain Object + Data Mapper version, I’ll let you know.
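For anyone unfamiliar with the distinction, here’s the difference in shape between the two patterns; all the class names are purely illustrative:

```csharp
// ActiveRecord: the entity knows how to persist itself, which couples
// your domain model directly to the database schema.
public class Customer
{
    public int CustomerId;
    public string Name;

    public void Save()
    {
        // INSERT or UPDATE the Customers row this instance represents
    }
}

// Domain Object + Data Mapper: the entity stays persistence-ignorant
// and a separate mapper owns all the database access.
public class Order
{
    public int OrderId;
    public decimal Total;
}

public class OrderMapper
{
    public Order FindById(int orderId)
    {
        // SELECT the row and build the domain object from it
        return new Order();
    }

    public void Save(Order order)
    {
        // INSERT or UPDATE from the domain object's state
    }
}
```

The win is that the domain objects stay testable and storage-agnostic, at the cost of writing (or generating) the mappers.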

[)amien

1 response

    We use code generation (and our project is several years old tech now), but lately projects like Hibernate have made a strong case that you don't really need it. Yes, they need reflection, but since you pre-declare your mappings it is an initialisation-time task to build a serialisation plan, so at runtime it can be very efficient if the code & data metadata structures are well thought out. It also saves memory when you have very large entity sets, and there are arguments that a smaller core set of serialisation routines with well-organised mapping data can actually be beneficial, in that they're cache-friendly and it's easier to add features like cluster awareness.

    I've never played with Hibernate myself since we have our own code generation option, but I know people who have used it and swear by it (not about it, luckily). Not sure how good the NHibernate version is; I don't know anyone who uses it at the moment, but it's perhaps worth a look.

    steve 31 August 2006
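steve’s point about pre-declared mappings deserves a sketch: if the reflection lookups happen once at initialisation, hydrating each row becomes a walk over a prepared plan rather than fresh discovery. All the names here are illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Data;
using System.Reflection;

// Resolve the declared column-to-property mappings once, up front,
// so run-time hydration is a walk over a prepared plan rather than
// per-row reflection discovery.
public class MappingPlan<T> where T : new()
{
    private readonly List<KeyValuePair<string, PropertyInfo>> steps =
        new List<KeyValuePair<string, PropertyInfo>>();

    // Initialisation-time: one GetProperty call per declared mapping.
    public MappingPlan(IDictionary<string, string> columnToProperty)
    {
        foreach (KeyValuePair<string, string> mapping in columnToProperty)
        {
            steps.Add(new KeyValuePair<string, PropertyInfo>(
                mapping.Key, typeof(T).GetProperty(mapping.Value)));
        }
    }

    // Run-time: no discovery left, just indexed reads and SetValue calls.
    public T Hydrate(IDataRecord record)
    {
        T entity = new T();
        foreach (KeyValuePair<string, PropertyInfo> step in steps)
        {
            object value = record[step.Key];
            if (value != DBNull.Value)
            {
                step.Value.SetValue(entity, value, null);
            }
        }
        return entity;
    }
}
```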