Separation between Data Transfer Objects and Entities, and AutoMapper to the rescue

One question we had a little while ago. When you return objects from a controller, should they be the same objects you get out of your data layer?

That is, if your data layer has an object like this;

public class Project
{
    public int ProjectId { get; set; }
    public string Name { get; set; }
}

Should you return this direct from Controllers, like so;

public class ProjectController : ApiController
{
    public Project Get(int id)
    {
        return this.db.GetProject(id);
    }
}

And the answer Christian Weyer gave in his talk was no — use a Data Transfer Object instead. This is a pattern which has you translate from the database entity down into a very plain object designed exclusively for transferring across the wire. We’d started doing this and named them transmission objects — we would have a DB.Project class designed for inside the system, and a TX.Project class designed for transmitting over the wire, converting to JSON, etc. We had agonised a bit over the added complexity, but it’s good to see that it’s a recognised good pattern, and I’ll sleep better tonight knowing we’re doing the same as the thought-leaders.

Where his solution was much tighter than ours is that he’d used AutoMapper — a library which allows for very straightforward mappings between comparable types. So if DB.Project and TX.Project share similarly-named members, you end up being able to translate one to the other using calls like;

Mapper.CreateMap<DB.Project, TX.Project>(); // configure the mapping once, at startup

DB.Project dbProject = ...;
TX.Project txProject = Mapper.Map<TX.Project>(dbProject); // AutoMapper kicks in here
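AutoMapper itself is a .NET library, but the convention it relies on is simple: copy across whichever members the two shapes share. Here is a minimal JavaScript sketch of that idea (`mapByConvention` is a made-up helper for illustration, not AutoMapper's API):

```javascript
// Copy every property the destination shape shares with the source.
// This is the heart of convention-based mapping.
function mapByConvention(source, destinationShape) {
    var destination = {};
    Object.keys(destinationShape).forEach(function (key) {
        if (key in source) {
            destination[key] = source[key];
        }
    });
    return destination;
}

// A DB entity with an internal-only field...
var dbProject = { ProjectId: 1, Name: "Advertising Campaign", RowVersion: 42 };

// ...mapped onto a transmission-object shape which omits it.
var txShape = { ProjectId: null, Name: null };
var txProject = mapByConvention(dbProject, txShape);
```

The point is that the mapping code never mentions `ProjectId` or `Name` by hand; the shared names drive the translation, which is why adding a member to both classes costs you nothing.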

As an adjunct to that — there is also a JsonValue class which is designed for converting to JSON. Might be worth investigating in more detail…

Enabling OData through WebAPI

Here’s a very quick bit of code which Christian Weyer’s talk leads me to believe gives you easy OData support. OData support is enabled by (1) calling .EnableQuerySupport() on the HttpConfiguration object you get in Global.asax.cs, (2) returning IQueryable&lt;T&gt; from the .Get() method of your ApiController subclass, and (3) marking that method up as [Queryable]. Something like;

// in Global.asax.cs
config.EnableQuerySupport();

// in your Api Controller
public class ToDoController : ApiController
{
    [Queryable]
    public IQueryable<ToDo> Get()
    {
        return this.repo.Todos;
    }
}

Not sure about the details, or whether this is perfect code — it’s copied out of a desperate scribbling session during the talk — but this might be enough for someone to start searching down the right track.
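The value of that query support is that options in the URL, like `$skip` and `$top`, get applied to the IQueryable for you before the response is serialized. A rough JavaScript sketch of what the framework is doing on your behalf (`applyODataOptions` is a made-up name, not WebAPI's implementation, and real OData also handles `$filter`, `$orderby`, and more):

```javascript
// Apply OData-style paging options from a parsed query string to a list.
function applyODataOptions(items, query) {
    var result = items.slice();
    if (query.$skip) {
        result = result.slice(Number(query.$skip)); // drop the first N items
    }
    if (query.$top) {
        result = result.slice(0, Number(query.$top)); // keep at most N items
    }
    return result;
}

var todos = ["wash car", "buy milk", "write blog", "read book"];

// e.g. GET /api/todo?$skip=1&$top=2
var page = applyODataOptions(todos, { $skip: "1", $top: "2" });
```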

Hosting options for WebAPI

So you’ll probably be using IIS to run your WebAPI apps, but you really don’t need to. WebAPI ships with a class which starts a web server handling WebAPI requests: HttpSelfHostServer, in System.Web.Http.SelfHost.dll.

Now, this is really interesting, for two reasons.

1) You can spin up a new server, run some tests against it, and shut it down. That means you can test all your HTTP traffic without having to set up a development server. I struggled with this all last week — how do you write a test to prove all your HTTP code is working, but not require all your developers to set up a web server like http://localhost:3000/myapp just so that you’ve got somewhere to act as an endpoint for calls? This short-lived server becomes your endpoint for the lifecycle of the tests.

2) It allows you to create a host outside IIS, on a non-server machine. For instance, imagine you were writing a document search application. It has a front end written in Windows Forms, and the backend is written in WebAPI. When you start the app, it starts an HTTP server at http://localhost:1234/mydocumentsearch. The front end loads its data by making GET requests to that server, and saves changes by POSTs, PUTs, and DELETEs. Now let’s say you want to search the corporate network using the same app. It would be simple to change the URI to http://corporateserver/documentsearch and now the same app, with no code changes, has become a corporate intranet-based application. Change it again to a cloud-hosted address and it’s now an internet-enabled cloud-whatsit buzzword-filled bundle of awesome. And you don’t need to recompile to switch between these seriously different modes.

This approach says — code for HTTP first, and in fact only code for HTTP. You don’t need any other kind of public API. You don’t need an API that you compile into your app, but instead you really code in separate tiers communicating through HTTP. You can now code everything as if it were cloud-enabled, and then just switch to the cloud when you’re ready.

For me, it’s the testing scenario that’ll be most useful in the short term, but the other one is making me think about API design. After all, there are now even database engines (like CouchDb) which only offer HTTP/JSON interfaces. HTTP has become a universal communication mechanism. Will this become the widespread pattern of the future?

Prerelease Feature — Automatically-generated help

Search NuGet for the WebAPI test client (search `testclient` and make sure you allow prerelease software) and you’ll add a really rather sexy feature to your site. It’s an automatic help generator for your site, so that if you go to `http://myapp/help` you get an HTML site allowing you to see all your URLs, their formats, and what methods and parameters they take. It also allows you to execute the methods, giving you an in-browser form for building requests. In a recent project, I built something similar — I recognised the need — but this is just miles ahead of mine.

This is something I’ve only seen in a demo, but it looks like a very sweet addition to your site, at least in debug mode. It means you should be able to write a method then see it work in the browser without having to build a ‘proper’ client for it. Unlike the familiar testing pattern where you new up a controller and execute its methods, this tests the whole architecture of your site, including all the serious bits you need to test about model binding, error handling, etc.

Content Negotiation And Response Types

Type an address into the browser and the HTTP request from the browser contains a header like this;

GET /foo/bar HTTP/1.1
Accept: text/html,application/xhtml+xml,...

and this `Accept` header is a signal to the application of the kind of content that it’s expecting back. If the server responds with a web page, it might have a response header like so;

Content-Type: text/html

<html lang="en">

So there’s a conversation going on here; the client says ‘I’d like the HTML at /foo/bar’ and the server says ‘Here’s a document!’ That’s the basic pattern repeated over and over during normal web browsing. But this system is fairly flexible. What if /foo/bar represents a resource and you don’t want HTML, but want JSON instead? You can request this;

GET /foo/bar HTTP/1.1
Accept: application/json

And if the server can respond with JSON, it will. This is the sort of thing jQuery does when you write

$.getJSON('/foo/bar', function(data) { ... });

This is really interesting, because it means that the same URL, with the same verb, can return two totally different representations. This process, this interaction between the `Accept` and `Content-Type` headers, is known as *content negotiation*.

So this little mechanism has some interesting applications. Imagine you are doing a project management app, and you decide to use a URL like this;

/api/Projects/1

This URL represents the resource of ‘Project #1’. But it doesn’t imply anything about the form of the response. If we request

GET /api/Projects/1 HTTP/1.0
Accept: application/json

Then we might expect the response body to contain

{
    "ProjectId": 1,
    "Name": "Advertising Campaign"
}

But if we make this request;

GET /api/Projects/1 HTTP/1.0
Accept: application/xml

Then we might expect to receive this body;

<Project>
    <ProjectId>1</ProjectId>
    <Name>Advertising Campaign</Name>
</Project>

Now, this is what you see in the OData specification — the same resource, at the same URI, with two different representations. And you’re not just limited to two. Why not return an HTML description if the request asks for `text/html`, or YAML if that’s what the client asks for?
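The server side of that negotiation boils down to: look at what the client accepts, compare it against what you can produce, and pick. A hedged JavaScript sketch of the idea (`chooseRepresentation` is an illustrative helper, not WebAPI's actual algorithm, which also honours wildcards and q-values):

```javascript
// Pick a representation: the first media type the client asked for that we
// can actually produce, falling back to a default if there's no match.
function chooseRepresentation(acceptHeader, supported) {
    var accepted = acceptHeader.split(",").map(function (part) {
        return part.split(";")[0].trim(); // strip q-value parameters
    });
    for (var i = 0; i < accepted.length; i++) {
        if (supported.indexOf(accepted[i]) !== -1) {
            return accepted[i];
        }
    }
    return supported[0]; // default representation
}

var supported = ["application/json", "application/xml"];
```

So a request carrying `Accept: application/xml` gets XML, and a browser-style `Accept: text/html,application/xhtml+xml` falls back to the default, JSON.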

WebAPI lets you add new representations to your system. In your Global.asax, you’ll have an `HttpConfiguration` instance floating around — it’s the object you add routes to. But it also has a member called `Formatters` which is a list of message formatters. The default list includes formatters for JSON, XML, and Form Url Encoding. The first two are for Web/HTTP/REST-style apps; the third is to interpret HTML Form posts. These objects do bi-directional translation; so that if your controller method looks like this;

public Project Get(int id)
{
    // ... do work here ...
}

It’s the formatter which determines how the `Project` instance returned by your method is converted into the JSON or XML returned in your HTTP response. Add a new formatter, and your system is now capable of returning a new representation without you having to change your controller method at all. Sweet!

And this also works the other way. The same formatter will take an HTTP request and turn it back into object instances, so that something like this;

public void Put(Project project)
{
    // ... update your project here ...
}

Has the same process — the `Project` instance passed as a parameter has been constructed by the objects in config.Formatters. This process is known as model binding, if you want to look into it further. An example — binding using the Google protobuf format — exists on GitHub in the WebApiContrib project: https://github.com/WebApiContrib/WebApiContrib.
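The inbound half of the formatter idea can be sketched very simply: pick a parser for the request body based on its Content-Type, and hand the resulting object to the controller method. (The `formatters` object and `bindModel` function here are illustrative JavaScript, not WebAPI's `config.Formatters`.)

```javascript
// A registry of body parsers, keyed on media type -- a toy version of the
// formatter collection. Each one turns a raw body string into an object.
var formatters = {
    "application/json": function (body) {
        return JSON.parse(body);
    },
    "application/x-www-form-urlencoded": function (body) {
        var result = {};
        body.split("&").forEach(function (pair) {
            var parts = pair.split("=");
            result[decodeURIComponent(parts[0])] = decodeURIComponent(parts[1]);
        });
        return result;
    }
};

// Model binding in miniature: dispatch on the request's Content-Type.
function bindModel(contentType, body) {
    return formatters[contentType](body);
}
```

Add a new entry to the registry and requests in a new format bind to your parameter types without the controller method changing, which is exactly the symmetry the formatters give you.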

First-class HTTP programming is available in WebAPI

Typically, a WebAPI method might look like this;

public Project Get(int id)
{
    return this.repo.GetProject(id);
}

And what we’re seeing here is a method that returns a Project object — or rather, a Project object that is later converted into a JSON object…

So it may be that you need to do a whole lot more — say, sniffing HTTP headers for a particular purpose, or returning more interesting things in the response. Here’s how you would alter the above method;

public HttpResponseMessage Get(HttpRequestMessage request)
{
    if (request.Headers.Contains(...))
    {
        // ...
    }
}

And now you’re able to do anything you like with the request — you’ve got full control if you need it. It’s the ‘escape hatch’ out of the sometimes-too-helpful set of conventions. Not something for everyday, but something to use when all else fails, I think.

Lightweight Architecture for everyone with ASP.NET Web API — Christian Weyer

This was by far the most pragmatically useful of the sessions so far. We’ve just started using WebAPI and it’s clear that there’s loads more to it than we’re using. Of vital importance to me was an understanding of just how the magic works.

It seems to me that when you get into a new technology, it’s always pretty fraught. Or it should be, because you should be always asking yourself, ‘Am I doing this the right way? Is this the way that a pro would do it, or is it the way an amateur hacker would do it?’ Christian Weyer helped point me down the right path. Luckily, many of the things we’ve been doing are right, and the rest are easy to address.

First up, then — what is WebAPI and why do we care?

Well, WebAPI is a way to build HTTP APIs for your services. The HTTP API is designed to sit between the client app — be it web page, console app, Windows Forms app, or iOS app — and your data, providing all the business logic and database logic.

There are a raft of tips, so rather than smashing them all into one big blog post, I’m going to post things one at a time. I’ll tag them all `WebAPI` and `DevWeek` so you can search them out more easily.

DevWeek Day 1

I’m at DevWeek this week. Yesterday was a workshop on SharePoint by Sahil Malik, an excellent speaker who took us on a whirlwind tour of the new version. That’s going to be the subject of a longer post — there was a massive amount of info and a lot of things to think about, so that might not appear for a few days.

Today’s talks are all about making sure we’re up to speed on some modern patterns and technologies. Today there are three talks:

9:30 Technical Keynote: There’s a storm coming…

11:30 SQL Azure overview – how to develop and manage it.

14:00 The new world of HTML5 and CSS3

16:00 Light-weight architectures for ‘everyone’ with ASP.NET Web API.

Creating Node packages in NPM

npm is the package manager for Node.js, the server-side JavaScript thingy. And it’s lovely. On a day-to-day basis, doing C#, I use NuGet in Visual Studio. While NuGet is a welcome addition to Visual Studio, npm seems to make it much easier to create and reuse packages. 

For instance, I’ve just pushed my parsing library, canto34, up to this location on npm, and the process was remarkably easy. The steps are:

1. Create a user on the npm website.

2. Link your computer to it with the npm command line;

npm adduser


3. Create a single file (package.json) which describes the package, its dependencies, and its location in git.

4. Publish the package using npm again;

npm publish

And that’s it — package deployed. 
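The package.json from step 3 can stay very small. Something along these lines, where the field values are illustrative rather than the real canto34 manifest:

```json
{
  "name": "canto34",
  "version": "0.1.0",
  "description": "A lexing and parsing library for JavaScript",
  "main": "canto34.js",
  "repository": {
    "type": "git",
    "url": "https://github.com/your-username/canto34.git"
  }
}
```

The `name` and `version` fields are the only truly mandatory pair; they are what `npm publish` uses to identify the release.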

Now, that’s pretty sweet — anyone in the world can install the package using this command line;

npm install canto34

And get going with the library. But what about me? I want to continue to develop canto34 here on my dev machine, and use it in another project (in my case, a little program called Mettle that’s beginning to take shape.)

So I want to make changes to canto34 as I make changes to Mettle. I don’t want to have to upload changes to canto34, and then download them in Mettle. I just want to develop both. 

npm lets you do this by CDing into canto34’s directory and typing

npm link

And then CDing into Mettle’s directory and typing

npm link canto34

Magic! npm ‘installs’ canto34 on my machine using a shortcut to my development copy, so that I now have this structure on my disk;

canto34/            <- my development copy
mettle/
    node_modules/
        canto34/    <- a link back to the development copy

And I can now develop both projects together.

The thing here isn’t that the command line support is good — on its own, that’s pretty dull. The nice thing is that the npm system seems really well designed to get you sharing code, and doesn’t punish you for doing so. You can share libraries as open source, or use them in proprietary projects, or mix the two together, and it feels like npm is your buddy — a little like, I guess, an author feels about an editor; the technical guy that gets your artistry out to the world. 

All in all, a very good developer experience.


Introducing Canto34.js, a lexing and parsing library in JavaScript

I mentioned it in passing in my last post, but I’ve been working on a project called Canto 34 which helps you write lexers and parsers in JavaScript. Here’s the current README from the GitHub repo.


Canto 34 is a library for building recursive-descent parsers.

When it comes to writing a parser, you get two main choices. Write your own, or use a parser generator like PEG.js or ANTLR.

I’ve never really had much success with parser generators, and it’s always seemed pretty easy to write a recursive-descent parser yourself, if you have some basic tools like a regex-based lexer, and some basic functions for matching tokens and reporting errors. Canto34.js gives you the functions you need to write a regex-based lexer and a recursive-descent parser yourself.

A Simple Example

Here’s a simple example of a language which just defines name-value pairs;

foo 1, bar 5, baz 8.

We’ll see how to parse this language. First, you write a lexer which identifies the different kinds of token — names, integers, commas, and periods;

var lexer = new canto34.Lexer();

// add a token type for whitespace
lexer.addTokenType({
    name: "ws",       // give it a name
    regexp: /[ \t]+/, // match spaces and tabs
    ignore: true      // don't return this token in the result
});

// add a token type for names, defined as strings of lower-case characters
lexer.addTokenType({ name: "name", regexp: /^[a-z]+/ });

// bring in some predefined types for commas, periods, and integers.
var types = canto34.StandardTokenTypes;
lexer.addTokenType(types.comma());
lexer.addTokenType(types.period());
lexer.addTokenType(types.integer());

And here’s how you use it;

var tokens = lexer.tokenize("foo 1, bar 2.");

which returns a set of tokens;

[
    { content: "foo", type: "name",    line: 1, character: 1 },
    { content: 1,     type: "integer", line: 1, character: 5 },
    { content: ",",   type: "comma",   line: 1, character: 6 },
    { content: "bar", type: "name",    line: 1, character: 8 },
    { content: 2,     type: "integer", line: 1, character: 12 },
    { content: ".",   type: "period",  line: 1, character: 13 }
]

Now you feed these tokens into a parser. Here’s a parser for our language;

var parser = new canto34.Parser();

parser.listOfNameValuePairs = function() {
    this.result = [];
    this.nameValuePair();
    while (!this.eof() && this.la1("comma")) {
        this.match("comma");
        this.nameValuePair();
    }
    this.match("period");
};

parser.nameValuePair = function() {
    var name = this.match("name").content;
    var value = this.match("integer").content;
    this.result.push({ name: name, value: value });
};

And it’s used like this;

var tokens = lexer.tokenize("foo 1, bar 2, baz 3.");
parser.initialize(tokens);
parser.listOfNameValuePairs();

parser.result now contains the value;

[
    { name: "foo", value: 1 },
    { name: "bar", value: 2 },
    { name: "baz", value: 3 }
]

And we’re done! That’s a basic lexer and parser, written in thirty lines of code.

What’s canto34.js good for?

Canto 34 is designed for quickly writing parsers for straightforward domain-specific languages (DSLs). Please don’t use it for writing parsers for general-purpose programming languages — it’s not designed to be that fast or powerful.