Simple reads with Entity Framework

This works nicely for quick reads.  Requires no mappings or any of that junk. Just a DataReader shaped to a type.

Edit: Because this was my first day ever working with EF, my first attempt at this was more complicated (you can see the evolution at the gist).  After removing my useless code, you can see it’s just Entity Framework.  Sadly I can’t take any credit.  Still – this IS cool.  Fast, super easy, typed queries with only framework dependencies.  I was surprised to see it supports mapping-less projections.  Perfect for all of those fancy persistent view model scenarios all the cool CQRS kids are doing.

using System;
using System.Collections.Generic;
using System.Data.Entity;

public static class Effin
{
    // Database.SqlQuery<T> maps the result columns to T's properties by name,
    // so no EDM mapping or configuration is needed.
    public static IEnumerable<T> Query<T>
        (this string connectionStringName, string sql,
        params object[] parameters)
    {
        return new DbContext(connectionStringName)
            .Database.SqlQuery<T>(sql, parameters);
    }
}

public class Example
{
    public void UsingEffin()
    {
        IEnumerable<Name> names = 
            "data".Query<Name>("select FirstName from Person");

        foreach (var name in names)
        {
            Console.WriteLine(name.FirstName);
        }
    }

    public class Name
    {
        public string FirstName { get; set; }
    }
}

Reuse is overrated, then came package management

Reuse is overrated

True story: a team building a system.  They decide that one chunk of code might be valuable in another project or two.  So they extract it into a library and open source it.

Project complete, the team reorganizes and begins a couple of new projects.  Both new projects depend on that library.  For a while everything is great.

Sooner or later someone needs to make a change.  It’s sort of a pain, because the source code lives in a different IDE configuration.  Actually, it’s been a while since anyone worked on it, and it’s hard to find a developer on the team who still has that version of the IDE installed.  But the change happens, painful as it may be, because it has to.

Someone later gets lazy.  They build it locally and copy the assemblies directly into the lib folder.  Now the library is compiled from source code that’s not in the version control system.  This is spinning out of control.  Or maybe that was intentional: the other team is using the library too, but they have different needs, and we’ve sort of been in a mutually-assured-destruction paralysis about what changes to make.

Eventually someone realizes the solution is not to depend on this library as compiled bits, but to just copy the source code over.  We need to make our tweaks; they need to make theirs.  It makes sense.  So we lose our code reuse; does anyone care?

This dirty-feeling code copying can be a good idea for a non-technical reason:  it emphasizes that the important element of reuse is the human expertise to understand, create, use and modify a solution that solves a particular problem.

Turns out NuGet is great for this.  I noticed that Rob Conery packaged his Massive library as just a file.  A source code file.  Not a .dll.  Great idea, and I immediately thought of all the code that I want to share among many projects but also have the flexibility to change, now – without context switching or fear of repercussions.

You just put the files in a Content subdirectory and the NuGet process will shove them in the project root or App_Code or wherever.
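For what it’s worth, the packaging side is tiny.  A .nuspec along these lines is all it takes; the package id and file path here are made up for illustration, but anything packaged under a content target is added to the consuming project as source rather than referenced as an assembly:

<?xml version="1.0"?>
<package>
  <metadata>
    <id>MyTeam.SharedHelpers</id>
    <version>1.0.0</version>
    <authors>My Team</authors>
    <description>Source-only helpers shared across projects.</description>
  </metadata>
  <files>
    <!-- Files targeted at content are added to the project as source files. -->
    <file src="Helpers\StringCalculator.cs" target="content" />
  </files>
</package>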

I did the same with a little utility class that we use a lot, and I’m going to do it some more. 

For many teams, a hosted NuGet feed with content files is the new helper library.


Type systems and their advantages

Typing is not the bottleneck.  Neither is typing.

If you’ve been a programmer sometime in the last fourteen billion years you’ve noticed that there’s this conversation about type systems, specifically about comparing static type systems to dynamic type systems.

Dynamic typing doesn’t mean you can change the type of a thing (even though you sometimes can); it means you can change the shape of types themselves at run time.  It means you can tell a variable to write itself some new, strange, dynamic behavior, but you don’t tell it what to write; you tell it how to decide what to write.  And you can eliminate a lot of duplication this way.

But static type systems don’t give you that ability.  You have to define the shape and behavior of a type at compile time.

I know what you’re thinking: that really stinks!  I want the ability to muck about with my variables, and have them run my code against their own internals and have them write new methods for themselves, and decide how they handle calls against methods that don’t exist, and even crazier things.
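To put that in C# terms, here’s a small sketch of the dynamic side of the argument using the framework’s DynamicObject; the Greeter type and its made-up Greet* convention are just for illustration:

using System;
using System.Dynamic;

// Sketch only: Greeter and its Greet* naming convention are invented to show
// a type deciding at run time how to answer methods it never defined.
public class Greeter : DynamicObject
{
    // Called for any method invocation the type doesn't statically define.
    public override bool TryInvokeMember(
        InvokeMemberBinder binder, object[] args, out object result)
    {
        if (binder.Name.StartsWith("Greet"))
        {
            // "GreetBob" becomes "Hello, Bob": the caller's usage shapes the type.
            result = "Hello, " + binder.Name.Substring("Greet".Length);
            return true;
        }

        return base.TryInvokeMember(binder, args, out result);
    }
}

public class DynamicUsage
{
    public static void Run()
    {
        dynamic greeter = new Greeter();
        Console.WriteLine(greeter.GreetBob());   // Hello, Bob
        Console.WriteLine(greeter.GreetAlice()); // Hello, Alice
    }
}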

The static type system folks can do something that’s really powerful.  They can use the type system itself as a global registry.

You see this in a lot of the C# written today, especially the stuff that makes heavy use of an IOC tool and generics.  They’ll ask the system for all the types that implement some interface, or for the types that close some generic interface over another type.  Then they take these types, instantiate one instance of each, and use those instances in some way: each instance might run in a chain or pipeline, or apply a series of filtering functions to a data set.
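Stripped of any particular IOC tool, the pattern looks something like this sketch; IMessageFilter and the two filter classes are made-up names, and a real container would hide the reflection behind a scanning API:

using System;
using System.Linq;
using System.Reflection;

// Sketch only: IMessageFilter and the filters below are invented for illustration.
public interface IMessageFilter
{
    string Apply(string message);
}

public class TrimFilter : IMessageFilter
{
    public string Apply(string message) { return message.Trim(); }
}

public class ShoutFilter : IMessageFilter
{
    public string Apply(string message) { return message.ToUpperInvariant(); }
}

public static class FilterPipeline
{
    public static string Run(string message)
    {
        // Treat the type system as the registry: ask it for every concrete
        // IMessageFilter, new one of each up, and run them as a pipeline.
        var filters = Assembly.GetExecutingAssembly()
            .GetTypes()
            .Where(t => typeof(IMessageFilter).IsAssignableFrom(t)
                        && t.IsClass && !t.IsAbstract)
            .Select(t => (IMessageFilter)Activator.CreateInstance(t));

        return filters.Aggregate(message, (current, filter) => filter.Apply(current));
    }
}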

The type system becomes a big dispatch table, with type names as global keys, but it’s more like a dispatch tree, because of polymorphism and generic type parameters and all that.

It’s freakin’ cool, once you start doing it – it’s elegant and easy compared to the old stodgy way of programming.  And the IOC tools they use keep getting more interesting features that let them query the type-system-as-dispatch-tree in novel ways.

The dynamic type system folks will shrug and suggest implementing the dispatch table with a hash, like they’ve been doing in Perl for 15 years.

The static type system folks keep losing this argument because their usual selling points, Compile Time Checking and Tooling, just aren’t cutting it.  But they may be onto something here, if only they could articulate the benefits of this global registry better than I can.


Easy ways to maximize the value of TDD

Before we’ve made it pass the first time, a failing test tells us what code to write.

When it fails later, it can tell us what we broke.  Failing tests alert us to regressions before we integrate.

A test is most valuable when it fails.

This is why the number-one-most-important rule of TDD is to maintain One Logical Assertion Per Test.  If you have more than one, a failing test becomes ambiguous.  Following this rule helps both in writing the test and in understanding its failure, also known as diagnostics.

When a test fails you should know exactly why.
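As a sketch of what that looks like in practice (NUnit-style syntax and a hypothetical Cart type, just for illustration), here are two tests that each make a single logical assertion:

using NUnit.Framework;

// Sketch only: NUnit-style attributes and a hypothetical Cart type.
[TestFixture]
public class CartTests
{
    // If these two assertions lived in one test, a red bar couldn't tell us
    // whether the count or the total went wrong.
    [Test]
    public void Adding_an_item_increases_the_item_count()
    {
        var cart = new Cart();
        cart.Add("apple", 2.00m);
        Assert.That(cart.ItemCount, Is.EqualTo(1));
    }

    [Test]
    public void Adding_an_item_increases_the_total()
    {
        var cart = new Cart();
        cart.Add("apple", 2.00m);
        Assert.That(cart.Total, Is.EqualTo(2.00m));
    }
}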


Before we’ve written a test we must articulate the rationale behind the whole deal in the first place, and this process helps us reason about the production system and its design.

After we’ve made it pass the first time, it becomes one in a suite of passing test cases we regularly run.  Now it can tell us what the system is doing in a way that’s more understandable than the implementation code.

A test is most valuable when it describes the behavior of the system rather than the implementation.

This is why the number-one-most-important rule of TDD is No Implementation Semantics.
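A quick sketch of the difference, again with NUnit-style syntax and hypothetical Order and PercentDiscount types: the first test is welded to how the discount is implemented, the second only describes what the system does.

using NUnit.Framework;

// Sketch only: Order, PercentDiscount, and DiscountStrategy are invented names.
[TestFixture]
public class DiscountTests
{
    // Implementation semantics: breaks under refactoring even when the
    // behavior (the price) is still correct.
    [Test]
    public void Uses_a_PercentDiscount_strategy()
    {
        var order = new Order(100m);
        order.ApplyDiscount(10);
        Assert.That(order.DiscountStrategy, Is.InstanceOf<PercentDiscount>());
    }

    // Behavior: survives any refactoring that keeps the system correct.
    [Test]
    public void A_ten_percent_discount_reduces_the_total_by_ten_percent()
    {
        var order = new Order(100m);
        order.ApplyDiscount(10);
        Assert.That(order.Total, Is.EqualTo(90m));
    }
}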


Almost every time I’ve found myself struggling to deliver value with TDD, it’s because I’ve failed to follow these two, coequal, number-one-most-important rules: One Logical Assertion Per Test and No Implementation Semantics.


Focusing on the controller's responsibility

The following is an excerpt from ASP.NET MVC 2 in Action, a book from Manning appearing in bookstores in May.  The early access (MEAP) edition is available now on http://manning.com/palermo2.  Authors include Jeffrey Palermo, Ben Scheirman, Jimmy Bogard, Eric Hexter and Matt Hinze.  Technically edited by Jeremy Skinner.

This selection is from chapter 19, Lightweight controllers.  All hyperlinks were added for this post.


A quick way to lighten the controller’s load is to simply remove responsibilities from it. Consider the burdened action, shown below:

A heavyweight controller
public RedirectToRouteResult Ship(int orderId)
{
   User user = _userSession.GetCurrentUser();
   Order order = _repository.GetById(orderId);

   if (order.IsAuthorized)
   {
      ShippingStatus status = _shippingService.Ship(order);

      if (!string.IsNullOrEmpty(user.EmailAddress))
      {
         Message message = _messageBuilder
            .BuildShippedMessage(order, user);

         _emailSender.Send(message);
      }

      if (status.Successful)
      {
         return RedirectToAction("Shipped", "Order", new {orderId});
      }
   }
   return RedirectToAction("NotShipped", "Order", new {orderId});
}

This action is doing a lot of work; it's incomprehensible at first glance. You can almost count its jobs by the number of if statements. Beyond its appropriate role as director of the storyboard flow of the user interface, this action is deciding if the Order is appropriate for shipping and determining whether or not to send the User a notification email. Not only is it doing those things, but it's deciding how to do them: it's determining what it means for an Order to be appropriate for shipping and how the notification email should be sent.

Logic like this (domain logic, business logic) should generally not be in a user interface class like a controller. It violates the single responsibility principle, obfuscating both the true intention of the domain and the controller's actual duty, which is redirecting to the proper action. Testing and maintaining an application written like this is difficult.

Cyclomatic complexity: source code viscosity

Cyclomatic complexity is a metric we can use to analyze the complexity of code. The more logical paths a method or function presents, the higher its cyclomatic complexity. To fully understand the implications of a particular procedure, each logical path must be evaluated. For example, each simple if statement presents two paths: one when the condition is true, and another when it's false. Functions with high cyclomatic complexity are more difficult to test and to understand, and they have been correlated with increased defect rates.

A simple refactoring that can ease this is called Refactor Architecture by Tiers. It directs the software designer to move processing logic out of the presentation tier into the business tier.

After we move the logic for shipping an order to an OrderShippingService, our action is much simpler.

A simpler action after refactoring architecture by tiers
public RedirectToRouteResult Ship(int orderId)
{
   var status = _orderShippingService.Ship(orderId);
   if (status.Successful)
   {
      return RedirectToAction("Shipped", "Order", new {orderId});
   }
   return RedirectToAction("NotShipped", "Order", new {orderId});
}

Everything having to do with shipping the order and sending the notification has been moved out of the controller into a new class. The controller is left with the single responsibility of deciding where to redirect the client. The new class can fetch the Order, get the User, and do all the rest.
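To make that concrete, the extracted class might look something like the following sketch, assembled from the logic removed from the heavyweight action; it is not a listing from the book, and the collaborator interface names and the OrderShippingResult type (which carries the Successful flag the controller checks) are assumptions made for illustration.

// Sketch only: IUserSession, IOrderRepository, IShippingService,
// IMessageBuilder, IEmailSender, and OrderShippingResult are assumed names.
public class OrderShippingResult
{
    public bool Successful { get; set; }
}

public class OrderShippingService
{
    private readonly IUserSession _userSession;
    private readonly IOrderRepository _repository;
    private readonly IShippingService _shippingService;
    private readonly IMessageBuilder _messageBuilder;
    private readonly IEmailSender _emailSender;

    public OrderShippingService(
        IUserSession userSession,
        IOrderRepository repository,
        IShippingService shippingService,
        IMessageBuilder messageBuilder,
        IEmailSender emailSender)
    {
        _userSession = userSession;
        _repository = repository;
        _shippingService = shippingService;
        _messageBuilder = messageBuilder;
        _emailSender = emailSender;
    }

    public OrderShippingResult Ship(int orderId)
    {
        Order order = _repository.GetById(orderId);

        // Deciding whether the order may ship now lives here, not in the controller.
        if (!order.IsAuthorized)
        {
            return new OrderShippingResult { Successful = false };
        }

        ShippingStatus status = _shippingService.Ship(order);

        // So does deciding whether and how to notify the user.
        User user = _userSession.GetCurrentUser();
        if (!string.IsNullOrEmpty(user.EmailAddress))
        {
            Message message = _messageBuilder.BuildShippedMessage(order, user);
            _emailSender.Send(message);
        }

        return new OrderShippingResult { Successful = status.Successful };
    }
}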

But the result of the refactoring is more than just a move. It's a semantic break that puts the onus of managing these tasks in the right place. This change has resulted in a clean abstraction that our controller can use to represent what it was doing before. Other logical endpoints can reuse the OrderShippingService, such as other controllers or services that participate in the order shipping process. This new abstraction is clear, and it can change internally without affecting the presentation duties of the controller.

Refactoring doesn't get much simpler than this, but a simple change can result in significantly lower cyclomatic complexity and can ease the testing effort and maintenance burden associated with a complex controller.


JSON Hijacking in ASP.NET MVC 2

The following is an excerpt from ASP.NET MVC 2 in Action, a book from Manning appearing in bookstores in May.  The early access (MEAP) edition is available now on http://manning.com/palermo2.  Authors include Jeffrey Palermo, Ben Scheirman, Jimmy Bogard, Eric Hexter and Matt Hinze.  Technically edited by Jeremy Skinner.


JSON (pronounced like the English name, Jason) hijacking is a rare hack, similar to XSRF, that targets secure JSON data requested from vulnerable applications. The JSON hijacking process involves several steps:

1. A conspiring site, via JavaScript, instructs the victim’s browser to request some secure JSON data from another site.

2. The evil JavaScript receives the JSON data.

3. If the JSON is formatted as an array, the evil script can exploit browser JavaScript processing code to read the JSON data and transmit it back to the attacking site.

Allow JSON via POST only

The solution to this exploit offered by ASP.NET MVC 2 is to accept requests for JSON data only via HTTP POST, rather than GET. This is baked into and enforced by the standard JsonResult action result that ships with the framework. If we request data returned by JsonResult with a GET request, we won’t receive the JSON data.

Listing 11.12 shows how we must issue a POST from JavaScript code requesting JSON data.

Listing 11.12 Requesting JSON data via POST
<script type="text/javascript">
    $.postJSON = function(url, data, callback) {
        $.post(url, data, callback, "json");
    };

    $(function() {
        $.postJSON('/post/getsecurejsonpost',
            function(data) {
                var options = '';
                for (var i = 0; i < data.length; i++) {
                    options += '<option value="' +
                        data[i].Id + '">' + data[i].Title +
                        '</option>';
                }
                $('#securepost').html(options);
            });
    });
</script>

<h2>Secure Json (Post)</h2>
<div>
    <select id="securepost"></select>
</div>

Listing 11.12 uses the jQuery JavaScript library to craft a special POST request for our JSON data.  When the results are returned, the function populates the select list with them.
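For context, the action that the script posts to could be as simple as the following sketch; PostController, GetSecureJsonPost, and GetData are assumed names for this illustration rather than code from the book.

using System.Web.Mvc;

// Sketch only: a JsonResult action that answers POST requests.
public class PostController : Controller
{
    [HttpPost]
    public JsonResult GetSecureJsonPost()
    {
        object data = GetData();

        // No JsonRequestBehavior.AllowGet here, so the framework's default
        // DenyGet behavior refuses GET requests for this JSON.
        return Json(data);
    }

    private object GetData()
    {
        // Hypothetical data source, shaped to match the Id/Title properties
        // the script expects.
        return new[]
        {
            new { Id = 1, Title = "First post" },
            new { Id = 2, Title = "Second post" }
        };
    }
}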

Override defaults for GET access

The problem with this approach isn't technical: this works, and it prevents JSON hijacking. But it's a workaround that's sometimes unnecessary and can interfere with systems developed using the REST architectural style.

If this approach causes problems, we have additional options. First, we can explicitly enable JSON requests from GETs with the code shown in listing 11.13.

Listing 11.13 Directing JsonResult to accept GETs
[HttpGet]
public JsonResult GetInsecureJson()
{
    object data = GetData();

    return Json(data, JsonRequestBehavior.AllowGet);
}

This will allow our action to respond to normal JSON GET requests. Finally, we can scrap JsonResult itself and instead use an action result that returns only non-vulnerable, non-array-formatted JSON.

Modifying the JSON response

The code in listing 11.14 shows a special action result that wraps vulnerable JSON data in a variable, d.

Listing 11.14 Creating a SecureJsonResult to encapsulate serialization logic
public class SecureJsonResult : ActionResult
{
    public string ContentType { get; set; }
    public Encoding ContentEncoding { get; set; }
    public object Data { get; set; }

    public override void ExecuteResult(ControllerContext context)
    {
        if (context == null)
        {
            throw new ArgumentNullException("context");
        }
        HttpResponseBase response = context.HttpContext.Response;
        if (!string.IsNullOrEmpty(ContentType))
        {
            response.ContentType = ContentType;
        }
        else
        {
            response.ContentType = "application/json";
        }
        if (ContentEncoding != null)
        {
            response.ContentEncoding = ContentEncoding;
        }
        if (Data != null)
        {
            var enumerable = Data as IEnumerable;
            if (enumerable != null)
            {
                Data = new { d = enumerable };
            }
            var serializer = new JavaScriptSerializer();
            response.Write(serializer.Serialize(Data));
        }
    }
}

This action result encapsulates the tricky code to output the proper JSON, and it works well. The downside to this approach is that we must use this d variable in our JavaScript code. Listing 11.15 shows consuming the serialized data using jQuery.

Listing 11.15 Consuming SecureJsonResult with jQuery
$(function() {
    $.getJSON('/post/getsecurejson',
        function(data) {
            var options = '';
            for (var i = 0; i < data.d.length; i++) {
                options += '<option value="' +
                    data.d[i].Id + '">' + data.d[i].Title +
                    '</option>';
            }
            $('#secure').html(options);
        });
});

Using this technique, we can still use GETs to retrieve our JSON data, but the JSON is secure because it's never just an array: any arrays are wrapped in a d variable. We just have to be sure to access the values through that d variable.

This unconventional code can be confusing. We recommend sticking with the default behavior of retrieving JSON data with HTTP POST requests. If that becomes a problem, you can switch to this technique.


Serializing models for RouteValueDictionary and later model binding

(tl;dr : The bits, serializing to IDictionary<string, object> for RouteValueDictionary)

We have several spots in our ASP.NET MVC 2 app that need to serialize an object into a RouteValueDictionary so that, in a subsequent request, it can be re-inflated by the model binding infrastructure.

We wrote an extension method called ToHttpDictionary and used it like this:

<%= Url.Action("index", "something", new RouteValueDictionary(Model.ToHttpDictionary())) %>

or like this:

RedirectToAction("index", new RouteValueDictionary(form.ToHttpDictionary()))

(and no, we don't use those raw helper methods directly; we wrap them with a strongly typed, smarter one  = ) )

The goal is to transfer this example object, which we might accept as an action parameter:

var spec = new SearchSpecification
{
    FirstName = "John",
    LastName = "Doe"
};

Into the arguments for another request:

http://example.com/search?FirstName=John&LastName=Doe

That's a lot easier than creating a form and a bunch of hidden fields just to store parameters for a GET request, or resorting to that kludgy anonymous-type business.  It also makes working with RedirectResult extremely easy, skipping all the nasty TempData stuff you might be tempted to use.
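To make the idea concrete, a naive version that only handles small, flat objects could be as simple as the sketch below; this is not the library's implementation, just reflection over readable properties, and the ToSimpleHttpDictionary name is made up to avoid confusion with the real extension method shown later.

using System.Collections.Generic;
using System.Linq;

public static class NaiveUnbinding
{
    // Sketch only: flattens a simple model's readable, non-null properties
    // into a dictionary suitable for a RouteValueDictionary.
    public static IDictionary<string, object> ToSimpleHttpDictionary(this object model)
    {
        return model.GetType()
            .GetProperties()
            .Where(p => p.CanRead
                        && p.GetIndexParameters().Length == 0
                        && p.GetValue(model, null) != null)
            .ToDictionary(p => p.Name, p => p.GetValue(model, null));
    }
}

Run the SearchSpecification above through that and into a RouteValueDictionary and you get exactly the FirstName/LastName arguments shown.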

At first we just needed some small, flat objects to be "unbound", but once we needed prefixes, the ability to skip properties, support for custom unbinders, and array handling, we decided to pull it into its own library and put it on CodePlex.

public static IDictionary<string, object> ToHttpDictionary<T>
    (this T model, params Expression<Func<T, object>>[] propertiesToSkip)
{
    SpecificValueUnbinderFactory.CustomUnbinders =
        () => new ISpecificValueUnbinder[]
        {
            new EnumerationValueUnbinder(),
            new PersistentObjectValueUnbinder()
        };

    return new Unbinder().Unbind(x => x.Source(model).SkipProperties(propertiesToSkip));
}

Custom value unbinders are the analog to custom model binders.  For example, I've seen model binders that take the Id of an Entity from the request and hydrate a full Entity object from persistence.  In that case we need an unbinder (sorry, I really can't think of a better name for the concept) to serialize the Entity as the Id property.  That would look something like this:

public class PersistentObjectValueUnbinder : ISpecificValueUnbinder
{
    public string UnbindValue(object value)
    {
        return ((PersistentObject)value).Id.ToString();
    }

    public bool AppropriatelyUnbinds(object value)
    {
        return value is PersistentObject;
    }
}

And you'd plug it into the factory function as in the example above.

It's just a simple helper function, but the technique has been useful.
