Aaronontheweb

Hacking .NET and Startups

ASP.NET MVC4 Gotcha: Embedded Views and Razor Pre-Compilation

September 5, 2012 05:30 by Aaronontheweb in ASP.NET // Comments (0)

In the course of some of our work on MarkedUp, we discovered an interesting gotcha with MVC4, embedded views, and ASP.NET pre-compilation.

A little back-story:

One of the things we did as part of a major refactoring recently was to pull all of our email templates out of the main MarkedUp MVC4 project and stick them into their own independent assembly – we did this because we anticipated that these templates would have to be shared across multiple distinct web applications running behind our firewall.

We use ActionMailer to generate our text and HTML emails from Razor templates, and one of the things that broke with MVC4 was the RazorEngine project, which ActionMailer.StandAlone used to parse Razor templates in non-MVC4 assemblies.

So, what we did in this instance was embed our Razor views directly into the child assembly and wrote our own VirtualPathProvider to serve them from the MVC4 application – a standard practice for handling this sort of thing.
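
For reference, here's a minimal sketch of what such a provider looks like – the assembly layout and resource-naming convention below are illustrative, not our actual code:

using System.IO;
using System.Reflection;
using System.Web;
using System.Web.Hosting;

public class EmbeddedViewPathProvider : VirtualPathProvider
{
    //Assumption: the Razor templates are compiled into this assembly as embedded resources
    private static readonly Assembly ViewAssembly = typeof(EmbeddedViewPathProvider).Assembly;

    //Maps "~/Views/Emails/Welcome.cshtml" to a resource name like
    //"MyCompany.EmailTemplates.Views.Emails.Welcome.cshtml" (convention is illustrative)
    private static string GetResourceName(string virtualPath)
    {
        var appRelative = VirtualPathUtility.ToAppRelative(virtualPath).TrimStart('~', '/');
        return "MyCompany.EmailTemplates." + appRelative.Replace('/', '.');
    }

    private static bool IsEmbeddedView(string virtualPath)
    {
        return ViewAssembly.GetManifestResourceInfo(GetResourceName(virtualPath)) != null;
    }

    public override bool FileExists(string virtualPath)
    {
        return IsEmbeddedView(virtualPath) || base.FileExists(virtualPath);
    }

    public override VirtualFile GetFile(string virtualPath)
    {
        return IsEmbeddedView(virtualPath)
            ? new EmbeddedViewFile(virtualPath,
                ViewAssembly.GetManifestResourceStream(GetResourceName(virtualPath)))
            : base.GetFile(virtualPath);
    }

    private class EmbeddedViewFile : VirtualFile
    {
        private readonly Stream _stream;

        public EmbeddedViewFile(string virtualPath, Stream stream)
            : base(virtualPath)
        {
            _stream = stream;
        }

        public override Stream Open()
        {
            return _stream;
        }
    }
}

The provider gets registered at application startup, typically via HostingEnvironment.RegisterVirtualPathProvider in Global.asax.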

We started running our staging and development versions of our app this way without thinking twice about it.

Eventually, when we wanted to speed up our staging server's rendering performance on AppHarbor, we enabled ASP.NET pre-compilation. And since it was only our staging server, with just us on it, no one noticed that our email volume dropped to zero.

Until we started bringing aboard a small number of friends to help us test our service, which is when we noticed the error.

When you turn on ASP.NET pre-compilation, the underlying virtual paths change, and that changes the behavior of your custom VirtualPathProviders.

Hence, our custom VirtualPathProvider was no longer able to find our embedded views. Doh!




New Open Source Project: MVC.Utilities

August 14, 2011 07:33 by Aaronontheweb in ASP.NET, Open Source // Comments (0)

I announced this on Twitter late last week: I've open-sourced a number of common helpers and service interfaces that I use throughout all of my production ASP.NET MVC applications and wrapped them into a project I call MVC.Utilities.

MVC.Utilities has a number of helper classes for the following:

  1. Encryption;
  2. Caching;
  3. Authentication;
  4. Routing;
  5. Security for user-uploaded files;
  6. and display helpers for things like Hacker News-style datetimes.


If you want more details and examples of what exactly the library does, be sure to check out the MVC.Utilities Wiki on Github. All of these services are designed to be dependency-injected into ASP.NET MVC controllers.
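
Here's roughly what that pattern looks like in a controller – the service interface below is a stand-in, so check the wiki for the library's real names:

public class AccountController : Controller
{
    //Hypothetical MVC.Utilities service interface, for illustration only
    private readonly ICacheService _cache;

    //The DI container of your choice (Ninject, StructureMap, etc...) supplies the implementation
    public AccountController(ICacheService cache)
    {
        _cache = cache;
    }
}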

The library has a lot of room to expand and grow, and I encourage your ideas and contributions to the project! Just create a fork of MVC.Utilities on Github and send me a pull request; props to Wesley Tansey for already submitting a couple of patches!

Installing MVC.Utilities via NuGet

MVC.Utilities is available via NuGet – type in the following command via the Package Manager Console in Visual Studio to have it added to your project today:

Install-Package mvc-utilities




MongoDB vs. SQL Server 2008: A .NET Developer’s Perspective

June 30, 2011 16:01 by Aaronontheweb in ASP.NET, MongoDB, SQL Server // Comments (22)

One of the first projects I put together this year was Captain Obvious, a nifty little application that runs off of AppHarbor and ASP.NET MVC3. What made Captain Obvious special for me was that it was my first time using something other than a relational database1 in production – I chose MongoDB because it stands out to me as a lightweight, easy-to-work-with store that's a better fit for most CRUD applications. Since then I've gone on to build other projects which depend on Mongo.

What I’ve learned since is that MongoDB and SQL Server are tools that aren’t 100% interchangeable and are more situational than dogmatists make them out to be.

My goal in writing this is to help you judge these two technologies as options for your ASP.NET / WebMatrix / WCF applications.

Relational Data Models vs. Document Models

The key to using Mongo or SQL Server effectively is understanding how the underlying data model works and how it impacts your ability to read and write what you want, when you want, to and from the datastore. Below are illustrations of the relational (SQL) model versus the document model.

[Diagram: the relational (SQL) model versus the document model]

In a relational model all of your information is expressed in the form of tables, all of which contain keys and some of which contain foreign keys to other tables. All of the information you read from and write to the database is expressed as either adding rows to these tables or combining their values based on some keys.

In a document model you have a relatively flat collection of items all identified by one primary key2, and instead of defining a relationship between two collections you simply embed one inside the other. So the relationship between the three tables in our relational model is expressed as a single, flat document in this model.
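
To make that concrete, here's a sketch of what such a flattened document might look like as a pair of POCOs (the names are illustrative):

public class Thing
{
    //The document's one primary key
    public ObjectId Id { get; set; }
    public string Name { get; set; }

    //What would be a foreign-keyed child table in the relational model
    //is simply embedded inside the parent document
    public List<Part> Parts { get; set; }
}

public class Part
{
    public string Label { get; set; }
    public int Quantity { get; set; }
}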

It is important to note here that there’s no schema that tells MongoDB what fields should or shouldn’t be expected in each document in the “Things” collection – in the relational universe each table is strongly, declaratively typed, and every single row inserted into a table must conform to all of the constraints imposed by the relational database management system (RDBMS.) Anything goes in Mongo (although this can cause major problems, as we will see later.)

What Do Relational and Document Models Have in Common?

So what do these two data models have in common?

  1. Both support the notion of primary keys and indexes, and MongoDB can support multiple indices if needed;
  2. Both support queries and have models for sorting / limiting results;
  3. Both support the ability to reference other documents / tables (“wha? in Mongo? Yup.”)
  4. Both have a strong typing system; and
  5. Both support aggregation operations like SUM(), COUNT(), etc…


Seems pretty straightforward, right? But what’s up with documents being able to refer to each other?

As it turns out, implementing a database where every possible piece of information of interest to users is all embedded inside of its own distinct document comes with some drawbacks, so the creators of MongoDb added support for DbReference and the ability to do cross-collection references as well. A necessary evil in the name of practicality.
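
In NoRM terms, a cross-collection reference looks roughly like this – a sketch assuming a separate Authors collection, with the referenced document fetched by hand (you'll see the Fetch call at work in the code further down):

public class Idea
{
    public ObjectId Id { get; set; }

    //Stored as a reference to a document in the Authors collection, not embedded
    public DbReference<Author> AuthorReference { get; set; }

    //Populated manually after fetching the referenced document
    public Author Author { get; set; }
}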

What’s Different between Relational and Document Models?

So what’s really different between the two models?

  1. Document models don’t have SQL – their query tools are extremely primitive in contrast (but the models are also much simpler and don't require sophisticated queries;)
  2. Fetching document references in the document model has to be done inside of separate queries3 whereas they can be done all in the same transaction in the relational model;
  3. Document models don’t have a schema – each document in a collection can have extra fields or fields with different type values, whereas all rows must conform to the same set of constraints in order to be inserted;
  4. In the document model types are associated with data upon assignment, rather than declared in advance;
  5. Document models have much more primitive tools for performing aggregations (edit: actually, not necessarily true for Mongo - it has built-in MapReduce which is powerful and sophisticated;) and
  6. Queries are defined on a per-collection basis in the document model, whereas a query in the relational model is an abstraction that simply refers to any number of related tables.


The picture that should be starting to form in your head of MongoDb vs. SQL Server at this point is a flat, simple system on one side and a multi-dimensional, rich system on the other. This is how I look at the two technologies in contrast to each other.

What’s Different about Developing .NET Apps Against Mongo and SQL Server 2008?

So at the end of the day, what's the bottom line for .NET developers who want to use Mongo or SQL Server in a web application? What are the REAL trade-offs?

Mongo IS the OR/M

The core strength of Mongo is in its document-view of data, and naturally this can be extended to a "POCO" view of data. Mongo clients like the NoRM Project in .NET will seem astonishingly familiar to experienced Fluent NHibernate users, and this is no accident - your POCO data models are simply serialized to BSON and saved in Mongo 1:1. No mappings required.

I'm going to show you two similar pieces of source code I am using in production - one using NoRM and MongoDb on CaptainObvious and the other using Dapper and SQL Azure on XAPFest:

NoRM:

public IdeaResultSet GetIdeasByAuthor(int authorID, int offset = 0, int count = 10)
{
    using (var db = Mongo.Create(ConnectionString))
    {
        var ideas = db.GetCollection<Idea>().AsQueryable()
            .Where(x => x.AuthorReference.Id == authorID)
            .OrderByDescending(x => x.DatePosted)
            .Skip(offset)
            .Take(count)
            .ToList();

        var totalIdeas = db.GetCollection<Idea>().AsQueryable()
            .Where(x => x.AuthorReference.Id == authorID)
            .Count();

        //Fetch the referenced authors before we serve up the list again
        foreach (var idea in ideas)
        {
            idea.Author = idea.AuthorReference.Fetch(() => db);
        }

        var resultSet = new IdeaResultSet
        {
            Ideas = ideas,
            PaginationValues = new PaginationTuple
            {
                MaxPages = PageCounterHelper.GetPageCount(totalIdeas, count),
                CurrentPage = PageCounterHelper.GetPageCount(offset, count)
            }
        };

        return resultSet;
    }
}

Dapper:

public IList<XfstApp> GetAppsByUser(string userName)
{
    using (var conn = new SqlConnection(ConnectionString))
    {
        try
        {
            conn.Open();

            var appResults = conn.Query(@"SELECT * FROM Apps
                INNER JOIN AppOwners ON AppOwners.AppName = Apps.AppName
                WHERE LOWER(AppOwners.UserName) = LOWER(@UserName)
                ORDER BY Apps.DateCreated ASC", new { UserName = userName });

            //Return whatever we were able to collect
            return appResults.Select(x => x.ToApp()).ToList();
        }
        catch (SqlException ex)
        {
            TraceError(ex);
            return new List<XfstApp>();
        }
        finally
        {
            //Close the connection when we're finished, regardless of what happened
            conn.Close();
        }
    }
}


Neither the amount of source code nor its nature is wholly different between these two technologies. What is drastically different is how I was thinking about the data when I was writing this code - I can save an instance of an Idea object to MongoDb on CaptainObvious without ever having created the collection first or defined a schema.

Whenever I want to look up an idea, I just pick one based off of a key value that I specify and I don't worry about any joins or anything (although I do have to load objects from the author collection if I need to display the author's name and contact info.)

In the SQL universe, I have to define my tables in advance and each time I want to extract an object, I have to think of it in terms of combined relationships between my tables and this requires a more thoughtful approach.

Mongo, in other words, lends itself to rapid application development, whereas SQL Server carries the innate friction of any schema-based system.

In Mongo, the DBMS Isn't There to Protect Your Data's Integrity

One of the major advantages of a schema-based DBMS is that if a calling application tries to insert something into a row that doesn't fit the schema, the operation always fails. In Mongo, this isn't true - you can have one record in a collection with extra fields or fields of an odd type, and it can totally screw up the BSON serializer when it tries to process the collection (depending upon how flexible the serializer is.)
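
A quick illustration of the kind of thing that bites you – both of these documents can happily live in the same collection:

//Both of these are legal members of the same "Things" collection:
//  { "_id" : 1, "Count" : 5 }
//  { "_id" : 2, "Count" : "five", "Extra" : true }

//A POCO like this will deserialize the first document just fine and
//choke (or silently misbehave, depending on the serializer) on the second:
public class Thing
{
    public int Id { get; set; }
    public int Count { get; set; }
}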

SQL users take this for granted, but when you have issues in Mongo along these lines they can be really frustrating to solve and difficult to debug.

In Mongo, Operations May Not Be Atomic

Operations are not atomic in Mongo by default, so all sorts of fun things can happen when you have multiple users changing properties on the same document. You can set an atomic flag to true, but even then operations still aren't really atomic (they're written from memory to disk in bulk.)

If you use Mongo and carry the same ACID assumptions that we learned on SQL Server, you might be in for a nasty surprise :p

Conclusion

Overall, the biggest difference between these two technologies is the model and how developers have to think about their data. Mongo is better suited to rapid application development, but in my opinion falls apart in scenarios where ACID-compliant systems are a must, like anything that goes anywhere near a financial transaction.

But, that's just my opinion :p


1typically I’ve only used SQL Server / MySQL in the past

2You can add indices to other fields in Mongo too, but that’s outside the scope of this article

3unless I am doing it wrong, which I may well be doing




How to Securely Verify and Validate Image Uploads in ASP.NET and ASP.NET MVC

One of the more interesting things I had to do as part of building XAPFest was handle bulk image uploads for screenshots for applications and user / app icons. Most of the challenges here are UI-centric ones (which I resolved using jQuery File-Upload) but the one security challenge that remains outstanding is ensuring that the content uploaded to your servers is safe for your users to consume.

Fortunately this problem isn't too hard to solve and doesn't require much code in C#.

Flawed Approaches to Verifying Image Uploads

Here's what I usually see when developers try to allow only web-friendly image uploads:

  1. File extension validation (i.e. only allow images with .png, .jp[e]g, and .gif to be uploaded) and
  2. MIME type validation.

So what's wrong with these techniques? The issue is that both the file extension and MIME type can be spoofed, so there's nothing stopping a determined hacker from taking a .js file, slapping an extra .png extension somewhere in the mix, and spoofing the MIME type.
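
For illustration, the typical (and insufficient) extension check looks something like this – a renamed file sails right through it:

using System.IO;
using System.Linq;

public static class NaiveUploadValidator
{
    private static readonly string[] AllowedExtensions = { ".png", ".jpg", ".jpeg", ".gif" };

    //Checks nothing but the file name - trivially defeated by renaming evil.js to evil.png
    public static bool HasImageExtension(string fileName)
    {
        return AllowedExtensions.Contains(Path.GetExtension(fileName).ToLowerInvariant());
    }
}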

Stronger Approach to Verifying Image Uploads: GDI+ Format Checking

Every file format has to follow a particular codec / byte order convention in order to be read and executed by software. This is as true for proprietary formats like .pptx as it is for .png and .gif.

You can use these codecs to your advantage and quickly tell if a file really is what it says it is - just check the contents of the file against the supported formats' codecs to see if the content fits any of those specifications.

Luckily GDI+ (System.Drawing.Imaging), the graphics engine which powers Windows, has some super-simple functions we can use to perform this validation. Here's a bit of source you can use to validate a file against PNG, JPEG, and GIF formats:

using System.Drawing.Imaging;
using System.IO;
using System.Drawing;

namespace XAPFest.Providers.Security
{
    /// <summary>
    /// Utility class used to validate the contents of uploaded files
    /// </summary>
    public static class FileUploadValidator
    {
        public static bool FileIsWebFriendlyImage(Stream stream)
        {
            try
            {
                //Read an image from the stream...
                var i = Image.FromStream(stream);

                //Move the pointer back to the beginning of the stream
                stream.Seek(0, SeekOrigin.Begin);

                return ImageFormat.Jpeg.Equals(i.RawFormat)
                    || ImageFormat.Png.Equals(i.RawFormat)
                    || ImageFormat.Gif.Equals(i.RawFormat);
            }
            catch
            {
                return false;
            }
        }

    }
}

All this code does is read the Stream object returned for each posted file into an Image object, and then verify that the image's raw format matches one of the three supported codecs1.

This source code has not been tested by security experts, so use it at your own risk.
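
For context, here's a minimal sketch of how the validator plugs into an upload action (the action and parameter names are illustrative):

[HttpPost]
public ActionResult Upload(HttpPostedFileBase screenshot)
{
    //InputStream hands us the raw contents of the posted file
    if (screenshot == null || !FileUploadValidator.FileIsWebFriendlyImage(screenshot.InputStream))
    {
        ModelState.AddModelError("screenshot", "Please upload a valid PNG, JPEG, or GIF image.");
        return View();
    }

    //The upload is a real web-friendly image - safe to persist it
    return RedirectToAction("Index");
}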

If you have any questions about how this code works or want to learn more, please drop me a line in the comments below or on Twitter.

Bonus: How Do I Make Sure Files Are below [X] Filesize?

Since I had this source code lying around anyway, I thought I would share it: 

public static bool FileIsWebFriendlyImage(Stream stream, long size)
{
    return stream.Length <= size && FileIsWebFriendlyImage(stream);
}

Super-simple, like I said, but it gets the job done. Express the maximum allowable size as a long and compare it against the length of the stream.



1The other important catch to note here is that I move the Stream's pointer back to the front of the stream, so it can be read again by the caller which passed the reference to this function.




How I Built CaptainObvio.us

I made a tiny splash on Hacker News a month ago when I asked for feedback on my newest side project, CaptainObvio.us – a simple portal for sharing ideas and soliciting feedback from a community of peers. The idea was popular and I’ve received a ton of feedback – I’ve implemented most of the Hacker News community’s suggestions but haven’t had the chance to do another round of customer development.

What I wanted to share in this blog post was some of the secret sauce I used for creating CaptainObvio.us – I originally created it mostly to learn MongoDB, and learned way more than that along the way.

Webstack: ASP.NET MVC3 on AppHarbor

I used ASP.NET MVC3 as my webstack of choice with AppHarbor as my hosting platform. ASP.NET MVC3 is a massive improvement over MVC2, and I took advantage of Razor syntax, the built-in support for DI (dependency injection) on controllers, and wrote a number of customized helpers to do things like create an action filter for Twitter @Anywhere.

AppHarbor has been a great experience to work with - I use Git for source control for most of my personal projects like this one, so deployments are a breeze on AppHarbor, but the other major reason I picked AppHarbor is that it shares the same Amazon AWS datacenter as MongoHQ - another [Thing]-as-a-Service that I used for hosting my MongoDB instance.

Data: MongoDB on MongoHQ

The original purpose of CaptainObvio.us was for me to learn MongoDB, a schemaless (NoSQL) document database written in C++ that is becoming all the rage in the Ruby on Rails universe. CaptainObvio.us is a good fit for Mongo given that the vast majority of its content consists of simple documents with a small amount of relational data for tying authors to ideas / comments and so forth.

I could not have gotten very far with MongoDB in C# were it not for the NoRM Project MongoDB drivers for C# - NoRM's drivers are much better than the default MongoDB drivers for C# and work similarly to LINQ-to-SQL (although not exactly.) It was a matter of hours for me to go from installing Mongo to having a functional site running with NoRM and ASP.NET MVC3.

Authentication: Originally Twitter @Anywhere; Sign-in-with-Twitter and Hammock Later

Twitter @Anywhere is a fairly new JavaScript-only framework for integrating Twitter into existing websites quickly and effortlessly - it automates all of the OAuth workflow, comes with tons of useful built-in widgets, and eliminates the need to write much (if any) plumbing needed to support Twitter integration.

This is all nice in theory, but if you're building a site like CaptainObvious where your users use Twitter to sign in and leave behind persistent data in your own data store, then this framework can really cause problems. I had to gut Twitter @Anywhere eventually because the post-authentication event hook would misfire on occasion due to issues on the client, and thus I would have an "authorized" user running around adding comments and voting despite no record of them existing in the database.

In order to resolve the issue, I dumped Twitter @Anywhere and went with traditional Sign-in-with-Twitter powered by Hammock, which works fine.

Final Thoughts

I'm going to continue working on CaptainObvio.us, although I put it off largely due to all of the work I had to do engineering XAPFest's online presence on Windows Azure. If the project taught me anything, it's the value of continuous integration environments like AppHarbor and the ease with which you can get something up and running quickly with MongoDB.

I'd highly recommend checking out AppHarbor, MongoHQ, and the NoRM drivers if you're a .NET developer who's new to Mongo and wants to learn how to use it. I guarantee you that you'll appreciate traditional .NET databases like SQL Server 2008 and SQL Azure a lot more after you've worked with Mongo, as it'll help you learn the strengths and weaknesses of each respective platform.




How to Create a Twitter @Anywhere ActionFilter in ASP.NET MVC

My newest project, Captain Obvious, got a fair amount of attention this week when it landed on the front page of Hacker News – one of the key features that makes the first version of Captain Obvious tick is Twitter @Anywhere integration.

Twitter @Anywhere is brand new and there isn’t much developer documentation for it – the way to think about Twitter @Anywhere is as a JavaScript platform that allows you to outsource user authentication and registration to Twitter’s servers, and in exchange you get less hassle but also less intimate access to your users’ accounts.

One of the key features to integrating Twitter @Anywhere users with your ASP.NET MVC site is reading the cookie that Twitter sets after users have authenticated – this cookie contains two parts:

  1. The Twitter user’s unique ID, an integer representing their unique account (because remember – Twitter users can change their handles!)
  2. A SHA1 hash of the Twitter user’s unique ID + your application’s consumer secret – you use this to verify that the cookie was set by Twitter, since they’re the only ones who know your consumer secret other than you.

Naturally, if you are going to use this cookie as your authentication mechanism, then you are going to need to write your own Authorize attribute that allows you to validate the contents of the Twitter @Anywhere cookie. Here’s the source code I use for doing that:


ValidTwitterAnywhereAttribute

/// <summary>
/// Controller action filter attribute which requires that
/// the request carry a valid Twitter @Anywhere identity cookie
/// </summary>
public class ValidTwitterAnywhereAttribute : AuthorizeAttribute
{
    protected override bool AuthorizeCore(HttpContextBase httpContext)
    {
        var twitterCookie = httpContext.Request.Cookies["twitter_anywhere_identity"];
        var consumerSecret = ConfigurationManager.AppSettings["TwitterConsumerSecret"];

        //No cookie means the user never authenticated with Twitter
        return twitterCookie != null
            && TwitterAuthHelper.VerifyTwitterUser(twitterCookie.Value, consumerSecret);
    }
}


Here’s the relevant source code from the TwitterAuthHelper that I use to actually validate the hash in the cookie:

TwitterAuthHelper


public static bool VerifyTwitterUser(string twitterCookieValue, string consumerSecret)
{
    //The cookie's format is "<user id>:<SHA1 hash of user id + consumer secret>"
    var results = twitterCookieValue.Split(':');
    var sha1 = new SHA1CryptoServiceProvider();
    var hash = BitConverter.ToString(sha1.ComputeHash(Encoding.UTF8.GetBytes(results[0] + consumerSecret)))
        .ToLower().Replace("-", "");

    return hash.Equals(results[1]);
}

I’m using this code in production and it works great – all you have to do to use it is decorate your controller methods with the [ValidTwitterAnywhere] attribute and it will behave just like Forms authentication if you have that enabled in your ASP.NET MVC app.




ASP.NET MVC3 / Razor: How to Get Just the Uri for an Action Method

March 15, 2011 17:43 by Aaronontheweb in ASP.NET // Comments (0)

I normally wouldn’t post something this small to my blog, but this issue bothered me so much when I was working on some Twitter @Anywhere + jQuery integration in ASP.NET MVC3 that I couldn’t help but share it.

Issue: You’re using ASP.NET MVC3 and want to be able to place a relative Uri for one of your ASP.NET MVC controller’s action methods in a block of JavaScript or anywhere else, and you want to be able to do it without having to parse it out of an Html.ActionLink output or anything else. What built-in helper method do you use?

Solution: The answer is that you use the Url.Action method, which yields a relative Uri, as you’d expect.

Observe the code below:

T("#login").connectButton({
	authComplete: function (user) {
		// triggered when auth completed successfully
		$.post('@Url.Action("AnywhereTest", "Auth")');
		location.reload();
	   
	}
});

And here’s the output to go along with it:

T("#login").connectButton({
	authComplete: function (user) {
		// triggered when auth completed successfully
		$.post('/auth/anywheretest');
		location.reload();
	   
	}
});




Getting Started with AppHarbor – Heroku for .NET

January 14, 2011 12:53 by Aaronontheweb in ASP.NET, Startup // Comments (5)

I’ve a lot of friends who are proficient Rails developers, many of whom have left .NET for Rails.

The one piece of consistent feedback that I hear back from them is that it’s the frictionless Ruby-on-Rails ecosystem that is so attractive; more so than the Ruby language or the Rails framework itself (although they like that too.)

Heroku, above all others, is hailed as a step forward in developer productivity and easy web application hosting and deployment.

Platform-as-a-Service (PaaS), which is what Heroku provides to Rails developers, is powerful because it eliminates much of the need to manage and maintain infrastructure. Instead of managing a number of virtual machines on a service like EC2, you manage a number of application instances or some other such abstraction. PaaS combined with a continuous build / deployment system is a powerful combination indeed and allows for unparalleled productivity for agile web developers and startups.

.NET developers have had PaaS available to them for a couple of years in the form of Windows Azure, but Azure is really meant to serve the needs of rapidly growing services and cloud applications, not brand new projects that have no users yet.

AppHarbor fills two needs that are unmet by Azure – it makes it easy (and currently, free) for .NET developers to have access to a Git-enabled continuous development environment, something which our friends on Rails have had for a long time, and it supports the sort of rapid build / test / deploy workflow that is common among agile groups and startups in particular.

This, in my opinion, makes AppHarbor the perfect starting place in the lifecycle for any new ASP.NET or WCF project.

Here’s how easy it is to get started with AppHarbor using their early beta interface:

Create a new ASP.NET project in Visual Studio


I’m using ASP.NET MVC3 here, which Microsoft just released to market yesterday. It’s great if you haven’t used it yet. You can download and install ASP.NET MVC3 here.

Initialize a new Git repository and commit the app


I use Git Extensions for Visual Studio to manage staging / commits, and it also includes the Git bash for Windows. I highly recommend it.

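From the Git bash, the whole step boils down to the usual ceremony (the commit message is yours to pick):

git init
git add .
git commit -m "Initial commit"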

Now you’re all set for your initial deployment on AppHarbor.

Create a new application on AppHarbor


Follow the instructions for adding AppHarbor as a remote to your Git repository

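Those instructions amount to a single command – the repository URL comes from your application page on AppHarbor (placeholder below):

git remote add appharbor YOUR_APPHARBOR_REPOSITORY_URL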

PUSH!

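Which is just:

git push appharbor master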

Check the build status on AppHarbor


and then check out the app itself: http://appharbor-demo.apphb.com/ 

If you want to see a more substantial project on AppHarbor, check out Geeky Reads.

That’s all there is to it – it’s a matter of seconds to push and deploy new builds, and thanks to Git’s commit model, it’s effortless to roll back to previous builds. Compare this experience to the pain of doing a traditional web-deploy on a shared host or pushing a solution onto Azure – this is much simpler and faster.

AppHarbor can also hook into a unit test project and use its pass / fail results to determine whether or not your build gets deployed, even if it compiled successfully.


Want an AppHarbor invite?

If you’re interested in trying out AppHarbor yourself, click here to create a new AppHarbor account courtesy of the special invite code on this link. The AppHarbor team is eager for early adopters and feedback, so by all means please try it out and share your thoughts and experiences with them.




How to Use Asynchronous Controllers in ASP.NET MVC2 & MVC3

January 6, 2011 12:01 by Aaronontheweb in ASP.NET // Comments (6)

The primary reason I added asynchronous methods to Quick and Dirty Feed Parser 0.3 was because I wanted to use QD Feed Parser in conjunction with asynchronous controllers in ASP.NET MVC3.

MSDN has some excellent documentation which explains the ins and outs of asynchronous controllers in ASP.NET MVC, yet there aren’t many good examples of how to use it online. Given this, I thought I would make one.

Asynchronous Controllers in Theory

Here’s how asynchronous controllers work:

[Diagram: how a request flows through an asynchronous controller]

And in case my visual isn’t clear:

  1. A user sends a request to some resource on your server, i.e. GET: /controller/action
  2. Since the action method is part of an asynchronous controller, the processing of the request is handed off to a CLR thread pool worker while the IIS thread bails and serves other requests. This is the key advantage of asynchronous controllers: your limited, precious IIS threads get to off-load long-running, blocking tasks to inexpensive CLR worker threads, freeing up said IIS threads to serve more requests while work is being done in the background.
  3. The CLR worker thread diligently works away while the IIS worker thread gets recycled.
  4. The CLR worker thread finishes its task, invokes the AsyncManager.Sync method, and passes back some piece of processed data to be returned to the end-user.
  5. The AsyncManager hitches a ride on the first available IIS thread and passes along the processed data from the CLR worker thread into a special action method which returns some sort of consumable data back to the User, such as a View or a JsonResult.

Asynchronous controllers are all about utilizing additional, cheap CLR threads to free up expensive IIS threads as often as possible. In any situation where you have a long-running IO-bound task, they’re a good way to improve the performance of your ASP.NET MVC application.

Asynchronous Controllers in Code

To create an asynchronous controller, all you have to do is inherit from the AsyncController class:

public class FeedController : AsyncController
{}

For every asynchronous action method you want to add to your controller, you have to actually supply two different methods:

// GET: /Feed/

public void FeedAsync(string feeduri, 
			int itemCount){ ... }

public JsonResult FeedCompleted(IFeed feed,
			int itemCount){ ... }
			

Both of these methods on the asynchronous controller support the same action method (“Feed”) and any request that goes to “Feed/Feed” in our ASP.NET MVC application will be served asynchronously.

Asynchronous Action Naming Conventions

Both methods have to follow these naming conventions:

  1. The first method is the worker method that actually performs the long-running task on a worker thread; it should return void; and it must be named [Action Name]Async.
  2. The second method is the output method which is joined back to an IIS worker thread by the AsyncManager; it must return a valid ActionResult derivative (View, RedirectToRouteResult, JsonResult, etc…); and it must be named [Action Name]Completed.

Full Asynchronous Action Methods

Here’s full source code from the asynchronous controller I used to build Geeky Reads, minus a bit of work I put in to use object caching:

public class FeedController : AsyncController
{
    protected IFeedFactory _feedfactory;

    public FeedController(IFeedFactory factory)
    {
        _feedfactory = factory;
    }

    // GET: /Feed/
    public void FeedAsync(string feeduri, int itemCount)
    {
        AsyncManager.OutstandingOperations.Increment();

        _feedfactory.BeginCreateFeed(new Uri(feeduri),
            async => AsyncManager.Sync(() =>
            {
                var feed = _feedfactory.EndCreateFeed(async);
                AsyncManager.Parameters["feed"] = feed;
                AsyncManager.Parameters["itemCount"] = itemCount;
                AsyncManager.OutstandingOperations.Decrement();
            }));
    }

    public JsonResult FeedCompleted(IFeed feed, int itemCount)
    {
        return Json(FeedSummarizer.SummarizeFeed(feed, itemCount),
            JsonRequestBehavior.AllowGet);
    }
}

Here’s what you should be looking at in this example:

AsyncManager.OutstandingOperations.Increment and .Decrement – the number of .Decrement operations must match the number of .Increment operations; otherwise, the AsyncManager will time out operations that are running too long and the completion method will never return an ActionResult to the end user.

The accounting is pretty simple: you call .Increment at the beginning of your long-running operation, and .Decrement once your operation is finished and the results have been passed into the AsyncManager.Parameters collection, which brings me to my next point.

AsyncManager Parameters and Argument Names for the Completion Method

Notice how the AsyncManager parameters “feed” and “itemCount” match the argument names for the FeedCompleted method – that’s not an accident. The AsyncManager binds its parameter collection to the FeedCompleted method arguments once the .Sync method completes, and this is what allows the asynchronous method results to be passed back to the consumer.

If you’d like to see the full source for this example, check it out on Github.




How-To: Remote Validation in ASP.NET MVC3

ASP.NET MVC3 has been a major boon to my productivity as a web developer since I started using it at the beginning of November – the new Razor view engine has been attracting most of the attention with this iteration of MVC, but one extremely sexy feature has gone unnoticed thus far: Remote validation.

Remote validation was one of the new features added in the November Release Candidate (RC) for MVC3 – I had a chance to demo it in front of the LA .NET User Group last night and it was a monster hit.

Example:

You’ve all seen remote validation before – ever see one of these when you’re signing up for a service like Twitter?

[Screenshot: Twitter’s signup form checking username availability as you type]

That’s remote validation at work. Twitter, Facebook et al make an AJAX call to a remote service hook that checks the database to see if the username is available as the user types it. It’s a major user experience improvement as they get that feedback instantly instead of having to wait until after they fill out the rest of the form and submit it.

In the past with ASP.NET MVC you had to write your own custom jQuery scripts on top of the jQuery validation engine to achieve this end-result; in ASP.NET MVC3 this is taken care of for you automatically!

Let me show you how it works:

1. Decorate a model class with the Remote attribute

Take the class you want to validate and decorate the attributes you need to remotely validate with the Remote attribute.

public class NewUser
{
    [Remote("UserNameExists", "Account", ErrorMessage = "Username is already taken.")]
    public string UserName { get; set; }

    [Remote("EmailExists", "Account", ErrorMessage = "An account with this email address already exists.")]
    public string EmailAddress { get; set; }

    public string Password { get; set; }
}

In both of these instances of the Remote attribute, I’ve passed the following arguments:

  • The name of an action method;
  • The name of the controller where the action method lives; and
  • A default error message (the ErrorMessage named parameter) should user input fail this validation challenge.

2. Implement an action method to support your Remote attribute

You will need to implement an action method that supports your Remote attribute. Add an action method which returns a JsonResult to the controller you named in your Remote attribute arguments. In this example I’ll need to add these action methods to the “Account” controller:

public JsonResult UserNameExists(string username)
{
    var user = _repository.GetUserByName(username.Trim());
    return user == null ? 
		Json(true, JsonRequestBehavior.AllowGet) : 
		Json(string.Format("{0} is not available.", username),
			JsonRequestBehavior.AllowGet);
}

public JsonResult EmailExists(string emailaddress)
{
    var user = _repository.GetUserByEmail(emailaddress.Trim());
    return user == null ?
		Json(true, JsonRequestBehavior.AllowGet) : 
		Json(
			string.Format("an account for address {0} already exists.",
			emailaddress), JsonRequestBehavior.AllowGet);
}


If the repository returns null, meaning that a user account with this particular user name or email address doesn’t already exist, then we simply return a JsonResult with a value of true and go off on our merry way. This prevents the jQuery validation library from raising a validation error.

If the action method returns false, then jQuery will raise a validation error and display the default error message provided in the Remote attribute arguments – if you didn’t provide a default error message yourself then the system will use an ambiguous default one.

If the action method returns a string, jQuery will raise a validation error and display the contents of the string, which is what I’m doing here in this example.

Small Gotcha – Naming Action Method Parameters

There’s one small gotcha that can be easy to miss – the name of the argument on your action method must match the name of the property on your model. If I changed the EmailExists signature to look like this:

public JsonResult EmailExists(string email);

Then ASP.NET would pass a null value to this method.

Case 1: parameter name matches the name of the model’s property

[Screenshot: the matching parameter is bound to the submitted email address]

Case 2: parameter name does not match the name of the model’s property

[Screenshot: the mismatched parameter is bound to null]

This is because jQuery takes the name of the property on the model and passes it as a querystring argument to your action method – here’s what your validation request looks like in Firebug:

[host]/account/emailexists?area=an%20account%20with%20this%20email%20address%20already%20exists.&EmailAddress=test%40test.com

The ASP.NET MVC model-binder isn’t all-knowing – it can’t tell that EmailAddress and email are the same thing, so it ultimately won’t bind an argument to your action method, which is why the null value is passed.

If you follow the convention of using a common name for your Remote validator action method arguments and your model properties, you won’t run into this issue.

UPDATE: Thanks to Rick Anderson for directing me to the much more extensive MSDN documentation on how to implement custom Remote validation in ASP.NET MVC3.





About

My name is Aaron, I'm an entrepreneur and a .NET developer who develops web, cloud, and mobile applications.

I left Microsoft recently to start my own company, MarkedUp - we provide analytics for desktop developers, focusing initially on Windows 8 developers.

You can find me on Twitter or on Github!
