Hacking .NET and Startups

College: Four Years Later

July 10, 2012 19:15 by Aaronontheweb in General // Tags: , // Comments (3)

This is intended for recent graduates who are finding themselves lost in the shuffle as they adjust to the real world, but has advice that is applicable to everyone. Your mileage may vary.

I graduated from Vanderbilt University with a B.S. in Computer Science in May of 2008. I graduated cum laude and was a member of Sigma Nu (a fraternity.) I rebuilt the student chapter of IEEE & ACM from scratch into a meaningful, attractive organization inside Vanderbilt’s engineering school, and did it alongside some really wonderful and amazing people. I helped rebuild Sigma Nu, lived in the house for a time, and met fantastic people who will be some of my lifelong friends.

The four years I spent in school were life-changing for me and completely reshaped me into a much more confident, balanced person than I was going in. I have few regrets.

The four years after I graduated have been disorienting – I’m not in a structured social environment anymore where everyone is new and eager to make friends. I have conference calls at nine in the morning. I need to renew my auto insurance. I have three weddings that I have to go to this summer and I’m 90% sure I won’t fit into my old pair of dress pants.

I’m fortunate that I live in a country where I don’t have to worry about clean water or getting my head cut off, but that first world self-assurance doesn’t make the transition to adult life any less jarring.

The most bothersome part about being an adult is a moment that occurs 1-3 years after you don the cap and gown.

Here, I’ll set it up: you got your first job or two and have busted your ass nonstop trying to build a life, income, and home for yourself. You haven’t gotten blackout drunk with your friends “because it’s Tuesday night” since graduation. You haven’t done any travel beyond seeing family and maybe a small trip here and there. You’ve grown apart from many of your college friends as you all went your separate ways, both geographically and professionally. You’ve settled into some routines that give you some sense of control over things, but you still aren’t really cooking for yourself or exercising enough (as your mother will tell you.)

With that picture in mind, here’s the moment that brings it all home: during the course of a regular day, maybe when you’re at work or just coming home, you rediscover a piece of your personal history directly from the time when you were at college – and you start thinking about what’s elapsed between then and now. And a wave of private humiliation starts to well up inside you… my life is a lot smaller than it used to be, isn’t it? Not exactly out there changing the world yet, am I? And my, what a boring person I’ve become!

It’s the moment where you realize that you’ve unknowingly started to cast a boring mold for your future – and you start to mourn all of the ambition and optimistic hopes you held when you were in college. You start to doubt the path you’re on and wonder, sometimes out loud, if you completely wasted the years of your life after college. You worry about sounding like an entitled, whiny bitch if you complain about it to anybody save your absolute closest friends, and thus deal with your torment in private.

Relax. Everyone goes through this – you’re not alone, and it’s scary for everyone too. It doesn’t mean you’ve screwed up or made poor choices, and there’s no point in worrying about that anyway since you only live once. Heed the moment for what it really is: your subconscious telling you “initiate adulthood, phase 2!”

Adjusting to adulthood isn’t easy – if it were, there wouldn’t be any coming of age movies and Workaholics wouldn’t be hilarious.

College doesn’t prepare you for the ambiguity of real life – it gives you some of the tools and weapons to figure that out yourself in an economy where people are valued based on knowledge (rather than the ability to wield a shovel.) It takes a few years to figure out “where the bathroom is,” if you’ll pardon the metaphor, before you can really go on to do the stuff that you were born to do.

The moment is a call to arms, crafted just for you. It means you’ve figured out what it takes to survive outside of the corner room in your parent’s house; you’ve got some money and some workplace experience; and you’ve a certainty that things are not where you want them to be. And you have no obligations to anyone or anything, save some manageable debt and a lease that expires in 7 months.

The moment means the time to make your life the way you imagined has come – don’t screw it up. Quit your job and start a company; go traveling; marry that girl you’ve been dating steadily for three years; take up surfing; whatever your dreams are, do what it takes to realize them.

This won’t be the last time you have a major moment of “oh shit what am I doing?” either – take them for what they are: indicators that you’re ready to advance and change. Go do what it takes to be interesting and optimistic again now that you’ve conquered the banalities of adulthood.

If you enjoyed this post, make sure you subscribe to my RSS feed!

Code Camp Talk: RavenDB vs MongoDB

June 27, 2012 06:19 by Aaronontheweb in MongoDB, RavenDB // Tags: , , // Comments (1)

This past weekend at SoCal Code Camp I presented a session along with my friend Nuri Halperin entitled “Battle of the NoSQL Databases: RavenDB vs. MongoDB.”

I represented the RavenDB team, having used it in production now for a couple of months (and ditched Mongo to do it.) I’ll blog more about the specifics of RavenDB and what makes it awesome at some point in the future, but in the meantime I wanted to post my slides here so you could see the bullet-by-bullet comparison between the databases.

We didn’t cover everything, but we did try to capture all of the high-level details:

NoSQL Shootout: RavenDB vs MongoDB
Update: Some errata that has been pointed out to me, courtesy of Itamar Syn-Hershko of the Hibernating Rhinos team: Raven actually uses BSON internally as well, and it has no auto-sharding support by design.


Managing Your Windows Azure Services from OS X, Linux, or Windows Using the Command Line Interface (CLI)

June 8, 2012 06:14 by Aaronontheweb in Azure, Node // Tags: , , // Comments (4)

A lot of exciting things were announced at today’s Meet Windows Azure event, and one of the things I wanted to share with you is how you can now use our cross-platform Windows Azure Command Line Interface (CLI), part of our Node SDK, to administer your hosted services, VMs, and websites all from the comfort of your favorite terminal.

For the purposes of this blog post I’m going to use iTerm on OS X as my terminal of choice and I’ve also run all of these commands off of the DOS / PowerShell terminals in Windows 7.

Installing the CLI

The Windows Azure CLI is written in Node.JS, so it’s cross-platform by design. To install it you need to have Node.JS and NPM (node package manager) installed.

Once you have both of those, you can just install the CLI using Git and NPM:

$ git clone git://github.com/WindowsAzure/azure-sdk-for-node.git

$ cd ./cli-tools

$ sudo npm install -g

It’s important to do a global install using the -g flag; that way the CLI will always be accessible regardless of what your current working directory is on the terminal.

You can also install the Windows Azure CLI directly via NPM itself:

$ sudo npm install azure -g

Connecting to your Windows Azure Account

With the tools installed, you can now start exploring the SDK. Simply type the following on your terminal to see the full list of commands available in the CLI:

$ azure

In order to effectively leverage any of these commands, you need to import your Windows Azure account settings into the CLI. You only need to do this once.


Running the azure account download command will open up the browser to a page on the Windows Azure portal which will download a .publishSettings file specific to your Azure account automatically. Once it’s downloaded, you need to import it using the azure account import command.


Once that’s finished, you’re good to go and can start using the CLI to administer your services!


Creating and Managing Windows Azure Websites

Now that we have our account credentials imported, we can start creating new websites, hosted services, or virtual machines.

If you’re a new Azure customer, you’ll probably want to start with Windows Azure websites – they’re the simplest to set up, cheapest to run, and easiest to deploy.

One small hitch: before you can use the CLI to create Azure websites, you need to create your first one via the new Windows Azure Management Portal. You’ll also need to set up Git publishing credentials via the web portal too.

I’m going to create a new website by using the azure site create command:


I used the --git flag to tell the Azure web portal “yes, I want to enable Git publishing for this website” which is one of the cool new features.

I can list all of my current Windows Azure websites using the azure site list command – if you have multiple accounts the CLI will list them per-account.


I can also manage my currently running sites, so for instance if I wanted to download my diagnostic logs for a specific site I could run azure site log download and grab the raw data for my performance logs.


You can see a full reference for the Windows Azure CLI here.

Working with Windows Azure VMs

The most sweeping change made in today’s announcements was the inclusion of Windows Azure Virtual Machines; even though it’s still in “preview mode,” you can manage it via the CLI if you have access to the preview (you can apply for Windows Azure Preview Features here.)

I’m going to create a new Linux VM using the CLI.

First thing I want to do is check the list of available VM images, which I can do by running the azure vm image list command.


So you can see the list of default images here – I’m going to use the CentOS image because I’m feeling all Linuxy today.


So I create a new CentOS VM in our US-West data center and I even set up a username and password for administering the box (although I forgot to pass the --ssh flag, which would have added SSH access to the box.)

The syntax for creating a new VM is azure vm create <dns-name> <image> <userName> [password].

If I want to check in on the status of this instance, I can just run the azure vm show <name> command to get the details.


Since I don’t want a bunch of people on the Internet hacking into my brand new VM, I’m going to go ahead and delete it via the CLI using the azure vm delete <name> command.


Note: when you pass the --blob-delete flag, the VM’s hard drive image is also deleted permanently from your storage account.

Working with Hosted Services

You can also manage our signature PaaS offering on Windows Azure: hosted services. I’m already running some of these on my personal Azure account, so I’m going to run the azure service list command to see a list of hosted services:


If I wanted to I could delete hosted services, add new management certificates, and so forth.

Wrapping Up

Our announcements yesterday changed the game with Azure: using the CLI, I now have a level of control and visibility over all of my VMs, websites, and services just a terminal away. This is a game-changer.

If you need a full reference for the Windows Azure CLI, click here.


How to Build a Real-Time Chat Service with Socket.IO, Express, and the Azure SDK–Part 1: Setting Up

April 16, 2012 13:56 by Aaronontheweb in Azure, Node // Tags: , , , // Comments (0)

This past weekend I ran a Node Bootcamp on behalf of Microsoft and in partnership with the fine folks at Cloud9 IDE – the goal of these camps is to help teach newbies Node.JS and to get some Node.JS-on-Azure business from attendees who have a good experience.

So I decided to build a sample Socket.IO application that leverages our platform and would give our attendees a base to work from when it came time for them to participate in the Node Bootcamp Hackathon. I’ve done a lot of work with Express on Node but had never really done much with Socket.IO, so I was curious to see how hard it was to pick up.

Total time to build and deploy this application from end-to-end, including learning Socket.IO: about 5 hours.

The Requirements

All I wanted to build was a basic chatroom – a simple version of JabbR or something of the sort. Below is a screenshot of the finished product, to give you an idea.


The chat room I wanted had these simple requirements:

  • All currently connected users would be displayed in a list alongside the chat room at all times;
  • Whenever a user connects or disconnects an alert will be displayed to all other members of the chat room and the participant list would be instantly updated;
  • A new user should always be able to see the 30 previous messages in the chat room, including messages from the server;
  • All new messages are always added to the bottom of the list, and the chat window always scrolls down to accommodate any new messages;
  • All users are required to have a registered chat handle; and
  • It had to be cross-browser friendly.

These are all pretty simple requirements for a single chat room; nothing too daunting here.
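To make one of these concrete, the 30-message history requirement boils down to a small ring buffer over the most recent messages. Here's a minimal sketch (my own illustration, not code from the finished app):

```javascript
// Minimal sketch of the chat history requirement: keep only the most
// recent 30 messages so new joiners can be caught up on connect.
var MAX_HISTORY = 30;
var history = [];

function addMessage(msg) {
  history.push(msg);
  if (history.length > MAX_HISTORY) {
    history.shift(); // drop the oldest message
  }
}

// Simulate 45 messages arriving
for (var i = 1; i <= 45; i++) {
  addMessage('message ' + i);
}

console.log(history.length); // 30
console.log(history[0]);     // 'message 16' – the oldest retained message
```

A new connection then just replays `history` to the client before joining it to the live stream.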

The Tools

So I decided that these technologies would be the best fit to pull this off:


  • Express makes it easy for me to handle custom routes, and its session system is the right tool to force users to sign in with proper user handles before entering the chat room. Express is the primary web framework for Node.JS and I use it in virtually all of my Node projects.
  • cookie-sessions is a Connect Middleware session state provider, and it’s one of the few that doesn’t require an explicit “session store” like a Redis or MongoDB database. Since I was trying to keep this project simple for new Node.JS developers I thought that this NPM package would be the right choice for tracking users’ handles throughout their chat sessions.
  • socket.io is, in my opinion, the industry standard solution for building real-time web applications with WebSockets. I picked socket.io because it’s ubiquitous, boasts the best cross-browser support of any real-time app framework, and plays nice with Express.
  • azure – for my logging requirements, Azure Table Storage made the most sense. I was able to store all of my messages in one table easily and I didn’t have to write much code to leverage it.
  • uuid – used for generating row keys for Azure Table Storage.


  • Foundation CSS – I am an idiot when it comes to CSS; I suck at it and will probably never be very good. Thankfully Foundation is so easy to use that I can fake client-side competence pretty well with it.
  • KnockoutJS – socket.io and Knockout are like chocolate and peanut butter: they were meant to be together. Being able to seamlessly update the chat room’s DOM whenever a new message arrived or a user connected / disconnected made building the front-end tremendously easier.
  • dateformat.js – a popular JavaScript library for applying intelligent format strings to timestamps and datetimes; I use it in virtually all of my projects (including ASP.NET MVC.)

The Design

The design of this application is straightforward.

  • Our Express server redirects users who are not cookie’d with a handle already to a sign-in page where they can get one; once they do have a cookie they are sent to the root document which runs the chat server.
  • socket.io handles all of the chat + user events across XHR given that we’re hosting the application inside of IIS, which doesn’t yet support the WebSockets protocol (next version!)
  • Knockout handles the client-side events from socket.io and updates the DOM accordingly.
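The first bullet of the design reduces to a simple routing rule. Expressed as a plain function (a hypothetical helper for illustration, not the actual Express middleware):

```javascript
// Hypothetical sketch of the sign-in rule: a session without a handle
// gets redirected to the sign-in page; otherwise serve the chat room.
function routeFor(session) {
  if (session && session.handle) {
    return '/';     // cookie'd user – send to the chat room
  }
  return '/signin'; // no handle yet – go get one
}

console.log(routeFor({ handle: 'aaron' })); // '/'
console.log(routeFor({}));                  // '/signin'
console.log(routeFor(undefined));           // '/signin'
```

In the real app this check lives in an Express route handler backed by the cookie-sessions middleware.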


Next in the series:

Part 2 – Setting up Express and session-handling;

Part 3 – Setting up socket.io

Part 4 – Integrating socket.io on the client-side with KnockoutJS

Part 5 – Using Azure Table Storage for persistent chat

Source Code:

If at any time in this series you want to see the source code to this application, visit my github repo for nodebootcamp-chat here or see it live in action at http://chat.nodebootcamp.com/


How to Do Business with Extremely Busy People

February 10, 2012 07:00 by Aaronontheweb in General // Tags: // Comments (1)

Big Picture

The bottom line when working with busy people is to preempt as much of the mental overhead of working with you as possible; all it really takes is some brevity and thoughtfulness on your part. If you form the eight behaviors I list below (and others I may be forgetting) into habits, you'll be much easier to work with and you'll get better results.

Define "Busy"

One of the transformational things my job at Microsoft has done for me is help me appreciate what it is like to be extremely busy and how hard it can be to work with other extremely busy people.

“Busy-ness” isn’t a measure of how much time someone spends working, although there’s typically a strong correlation; it’s really a measure of the total amount of concern a particular individual has to manage at any point in time. The busier you are, the greater the concern you are managing.

Each task / person / problem / thing you have to manage at any given time carries a non-zero amount of mental and emotional overhead – dates, consequences, pressures, stakeholders, key facts and figures, costs, opportunities, sentiments, and so forth. All of this takes effort to remember, recall, manage, and act upon.

A person who has to manage 1,000 small things is extremely busy; a person who has had a parent or close relative die recently is extremely busy. The number of items isn't what matters - it's the total sum of the mental and emotional overhead that drives busy-ness.

When you’re trying to do business with extremely busy people, you are effectively adding more stuff onto their already full plate. In order to effectively communicate and do business with them, you need to minimize the overhead of whatever it is you need said busy person to do for you.

So here are some ways you can make it easy for busy people to get back to you:

1. Always be the one to propose possible times for phone calls or meetings, and include more than one option.

When I ask to meet with the managing director of an accelerator or the CTO of an interesting startup (busy people), I always end my emails with a sentence that reads something like this: “do you have time for a quick phone call any time after 3:00pm on Tuesday or Wednesday?”

The effect here is subtle: by proposing times yourself, you give the busy person a small range of possible times to focus on, and the likelihood of getting a response back is in turn drastically higher than if you left the scheduling completely open. The proposed times will either work for the busy person or they won’t, and they can give you a simple yes/no answer in return.

If you leave the scheduling for an appointment totally wide open, you are essentially forcing the busy person to scan their entire calendar over the next couple of weeks and find a time themselves. This is overhead – busy people hate overhead, and they may defer responding to you indefinitely.

All of the other techniques I describe here are derivatives of the following rule:

Everyone likes being responsive, even the extremely busy. When you decrease the need for busy people to think when considering any business opportunity or engagement you might have, you increase the likelihood that they will get back to you.

2. Keep it brief.

If you want to do business with the extremely busy, make certain that they can understand what it is you want to do in a matter of seconds – not minutes. If you find yourself writing War and Peace emails, then you have failed.

Save your stories and background for when you talk in-person or over the phone – keep any requests you have in writing short and specific. One sentence for who you are and what you do. One sentence for what you need. One sentence for the value you can offer the busy person in return. One question on how to take the next steps. Done.

Anything beyond that and you’re using the wrong communication medium.

An even better technique than the Three Sentence Rule for emails is the EOM rule, where you fit the entire body (super short, obviously) of your email into the subject line and terminate it with [EOM] for “end of message.”

If you use the EOM rule, then busy people will read your entire message whether they want to or not, since they can see the entire body of the message in the subject line when they glance over their inbox.

3. Have a specific idea for what it is you want to do; articulate it clearly.

So let’s say you do a good job and manage to get a meeting with a busy person – what then? You should always have an objective for whatever it is you want from them, and you should make that objective as specific as possible.

The more specific your demands are of a busy person’s time, the less overhead for them (usually.)

When I meet with someone who asks me if I can help their startup get started with a Windows Phone 7 version of their application, that’s reasonably easy – if they ask me how to solve a specific engineering or design problem that they’ve run into, that’s even easier.

The amount of information I have to extract from the requestor in order to actually help them is drastically smaller in the latter example and thus I can move from words to action much more quickly.

Contrast this with some meetings I’ve had where the people who requested the meeting open with “how can you help me?” There are hundreds of different ways I can answer that question, but since the meeting requestor hasn’t provided me with any context as to what’s important to them I’m going to do what any other busy person would do and go down the path of least resistance.

That path does not always lead to the results that either party wants, so eliminate the busy person’s need to craft your engagement plan for you and make the path of least resistance the one that ends with your goals met.

4. Always include the contact information or address of meeting place in the calendar appointment (and start using calendar appointments if you aren't already.)

This is a no-brainer. If you have scheduled a meeting with a busy person, do one of the following:

  • Specify who is calling whom and at what phone number;
  • Specify a bridge line if it’s a conference call; or
  • Specify the actual address of the place you’re meeting for in-person meetings.

If a busy person can’t figure out how they’re supposed to contact you or meet with you, they might just push the meeting or not show up. Make it clear for everyone and take the extra 2 seconds to add a little specificity to the meeting request.

5. Always include your contact information in your email signature.

Particularly important if you are doing in-person meetings – stuff comes up and people might run late or might get lost, in which case they need to be able to get back to you.

Always include a phone number where you can be reached in your email signature so they can contact you in the event that something goes wrong and they run late or need help finding you. If they haven’t had time to save your contact information, they can at least quickly look up your last email conversation and grab your contact information that way.

6. Do not, under any circumstances, go “favor-shopping.”

A surefire way to never hear from a busy person ever again is to shop them for favors. This happens when one person tries to extract as much value out of a busy person’s time as possible by stuffing the meeting full of requests for different and often unrelated favors. 

I had a meeting where a person presented me with four unrelated projects and asked how I could help with each; I ultimately decided not to help with any of them, because it was clear that this person did not care what my interests were. It was all about their projects, not about building a mutual business relationship.

When you go favor shopping, you’re not offering any value in return – you’re openly using the other person and being a parasite in the process.

Favors don't come at a volume discount.

7. Be appropriately persistent.

Follow-up is good; busy people let stuff slide and can easily forget. Finding a way to stay on someone’s radar appropriately is necessary and good.

However, asking for read receipts for every single email you send is obnoxious and busy people will simply delete your messages.

In contrast, following up after 2-3 business days with a simple “just wanted to double check to see if you’re still available for lunch on these dates” is fine under most circumstances.

Here are my rules of thumb:

  • Time-sensitive business emails: 2-3 days for people I don’t know, 1 day for people I do. Obviously this varies depending on just how time sensitive the matter is.
  • Important but not urgent business emails: 1 business week.
  • Anything else: why are you bothering this person?

Email is often a crappy medium for doing business anyway – pick up the phone and call the busy person’s office if you aren’t able to get ahold of them via email.

8. Be flexible.

Life happens; meetings get moved; and things come up. Be flexible enough to take these things in stride. You’ll get a lot more business done this way and will be thanked for it.


How to Use the Azure npm Package Locally without the Azure Compute Emulator

February 6, 2012 05:28 by Aaronontheweb in Azure, Node // Tags: , , , // Comments (2)

One thing that is a little dicey about the Windows Azure SDK for Node1 is that by default it depends on being run inside of Azure itself or the compute emulator.

The Azure npm package looks for environment variables parsed from web.config and won’t find them if you run your node application via the node [entrypoint].js command line.

So why would you want to run your Node application outside the Azure emulator if you’re utilizing the Azure npm package? If you’re using Cloud9 to develop and deploy Node applications to Windows Azure, then that’s one reason.

Another is that IIS eats any error messages your Node application throws by default2, and on some occasions errors don’t get logged to server.js/logs/[n].txt, so occasionally you have to debug by running the stand-alone node server, where you get verbose error messages on the console.

To work around this, you can set your tableClient / storageClient / queueClient object to target a specific account directly.

//Development storage (emulator)
var tableClient = azure.createTableService(
    ServiceClient.DEVSTORE_STORAGE_ACCOUNT,
    ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY,
    ServiceClient.DEVSTORE_TABLE_HOST);

//Live account
var tableClient = azure.createTableService(
    'aaronnodedemo',
    'scary-looking-access-key',
    '[account].table.core.windows.net');

When you do this you lose the ability to use Azure .config transforms to switch between production and staging easily, but this can be a necessary evil for debugging tricky bugs ;)
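One way to keep both targets in one codebase is to branch on environment variables yourself. Here's a sketch under my own assumptions – the AZURE_STORAGE_ACCOUNT / AZURE_STORAGE_ACCESS_KEY variable names and the resolveStorageConfig helper are mine, not part of the SDK:

```javascript
// Hypothetical helper: resolve table storage settings from environment
// variables, falling back to the local dev-store defaults when absent.
var DEVSTORE_ACCOUNT = 'devstoreaccount1';   // well-known dev storage account name
var DEVSTORE_TABLE_HOST = '127.0.0.1:10002'; // local table storage endpoint

function resolveStorageConfig(env) {
  if (env.AZURE_STORAGE_ACCOUNT && env.AZURE_STORAGE_ACCESS_KEY) {
    return {
      account: env.AZURE_STORAGE_ACCOUNT,
      key: env.AZURE_STORAGE_ACCESS_KEY,
      host: env.AZURE_STORAGE_ACCOUNT + '.table.core.windows.net'
    };
  }
  return { account: DEVSTORE_ACCOUNT, key: null, host: DEVSTORE_TABLE_HOST };
}

// The resulting object maps onto azure.createTableService(account, key, host).
console.log(resolveStorageConfig({}).account); // 'devstoreaccount1'
console.log(resolveStorageConfig({
  AZURE_STORAGE_ACCOUNT: 'aaronnodedemo',
  AZURE_STORAGE_ACCESS_KEY: 'key'
}).host); // 'aaronnodedemo.table.core.windows.net'
```

This keeps the account switch out of source control while still working from a bare node command line.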

1“azure” is the name of the associated npm package

2I’m sure there’s a way to change that behavior in the IISnode configuration settings


Code Camp Talks: Intro to Node.JS and Building Web Apps with Express

February 2, 2012 17:15 by Aaronontheweb in Node, Azure // Tags: , , , // Comments (0)

This past weekend at SoCal Code Camp I gave two presentations back-to-back on Node.JS: “Intro to Node.JS” and “Building Web Apps with Express.”

I don’t have much to add on what I did at the sessions other than to mention just how surprised I was at how enthusiastic people were to see Microsoft involved with the Node effort and how eager everyone was to learn Node. I was thoroughly impressed.

Below are links to my slides and code samples for both talks – enjoy!

Intro to Node.JS

Source code: Github or Cloud9



Building Web Apps with Express



Let me know if you have any questions!


How to Automatically Utilize Multicore Servers with Node on Windows Azure

January 17, 2012 11:31 by Aaronontheweb in Azure, Azure, Node, Node // Tags: , , , // Comments (0)

One major advantage of developing Node applications for Windows Azure is the ability to have your Node apps managed directly by IIS via iisnode.

You can read more about the benefits of iisnode here, but I want to call out these two points specifically from the iisnode wiki:

Scalability on multi-core servers. Since node.exe is a single threaded process, it only scales to one CPU core. The iisnode module allows creation of multiple node.exe processes per application and load balances the HTTP traffic between them, therefore enabling full utilization of a server’s CPU capacity without requiring additional infrastructure code from an application developer.

Process management. The iisnode module takes care of lifetime management of node.exe processes making it simple to improve overall reliability. You don’t have to implement infrastructure to start, stop, and monitor the processes.

Scalability on multicore servers with Node typically requires developers to use the Node cluster module: you write code that spawns one node.exe process per available processor, write your own handlers for what to do in the event of a failed process, and so forth.

Update: It occurred to me after I initially published this that many developers may not understand the need for multiple node.exe processes. If you read through my “Intro to Node.JS for .NET Developers” you’ll get a better sense for how Node works under the hood – the gist of it is that Node handles all incoming HTTP requests on a single thread, and thus can only utilize a single core at any given time regardless of the number of available processors. Having multiple Node.exe processes handling HTTP requests allows for multiple event loop threads, all of which can be run in parallel across different processors. That’s why this technique is important for multicore systems.

IIS and iisnode can take care of this for you automatically without you having to write any code (a good thing,) and on Windows Azure you can automate this via a startup task (fork the source on Github:)

if "%EMULATED%"=="true" exit /b 0

REM Count the total number of available processors on this system
powershell -c "exit [System.Environment]::ProcessorCount"

REM Set the default number of processes for our app pools in IIS equal to the number of available processors
%windir%\system32\inetsrv\appcmd set config -section:applicationPools -applicationPoolDefaults.processModel.maxProcesses:%ERRORLEVEL%

This startup task automatically determines the number of available processors on the CPU and tells IIS to set the number of worker processes in your Node application pools to use one process-per-core, thus allowing your Node applications to take advantage of every core on the system.

Adding Startup Tasks to Node Projects

If you want to use this startup task in your Node project, follow these steps:


  • The last thing you need to do is include the startup task in your ServiceDefinition.csdef file, located in the root of your Node Azure service. The Startup section of the file should look like this:

      <Startup>
        <!-- Included by default; installs everything you need to run Node on Azure -->
        <Task commandLine="setup_web.cmd" executionContext="elevated">
          <Environment>
            <Variable name="EMULATED">
              <RoleInstanceValue xpath="/RoleEnvironment/Deployment/@emulated" />
            </Variable>
          </Environment>
        </Task>
        <Task commandLine="setMaxProcessesToAvailableProcessors.cmd" executionContext="elevated">
          <Environment>
            <Variable name="EMULATED">
              <RoleInstanceValue xpath="/RoleEnvironment/Deployment/@emulated" />
            </Variable>
          </Environment>
        </Task>
      </Startup>
    Once all of this is set up, go ahead and deploy your Node service to Azure and it will be able to take advantage of all of the cores on its VMs. The script scales dynamically with the size of your role instance, so there is no need to alter it.

To verify that the script worked:

  • RDP into one of your Node + Azure instances;
  • Go to IIS Manager from the Windows Menu;
  • Go to Application Pools;
  • Right click on the Application Pool for your Node application and select Advanced Settings – your application pool will be the one that has Applications = 1 on the far right of the table; and lastly
  • Scroll down to Maximum Worker Processes and check the value – in the screenshot below I’m running on a pair of Medium Azure instances, which have two cores each and thus two processes.



And voila, you’re done.

Let me know if you have any questions!

If you enjoyed this post, make sure you subscribe to my RSS feed!

Troubleshooting “500 Internal Server Errors” in the Windows Azure Emulator when Working with Node.JS

January 13, 2012 17:12 by Aaronontheweb in Azure, Node // Tags: , , , // Comments (7)

On my primary development machine, where I have tweaked and prodded IIS multiple times for many projects over the past couple of years, I get the following 500 – Internal Server Error message when I try to fire up even the simplest “Hello World” Node.JS project included in the default template for each Windows Azure Node Web Role:


However, when I push the default “Hello World” Node.JS app to production on Windows Azure, I have no issues whatsoever! So what’s the problem?


The issue is that the Windows Azure emulator:

  • Creates integrated-mode AppPools for your Node roles in IIS (hosted in iisnode) – a mode intended for ASP.NET applications that take advantage of the HTTP request-processing pipeline in IIS 7+; and
  • Runs the AppPools under an administrative process.

Under these settings, Windows thinks that each node.exe process is trying to access the integrated IIS pipeline (which only .NET applications can currently do) under the process identity of the Windows Azure emulator (which has administrative rights) – a combination that is not supported. The same issue comes up with other CGI / non-.NET applications run inside IIS, not just Node.

Here are a few ways to fix the problem:

Fix #1: Set Identity.Impersonation=False in <system.web> in your web.config file

In this instance, simply change the web.config file in the root of your Node.JS application directory to set Identity.Impersonation=False, like this:

	<identity impersonate="false"/>

Effectively this means your Node role doesn’t assume the identity of the Windows Azure emulator (which runs in a process with administrative rights); instead it runs under a local system process identity and no longer raises the error.

Remember that web.cloud.config affects your live settings on Windows Azure; you want to change just web.config which affects only your local settings when you run in the Windows Azure emulator.

Fix #2: Set validateIntegratedModeConfiguration=False in <system.webServer> in your web.config file

Another alternative is to simply tell IIS to suppress the error altogether, which you can do by editing the system.webServer section of your web.config file:

<validation validateIntegratedModeConfiguration="false" />
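For context, that element lives inside the <system.webServer> section of the same web.config – a minimal sketch of the relevant portion:

```xml
<configuration>
  <system.webServer>
    <!-- suppress the integrated-mode configuration error -->
    <validation validateIntegratedModeConfiguration="false" />
  </system.webServer>
</configuration>
```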

Fix #3: Change the AppPool to Classic Mode

The final and preferred method is to change the AppPool to Classic Mode, which shuts off the application’s access to the integrated IIS processing pipeline regardless of process identity. You can do this via startup task:

%systemroot%\system32\inetsrv\APPCMD.EXE set app "Default Web Site/" /applicationPool:"Classic .NET AppPool"

Although this might be the most kosher method of fixing the issue in IIS, it also happens to be the most awkward to implement on Azure. I’d recommend using approach #1 or #2 instead given that this issue affects the local emulator only.


Node.JS on Windows Azure Part 1: Setting Up Your Environment

January 9, 2012 19:14 by Aaronontheweb in Azure, Node // Tags: , , , // Comments (1)

Following Microsoft’s announcements regarding first-class Node.JS support on Windows Azure, I thought it would be helpful to walk newbie Node.JS and Windows Azure developers through the process of getting their Node.JS development environment set up, building their first simple Node.JS application, and then learning how to take advantage of popular NPM packages like Express to build actual web applications on Windows Azure.

If you need a primer on how Node.JS works and what Node is useful for, go ahead and check out my post from last week: “Intro to Node.JS for .NET Developers.”

Setting Up Your Node.JS + Windows Azure Environment

First, a big disclaimer: all of the Windows Azure deployment and packaging components require Windows in order to work. You will not be able to deploy your application to Windows Azure without a Windows Vista / 7 box.

That being said, the rest of the setup process is straightforward and all of the tools are free (with the exception of one possible text editor I am going to recommend.)

Step 1 – Install the Node.JS Tools for Windows Azure via Web PI

The first thing you need to do is install the Node.JS tools for Windows Azure via Web Platform Installer – click here to install the tools.


This will install:

  • Node.JS for Windows;
  • Node Package Manager (NPM) for Windows; and
  • Windows Azure Powershell for Node.

If you already have a previous version of Node.JS for Windows installed (anything before the latest version, currently v0.6.6), you should uninstall it before running Web PI.

If for whatever reason you need to install Node.JS for Windows manually, you can do it via NodeJS.org.

Once the installation is finished, you should see the following item appear on your Windows bar:


Click on it and you’ll see a PowerShell command window fire up – this command window supports a number of Node + Azure-specific commands you can use for creating new roles and services.


We’ll get around to playing with PowerShell for Node a little later – next we need to set up your Windows Azure account.

Step 2 – Get a Windows Azure Account

In order to take advantage of Node.JS for Windows Azure you’re going to need to have… a Windows Azure account.

If you’re looking for a way to try Windows Azure for free, I recommend signing up for the three-month Windows Azure free trial.

If you’re in a startup and want access to some longer-term Windows Azure benefits, sign up for BizSpark, activate your BizSpark Windows Azure benefits, and get access to some more substantial Windows Azure benefits.

Step 3 – Install a Text Editor with JavaScript Syntax Highlighting

When you step into the world of Node.JS, you’re no longer in a universe where Visual Studio can really help you. Thus, we need to pick up a text editor to use in our Node.JS development.

I recommend that you pick one of the following:

  • Notepad++ – JavaScript syntax highlighting support, lightweight, simple formatting tools, and has a decent number of plug-ins. Cost: free.
  • Sublime Text – this is the community favorite for doing Node.JS; it’s cross-platform so it works on OS X, Linux, and Windows. Cost: free trial, but $59 for a single seat.

Install one of these two text editors and start playing around with it… You’ll get used to it.

Step 4 – Create Your First Node.JS + Azure Project

Now that you have your development environment all set up, it’s time to create your first Node.JS on Azure project. The official Node.JS for Windows Azure guide will walk you through a lot of the same steps in more detail if you wish to use that instead.

Inside the PowerShell for Node.JS command window, do the following:

  • Create a directory for saving all of your Node.JS projects – if you want to use “C:\Repositories\Node\” then you can just type mkdir C:\Repositories\Node inside of PowerShell and it will create the directory for you.
  • Change the current working directory to C:\Repositories\Node – type cd C:\Repositories\Node inside the PowerShell window.
  • Create a new Windows Azure service by typing New-AzureService [projectname] – this will automatically create a new [projectname] subdirectory and set it to the current working directory.
  • Create a new Node.JS Web Role (an Azure role that serves HTTP requests from within IIS) by typing Add-AzureNodeWebRole [rolename] – this will automatically create a set of folders under the [projectname]\[rolename] directory which contain everything you need to run your first Node.JS + Azure project.
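Put together, the whole sequence in the PowerShell for Node window looks something like this (MyNodeApp and MyWebRole are placeholder names of my own choosing):

```powershell
mkdir C:\Repositories\Node
cd C:\Repositories\Node
New-AzureService MyNodeApp
Add-AzureNodeWebRole MyWebRole
```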

Here’s what the output looks like in the command window:


Step 5 – Fire Up Your Node.JS Project in the Emulator

Just to make sure that our simple “Hello World” Node.JS project runs properly, we’re going to run it in the emulator before we do anything else with it.

Simply type Start-AzureEmulator -launch in the command line, and you’ll see a page like the one below:


Step 6 – Download and Import Your Windows Azure Publication Settings

In order to actually publish to Windows Azure, you have to download your .publishSettings file from the Windows Azure portal. Luckily, you can just do this using the Get-AzurePublishSettings command in the PowerShell window.

This will launch a new web browser window to this page on the Windows Azure portal, which will have you login to your Azure account and will then automatically ask you where you want to save your Windows Azure publication settings.


Save your Windows Azure publication settings to C:\Repositories\Node\[projectname].


Finally, you just need to import your publication settings in the Node.JS PowerShell Window like this:

Import-AzurePublishSettings C:\Repositories\Node\[projectname]\[file-you-downloaded].publishSettings

You should see a message like this in the command window when the operation succeeds:


Step 7 – Publish (and then Take Down) Your “Hello World” Node.JS Project to Windows Azure

The last step before we start diving into Node.JS is getting some experience publishing Node.JS projects to Windows Azure – now that we’ve imported our .publishSettings file we can easily publish our little “Hello World” example directly to the Windows Azure account we set up earlier using the following command in the PowerShell terminal:

Publish-AzureService -name [NewServiceName] -location "North Central US" -launch

This will automatically publish the Windows Azure service in your current working directory (C:\Repositories\Node\[projectname]) to a new hosted service in the North Central United States Azure data center called “NewServiceName,” with one small instance of the [rolename] Node.JS web role running. The -launch flag will automatically open a browser window to your service once it finishes deploying.

Here’s what the output window will look like in PowerShell upon a successful deployment:


And you’ll see this in your browser window once the deployment is complete:


If you’re using the Windows Azure free trial or a BizSpark Windows Azure account, you can leave up your Node.JS “Hello World” project if you wish as you’re not going to incur a bill.

However, it’s generally a good practice to take down Windows Azure instances that aren’t doing actual work, because in the future you will be billed for the amount of time the instance is deployed… To take down an Azure deployment, we can either delete it through the Windows Azure management portal or use the following command in the Node.JS PowerShell:


That’s it for this tutorial – the next lesson will show you how to get your hands dirty with the Node Package Manager and Express.JS for doing MVC-style websites in Node!




My name is Aaron, I'm an entrepreneur and a .NET developer who develops web, cloud, and mobile applications.

I left Microsoft recently to start my own company, MarkedUp - we provide analytics for desktop developers, focusing initially on Windows 8 developers.

You can find me on Twitter or on Github!
