
Tradeoffs in High Performance Software

July 15, 2014 15:56 by Aaronontheweb in .NET, Open Source

I’ve spent the past week tracking down an absolutely brutal bug inside Akka.NET: sometimes the CPU utilization of the system would randomly jump from 10% to 100% and stay pegged like that until the process was recycled. No exceptions were thrown, and memory / network / disk usage all remained constant (until I added monitoring).

I couldn’t reproduce this CPU spike at all locally, nor could I determine the root cause. No one else had reported this issue on the Akka.NET issue list yet, probably because the product is still young and MarkedUp In-app Marketing is the only major commercial deployment of it that I know of (hopefully that will change!)

Ultimately I had to hook up StatsD to MarkedUp’s production Akka.NET deployments to figure out what was going on1 – using that plus a DebugDiag dump, I was able to isolate the problem down to a deadlock.

I didn’t have a meaningful stack trace for it because the deadlock occurred inside native CLR code, so I had to read the public source for .NET 4.5 to find the cause: in an edge case, a garbage-collected Helios IFiber handle could accidentally make a BlockingCollection.CompleteAdding() call on the Fiber’s underlying TaskScheduler (possibly leaked to other Fibers) via the IFiber.Dispose() method, which would gum up the works for any network connection using that Fiber to dispatch inbound or outbound requests.
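Here’s the hazard pattern in miniature – a distilled sketch, not Helios’ actual code – showing how disposing one handle to a shared work queue can poison every other consumer of that queue:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class SharedQueueFiber : IDisposable
{
    private readonly BlockingCollection<Action> _queue;
    public SharedQueueFiber(BlockingCollection<Action> queue) { _queue = queue; }

    public void Enqueue(Action work) { _queue.Add(work); }

    // In the real bug this ran when a dead handle was garbage collected
    public void Dispose() { _queue.CompleteAdding(); } // poisons every fiber sharing _queue
}

class Program
{
    static void Main()
    {
        var shared = new BlockingCollection<Action>();
        Task.Run(() => { foreach (var work in shared.GetConsumingEnumerable()) work(); });

        var fiberA = new SharedQueueFiber(shared);
        var fiberB = new SharedQueueFiber(shared);

        fiberA.Dispose();          // e.g. triggered by GC of a handle nobody uses anymore
        fiberB.Enqueue(() => { }); // throws InvalidOperationException – the shared queue is done
    }
}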

Took a week to find that bug and a few hours to fix it. What a bitch.

Before I pushed any of those bits to master, however, I decided to run Visual Studio’s performance analysis toolset and test the impact of my changes – a mix of CPU performance and memory consumption testing. This revealed some interesting lessons…

Tradeoff #1: Outsourcing work to the OS vs. doing it yourself

First, the reason we had this performance issue to begin with: Helios is C# socket-networking middleware designed for reacting quickly to large volumes of data. Inside a Helios networking client or reactor, developers can specify the level of concurrency used for processing requests for that particular connection or connection group – you can run all of your I/O with maximum concurrency on top of the built-in thread pool, OR you can use a group of dedicated threads for your I/O. It was this latter type of Fiber that caused the problems we observed in production.

Why would you ever want to build and manage your own thread pool versus using the built-in one, i.e. manage the work yourself instead of outsourcing it to the OS?

In this case it’s so you can prevent the Helios IConnection object from saturating the general-purpose thread pool with thousands of parallel requests, possibly starving other parts of your application. Having a handful of dedicated threads working with the socket decreases the system’s overall multi-threading overhead (context-switching et al) and it also improves throughput by decreasing the amount of contention around the real bottleneck of the system: the I/O overhead of buffering messages onto or from the network.
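To make that concrete, here’s a minimal sketch of the dedicated-threads idea – an illustration in the spirit of Helios’ DedicatedThreadFiber, not its actual implementation. A fixed set of threads drains a single work queue, so socket work can never saturate the CLR’s general-purpose thread pool:

using System;
using System.Collections.Concurrent;
using System.Threading;

public sealed class DedicatedWorkerPool
{
    private readonly BlockingCollection<Action> _work = new BlockingCollection<Action>();

    public DedicatedWorkerPool(int threadCount)
    {
        for (var i = 0; i < threadCount; i++)
        {
            var thread = new Thread(() =>
            {
                // each dedicated thread blocks here until work arrives
                foreach (var job in _work.GetConsumingEnumerable())
                    job(); // production code would catch and log exceptions
            }) { IsBackground = true };
            thread.Start();
        }
    }

    public void Enqueue(Action job) { _work.Add(job); }
}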

In most applications, outsourcing your work to the OS is the right thing to do. OS and built-in methods are performance-optimized, thoroughly tested, and usually more reliable than anything you’ll write yourself.

However, framework and OS designers are not all-seeing and don’t offer developers a built-in tool for every job. In the case of Helios, the .NET CLR’s general-purpose thread-scheduling algorithm works by servicing Tasks in a FIFO queue, and it doesn’t discriminate by request source. If a Helios reactor produces 1000x more parallel requests per second than the rest of your application, then your application is going to be waiting a long time for queued Helios requests (which can include network I/O) to finish. This might compromise the responsiveness of the system and create a bad experience for the developer.

You can try working around this problem by changing the priority of the threads scheduled from your application, but at that point you’ve already broken the rule of outsourcing your work to the OS – you’re now in control of scheduling decisions the moment you do that.

Taking the ball back from the OS isn’t a bad idea in cases where the built-in solution wasn’t designed for your use case, which you determine by looking under the hood and studying the OS implementation.

Another good example of “doing the work yourself” – buffer management, such as the IBufferManager implementation found in Jon Skeet’s MiscUtil library or Netty’s ByteBuffer allocators. The idea in this case: if you have an application that creates lots of byte arrays for stream or buffer I/O, then you might be better off intelligently reusing existing buffers versus constantly allocating new ones. The primary goal of this technique is to reduce pressure on the garbage collector (a CPU-bound problem), which can become an issue particularly if your byte arrays are large.
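A toy version of the idea might look like this – a sketch, not the actual IBufferManager or Netty allocator code. Fixed-size buffers are rented and returned instead of allocated fresh for every I/O call:

using System.Collections.Concurrent;

public sealed class SimpleBufferPool
{
    private readonly ConcurrentBag<byte[]> _free = new ConcurrentBag<byte[]>();
    private readonly int _bufferSize;

    public SimpleBufferPool(int bufferSize) { _bufferSize = bufferSize; }

    public byte[] Rent()
    {
        byte[] buffer;
        return _free.TryTake(out buffer) ? buffer : new byte[_bufferSize];
    }

    public void Return(byte[] buffer)
    {
        if (buffer != null && buffer.Length == _bufferSize)
            _free.Add(buffer); // recycle instead of leaving it for the GC
    }
}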

Tradeoff #2: Asynchronous vs. Synchronous (Async != faster)

I have a love/hate relationship with the async keyword in C#; it is immensely useful in many situations and eliminates gobs of boilerplate callback spam.

However, I also hate it because async leads droves of .NET developers to believe that you can inherently make your application faster by wrapping a block of code inside a Task and slapping an await / async keyword in front of it.2 Asynchronous code and increased parallelism inside your application don’t inherently produce better results – in fact, they can often decrease the throughput of your application.

Parallelism and concurrency are tools that can improve the performance of your applications in specific contexts, not as a general rule of thumb. Have some work that you can perform while waiting for the result of an I/O-bound operation? Make the I/O call an asynchronous operation. Have some CPU-intensive work that you can distribute across multiple physical cores? Use N threads per core (where N is specific to your application.)
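For example, the I/O-bound case looks something like this – the URL and the CPU-bound helper below are purely illustrative:

using System.Net.Http;
using System.Threading.Tasks;

public static class ReportFetcher
{
    public static async Task<string> FetchWhileWorking(HttpClient client, string url)
    {
        Task<string> download = client.GetStringAsync(url); // start the I/O but don't wait yet
        string summary = BuildSummaryLocally();             // CPU-bound work overlaps the network wait
        string body = await download;                       // suspend only when the result is needed
        return summary + body;
    }

    private static string BuildSummaryLocally()
    {
        return "summary"; // stand-in for real CPU-bound work
    }
}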

Using async / parallelism comes at a price in the form of system overhead, increased complexity (synchronization, inter-thread communication, coordination, interrupts, etc…) and lots of new classes of bugs and exceptions not found in vanilla applications. These tradeoffs are totally worth it in the right contexts.

In an age where parallelism is more readily accessible to developers, it also becomes more frequently abused. As a result, one of the optimizations worth considering is making your code fully synchronous in performance-critical sections.

Here’s a simple change I made today inside my fork of NStatsD that cut transmission time by roughly 40%.

This code pushes a UDP datagram to a StatsD server via a simple C# socket – the original code looked like this:

foreach (var stat in sampledData.Keys)
{
   var stringToSend = string.Format("{0}{1}:{2}", prefix, stat, sampledData[stat]);
   var sendData = encoding.GetBytes(stringToSend);
   // async send – the OS must track state for the operation and invoke a callback when it completes
   client.BeginSend(sendData, sendData.Length, callback, null);
}

The code here uses one of the UdpClient’s asynchronous methods for sending a UDP datagram over the wire – it takes this code about 2.7 seconds to transmit 100,000 messages to a StatsD server hosted on a VM on my development machine. Having done a lot of low-level .NET socket work with Helios, I’m suspicious when I see an asynchronous send operation on a socket. So I changed the call from BeginSend to just Send.

foreach (var stat in sampledData.Keys)
{
   var stringToSend = string.Format("{0}{1}:{2}", prefix, stat, sampledData[stat]);
   var sendData = Encoding.ASCII.GetBytes(stringToSend);
   // synchronous send – blocks until the bytes are buffered onto the network, no async state to track
   client.Send(sendData, sendData.Length);
}

This code, using a fully synchronous send method, pushed 100,000 messages to the same StatsD server in 1.6 seconds – roughly a 40% reduction in transmission time. The reason why? BeginSend operations force the OS to maintain some state, make a callback when the operation is finished, and release those resources after the callback has been made.

The Send operation doesn’t have any of this overhead – sure, it blocks the caller until all of the bytes are buffered onto the network, but there’s no context switching or caching of objects in the async state. You can validate this by looking at the .NET source code for sockets and comparing the SendTo and BeginSendTo methods.

The point being: when you have thousands of write requests to a socket in a short period of time the real bottleneck is the socket itself. Concurrency isn’t going to magically make the socket write messages to the network more quickly.

Tradeoff #3: Memory vs. Throughput

I rewrote Helios’ DedicatedThreadFiber to use a BlockingCollection to dispatch pending work to a pool of dedicated worker threads instead of using a custom TaskFactory – a functionally equivalent but mechanically significant change in how this Fiber’s asynchronous processing works. After making a significant change like this, I test Helios under heavy load using the TimeService example that ships with the Helios source.

Each TimeServiceClient instance can generate up to 3,000 requests per second, and usually I’ll have 5-15 of them pound a single TimeServiceServer instance. On my first test run I noticed that the first TimeServiceClient instance’s memory usage shot up from about 8MB to 1.1GB in a span of 45 seconds. “Holy shit, how did I create such an awful memory leak?” I wondered.

Turns out that the issue wasn’t a memory leak at all – the TimeServiceClient could produce requests 2-3 orders of magnitude faster than its Fiber could process them (because, again, it was waiting on a socket), so the BlockingCollection backing the Fiber would grow out of control.

I decided to test something – if I capped the BlockingCollection’s maximum size and had it block the caller until space freed up, what would the impact on memory and performance be?

I initially capped the Fiber at 10,000 queued operations and was surprised by the results – throughput of the system was the same as it was before, but memory usage was only 5MB as opposed to 1.1GB. Sure, some of the callers would block while waiting for room to free up in the BlockingCollection, but the system was still operating at effectively the same speed it was before: the maximum speed at which the outbound socket can push messages onto the network.
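The change itself is tiny – the bounded-capacity overload of BlockingCollection does all of the work (the element type here is illustrative):

// Unbounded: the queue grows without limit while the socket lags behind.
var unbounded = new BlockingCollection<Action>();

// Bounded: Add() blocks the producer once 10,000 operations are queued,
// trading a brief stall for a flat memory profile.
var bounded = new BlockingCollection<Action>(boundedCapacity: 10000);
bounded.Add(() => { }); // blocks whenever the queue is already full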

I made a choice to limit memory consumption at the expense of “on-paper” throughput, but it wasn’t much of a tradeoff at all.

Hopefully a theme is starting to emerge here: your system is only as fast as your slowest component, whether it’s the file system, the network, or whatever. Consuming all of your system’s memory in order to queue pending I/O socket requests faster doesn’t improve the speed of that slow component.

You’re better off blocking for a picosecond and conserving memory until that slow resource becomes available.

However, imagine if I had taken a different approach and decided to use more memory to group related socket messages in batches before we sent them over the socket. That might result in an improvement in total message throughput at the expense of increased memory consumption – I’d have to write some code to ensure that the receiver on the other end of the socket could break up the batched messages again, but there’s potential upside there.
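A rough sketch of what that batching could look like – the length-prefix framing and the flush threshold are illustrative, not something Helios actually ships:

using System.Collections.Generic;
using System.IO;

public sealed class MessageBatcher
{
    private readonly List<byte[]> _pending = new List<byte[]>();
    private readonly int _batchSize;

    public MessageBatcher(int batchSize) { _batchSize = batchSize; }

    // Returns a framed batch once enough messages accumulate; null otherwise.
    public byte[] Add(byte[] message)
    {
        _pending.Add(message);
        if (_pending.Count < _batchSize) return null;

        using (var stream = new MemoryStream())
        using (var writer = new BinaryWriter(stream))
        {
            foreach (var msg in _pending)
            {
                writer.Write(msg.Length); // length prefix so the receiver can split the batch
                writer.Write(msg);
            }
            _pending.Clear();
            return stream.ToArray(); // one socket write instead of _batchSize writes
        }
    }
}

The length prefixes are what let the receiver on the other end break the batch back into individual messages.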

Tradeoff #4: Heavyweight Resources vs. Lightweight Resources

Lightweight resources are primitives, structs, objects, and so forth. Heavyweight objects are threads, processes, synchronization mechanisms, files, sockets, etc… anything that is usually allocated with a resource handle or implements the IDisposable interface is a good candidate for a “heavyweight” object.

After I fixed the issue I described in tradeoff #3, we had another memory-related performance problem inside our Executor object, used for executing operations inside a Fiber. When we terminate an Executor, we do it gracefully to give it some time to wrap up existing operations before we shut down.

Helios offers two tools for “timed operations” – a lightweight Deadline class and a heavyweight ScheduledValue class. Both of these classes have the same goal: force some value to change after a fixed amount of time.

The Deadline class accomplishes this by comparing its “due time” with DateTime.UtcNow – if DateTime.UtcNow is greater than due time then the Deadline is considered “overdue” and starts evaluating to false.

The ScheduledValue is a generic class backed by a built-in .NET Timer object – you tell it to change its value from A to B after some amount of time, which the Timer does automatically once it elapses.
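Rough sketches of the two approaches (the real Helios classes differ in their details):

using System;
using System.Threading;

// Lightweight: a pure clock comparison – nothing to dispose,
// but every poll calls DateTime.UtcNow.
public class Deadline
{
    private readonly DateTime _due;
    public Deadline(TimeSpan fromNow) { _due = DateTime.UtcNow + fromNow; }
    public bool IsOverdue { get { return DateTime.UtcNow > _due; } }
}

// Heavyweight: a Timer flips a flag once; polling afterward just reads a field.
public class ScheduledFlag : IDisposable
{
    private readonly Timer _timer;
    private volatile bool _value;

    public ScheduledFlag(TimeSpan after)
    {
        _timer = new Timer(delegate { _value = true; }, null, after, Timeout.InfiniteTimeSpan);
    }

    public bool Value { get { return _value; } } // no allocation, no lock

    public void Dispose() { _timer.Dispose(); }
}

The Timer costs you a handle up front, but polling Value afterward is just a field read.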

We use Deadlines most frequently inside Helios because they’re simple, immutable, and don’t pose any risk of a resource leak. Deadline is what we were using inside the Executor object’s graceful shutdown mechanism.

Every time a Fiber dequeues an operation for execution it checks the Executor.AcceptingJobs property (which is determined by the Executor’s internal Deadline) – if this expression evaluates to false, the Fiber stops processing its queue and releases its threads. In other words, this property is evaluated hundreds of thousands of times per second – and as my performance profiling in Visual Studio revealed, it generated a TON of GC pressure via all of its DateTime.UtcNow calls, which allocate a new DateTime structure every time.

I tried, in vain, to rewrite the Deadline to use a Stopwatch instead but wasn’t able to get reliable shutdown times with it. So I decided to use our heavyweight ScheduledValue class in lieu of Deadline – it uses a Timer under the hood but doesn’t allocate any memory or use any synchronization mechanisms when its value is polled.

This resulted in a significant drop in memory usage and garbage collection pressure for high-volume Fibers, and the ScheduledValue cleans up after itself nicely.

Even though the ScheduledValue requires more upfront resources than a Deadline, it was the superior performance tradeoff because it doesn’t have any additional overhead when its value is frequently polled by executing Fibers.

If the scenario was a little different and we had to allocate a large number of less-frequently polled objects, the Deadline would win because it wouldn’t require a large allocation of threads (used by the Timer) and heavyweight resources.

Wrapping Up

High-performance code isn’t black magic – in a general sense it comes down to the following:

  • Correctly identifying your bottlenecks;
  • Understanding the limitations and design considerations of built-in functions;
  • Identifying where work can effectively be broken into smaller parallelizable units; and
  • Making resource tradeoffs around those limitations, bottlenecks, and opportunities for beneficial parallelism.

One thing I have not talked about is the cost of developer time vs. application performance, which is the most important resource of all. I’ll save that for another post, but as a general rule of thumb: “don’t performance-optimize code until it becomes a business requirement.”

1I open sourced our Akka.NET + StatsD integration into a NuGet package – Akka.Monitoring

2I complained about this habit before in “10 Reasons Why You’re Failing to Realize Your Potential as a Developer.”




The Profound Weakness of the .NET OSS Ecosystem

July 3, 2014 13:58 by Aaronontheweb in .NET, Open Source

I’m in the process of writing up a lengthy set of blog posts for MarkedUp about the work that went into developing MarkedUp In-app Marketing, our real-time marketing automation and messaging solution for Windows desktop applications (and eventually WP8, WinRT, iOS, Android, Web, etc…)

During the course of bringing this product to market, I personally made a number of OSS contributions – and there’s more to come. I just finished a Murmur3 hash implementation this week (not yet OSS) and I’m starting work on a C# implementation of HyperLogLog. Both are essential ingredients to our future analytics projects at MarkedUp.

I really enjoy contributing to OSS, especially on Akka.NET. It’s a successful project thus far, attracting several strong contributors (Roger Alsing, the developer who originally started Akka.NET, is a genius and overall great person) and lots of support from big names in .NET like Don Syme and others.

But here’s the kicker – MarkedUp is a very small company; we’re operating a business that depends heavily on doing distributed computing in .NET; and I have lots of tight deadlines I have to hit in order to make sales happen.

I didn’t make any of these contributions because they made me feel all tingly and warm inside – I don’t have the free time for that any more, sadly. I did it because it was mission-critical to our business. So here’s my question: in the ~15 year history of .NET, no one built a reactive, server-side socket library that’s actively maintained? It’s 2014 for fuck’s sake.

Over the course of working on this product, I was constantly disappointed by a .NET OSS landscape littered with abandoned projects (looking at you, Kayak and Stact) and half-assed CodeProject articles that are more often factually wrong than not.

This week when I started work on HyperLogLog1 I fully expected that I was going to have to implement it myself. But I was incredulous when I couldn’t find a developer-friendly implementation of Murmur3 in C# already. The algorithm's been out for over three years – I could find a dozen decent implementations in Java / Scala and two barely-comprehensible implementations in C#, neither of which had an acceptable license for OSS anyway. I had an easier time porting the algorithm from the canonical C++ Murmur3 implementation than I did following the two C# examples I found online.

So I had to ask myself:

Do .NET developers really solve any hard problems?

We came to the conclusion early on that the right architecture for MarkedUp In-app Marketing depended on a successful implementation of the Actor model. We thought to ourselves “this is a programming model that was invented in the early 70s – surely there must be a decent actor framework in .NET we could leverage for this.”

We weren’t just wrong, we were fucking wrong. We found a graveyard of abandoned projects, some half-assed Microsoft Research projects that were totally unusable, and a bunch of conceptual blog posts. Nothing even remotely close to what we wanted: Akka, but for .NET.

So we spent two weeks evaluating migrating our entire back-end stack to Java – we already depend heavily on Cassandra and Hadoop, so being able to take advantage of first-party drivers for Cassandra in particular really appealed to us. However, we decided that the cost of migrating everything over to the JVM would be too expensive – so we went with a slightly less expensive option: porting Akka to .NET ourselves.

Why in the hell did it fall on one developer working at a startup in LA and one independent developer in Sweden to port one of the major, major cornerstones of distributed computing to .NET? Where the hell was Project Orleans (released by Microsoft AFTER Akka.NET shipped) five years ago?

Did none of the large .NET shops with thousands of developers ever need a real-time distributed system before? Or how about a high-performance TCP socket server? No? What the hell is going on?

.NET is the tool of choice for solving rote, internal-facing, client-oriented problems

There’s a handful of genuinely innovative OSS projects developed by .NET programmers – and they largely deal with problems that are narrow in scope. Parsing JSON, mapping POCOs, dependency injection, et cetera.

Projects like MassTransit – i.e. projects that address distributed computing – are rare in .NET. I can’t even find a driver for Storm in C# (which should be easy, considering that it uses Thrift).

The point I made four years ago this very day about why .NET adoption lags among startups is still true – .NET is the platform of choice for building line-of-business apps, not customer-facing products and services. That’s reflected strongly in its OSS ecosystem.

Need a great TCP client for Windows Phone? Looks like there are plenty of those on NuGet. Or a framework for exporting SQL Server Reporting Services to an internal website? A XAML framework for coalescing UI events? .NET OSS nails this.

The paramount technical challenge facing the majority of .NET developers today looks like building Web APIs that serve JSON over HTTP, judging from the BUILD 2014 sessions. Distributed computing, consistent hashing, high availability, data visualization, and reactive computing are concepts that are virtually absent from any conversation around .NET.

And this is why we have a .NET OSS ecosystem that isn’t capable of building and maintaining a socket server library, despite being the most popular development platform on Earth for nearly 10 years2.

Compare this to the Java ecosystem: virtually every major .NET project is a port of something originally invented for the JVM. I’m looking at you, NAnt, NUnit, NuGet (Maven), NHibernate, Lucene.NET, Helios, Akka.NET, and so on.

You know what the difference is? There’s a huge population of Java developers who roll hard in the paint and build + open source hard shit. The population of people who do this in .NET is minuscule.

We are capable of so much more than this

The tragedy in all of this is that .NET is capable of so much more than the boring line-of-business apps and CRUD websites Microsoft has staked its business on.

The shortcomings of .NET’s open source ecosystem are your fault and my fault, not Microsoft’s. Do not point the finger at them. Actually, maybe blame them for Windows Server’s consumer-app-hostile licensing model. That merits some blame.

But the truth is that it’s our own intellectual laziness that created a ghetto of our OSS ecosystem – who’s out there building a lightweight MMO in .NET? Minecraft did it with Java! How about a real-time messaging platform – no, not Jabbr. I mean something that connects millions of users, not dozens.

We don’t solve many hard problems – we build the same CRUD applications over and over again with the same SQL back-end, we don’t build innovative apps for Windows Phone or Windows 8, and there’s only a handful of kick-ass WPF developers out there building products for everyday consumers. Looking at you, Paint.NET – keep up the good work.

Don’t make another shitty, character-free monstrosity of a Windows Phone application – no, it’s not “new” because you can write it using WinJS and TypeScript instead of C#.

Do something mind-blowing instead – use Storm to make a real-time recommendation app for solving the paradox of choice whenever I walk into a 7-11 and try to figure out what to eat.

There are brilliant developers like the Akka.NET contributors, the Mono team, Chris Patterson, and others who work on solving hard problems with .NET. YOU CAN BE ONE OF THEM.

You don’t need Ruby, Node.JS, or even Java to build amazing products. You can do it in .NET. Try a new way of thinking and use the Actor model in Akka.NET or the EventBroker in Helios. Figure out how to fire up a Linux VM and give Cassandra or Storm or Redis a try. Learn how to work with raw byte streams and binary content. Learn how to work with raw video and audio streams.

The only way to clean up our acts and to make an ecosystem worth keeping is to stop treating our tools like shit shovels and start using them to lay alabaster and marble instead. .NET is more than capable of building scalable, performant, interesting, consumer-facing software – and it falls on us, individual and independent developers, to set the example.


1here’s the only known HyperLogLog implementation in C#, which appears factually correct but out of date and not production-usable

2no, not going to bother citing a source for this. Get over it.




Instant File Server: turn any directory into a webserver with a simple command

August 14, 2013 03:46 by Aaronontheweb in Node, Open Source

Our engineering team has been neck-deep in configuration hell lately. Editing 2000-line Solr configuration files, trying to get Apache Oozie integrated into DataStax Enterprise, Cassandra 1.2 upgrades, and more – and the one thing in common with all of these tasks is the prevalence of enormous XML configuration files.

Having wasted countless hours trying to use tools like SCP and various Sublime Text plugins to edit (or hell, even view) the configuration files on our dozens of Linux machines, I finally had a “fuck this shit” moment this week and wrote instant-fileserver, a stand-alone file server that you can start with a single command on any directory, on any operating system.

instant-fileserver (ifs) allows you to:

  • expose the contents of any file system via HTTP;
  • view individual files as well as directory listings;
  • create, read, update, or delete any file using ifs’ dead-simple RESTful API;
  • create multiple ifs instances using a simple command; and
  • safely edit any file from the comfort of your Windows / OS X machine and push the results back onto your Linux servers once you’re finished.

Here’s an example usage:
$ npm install -g ifs
# ifs is added to your PATH; go anywhere on your system
$ ifs -help
$ ifs [arguments...]
... starting ifs on 0.0.0.0:1337
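And because ifs speaks plain HTTP, you can drive it from any language – here’s a hypothetical C# sketch (the routes below are purely illustrative; run ifs -help for the real API):

using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class IfsClientExample
{
    public static async Task EditRemoteFile()
    {
        var client = new HttpClient { BaseAddress = new Uri("http://my-linux-box:1337/") };

        // hypothetical route: read a config file off the remote box
        string config = await client.GetStringAsync("conf/solrconfig.xml");

        string edited = config.Replace("8983", "8984"); // stand-in for a real edit

        // hypothetical route: push the edited file back
        await client.PutAsync("conf/solrconfig.xml", new StringContent(edited));
    }
}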
Here’s a quick demo video to show you how easy ifs is compared to the alternative:

Using Instant File Server (IFS) – Video Demo

If you hate having to move heaven and earth to do something as mundane as edit a damn file, IFS is the werewolf-destroying silver bullet you desperately need.

ifs is built using Node.JS and it has a tremendously slim codebase, so anyone can edit it or extend it if they wish. ifs is licensed under the MIT permissive license.

Fork the ifs source code on Github – we accept pull requests!




New Open Source Project: MVC.Utilities

August 14, 2011 07:33 by Aaronontheweb in ASP.NET, Open Source

I announced this on Twitter late last week: I open-sourced a number of common helpers and service interfaces that I use throughout all of my production ASP.NET MVC applications and wrapped them into a project I call MVC.Utilities.

MVC.Utilities has a number of helper classes for the following:

1. Encryption;
2. Caching;
3. Authentication;
4. Routing;
5. Security for user-uploaded files; and
6. Display helpers for things like Hacker News-style datetimes.

If you want more details and examples of what the library does exactly, be sure to check out the MVC.Utilities Wiki on Github. All of these services are meant to be dependency-injected into ASP.NET MVC controllers and are designed as such.
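Here’s a hypothetical sketch of that usage – the interface below is illustrative, so check the wiki for the real service contracts:

using System.Web.Mvc;

// illustrative contract – see the MVC.Utilities wiki for the actual interfaces
public interface ICacheService
{
    T Get<T>(string key);
    void Set<T>(string key, T value);
}

public class ProductController : Controller
{
    private readonly ICacheService _cache; // injected by your IoC container of choice

    public ProductController(ICacheService cache)
    {
        _cache = cache;
    }

    public ActionResult Index()
    {
        var products = _cache.Get<string[]>("featured-products");
        return View(products);
    }
}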

The library has a lot of room to expand and grow, and I encourage your ideas and contributions to the project! Just create a fork of MVC.Utilities on Github and send me a pull request; props to Wesley Tansey for already submitting a couple of patches!

Installing MVC.Utilities via NuGet

MVC.Utilities is available via NuGet – type the following command into the Package Manager Console in Visual Studio to add it to your project today:

Install-Package mvc-utilities




How to Make it Easy for New Developers to Adopt Your Open Source Project

James Gregory is one of my heroes in the .NET community – he’s the creator of Fluent NHibernate, my favorite new ORM (Object-Relational Mapper) for my ASP.NET MVC projects. James expressed some dismay earlier today when a newbie Fluent NHibernate developer posted a $100+ bounty for a few basic usage examples; having just recently taught myself the very things the developer was asking for, I can empathize.

NHibernate (Fluent NHibernate runs on top of it) is a bear of an open source project for new developers – it’s a mature project that solves a problem with a massive surface area and the NHibernate community is comprised of experienced developers who’ve been using it for years. It can be difficult for new developers to get involved, as the learning curve is steep even for just adopting the project, let alone contributing to it.

So how can we make it easier for new developers to adopt advanced, mature projects like NHibernate?

Here are some ideas:

1. Offer new developers a low-friction adoption path – with many of the projects I’ve tried to adopt over the years, I’ve had a difficult time just figuring out the best way to start using them. The thing that is going to kill adoption of your project is friction – if a new developer encounters resistance just trying to build a basic “Hello World” equivalent on top of your project, they’re going to lose interest and try to find something else. Gently guide your developers down a low-friction path to adoption.

2. Describing the functionality of features isn’t enough; explain their intent – NHibernate has a ton of content in its wiki for new developers, but little of it explains the intent behind NHibernate’s powerful features – what new developers really need to understand is why and when they should use certain facets of your project’s functionality, not so much how that functionality works.

3. Use real-world examples in the documentation – one issue that frustrates me when I’m trying to learn a new technology is when all of the 101-level examples are canned. I don’t want a Fibonacci sequence example or anything else I would never use in the real world. Show me the hard stuff; show me how to build a real user interaction around your project’s technology; show me how to use it in a middle-of-the-road use case; and show me how to integrate your technology into projects that resemble production scenarios.

4. Include reference materials – .NET developers get to use Visual Studio’s Object Browser as an option of last resort for reference materials, which is a plus. However, the goal of open source project leads should be to make it easy for new adopters to find the right piece of functionality to suit their needs, and a reference guide is often the best way to do it.

5. StackOverflow first – I was trying to troubleshoot a small Fluent NHibernate issue I ran into earlier today (will blog about it soon), and I saw a number of questions identical to my own on StackOverflow – all of them unanswered. The first place your users are going to turn when they run into trouble is Bing or Google, and StackOverflow will dominate the results; answering someone’s question on StackOverflow doesn’t solve a problem for just the person who asked it – it answers the question for many other people who may run into the same issues down the road.

Open source is fun, and it’s fun for new developers too. Onboarding new developers to your project isn’t as sexy as writing a patch or coming out with a new version, but it’s ultimately better for the longevity of the community.

If there’s anything else I should add to this list, please add it in the comments!



