Aaronontheweb

Hacking .NET and Startups

Tradeoffs in High Performance Software

July 15, 2014 15:56 by Aaronontheweb in .NET, Open Source // Comments (0)

I’ve spent the past week tracking down an absolutely brutal bug inside Akka.NET. Sometimes the CPU utilization of the system will randomly jump from 10% to 100% and stay pegged like that until the process is recycled. No exceptions were thrown, and memory / network / disk usage all remained constant (until I added monitoring.)

I couldn’t reproduce this CPU spike at all locally, nor could I determine the root cause. No one else had reported this issue on the Akka.NET issue list yet, probably because the product is still young and MarkedUp In-app Marketing is the only major commercial deployment of it that I know of (hopefully that will change!)

I had to hook up StatsD to MarkedUp’s production Akka.NET deployments to figure out what was ultimately going on1 – using that in combination with a DebugDiag dump, I was able to isolate this down to a deadlock.

I didn’t have a meaningful stack trace for it because the deadlock occurred inside native CLR code, so I had to actually look at the public source for .NET 4.5 to realize what was happening: in an edge case, a garbage-collected Helios IFiber handle could accidentally make a BlockingCollection.CompleteAdding() call on the Fiber’s underlying TaskScheduler (possibly leaked to other Fibers) via the IFiber.Dispose() method, which would gum up the works for any network connection using that Fiber to dispatch inbound or outbound requests.

Took a week to find that bug and a few hours to fix it. What a bitch.

Before I pushed any of those bits to master, however, I decided to run Visual Studio’s performance analysis toolset and test the impact of my changes – a mix of CPU performance and memory consumption testing. This revealed some interesting lessons…

Tradeoff #1: Outsourcing work to the OS vs. doing it yourself

First, the reason we had this performance issue to begin with: Helios is C# socket networking middleware designed for reacting quickly to large volumes of data. Inside a Helios networking client or a reactor, developers can specify the level of concurrency used for processing requests for that particular connection or connection group – you can run all of your IO with maximum concurrency on top of the built-in thread pool OR you can use a group of dedicated threads for your IO. It was this latter type of Fiber that caused the problems we observed in production.

Why would you ever want to build and manage your own thread pool versus using the built-in one? I.e., manage the work yourself instead of outsourcing it to the OS?

In this case it’s so you can prevent the Helios IConnection object from saturating the general-purpose thread pool with thousands of parallel requests, possibly starving other parts of your application. Having a handful of dedicated threads working with the socket decreases the system’s overall multi-threading overhead (context-switching et al) and it also improves throughput by decreasing the amount of contention around the real bottleneck of the system: the I/O overhead of buffering messages onto or from the network.
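As an illustration (hypothetical names, not Helios’ actual IFiber implementation), a dedicated-thread fiber boils down to a fixed set of worker threads draining a shared queue, so socket work never competes with the general-purpose thread pool:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

// Illustrative sketch only - not the real Helios Fiber code.
public sealed class DedicatedWorkerPool : IDisposable
{
    private readonly BlockingCollection<Action> _work = new BlockingCollection<Action>();
    private readonly Thread[] _threads;

    public DedicatedWorkerPool(int threadCount)
    {
        _threads = new Thread[threadCount];
        for (var i = 0; i < threadCount; i++)
        {
            // Background threads dedicated to this pool; they never compete
            // with the CLR thread pool for scheduling.
            _threads[i] = new Thread(Consume) { IsBackground = true };
            _threads[i].Start();
        }
    }

    public void Enqueue(Action op) { _work.Add(op); }

    private void Consume()
    {
        // Blocks until work is available or CompleteAdding() is called.
        foreach (var op in _work.GetConsumingEnumerable())
            op();
    }

    public void Dispose() { _work.CompleteAdding(); }
}
```

With a pool like this, a connection can enqueue all of its socket reads and writes onto, say, two or three threads of its own instead of fanning them out across the shared thread pool.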

In most applications, outsourcing your work to the OS is the right thing to do. OS and built-in methods are performance-optimized, thoroughly tested, and usually more reliable than anything you’ll write yourself.

However, framework and OS designers are not all-seeing and don’t offer developers a built-in tool for every job. In the case of Helios, the .NET CLR’s general-purpose thread-scheduling algorithm works by servicing Tasks in a FIFO queue and doesn’t discriminate by request source. If a Helios reactor produces 1000x more parallel requests per second than the rest of your application, then your application is going to be waiting a long time for queued Helios requests (which can include network I/O) to finish. This might compromise the responsiveness of the system and create a bad experience for the developer.

You can try working around this problem by changing the priority of the threads scheduled from your application, but at that point you’ve already broken the rule of outsourcing your work to the OS – you’re now in control of scheduling decisions the moment you do that.

Taking the ball back from the OS isn’t a bad idea in cases where the built-in solution wasn’t designed for your use case, which you determine by looking under the hood and studying the OS implementation.

Another good example of “doing the work yourself” – buffer management, such as the IBufferManager implementation found in Jon Skeet’s MiscUtil library or Netty’s ByteBuffer allocators. The idea in this case: if you have an application that creates lots of byte arrays for stream or buffer I/O, then you might be better off intelligently reusing existing buffers versus constantly allocating new ones. The primary goal of this technique is to reduce pressure on the garbage collector (a CPU-bound problem), which can become an issue particularly if your byte arrays are large.
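In miniature, the idea looks something like this (a hypothetical pool for illustration, not the real IBufferManager or ByteBuffer API):

```csharp
using System.Collections.Concurrent;

// Hypothetical illustration of buffer reuse.
public sealed class SimpleBufferPool
{
    private readonly ConcurrentBag<byte[]> _pool = new ConcurrentBag<byte[]>();
    private readonly int _bufferSize;

    public SimpleBufferPool(int bufferSize) { _bufferSize = bufferSize; }

    public byte[] Rent()
    {
        byte[] buffer;
        // Reuse an existing buffer when one is available; otherwise allocate.
        return _pool.TryTake(out buffer) ? buffer : new byte[_bufferSize];
    }

    public void Return(byte[] buffer)
    {
        // Returning the buffer lets the next caller skip an allocation,
        // which keeps large arrays off the garbage collector's plate.
        if (buffer != null && buffer.Length == _bufferSize) _pool.Add(buffer);
    }
}
```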

Tradeoff #2: Asynchronous vs. Synchronous (Async != faster)

I have a love/hate relationship with the async keyword in C#; it is immensely useful in many situations and eliminates gobs of boilerplate callback spam.

However, I also hate it because async leads droves of .NET developers to believe that you can inherently make your application faster by wrapping a block of code inside a Task and slapping an await / async keyword in front of it.2 Asynchronous code and increased parallelism inside your application don’t inherently produce better results, and in fact they can often decrease the throughput of your application.

Parallelism and concurrency are tools that can improve the performance of your applications in specific contexts, not as a general rule of thumb. Have some work that you can perform while waiting for the result of an I/O-bound operation? Make the I/O call an asynchronous operation. Have some CPU-intensive work that you can distribute across multiple physical cores? Use N threads per core (where N is specific to your application.)

Using async / parallelism comes at a price in the form of system overhead, increased complexity (synchronization, inter-thread communication, coordination, interrupts, etc…) and lots of new classes of bugs and exceptions not found in vanilla applications. These tradeoffs are totally worth it in the right contexts.

In an age where parallelism is more readily accessible to developers it also becomes more frequently abused. And as a result: one of the optimizations worth considering is making your code fully synchronous in performance-critical sections. 

Here’s a simple example that I addressed today inside my fork of NStatsD, which resulted in roughly a 40% reduction in send time.

This code pushes a UDP datagram to a StatsD server via a simple C# socket – the original code looked like this:

foreach (var stat in sampledData.Keys)
{
   var stringToSend = string.Format("{0}{1}:{2}", prefix, stat, sampledData[stat]);
   var sendData = encoding.GetBytes(stringToSend);
   // Asynchronous fire-and-forget send - the OS tracks state and invokes the callback later
   client.BeginSend(sendData, sendData.Length, callback, null);
}

The code here uses one of the UdpClient’s asynchronous methods for sending a UDP datagram over the wire – it takes this code about 2.7 seconds to transmit 100,000 messages to a StatsD server hosted on a VM on my development machine. Having done a lot of low-level .NET socket work with Helios, I’m suspicious when I see an asynchronous send operation on a socket. So I changed the call from BeginSend to just Send.

foreach (var stat in sampledData.Keys)
{
   var stringToSend = string.Format("{0}{1}:{2}", prefix, stat, sampledData[stat]);
   var sendData = Encoding.ASCII.GetBytes(stringToSend);
   // Synchronous send - blocks until the datagram is buffered onto the network
   client.Send(sendData, sendData.Length);
}

This code, using a fully synchronous send method, pushed 100,000 messages to the same StatsD server in 1.6 seconds – roughly a 40% reduction in total send time. The reason why? BeginSend operations force the OS to maintain some state, make a callback when the operation is finished, and release those resources after the callback has been made.

The Send operation doesn’t have any of this overhead – sure, it blocks the caller until all of the bytes are buffered onto the network, but there’s no context switching and no async-state objects to cache. You can validate this by looking at the .NET source code for sockets and comparing the SendTo and BeginSendTo methods.

The point being: when you have thousands of write requests to a socket in a short period of time the real bottleneck is the socket itself. Concurrency isn’t going to magically make the socket write messages to the network more quickly.

Tradeoff #3: Memory vs. Throughput

I rewrote Helios’ DedicatedThreadFiber to use a BlockingCollection to dispatch pending work to a pool of dedicated worker threads instead of using a custom TaskFactory – a functionally equivalent but mechanically significant change in how this Fiber’s asynchronous processing works. After making a significant change like this I test Helios under heavy load using the TimeService example that ships with the Helios source.

Each TimeServiceClient instance can generate up to 3,000 requests per second, and usually I’ll have 5-15 of them pound a single TimeServiceServer instance. On my first test run I noticed that the first TimeServiceClient instance’s memory usage shot up from about 8 MB to 1.1 GB in a span of 45 seconds. “Holy shit, how did I create such an awful memory leak?” I wondered.

Turns out that the issue wasn’t a memory leak at all – the TimeServiceClient could produce requests 2-3 orders of magnitude faster than its Fiber could process them (because, again, it was waiting on a socket), so the BlockingCollection backing the Fiber would grow out of control.

I decided to test something – if I capped the BlockingCollection’s maximum size and have it block the caller until space frees up, what would the impact on memory and performance be?

I capped the Fiber initially to 10,000 queued operations and was surprised by the results – throughput of the system was the same as it was before, but memory usage was only 5 MB as opposed to 1.1 GB. Sure, some of the callers would block while waiting for room to free up in the BlockingCollection, but the system was still operating at effectively the same speed it was before: the maximum speed at which the outbound socket can push messages onto the network.
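The change itself is tiny – roughly this (an illustrative sketch, not the exact Helios code):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class BoundedQueueDemo
{
    static void Main()
    {
        // Capacity cap: Add() blocks the producer once 10,000 operations are
        // pending, instead of letting the backlog (and memory) grow unbounded.
        var pendingWork = new BlockingCollection<Action>(boundedCapacity: 10000);

        // Consumer: drains the queue as fast as the real bottleneck allows.
        var consumer = Task.Run(() =>
        {
            foreach (var op in pendingWork.GetConsumingEnumerable())
                op();
        });

        // Producer: enqueues far faster than the consumer can keep up;
        // it simply blocks whenever the queue is full.
        for (var i = 0; i < 100000; i++)
            pendingWork.Add(() => { /* write to the socket here */ });

        pendingWork.CompleteAdding();
        consumer.Wait();
    }
}
```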

I made a choice to limit memory consumption at the expense of “on-paper” throughput, but it wasn’t much of a tradeoff at all.

Hopefully a theme is starting to emerge here: your system is only as fast as your slowest component, whether it’s the file system, the network, or whatever. Consuming all of your system’s memory in order to queue pending I/O socket requests faster doesn’t improve the speed of that slow component.

You’re better off blocking for a picosecond and conserving memory until that slow resource becomes available.

However, imagine if I had taken a different approach and decided to use more memory to group related socket messages in batches before we sent them over the socket. That might result in an improvement in total message throughput at the expense of increased memory consumption – I’d have to write some code to ensure that the receiver on the other end of the socket could break up the batched messages again, but there’s potential upside there.
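Here’s roughly what that could look like – a hypothetical length-prefixed batching helper (not something Helios or NStatsD actually ships): the sender packs several messages into one payload, and the receiver uses the length prefixes to split them apart again.

```csharp
using System.Collections.Generic;
using System.IO;

static class MessageBatcher
{
    // Pack several messages into one payload, each prefixed with its length,
    // so a single socket send can carry the whole batch.
    public static byte[] Pack(IEnumerable<byte[]> messages)
    {
        using (var stream = new MemoryStream())
        using (var writer = new BinaryWriter(stream))
        {
            foreach (var message in messages)
            {
                writer.Write(message.Length); // 4-byte length prefix
                writer.Write(message);
            }
            return stream.ToArray();
        }
    }

    // The receiver walks the length prefixes to recover the original messages.
    public static List<byte[]> Unpack(byte[] batch)
    {
        var messages = new List<byte[]>();
        using (var reader = new BinaryReader(new MemoryStream(batch)))
        {
            while (reader.BaseStream.Position < reader.BaseStream.Length)
            {
                var length = reader.ReadInt32();
                messages.Add(reader.ReadBytes(length));
            }
        }
        return messages;
    }
}
```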

Tradeoff #4: Heavyweight Resources vs. Lightweight Resources

Lightweight resources are primitives, structs, objects, and so forth. Heavyweight objects are threads, processes, synchronization mechanisms, files, sockets, etc… anything that is usually allocated with a resource handle or implements the IDisposable interface is a good candidate for a “heavyweight” object.

After I fixed the issue I described in tradeoff #3, we had another memory-related performance problem inside our Executor object, used for executing operations inside a Fiber. When we terminate an Executor, we do it gracefully to give it some time to wrap up existing operations before we shut down.

Helios offers two tools for “timed operations” – a lightweight Deadline class and a heavyweight ScheduledValue class. Both of these classes have the same goal: force some value to change after a fixed amount of time.

The Deadline class accomplishes this by comparing its “due time” with DateTime.UtcNow – if DateTime.UtcNow is greater than due time then the Deadline is considered “overdue” and starts evaluating to false.

The ScheduledValue is a generic class backed by a built-in .NET Timer object – you tell it to change its value from A to B after some amount of time, which the Timer does automatically once it elapses.
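Simplified sketches of both approaches (illustrative names and shapes, not the actual Helios classes – the real ScheduledValue is generic rather than a boolean flag):

```csharp
using System;
using System.Threading;

// Lightweight: just a timestamp comparison, no handles to clean up -
// but every check calls DateTime.UtcNow.
public sealed class Deadline
{
    private readonly DateTime _due;
    public Deadline(TimeSpan timeout) { _due = DateTime.UtcNow + timeout; }
    public bool IsOverdue { get { return DateTime.UtcNow > _due; } }
}

// Heavyweight: owns a Timer (a disposable resource), but polling the flag
// afterwards is just a volatile field read.
public sealed class ScheduledFlag : IDisposable
{
    private readonly Timer _timer;
    private volatile bool _value;

    public ScheduledFlag(TimeSpan delay)
    {
        _timer = new Timer(_ => _value = true, null, delay, Timeout.InfiniteTimeSpan);
    }

    public bool Value { get { return _value; } }
    public void Dispose() { _timer.Dispose(); }
}
```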

We use Deadlines most frequently inside Helios because they’re simple, immutable, and don’t pose any risk of a resource leak. Deadline is what we were using inside the Executor object’s graceful shutdown mechanism.

Every time a Fiber dequeues an operation for execution it checks the Executor.AcceptingJobs property (which is determined by the Executor’s internal Deadline) – if this expression evaluates to false, the Fiber stops processing its queue and releases its threads. In other words, this property is evaluated hundreds of thousands of times per second – and as my performance profiling in Visual Studio revealed, it generated a TON of GC pressure via all of its DateTime.UtcNow calls, which allocate a new DateTime structure every time.

I tried, in vain, to rewrite the Deadline to use a Stopwatch instead but wasn’t able to get reliable shutdown times with it. So I decided to use our heavyweight ScheduledValue class in lieu of Deadline – it uses a Timer under the hood but doesn’t allocate any memory or use any synchronization mechanisms when its value is polled.

This resulted in a significant drop in memory usage and garbage collection pressure for high-volume Fibers, and the ScheduledValue cleans up after itself nicely.

Even though the ScheduledValue requires more upfront resources than a Deadline, it was the superior performance tradeoff because it doesn’t have any additional overhead when its value is frequently polled by executing Fibers.

If the scenario was a little different and we had to allocate a large number of less-frequently polled objects, the Deadline would win because it wouldn’t require a large allocation of threads (used by the Timer) and heavyweight resources.

Wrapping Up

High-performance code isn’t black magic – in a general sense it comes down to the following:

  • Correctly identifying your bottlenecks;
  • Understanding the limitations and design considerations of built-in functions;
  • Identifying where work can effectively be broken into smaller parallelizable units; and
  • Making resource tradeoffs around those limitations, bottlenecks, and opportunities for beneficial parallelism.

One thing I have not talked about is the cost of developer time vs. application performance, which is the most important resource of all. I’ll save that for another post, but as a general rule of thumb: “don’t performance-optimize code until it becomes a business requirement.”

1I open sourced our Akka.NET + StatsD integration into a NuGet package – Akka.Monitoring

2I complained about this habit before in “10 Reasons Why You’re Failing to Realize Your Potential as a Developer”




The Profound Weakness of the .NET OSS Ecosystem

July 3, 2014 13:58 by Aaronontheweb in .NET, Open Source // Comments (33)

I’m in the process of writing up a lengthy set of blog posts for MarkedUp about the work that went into developing MarkedUp In-app Marketing, our real-time marketing automation and messaging solution for Windows desktop applications (and eventually WP8, WinRT, iOS, Android, Web, etc…)

During the course of bringing this product to market, I personally made the following OSS contributions:

There’s more to come – I just finished a Murmur3 hash implementation this week (not yet OSS) and I’m starting work on a C# implementation of HyperLogLog. Both are essential ingredients to our future analytics projects at MarkedUp.

I really enjoy contributing to OSS, especially on Akka.NET. It’s a successful project thus far, attracting several strong contributors (Roger Alsing, the developer who originally started Akka.NET, is a genius and overall great person) and lots of support from big names in .NET like Don Syme and others.

But here’s the kicker – MarkedUp is a very small company; we’re operating a business that depends heavily on doing distributed computing in .NET; and I have lots of tight deadlines I have to hit in order to make sales happen.

I didn’t make any of these contributions because they made me feel all tingly and warm inside – I don’t have the free time for that any more, sadly. I did it because it was mission-critical to our business. So here’s my question: in the ~15 year history of .NET, no one built a reactive, server-side socket library that’s actively maintained? It’s 2014 for fuck’s sake.

Over the course of working on this product, I was constantly disappointed by a .NET OSS landscape littered with abandoned projects (looking at you, Kayak and Stact) and half-assed CodeProject articles that are more often factually wrong than not.

This week when I started work on HyperLogLog1 I fully expected that I was going to have to implement it myself. But I was incredulous when I couldn’t find a developer-friendly implementation of Murmur3 in C# already. The algorithm's been out for over three years – I could find a dozen decent implementations in Java / Scala and two barely-comprehensible implementations in C#, neither of which had an acceptable license for OSS anyway. I had an easier time porting the algorithm from the canonical C++ Murmur3 implementation than I did following the two C# examples I found online.

So I had to ask myself:

Do .NET developers really solve any hard problems?

We came to the conclusion early on that the right architecture for MarkedUp In-app Marketing depended on a successful implementation of the Actor model. We thought to ourselves “this is a programming model that was invented in the early 70s – surely there must be a decent actor framework in .NET we could leverage for this.”

We weren’t just wrong, we were fucking wrong. We found a graveyard of abandoned projects, some half-assed Microsoft Research projects that were totally unusable, and a bunch of conceptual blog posts. Nothing even remotely close to what we wanted: Akka, but for .NET.

So we spent two weeks evaluating migrating our entire back-end stack to Java – we already depend heavily on Cassandra and Hadoop, so being able to take advantage of 1st party drivers for Cassandra in particular really appealed to us. However, we decided that the cost of migrating everything over to the JVM would be too expensive – so we went with a slightly less expensive option: porting Akka to .NET ourselves.

Why in the hell did it fall on one developer working at a startup in LA and one independent developer in Sweden to port one of the major, major cornerstones of distributed computing to .NET? Where the hell was Project Orleans (released by Microsoft AFTER Akka.NET shipped) five years ago?

Did none of the large .NET shops with thousands of developers ever need a real-time distributed system before? Or how about a high-performance TCP socket server? No? What the hell is going on?

.NET is the tool of choice for solving rote, internal-facing, client-oriented problems

There’s a handful of genuinely innovative OSS projects developed by .NET programmers – and they largely deal with problems that are narrow in scope. Parsing JSON, mapping POCOs, dependency injection, et cetera.

Projects like MassTransit, i.e. projects that address distributed computing, are rare in .NET – I can’t even find a driver for Storm in C# (which should be easy, considering that it uses Thrift.)

The point I made four years ago this very day about why .NET adoption lags among startups is still true – .NET is the platform of choice for building line-of-business apps, not building customer-facing products and services. That’s reflected strongly in its OSS ecosystem.

Need a great TCP client for Windows Phone? Looks like there are plenty of those on NuGet. Or a framework for exporting SQL Server Reporting services to an internal website? A XAML framework for coalescing UI events? .NET OSS nails this.

The paramount technical challenge facing the majority of .NET developers today looks like building Web APIs that serve JSON over HTTP, judging from the BUILD 2014 sessions. Distributed computing, consistent hashing, high availability, data visualization, and reactive computing are virtually absent from any conversation around .NET.

And this is why we have a .NET OSS ecosystem that isn’t capable of building and maintaining a socket server library, despite being the most popular development platform on Earth for nearly 10 years2.

Compare this to the Java ecosystem: virtually every major .NET project is a port of something originally invented for the JVM. I’m looking at you, NAnt, NUnit, NuGet (Maven), NHibernate, Lucene.NET, Helios, Akka.NET, and so on.

You know what the difference is? There’s a huge population of Java developers who roll hard in the paint and build + open source hard shit. The population of people who do this in .NET is minuscule.

We are capable of so much more than this

The tragedy in all of this is that .NET is capable of so much more than the boring line-of-business apps and CRUD websites Microsoft’s staked its business on.

The shortcomings of .NET’s open source ecosystem are your fault and my fault, not Microsoft’s. Do not point the finger at them. Actually, maybe blame them for Windows Server’s consumer-app-hostile licensing model. That merits some blame.

But the truth is that it’s our own intellectual laziness that created a ghetto of our OSS ecosystem – who’s out there building a lightweight MMO in .NET? Minecraft did it with Java! How about a real-time messaging platform – no, not Jabbr. I mean something that connects millions of users, not dozens.

We don’t solve many hard problems – we build the same CRUD applications over and over again with the same SQL back-end, we don’t build innovative apps for Windows Phone or Windows 8, and there’s only a handful of kick-ass WPF developers out there building products for everyday consumers. Looking at you, Paint.NET – keep up the good work.

Don’t make another shitty, character-free monstrosity of a Windows Phone application – no, it’s not “new” because you can write it using WinJS and TypeScript instead of C#.

Do something mind-blowing instead – use Storm to make a real-time recommendation app for solving the paradox of choice whenever I walk into a 7-11 and try to figure out what to eat.

There are brilliant developers like the Akka.NET contributors, the Mono team, Chris Patterson, and others who work on solving hard problems with .NET. YOU CAN BE ONE OF THEM.

You don’t need Ruby, Node.JS, or even Java to build amazing products. You can do it in .NET. Try a new way of thinking and use the Actor model in Akka.NET or use the EventBroker in Helios. Figure out how to fire up a Linux VM and give Cassandra or Storm or Redis a try. Learn how to work with raw byte streams and binary content. Learn how to work with raw video and audio streams.

The only way to clean up our acts and to make an ecosystem worth keeping is to stop treating our tools like shit shovels and start using them to lay alabaster and marble instead. .NET is more than capable of building scalable, performant, interesting, consumer-facing software – and it falls on us, individual and independent developers, to set the example.


1here’s the only known HyperLogLog implementation in C#, which appears factually correct but out of date and not production-usable

2no, not going to bother citing a source for this. Get over it.




Business to Business Services Are What Will Make Dogecoin Succeed

March 16, 2014 08:45 by Aaronontheweb in Cryptocurrency // Comments (1)

Following on from my previous post about the second / third generation cryptocurrencies advancing the state of the art, I’ve spent a lot of time participating in /r/dogecoin on Reddit and seeing dozens of new businesses start accepting Dogecoin every day.

I’ve mined a fair bit of Dogecoin so far (albeit on laptops, not dedicated mining rigs) and purchased a substantial amount, so I’ve been looking for some ways to spend or invest it.

Here’s some of the purchasing that I’ve done or am considering at the moment:

There’s a steady clip of new merchants and stores announcing support for Dogecoin on Reddit and elsewhere every day – to the tune of a couple dozen per day based on my casual observation of the forums.

One thing that 100% of these stores have in common is that the goods they’re pricing in Dogecoin are consumer goods, sold directly to individuals. And that’s probably the natural place for a new type of payment technology to begin getting traction, since the barriers to entry are lowest.

However, there are millions of consumer businesses that simply won’t take the currency risk of accepting Dogecoin (or Bitcoin) directly – they might try using a service like Coinbase or BitPay, which mitigate the currency risk by pricing everything indexed to some fixed fiat (USD, EUR, etc) value and sending the fiat back to sellers immediately once the transaction clears.

But the core problem that early crypto-accepting businesses like Overstock solve with Coinbase et al today is that they can’t cover any of their business costs with Dogecoin or any other cryptocurrency. If I’m Overstock’s CEO, I might be able to accept cryptocurrency from customers but I can’t use it to:

  • Pay the salaries of my employees;
  • Pay for critical infrastructure costs like large-scale hosting, CDNs, etc;
  • Pay for the COGS of the inventory sold to customers, including shipping and so forth;
  • Pay for marketing and advertising needed to drive new users to the site and grow the business;
  • Pay taxes and licensing fees;
  • Pay for service providers like accountants and attorneys;
  • Pay for the rainbow of various insurance premiums that every business has to have; and
  • Pay for real-estate and other capital expenditures.

All of these expenses are really business-to-business transactions, if you consider employees as sole proprietor service providers for the sake of this example.

 

Not being able to pay for any of those things without having to convert Dogecoin back into USD and being subject to rapidly changing market conditions is the real-world definition of cryptocurrency risk. And so the strategy of choice for mitigating it is to simply never pay for any of these services in Dogecoin – set the price for all of your goods in USD and let the customers take the currency risk at the point of sale.

However, that strategy ultimately limits the utility and value of the cryptocurrency to business-to-consumer transactions only – still really useful, but also makes the USD / Dogecoin exchange rate the sole determinant of the currency’s value, since prices for goods are still really set in USD.

 

An alternative and much more interesting approach is to start selling business-to-business services in Dogecoin directly. It’s a process that will take much more time to develop (decades) but it ultimately leads to a path where the value of Dogecoin goods can be wholly priced in cryptocurrency. And therefore: leads to a day when the value of Dogecoin is based off of its buying power indexed to real goods and services, not speculation.

I run a B2B startup at the moment - MarkedUp Analytics; if I could pay my (considerable) Amazon Web Services bill in Dogecoin alone and there were tools available to make it easy for my accounting firm to reconcile Dogecoin-based transactions in Quickbooks, I’d be in.

Growing a B2B Dogecoin ecosystem from the ground up can be bootstrapped. It starts with a few SaaS products that make it easy to do order fulfillment and process transactions – Moolah has already made massive strides on that front. Immediately after that comes accounting tools – something as simple as a bank statement plus historical fiat fair market values is probably all that’s needed until the ecosystem grows. Those are the basics that businesses selling to consumers need.

From there, the next best place to enter is marketing and advertising services – and I would start by building marketing / advertising services that make it easy for businesses to find consumers who want to pay with Dogecoin.

Trying to get MailChimp or MarkedUp to accept Doge is the wrong place to begin – focus on building advertising networks or product listing services that make it easy for the early adopter businesses to find Dogecoin customers. Because from there, you can get a virtuous cycle that can start growing the ecosystem based off of its own organic growth, rather than hit-driven PR and speculation.

From there, if the segment of customers “willing and able to pay in Dogecoin” grows sustainably, anything is possible. Cracking the really tough nuts, like paying salary or buying real-estate with Dogecoin, will take many years of consistent growth in order to be achievable.

Dogecoin is entering a phase now where the first business-to-business services need to emerge in order to help support the business-to-consumer companies selling coffee, Steam keys, and the like. That’s how the long view of the ecosystem becomes realizable.




Bitcoin Paved the Way, but it’s Not the Future of Cryptocurrency

March 1, 2014 10:29 by Aaronontheweb in Cryptocurrency // Comments (1)

Until recently, I was extremely skeptical of cryptocurrency in general. In the midst of the investment speculation and mania in late 2013, when the price of Bitcoin first climbed to $500-$800, it smelled too much like digital Beanie Babies for my blood. So whatever interest I had in learning about the technology and its disruptive potential was buried underneath multiple layers of speculation sediment.

Two things piqued my interest, however: 1) I attended a Bitcoin meetup in Santa Monica and heard more about it and 2) the Jamaican bobsled team raised money to attend the Winter Olympics with a huge amount of support from Dogecoin.

So my interest in cryptocurrency grew and as an entrepreneur I wondered about what industries it could disrupt.

I imagine a world where I can run a legitimate business; open a merchant banking account; and accept orders from customers all over the world in a matter of minutes – without having to seek the permission of a lumbering, bureaucratic bank or credit card processing service that doesn’t give a shit about my business.

There’s a lot of exciting possibilities for cryptocurrency in general – it will take years, but it’s a real possibility that shouldn’t be dismissed.

That being said – there’s an empirical rule that I subscribe to when it comes to emerging technologies: “never bet on the first generation of a ground-breaking technology.”

I’m sure there are some counter-examples (I can’t think of any off-hand), but they are few and far between for the simple reason that the inventors behind the new technology don’t have crystal balls and can’t anticipate every critical market factor. By and large, it’s the second and third generation technologies that win. Windows, Office, iPod / iPhone, Google, etc...

So I suspected that Bitcoin would be slathered with “first generation” problems, and I wasn’t disappointed.

For most people Bitcoin is a speculative investment, not a currency

Investors all over the world buy and sell fiat (sovereign) currencies like commodities, and there’s investment markets and exchanges for it such as Forex.

The big difference between a fiat currency like the United States dollar and Bitcoin, though, is that the number of dollars in circulation moving from person to person is orders of magnitude larger than the number sitting dormant in an investment account waiting for the value of the euro to fall. The amount of dollars held for foreign currency trading is a drop in the bucket compared to how much is spent buying goods, being loaned, et cetera.

Bitcoin, on the other hand, has been on the receiving end of a speculation bubble unlike anything I’ve ever seen since the rise of the Beanie Baby. People who successfully acquire Bitcoin, whether through mining it or buying it, hold onto it rather than spend it.

Bitcoin Days Destroyed - from blockchain.info on 3-1-2014

The chart above is from the official blockchain.info charts and it shows “Bitcoin days destroyed” – if someone received 100 BTC and sold it 7 days later, that sale would destroy the 700 bitcoin-days accumulated during the waiting period.

The maximum number of Bitcoin that can ever be minted is 21 million, if I recall correctly. The number of days destroyed on the chart for the past year indicates that since the value of Bitcoin really skyrocketed in October, 2013, bitcoins are finally starting to move as people sell them for USD or other fiat currencies.

And this has been true of Bitcoin in general – since its value started to take off, it’s been viewed as an investment speculation rather than a money transfer mechanism used for buying goods / services. I don’t even feel the need to look for a citation for that last sentence.

The other factor that really contributes to “Bitcoin as an investment” is its inherent deflationary policy – if someone loses a hard drive with $6m worth of Bitcoin on it, that currency is gone forever and the total available amount of Bitcoin anyone can ever have decreases. Scarcity with Bitcoin will always increase once the last coin gets mined, as currency will be lost or destroyed over time with no possibility of replacement.

So when you combine both of these factors – fever pitch speculative investment and guaranteed increase in scarcity, you get sensational claims about Bitcoin being worth $100,000 per coin by 2016. Which in turn creates more speculation and blah blah.

So why would I turn around and buy a laptop off of Overstock with Bitcoin if I think that Bitcoin is going to be worth 100x what it is today? Wouldn’t I just keep holding onto it, like the chart above indicates? Well, yes!

Bitcoin as an investment, today, is driven by the greater fool theory of investment – that some other idiot will buy the Bitcoin you bought at a greater price because he can sell it to another idiot. Bitcoin will have real value when people start actually buying real goods and services with it, which they largely are not right now.

Compare this to Dogecoin – a currency with an unlimited theoretical number of coins in circulation (designed to eliminate the threat of lost currency), where the value of each individual coin is so low that people are actively inventing ways to spend them. That’s why Dogecoin’s transaction volume is several times higher than Bitcoin’s, despite the fact that the Doge is only a few months old.

Overstock and Playboy and other companies might be accepting Bitcoin for purchases1, but we have yet to see if people will buy. With Dogecoin on the other hand, people already are.

I believe that Bitcoin’s deflationary policy will ultimately make it a loser in the long-run, when it comes to businesses being willing to accept BTC at face value rather than index it to the dollar. Ultimately the BTC speculation bubble will implode and correct itself down to the actual good / services buying power of the coin.

An inflationary policy, not unlike what most fiat currencies do, is what has historically been successful at increasing volume (and volume == buying power == value for cryptos) and I don’t see any reason why that would not hold true for cryptocurrencies as well.

Mining Arms Race

Every post-Bitcoin cryptocurrency implementation uses “mining” as a way of expanding the currency over time, although “by how much?” and “until when?” vary.

The mining process looks like this, in summary:

  • Every cryptocurrency writes all of its transactions into a public ledger, the “blockchain” in Bitcoin’s case, and before money can officially move from one account to another this transaction has to be validated and agreed upon by a majority of miners;
  • All Bitcoin transactions use elliptic curve cryptography to validate transactions – every Bitcoin address comes with a public key that is available for everyone to see, but to successfully transfer currency from one account to another the sender must sign the transaction in the blockchain with their private key (an ECDSA signature; SHA-256 is the hash function Bitcoin uses for its proof-of-work);
  • Miners listen for new transactions on the blockchain and are able to authenticate them by verifying that the signature is valid – they can do this without knowing the sender’s private key, which is what makes cryptocurrency simultaneously secure and public.
  • Once a miner has validated a transaction, it writes back to the blockchain “I think this transaction is valid” – the blockchain will wait for a consensus of miners to all agree that the transaction is valid before officially adjusting the totals in each account.
  • The miners are rewarded for their efforts by discovering new “blocks” of previously undiscovered coins, which yield a number of new coins awarded to the miner – miners can also be given transaction fees under some circumstances.

Mining was initially the easiest way to acquire Bitcoin or any other cryptocurrency – take a computer that has a decent graphics card and you could mint a few thousand of them in a week. However, over time the difficulty – the amount of work required to discover a new block of coins – increases dramatically, as the toy example below illustrates.

So the amount of new coins you can discover drops. However, miners are still supposed to be theoretically encouraged to mine as a result of the increase in value of the currency – that way the miners can break even and maybe even make a profit on the costs of electricity, hardware, and bandwidth needed to mine.
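To make the “difficulty” idea concrete, here’s a toy proof-of-work loop in C# (a deliberate simplification – real Bitcoin mining double-SHA-256 hashes an 80-byte block header against a much finer-grained numeric target):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class ToyMiner
{
    static void Main()
    {
        var blockData = "previous-hash + transactions"; // stand-in for a real block header
        var difficulty = 5; // require this many leading zero hex digits
        var target = new string('0', difficulty);

        using (var sha256 = SHA256.Create())
        {
            // Keep trying nonces until the hash of (data + nonce) starts with
            // enough zeroes. Each extra zero makes this roughly 16x harder,
            // which is why difficulty drives the mining hardware arms race.
            for (long nonce = 0; ; nonce++)
            {
                var bytes = Encoding.ASCII.GetBytes(blockData + nonce);
                var hash = BitConverter.ToString(sha256.ComputeHash(bytes)).Replace("-", "");
                if (hash.StartsWith(target))
                {
                    Console.WriteLine("Found nonce {0}: {1}", nonce, hash);
                    break;
                }
            }
        }
    }
}
```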

One of the issues that Satoshi or any of the early Bitcoin inventors could not have anticipated is the rise of ASIC (Application-specific integrated circuit) miners – machines custom-built for executing the SHA-256 algorithm specifically, and they can do it several orders of magnitude faster than a general-purpose computer can at a fraction of the cost.

These machines are really only available to people who have lots of resources and relationships with the custom foundries in China where the hardware can be made – organizations like cex.io and others.

These machines nuked the difficulty of the current Bitcoin blocks beyond the point where the average person can realistically mine and make a return that even covers the cost of electricity used while mining. Moreover, they introduced the possibility of a 51% attack – a mining pool, or a cartel of pools, that controls 51% or greater of the total hash rate for Bitcoin has the power to do all sorts of things to Bitcoin’s stability.

For instance, there was a proposal on /r/Bitcoin this week (I can’t find the link at the moment) to have the mining pools point to and validate a new, special “emergency block” containing 750,000 new Bitcoins in order to reimburse the people who lost their Bitcoins during the Mt. Gox fiasco. This proposal is being ignored because it would set a terrible precedent, but imagine what might happen when Bitcoin’s last block is discovered and the block rewards diminish to the point of just transaction fees?

What if GHash.IO or any of the other mining pools who control access to ASIC miners decide to improve their returns by not validating Coinbase or Overstock’s Bitcoin transactions unless they pay the mining pool a percentage of revenue? What about rewarding themselves with new blocks, per the Mt. Gox proposal?

It’s a real possibility – there would be outrage from the community of course, but outrage won’t buy you ASIC mining capacity to dilute the share percentage of a mining cartel.

This problem is largely due to the simplicity of the SHA-256 algorithm – it’s easy to build ASIC computers for it.

Virtually every other major altcoin since, beginning with Litecoin, has adopted the scrypt hashing algorithm to attack this very issue and make it uneconomical for any one party to dominate the net hash rate of any given currency.

Scrypt is memory-hard – it is designed to make it extremely expensive to execute large-scale hardware attacks by requiring large amounts of memory to compute each individual hash, which makes it suitable for graphics card (GPU)-based mining – an activity that individuals can participate in economically.

There are other coins, such as Vertcoin, that have gone beyond even what Litecoin and its derivatives (such as Dogecoin) have done to make it uneconomical (different from impossible) to use ASICs for mining, thus defeating the threat of a mining cartel seizing control over the behavior of the public ledger.

Unless nVidia or AMD wants to get into the business of operating its own mining pools, it’s not supposed to be economically feasible for any one organization to buy 51% of all GPUs used in mining operations.

So in conclusion, the potential for a 51% attack with Bitcoin is very real – the miners who are able to secure relationships with custom ASIC foundries in China can make the rules at-will with BTC.

Not true for the second generation cryptocurrencies like Dogecoin and Litecoin, which fundamentally makes them less risky and more predictable in the long run.

Conclusion

The most common argument I hear from Bitcoiners isn’t really an argument at all; they say words, but what I hear is “well, I invested {x} into Bitcoin therefore it must succeed.”

The real argument that Bitcoin has going for it is momentum – it’s paving the way for cryptocurrencies to actually emerge as things that people really use to buy goods and services. In the long run, cryptocurrency will succeed in transforming how we buy and sell.

But I don’t think it will be Bitcoin that emerges as the leading standard, for the reasons I stated above.

Overall, I’m just happy that there’s a future in store for me where I can go about my business without having to deal with the bureaucratic paralysis of banks when it comes to accepting orders from customers around the world.

Bitcoin is an issue that has a lot of emotionality and dollars attached to it – please keep it civil in the comments.


1although they really aren’t – Coinbase accepts all of the currency risk and pays them in USD, but that may change in the future.




The Taxonomy of Terrible Programmers

The MarkedUp Analytics team had some fun over the past couple of weeks sharing horror stories about software atrocities and the real-life inspirations for the things you read on The Daily WTF. In particular, we talked about bad apples who joined our development teams over the years and proceeded to ruin the things we love with poor judgment, bad habits, bad attitudes, and a whole lot of other bizarre behavior that would take industrial psychologists thousands of years to document, let alone analyze.

So I present you with the taxonomy of terrible software developers, the ecosystem of software critters and creatures who add a whole new meaning to the concept of “defensive programming.”

At one point or another, every programmer exists as at least one of these archetypes – the good ones see these bad habits in themselves and work to fix them over time. The bad ones… simply are.

The Pet Technologist

My personal favorite.

A pet technologist is born when they make the fatal mistake of falling in love with a piece of technology. It’s not a gentle, appreciative, “man this is a well-designed framework” sort of love – it’s inseparable, unrequited, Misery-obsessive love. And just like with spying, falling in love is a liability in our business.

No matter what the question is, you can trust that the pet technologist will have an answer: his or her pet technology.

“Hey, we need to implement a content management system in Rails, which database should we use?” Mongo.

“Multi-tenant blog engine?” Mongo.

“Business-critical compliance system?” Mongo.

“Inventory management system?” Mongo.

“Electronic medical records system?” Mongo.

“Distributed data warehouse?” Mongo.

And so forth.

They will invent reasons to include their pet technology in any project you work on, regardless of whether there’s a practical reason for it or not. They will vehemently, emotionally fight any decision against including their pets. Sometimes they might even resort to not telling anyone they’re using it, and will try to sneak it in at the last minute.

The Arcanist

Anyone who has worked on a legacy system of any import has dealt with an Arcanist. The Arcanist’s goal is noble: to preserve the uptime and integrity of the system, but at a terrible cost.

The Arcanist has a simple philosophy that guides his or her software development or administrative practices: if it ain’t broke, don’t fix it – to an extreme.

The day a piece of software under his or her auspices ships, it will forever stay on that development platform, with that database, with that operating system, with that deployment procedure. The Arcanist will see to it, to the best of his ability. He may not win every battle, but he will fight ferociously always.

All change is the enemy – it’s a vampire, seducing less vigilant engineers to gain entry to the system, only to destroy it from within.

The past is the future in the Arcanist’s worldview, and he’ll fight anyone who tries to upgrade his circa-1981 PASCAL codebase to the bitter, tearful end.

The Futurist

The Futurist is the antithesis of the Arcanist – today is the future, and any code written with yesterday’s tools fills the Futurist with unparalleled disgust and shame. The Futurist’s goal is not noble – it’s to be seen as new and cutting edge. The Futurist’s measure of success is Hacker News karma and well-attended User Group meetups from his war stories, not effective programming.

The Futurist will breathlessly tell you with spittle-flying gasps about the latest JavaScript-does-something-it-shouldn’t experimental project he read about on Hacker News thirty minutes ago and has already forked to his Github. He’ll squeal like a teenage girl at a Justin Bieber concert every time Microsoft Research or the Server and Tools Team releases an obscure alpha of some technology he knows the name of but doesn’t understand.

The Futurist is responsible for more reverted commits than any other developer, and is often flabbergasted when his attempts to upgrade your database driver package to v1.0.13-alpha-unstable-prerelease-DONOTUSE are rejected. His pleadings of “but it has Java Futures, so we get pure async!!11!1” do not stir the vigilant release manager.

The Futurist cares not for quaint, passing concerns about stability, maintainability, or teachability – it doesn’t matter to him if it’s impossible to hire Erlang developers. New is everything.

The DevOps Engineer, the QA Engineer, and the Release Manager are the natural enemies of the Futurist.

The Hoarder

The Hoarder is a cautious creature, perpetually unsure of itself. The Hoarder lives in a world of perpetual cognitive dissonance: extremely proud of his work, but so unsure of himself that he won’t let anyone see it if it can be helped.

So he hides his code. Carefully avoiding check-ins until the last possible minute, when he crams it all into one monolithic commit and hopes no one can trace the changes back to him. His greatest fear is the dreaded merge conflict, where the risk of exposure is greatest.

The Hoarder will openly tell you that his work is awesome but in confidence knows his code might suck. It probably does – but his fear of facing that possibility is what makes him who he is. The Hoarder is responsible for many last-minute bugs sprinkled throughout the catacombs of the code base. His fellow engineers, tired of slowly going insane from the invasion of subtle bugs, rose to fight him.

They invented tools like git blame and other weapons of accountability, and vengeance.

Ultimately, the Hoarder is damned and doomed – but there is hope for him in the short run. The day accountability comes for him at one job, his Dice resume is updated for another. The Hoarder will live to fight another day.

The Artist

A cousin of The Hoarder and The Futurist, the Artist pours his soul into every thoughtfully constructed line of code. The Artist is an emotionalist – his software is an expression of himself, a living embodiment of the genius he represents.

The Artist considers the important questions, like will my JavaScript be more syntactically beautiful without semicolons? How can I turn this perfectly readable for..each statement into a single line of LINQ? Will wrapping this block in a promise make it more… elegant?

The Artist and his code are one, which creates inevitable problems in the real world. Every JIRA issue containing documented, indisputable proof of a bug in his code is an act of artistic censorship on the behalf of users who “just don’t understand” or jealous colleagues who envy his genius. His lower lip quavers for minutes each time a UserVoice ticket announces its presence in HipChat, titled with the name of a feature he owns – even if the ticket in question is replete with the gentle patience and understanding of an early adopter.

The Artist is not long for this world. Unable to have an objective discussion about his body of work with even the most sympathetic of colleagues, he withdraws from the company of his fellow developers and metamorphoses into The Island.

The Island

The Island is the ultimate loner in the taxonomy of terrible software developers, as he desires above all things to be left in peace with his favorite text editor, devoid of all human contact. The ideal condition for the Island is one where communication with the outside world is kept to a minimum and strictly at his convenience. Just code, no humans.

Unfortunately, reality and ideal often being far afield, the Island has to seek the company of others in order to live. So he is forced to communicate with co-workers or clients, and it is a tremendous burden for him indeed.

So he hides – he’ll miss meetings, fail to return phone calls, stay signed off of IM, keep the email client closed, and so forth. He’ll gladly spend hundreds of man-hours Gandalfing the documentation and project specs rather than simply ask someone on his team questions.

The Island is usually also a Hoarder – keeping his releases close to the vest until it’s absolutely necessary to share them. Anything to avoid people and their judgments.

And like the Hoarder, the Island is doomed. Software is a team sport and does not suffer those who do not play by its rules.

The “Agile” Guy

The “Agile” Guy is a utilitarian, who ultimately seeks to improve the efficiency and productivity of himself and his team. Unfortunately, both his understanding of “agile” philosophies and his implementation strategy are hilariously inflexible and rigid, an irony which is completely lost on him.

The “Agile” Guy’s intentions are noble: to improve the way software is made. He’ll introduce kanban boards with precisely four tiers for exactly every project and a meticulously calculated method for determining the exact number of business points and sprint points for every issue the team can encounter.

Any issue which takes longer than four hours must be broken up; any sprint longer than two weeks must be truncated. NO EXCEPTIONS.

All personnel must pair program with their designated pair programming partners at all times, no exceptions. All git commits must be done in this exact format and hours of work against a JIRA issue must be logged at regular intervals. No status updates are allowed at standups, which are strictly cut off at 10 minutes.

The “Agile” Guy forgets the purpose of the agile process to begin with – to be flexible and dynamic – and instead imposes round-hole order on square-peg problems. The JIRA board turns into a ghetto, a wasteland of broken promises and infeasible commitments. And from dusk to dawn the developers toil, pair programming all the way, while their “Agile” Guy overlord looks proudly over his new empire, forged on discipline and process.

But the “Agile” Guy creates powerful enemies in his wake: the actual programmers responsible for getting things done, men and women who live in a world without luxuries of time or realizable schedules. Men and women who try to create order from chaos themselves, but are often at the mercy of failing networked file systems and poorly implemented drivers. Such intrepid souls do not suffer having the will of idiots imposed on them. Dissent and strife spread throughout the team; what follows is rebellion.

All “Agile” Guys meet their end by being torn down like the statue of a deposed dictator, complete with an Ewok luau in the Endor moonlight and the burning wreckage of their kanban Death Star smoldering in the distance.

The Human Robot

The Human Robot tries his best and his intentions are good. But he has a handicap: he interprets everything literally. The Human Robot is the world’s first true organic computer, which also means that every user-space detail of their work must be explained literally, byte-for-byte.

The Human Robot has his uses – he can find (and patch himself!) the subtle race condition created by minor JVM differences on your particular version of the Linux kernel, but ask him to implement a new feature and a monster is born. The Human Robot, unable to grasp concepts such as figurative language, imagination, abstraction, and creative interpretation, is stuck in a world where he can only process commands.

Your product lead asks the Human Robot to create a button that allows the end-user to share their documents via email with another end user; a week later the Human Robot delivers a fully functional high-throughput transactional email server embedded inside your application.

When a Human Robot is confronted about an issue with his or her work on the product, they will respond with the following sentence every time: “your requirement was not found in the specification for this project. We require additional pylons.”

This handicap rears its head most clearly in the team dynamics of a software organization. The Human Robot is the sort of person who needs a four-page-long decision tree and a finite state machine diagram to help him understand what does and does not qualify as sexual harassment in the workplace.

Human Robots often tend to be conference organizers (see PyCon 2013) and moderators on StackExchange.

The Stream of Consciousness

The Stream of Consciousness programmer is related to the Illiterate in that he too cannot read code. However, what’s fundamentally different about the Stream of Consciousness is that he can’t read his own code in addition to that of every other developer on the team.

In fact, the Stream of Consciousness programmer is best described as a “forward-only” cursor – the only way for them to solve their problems is to write new code, every time. Code reuse and refactoring are alien programming practices that the Stream of Consciousness will tell you they “understand,” but that’s only because they know the names of those concepts.

The Stream of Consciousness will happily add a third and fourth new interface to your application for writing to the filesystem, because their cursor has already moved past the first and second interfaces they wrote last week. Their code will be totally free of circular references, but that’s only because nothing ever gets used more than once.

The easiest way to determine if you’re working with a Stream of Consciousness programmer is to read their source code and look for comments along the lines of “hmm, I wonder if this works?” and “I really wish the kitchen had more non-dairy creamer.”

The Illiterate

As the name suggests, the Illiterate has a massive problem when it comes to reading other people’s source code.

The Illiterate, a close cousin of the Island, understands basic programming constructs and has a full grasp on the syntax of his / her preferred programming languages, but is totally blind when it comes to code written by other developers. We call this “code-blind” in extreme cases.

The Illiterate can understand the basic “hello world” example, but beyond that he or she never developed the capacity to understand another programmer’s intent – or the “Find Usages” button in Visual Studio. So the Illiterate is forced to work around this alien “other programmer” code in all of his or her day-to-day assignments – often duplicating the work of other developers and contributing to code bloat.

When confronted by other developers and asked “why didn’t you use our standard interface for rendering a dialog?,” the Illiterate will simply stare at his or her shoes and mumble inaudibly.

The Agitator

The Agitator, like the Human Robot, is a social retard. But unlike the Human Robot, the Agitator has no good intentions. Agitators are not born; they are forged through years of suffering through undesirable work environments and programming assignments.

Having been through shit work for years and years, the Agitator believes he or she now knows best and is determined to run things as they see fit, whether they actually have the authority to or not.

The goal of the Agitator is to establish dominance and control over their work environment through the use of force and intimidation.

The Agitator sees themselves as the confluence of Grace Hopper and Che Guevara, a brilliant technical visionary casting off the chains of oppression imposed by an ignorant and ineffective ruling class. The Agitator is seen by his or her teammates as an idiotic wannabe-alpha bully who gets into high-volume arguments over when it’s appropriate to use PascalCase versus camelCase.

The Agitator will routinely try to talk down to both peers and superiors in an attempt to assert dominance. In many cases he or she will win, seeding dissatisfaction and discord in the workplace in the process.

In most cases the Agitator is beyond help, and management has no choice but to enforce the “No Asshole” rule and terminate him.

If you enjoyed this post, make sure you subscribe to my RSS feed!



Win32 Errors: How to Format GetLastError() Output into Readable Strings

November 13, 2013 14:55 by Aaronontheweb in C++, Win32 // Tags: , // Comments (0)

I’ve been doing a moderate amount of native Win32 C++ programming over the past few weeks, and occasionally I’ve needed to set up some debug points to print errors that occur during file and memory I/O.

When something goes wrong inside the Win32 API, some methods will return a system error code directly (such as all of the registry methods) and others will simply return a NULL pointer or a zero DWORD and require you to call GetLastError yourself.

Unfortunately, these error codes are just long integers (DWORDs) and don’t contain any of the human-friendly information that I’m used to getting with .NET exceptions.
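To illustrate the pattern, here’s a small fragment (hypothetical – the CreateFileW call and the path are mine, not part of the Gist below) showing just how little the API gives you on failure:

// Hypothetical example: try to open a file that doesn't exist
HANDLE hFile = CreateFileW(
    L"C:\\does-not-exist.txt",  // hypothetical path
    GENERIC_READ,
    0,                          // no sharing
    NULL,                       // default security attributes
    OPEN_EXISTING,              // fail if the file isn't there
    FILE_ATTRIBUTE_NORMAL,
    NULL);                      // no template file

if (hFile == INVALID_HANDLE_VALUE)
{
    // All the API hands back is a bare DWORD, e.g. 2 (ERROR_FILE_NOT_FOUND)
    DWORD dwError = GetLastError();
}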

I created a Gist on GitHub that shows how we do it, and I’ve also included the code below:

DWORD dLastError = GetLastError();
LPTSTR strErrorMessage = NULL;

// Ask Windows to allocate a buffer and fill it with the human-readable
// description of the error code
FormatMessage(
    FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS |
    FORMAT_MESSAGE_ARGUMENT_ARRAY | FORMAT_MESSAGE_ALLOCATE_BUFFER,
    NULL,                        // use the system message tables
    dLastError,                  // error code from GetLastError()
    0,                           // default language
    (LPTSTR)&strErrorMessage,    // FormatMessage allocates the buffer for us
    0,                           // minimum buffer size (0 = whatever is needed)
    NULL);                       // no insert arguments

// Sends the message to the debugger's output window (not the console)
OutputDebugString(strErrorMessage);

// FormatMessage allocated the buffer, so release it when we're done
LocalFree(strErrorMessage);
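If you need this in more than one place, the same calls wrap up nicely into a small helper. Here’s a sketch (the GetLastErrorAsString name and exact shape are mine, not from the Gist) that returns a std::wstring and releases the buffer that FormatMessage allocates:

#include <windows.h>
#include <string>

// Sketch: convert the calling thread's last Win32 error into a readable string
std::wstring GetLastErrorAsString()
{
    DWORD dwLastError = GetLastError();
    if (dwLastError == 0)
        return L"";                   // no error has been recorded

    LPWSTR pBuffer = NULL;
    DWORD dwLength = FormatMessageW(
        FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS |
        FORMAT_MESSAGE_ALLOCATE_BUFFER,
        NULL,                         // use the system message tables
        dwLastError,
        0,                            // default language
        (LPWSTR)&pBuffer,             // FormatMessage allocates the buffer
        0,
        NULL);

    if (dwLength == 0 || pBuffer == NULL)
        return L"(unknown error)";    // FormatMessage itself failed

    std::wstring message(pBuffer, dwLength);
    LocalFree(pBuffer);               // free the buffer FormatMessage allocated
    return message;
}

After that, printing the last error anywhere is a one-liner: OutputDebugString(GetLastErrorAsString().c_str());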

If you enjoyed this post, make sure you subscribe to my RSS feed!



You Succeed Once You Stop Giving a Shit

September 21, 2013 13:29 by Aaronontheweb in General // Tags: // Comments (2)

This post is about how to find success in any situation and draws entirely from my own experiences. Your mileage may vary.

July was a rough month for me this year – I endured simultaneous failure on all fronts. I had put on a shitload of weight, ended a relationship, and market conditions changed unfavorably for MarkedUp. All of this occurred within a couple of weeks at the tail end of the month.

I am well-accustomed to stress and handle it well. I’ve had things not go my way before, and I’ve always found my footing again.

This time something was different. Maybe it’s because of how deeply invested I am in MarkedUp, or because of everything going wrong at once – I don’t know. But the difference this time was that I walked away with a major dent in my self-confidence and self-assurance. That’s a new one.

I wasn’t even aware that my confidence took a dip at the time. I figured that out more recently, when I started noticing a huge increase in my anxiety and apprehension when it came to routine things.

I started asking myself, “wait a second – this shit never used to bother you. What’s changed?”

Then I remembered the truth. Let’s flash back a few years.

An Unusual Habit

I was an immensely awkward kid my first two years of college. You know the formula: an under-socialized kid who spent more time in front of computers than people during his developmental years, suddenly thrust into a co-ed dorm with babes and booze aplenty, 2,000 miles from the nearest person he knows, unsure of himself, finding his way in the dark.

In particular, I was utterly terrible with women. Terrible enough that my Freudian defense mechanism kicks in with full force when I try to relive the full horror of my own awkward blunders with the opposite sex. Jesus.

Second semester freshman year, I discovered alcohol and spent more time partying1 than studying, receiving poor grades for the first time in my life.

When sophomore year rolled around, my advisor put the ever-loving fear of God into me that if I did not clean up my act, there was no chance I’d ever make it into a decent Computer Science graduate program. I immediately dropped the partying and focused on my schoolwork, and somehow my social life managed to get even worse than it was the year before.

Worst of all, my grades improved but not by a big enough margin to move the needle. I still wasn’t succeeding. Failure on all fronts. Sound familiar?

So I made a decision.

At the very end of Sophomore year, I signed up for study abroad the following summer in Berlin, during the 2006 World Cup.

None of the people on that trip from Vanderbilt knew me, and certainly none of the other students in Berlin. I had a clean slate, and since I had fucked my last one up during freshman year of college, I was going to do things right this time, damn it.

The decision I made was simple: I would not give a single iota of shit about my schoolwork and not take myself seriously. And drink beer. Can’t forget the beer drinking.

The results were astonishing: I had an easy time making friends and socializing at parties and clubs, and my grades were actually pretty good to boot. FWIW, my alcohol tolerance was insane by the time we left Berlin, but we’ll save that story for another time.

So I doubled down at the start of junior year. I decided I would not care about anything other than getting the most out of my college experience. I joined a fraternity, took graduate-level courses, dated some fun and interesting women, and pretty much felt like a bad ass. My GPA junior year was a 3.8 – and the amount of effort I put into my schoolwork was a fraction of what I had put in the year before.

Senior year, same thing – I stopped giving a shit about everything by default, confident that I could turn any situation in my favor when I needed to. And I did.

By the time I left Vanderbilt, I felt like a bad ass. But when I entered the real-world and got a job, everything changed. I had to start all over again and build myself back up piece by piece.

Set Your Own Odds

The point of that story is that my ability to succeed personally, physically, and professionally was inversely related to my level of emotional investment into whatever I did. It was that way all along.

I still put in a lot of hours on my school work, I worked out often at the gym, and I spent a lot of time trying to come up with fun ideas for dates.

I put a high level of effort into things that mattered, but for very different reasons than my freshman and sophomore year. I put effort into those things because I wanted to, not because I felt like I needed to.

Not giving a shit simply means doing what you want to do without any emotional investment in the outcome. Put differently, you gradually develop habits to counter your mind’s self-protective “flinch” reflexes.

Negative outcomes and consequences were still possibilities with everything I did, but I hardly thought about them. After all, I live in a part of the world where it’s considered a really bad day if you lose Internet access for 24 hours – what’s the worst that could happen, really?

Once you stop worrying about the negative possibilities, every opportunity has nothing but upside. Ask the cute girl / guy at your gym out for drinks – the worst thing that can happen is a slightly awkward story that you and your friends can laugh about over drinks later. It’s all upside from there.

Stop giving a shit. Stop caring about what other people will think about you if you fail. Worry only about what you’ll think of yourself if you don’t try. No one likes a pussy.

Bold without Knowing It

So, jumping back to the present. Once I realized what was going on, I ran through my process for putting my brain back into “don’t give a shit” mode:

Work towards concrete goals – we can’t control our calendars and circumstances at every given moment, but we are always in charge of deciding what’s important to us. Remember what those goals are. Create a path for realizing them. All of them. Follow it. Even if it takes years. Ignore everything else.

Step into every punch – an old boxing metaphor; if you’ve ever tried boxing, your first natural reflex when someone tries to punch you in the face is to jerk your head back out of the way.

If you let that happen, you drop your cover and leave your face exposed, which allows experienced boxers to beat the shit out of you.

So the antidote is to train boxers to step into their opponents’ punches with their guards up and use counter attacks or controlled evasions, instead of flinch reflexes.

The same goes for every day life – when your chest tenses up right before you send a critical email, condition yourself to just press send and don’t think twice. Do the opposite of what your pain-avoiding reflexes tell you.

Practice taking risks – read my full post about the subject. Start habitually taking risks on small things, so you’re prepared for the really important stuff in the future.

Decide how you want to react to things – from the Seven Habits of Highly Effective People: “you can’t control everything that happens, but you can always control how you react to them.”

My (chosen) default response to most things: “meh.” Celebrate victories and learn from defeats, but don’t dwell on either.

Find the upsides – as mentioned earlier, everyday risks offer very little for us to be afraid of beyond mild embarrassment and maybe some financial / temporal setbacks here in the comfortable first world. So look at your opportunities, even the small ones, and use the upside as your primary decision-making metric, rather than fear of failure. You’ll make bolder, better decisions that your 80-year-old self will appreciate.

Find the things you like about yourself, then show them off – there’s an engaging, interesting person inside all of us. Find him or her and let the rest of the world in. Me? I’m pretty sure I can make just about anyone laugh.

So even though this post makes me feel a little vulnerable and reads like the dust jacket of a self-help book, I’m publishing it anyway. If it helps someone else find their stride, that’s all that matters to me.

Nothing ventured, nothing gained.


1FWIW, my social skills did not improve as a result of said partying

If you enjoyed this post, make sure you subscribe to my RSS feed!



Being Right is Always the Wrong Choice

August 27, 2013 03:48 by Aaronontheweb in General // Tags: // Comments (3)

It was about four or five years ago that I had an intrinsic need to be “right” all the time.

I couldn’t let it go when someone made a mistake, or slighted me, or disputed the quality / direction of my work. Everyone else was wrong. I wasn’t alone in this regard either: a number of the people around me did the same thing.

Being right

Being right made me feel superior; it made me feel better than the idiot who did that thing wrong; it made me feel moral; it made me feel righteous.

When someone made a mistake, I felt compelled to point it out – even if it meant interrupting a speaker’s presentation during a team meeting. People started spending a lot more time perfecting their talking points and PowerPoints – time that should have probably been spent on something that actually impacted the bottom line for the business.

When someone criticized my work, I deflected their criticisms or manufactured my own reasons for why their criticisms were invalid. My work is a fucking masterpiece, after all. The quality of my work didn’t improve, and I didn’t learn anything unless it was the hard way. People didn’t care about what I had to say about their work either, since they knew I didn’t listen.

I went about all of my work fervently believing that what I was doing was correct and righteous – there was no room for self-doubt in this guy’s super-powerful brain. I made a lot of mistakes that could have been avoided, and looked the part of the arrogant idiot on a climb-down after each one.

But at least I was “right” and everyone else was wrong.

I have an assertive personality, so my need to be “right” manifested this way. You can easily substitute this with passive-aggressive behavior, which is what most “right” people do.

Being effective

My life changed early in my career when one of my mentors dropped the following bombshell on me after I had a friendly but needlessly assertive altercation with another employee:

You can choose to be right, or you can choose to be effective. It is a binary choice. You pick one, or the other. They are separate circles on the Venn diagram with zero overlap in between.

I choose “effective” every time.

There hasn’t been a day in the nearly five years since that I haven’t said this line to myself at some point: “You can choose to be right, or be effective.” I don’t always make the correct choice, but I’m a thousand times more effective today than I was five years ago.

I stopped caring about people’s small mistakes. And if I did care, it stopped mattering to me who made the mistake or who fixed it, as long as we took care of it.

People became less afraid of sharing their ideas with me and more empowered in their roles the longer we went without the appearance of the “You’re wrong, I’m right” stick.

If someone took issue with the way I was doing something, I would break down the problem for them and ask for data-supported suggestions on how to do something better. In many cases, their criticisms were valid and we changed whatever I was doing. Other times, their suggestions were based on bad data and we didn’t implement them. We stopped taking it personally.

Gradually, everyone put their weapons down and became less worried about accepting, acknowledging, and offering criticism. People began to share their successes with each other, and everyone felt good.

I stopped worrying about whether I was doing the “right” thing when it came time to take care of business – I did the best I could, owned up to any problems I created, and tried to fix them. I gave myself some room to question whether I was doing the most effective thing at any given time.

People in the office became less afraid of making mistakes and tried new things. People offered a simple “well, that didn’t work – my bad” when something went wrong, and people stopped caring when it did. We became simultaneously more productive and more accountable.

Being comfortable with yourself

The contrast between the “right” period and the “effective” period is drastic. The team became a “we” once people stopped caring about being right. Everybody felt a lot better and accomplished a lot more.

Those changes pale in comparison to how my feelings about myself changed. I no longer felt superior to anyone – because I stopped needing to. All of the feelings of superiority and righteousness that come from being “right” are a façade that hides internal weakness and insecurity.

When I changed my goal from preserving my own sense of superiority to just trying to do the best I can, everything became much more impersonal and objective.

Ultimately, this shift in priorities helped me become radically more comfortable with myself. There are still things that bother me and things I feel aggrieved about, but rarely do I think about who’s right and who’s wrong anymore. There’s simply nothing to gain by it.

I owe everything good that’s happened to me since I graduated from college to this change in thinking. It’s empowered me to become a leader and made me much more comfortable in my own skin than I was just a few years ago.

Being “right” is a war of attrition waged by the weak against the strong; it’s designed to systematically give weaker people leverage over others by gradually instilling doubt and undermining moral authority.

Comfortable, strong people will recognize this behavior for exactly what it is: whimpering and squeaking from small people who feel like shit about themselves.

When you stop needing to play the “who’s right and who’s wrong” game, every encounter you have with a “right”-minded person makes you think “there’s a piece of work; probably not going anywhere fast.”

You can choose to be right or be effective. Being right is always the wrong choice.

If you enjoyed this post, make sure you subscribe to my RSS feed!



Instant File Server: turn any directory into a webserver with a simple command

August 14, 2013 03:46 by Aaronontheweb in Node, Open Source // Tags: // Comments (4)

Our engineering team has been neck-deep in configuration hell lately: editing 2,000-line Solr configuration files, trying to get Apache Oozie integrated into DataStax Enterprise, Cassandra 1.2 upgrades, and more – and the one thing all of these tasks have in common is the prevalence of enormous XML configuration files.

Having wasted countless hours using tools like SCP and various Sublime Text plugins to try to edit (or hell, even view) the configuration files on our dozens of Linux machines, I finally had a “fuck this shit” moment this week and wrote instant-fileserver, a stand-alone file server that you can start with a single command against any directory on any operating system.

instant-fileserver (ifs) allows you to:

  • expose the contents of any file system via HTTP;
  • view individual files as well as directory listings;
  • create, read, update, or delete any file using ifs’ dead-simple RESTful API;
  • create multiple ifs instances using a simple command; and
  • safely edit any file from the comfort of your Windows / OS X machine and push the results back onto your Linux servers once you’re finished.
Here’s an example usage:

$ npm install -g ifs
# ifs is added to your PATH; go anywhere on your system
$ ifs -help
$ ifs [arguments...]
... starting ifs on 0.0.0.0:1337
Here’s a quick demo video to show you how easy ifs is compared to the alternative:

Using Instant File Server (IFS) – Video Demo

If you hate having to move heaven and earth to do something as mundane as edit a damn file, IFS is the werewolf-destroying silver bullet you desperately need.

ifs is built using Node.js and has a tremendously slim codebase, so anyone can edit or extend it if they wish. ifs is licensed under the permissive MIT license.

Fork the ifs source code on GitHub – we accept pull requests!

If you enjoyed this post, make sure you subscribe to my RSS feed!



Cassandra Summit Talk: Real Time Analytics with Cassandra, Hive, and Solr

July 18, 2013 04:41 by Aaronontheweb in Cassandra, Hadoop // Tags: , , , // Comments (0)

I spoke at the Cassandra Summit this year about how we use Cassandra, Hive, and Solr in production at MarkedUp Analytics.

Planet Cassandra recently made all of the videos and slides available and I thought I would share. Enjoy!

Slides:

If you enjoyed this post, make sure you subscribe to my RSS feed!


