Startup Dharma

“Do important things” is often invoked as a rallying cry in these pages, but this time I want to talk about something more important than innovation, invention, entrepreneurship, and all the rest. I want to talk about dharma. More specifically, I want to talk about your dharma.

Classically speaking, dharma represents both cosmic law and order – our universal duty – as well as reality itself. Upholding your dharma, then, means both fulfilling your ultimate responsibility and upholding the truth. It is no accident that I say your dharma. The truth, while in one sense absolute, is also deeply personal, and rooted in the enduring power of the individual.

With commitment to the truth as the first principle, your code of conduct is simple: When you see something that's broken or bad, you have to say something about it or fix it yourself. Just as importantly, when you hear something, listen. It’s not just about the success of the organization, but also a moral imperative not to let anyone you care about fly off a cliff.

In practice, this is extremely painful. Honest, unadulterated feedback is as emotionally alien as it is intellectually obvious, whether giving or receiving. Confronting the truth together is a social endeavor, yet it flies in the face of all social convention and pleasantries. Unlike you or me, the truth doesn’t have feelings – but that is precisely why it’s the truth.

Of course, it’s easier to face hard truths when we talk about collective failures. These are important to address, and can be invaluable object lessons for the organization writ large. Individual failures, however, are the ones you, and only you, can control. Accordingly, the most painful and most vital incarnation of the truth is individual feedback – all in the service of discovering and fulfilling your dharma.

This matters on multiple levels. In practical terms, nothing happens unless you make it happen. Day to day, a bias towards action is one of the most valuable things you can institute. Without your concerted action, planning, analysis, strategy, and the rest are just distractions from an empty center.

However, dharma is also about unlocking the essence of the individual. Facing your dharma means stripping away pretense, delusion, and distraction to reveal who you are and what you are meant to be doing. You uphold your dharma in the service of both the individual and the collective. For the whole to be greater than the sum of its parts, the parts cannot seek anonymity and cover in the whole.

Likewise, true feedback comes from a foundation of investment in the individual. The underlying intentions need to include the opportunity to grow from mistakes and the willingness to help someone get there. We all like to talk about investing in people, but it’s important to internalize that hiring isn’t the end of the road. The hard part starts after – especially for the most innately talented individuals. If you don’t give them feedback, you’re just as guilty of coasting on their talent as they are, and you will inevitably reap the consequences.

As many a wise master has observed, there are countless paths to dharma – indeed, there are as many forms of dharma as there are seekers. Everyone arrives at the truth in a different way, as evidenced by leaders as diverse as Ray Dalio, Prof. Carole Robin, and Peter Thiel.

Ray Dalio’s Principles is more than required reading at Bridgewater, and Bridgewater’s culture of “radical transparency” is almost infamous for the degree to which honest feedback is emphasized. Dalio’s most basic principle states:

“Truth – more precisely, an accurate understanding of reality – is the essential foundation for producing good outcomes.”

It seems simple enough, but the real genius of Principles is how he mediates between the truth as an absolute and the individual experience: 

“Above all else, I want you to think for yourself - to decide 1) what you want, 2) what is true and 3) what to do about it.” 

Dalio also caveats that “you can probably get what you want out of life if you can suspend your ego”, and the same can be said of feedback. For most of us, this will be the hardest battle.

One of Peter Thiel’s great maxims is “Listen carefully to smart people with whom you disagree.” Thiel is a renowned contrarian, but he didn’t hone his worldview in a vacuum. One of his greatest strengths has been assembling teams with the built-in structural tension needed to confront bias and complacency head-on and do transformative things. To be frank, this includes the ability to pre-select for thick skin. No one who was at PayPal in the early days would describe it as a touchy-feely place – but factoring in the type of talent it attracted, that was part of the genius of the design. Pre-eBay PayPal practiced a form of directness that probably wouldn’t have flown at most other companies – but look at the record of the PayPal mafia versus any other group of corporate alumni.

Professor Carole Robin of Stanford’s Graduate School of Business is best known for her popular “Interpersonal Dynamics” course, affectionately nicknamed “Touchy Feely”. As Professor Robin describes, “It’s about learning how to create productive professional relationships,” and feedback is a key ingredient. Robin’s approach may seem like a high-empathy yin to the low-empathy yang of radical transparency or the PayPal model, but many of the basics are the same. Robin advises giving feedback early, and above all practicing often. She also emphasizes the need to avoid shaming and to “stay on your side of the net” by not making the critique personal – in other words, don’t aim for the ego. Finally, listening is crucial – in Touchy-Feely speak, “It takes two to know one”.

Recognizing there are many paths to dharma, where do you start? The most important thing is to take that first step, practicing feedback early and often, and making it a non-negotiable component of every consequential effort. To have any chance of sticking, it has to become the new normal. 

One of the great tragedies of working life is the tendency to treat feedback like taxes: a necessary evil to be addressed annually or quarterly. Too often, feedback is also synonymous with either punitive or back-patting exercises. You need to inoculate people against these associations by starting early, before there’s a crisis. Of course, as new people arrive, you will be forced to begin the acclimation process from scratch, because organizations that practice truthful feedback as a way of life are rare, and individuals for whom it comes naturally are rarer still.  

Another complication is that people tend to be lopsided in their feedback. Those with lower empathy have the easiest time giving feedback. It’s intuitive, even reflexive, but they tend to be terrible at delivering it diplomatically. This is your opportunity to suspend the ego, assume it’s not a personal attack, and consider the substance of what is being said. Eventually, you realize that seemingly low-empathy individuals are often just carrying out their dharma. Make no mistake, it is a gift.

On the other hand, those with high empathy are best suited to give feedback diplomatically, but struggle to make it appropriately critical because the very thought of doing so causes pain. An empathetic style can also be a gift, but only when personal sensitivity is complemented by the courage to overcome the inertial bias against criticism. Above all, recall that this is the real world. There is no perfect Goldilocks balance. The key is to get started with the ingredients you already have.

You should also consider the source – except when you shouldn’t. Remember Peter Thiel’s smart people who disagree with you. With any luck, you will have colleagues who possess deep credibility in areas you don’t, and you should make extra effort to listen to them. On the other hand, sometimes incisive and true feedback will come from people with no apparent legitimacy. When your ego cries out “who the hell are you?”, turn the other way and focus on the substance of the criticism.

What if you’re wrong? This is always a possibility, giving or receiving, but because you are already thinking critically, it’s not a meaningful risk. If there is any possibility in your mind that something is wrong, confront it together. Either you avert disaster, or you discover why things were in fact right all along. Both are preferred outcomes.

Feedback is especially hard at any meaningful scale. The larger you get, the tougher it is to guarantee a high standard of intellectual honesty, while cracks in the foundation become increasingly subtle and imperceptible. In many ways, it’s good to maintain a healthy reserve of fear of what you might become - look no further than our political system to see what happens when the truth is focus-grouped beyond all recognition.

As with almost any worthy endeavor, the pursuit of your dharma involves constantly raising the bar. It is never easy to ask people to be more than they have been, and to address when something has stopped working, or never did. It is doubly hard because these realizations often come when people are working their absolute hardest. As painful as it is to admit that someone’s best isn’t good enough, it doesn’t make it any less true. In fact, it becomes that much more important.

It’s fine to say failure is not an option in moments of bravado, but you know inside that abolishing failure – at least the lower-case kind – is not only unrealistic, but leads to denial and paralysis. It’s entirely reasonable, on the other hand, to insist that you won’t accept failure without feedback. Only by confronting the day-to-day truth can you hope to unlock the greater truth of your highest potential, as an organization and as individuals. That is good karma. 

 

Optics and the Suppression of Innovation

One of the more pernicious, and also subtler, difficulties of governance is something I’ll call the tyranny of optics. Across the organizational spectrum, you find systems that are designed to appear transparent, fair, and free of conflicts of interest. Yet all too often, the result is gridlock and bad outcomes for the honest actors, while actual corruption is only pushed deeper underground. It’s the ultimate bitter irony: instead of functional compromise, you get institutionalized disaster.

The legacy government acquisitions system is a perfect example. The driving force is typically not a desired outcome, but rather a long list of requirements established to pass the eye test. The unintended consequences of these requirements, combined with their tendency to stifle innovation, result in the worst of all possible worlds - for the mission, the taxpayer, and the many people doing their best to both produce and acquire high-quality technology.

One of the greatest pitfalls is contracting on a cost-plus basis. This is largely a function of optics, as well as the inherent difficulty of placing value on high-tech innovation (and the age-old confusion of cost with value). The problem is that a fixed profit margin means you can only make money by increasing revenue – there’s no incentive to increase efficiency, even though efficiency is the whole basis of Moore’s Law. In essence, you substitute accounting for accountability, and the effect is that the true value of technology, and the true potential for innovation, are obscured by the very mechanism meant to ensure transparency. It’s also worth emphasizing that for the vendor, it’s about simple math, not corruption. When you can only make money on the top line, a rational actor has no choice but to conform or find a different business.
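The incentive gap is easy to see with a little arithmetic. Here is a minimal sketch of the two contracting models; the margin and cost figures are hypothetical, chosen only to illustrate the point:

```python
# Toy comparison of vendor incentives under cost-plus versus
# fixed-price contracting. All figures are illustrative.

def cost_plus_profit(cost, margin=0.10):
    """Profit is a fixed percentage of whatever the work costs."""
    return cost * margin

def fixed_price_profit(price, cost):
    """Profit is whatever remains after costs: efficiency pays."""
    return price - cost

# Under cost-plus, the less efficient vendor earns MORE profit.
inefficient = cost_plus_profit(cost=100_000_000)  # 10,000,000
efficient = cost_plus_profit(cost=60_000_000)     #  6,000,000
assert inefficient > efficient

# Under a fixed price, every dollar of cost cut goes straight
# to the bottom line, so efficiency is rewarded.
assert fixed_price_profit(100_000_000, 60_000_000) > \
       fixed_price_profit(100_000_000, 90_000_000)
```

Nothing in this sketch requires bad faith: given a fixed margin on cost, driving revenue up is the only rational way to grow profit, exactly as the paragraph above describes.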

Furthermore, the system is designed to evaluate the surface qualifications of a vendor to perform work at the government’s risk – have they done something like this before for similar clientele? When building massive hardware investments such as aircraft, this might seem like a reasonable question (though the success of SpaceX has chipped away significantly at the conventional wisdom). When applied to information technology, it’s much more obvious what an arbitrary standard this is – imagine if Larry Page and Sergey Brin had been subjected to these considerations when they were raising capital. The consequence is that the number of “qualified” contenders remains flat over time. This, in turn, creates an anti-competitive vicious cycle where the presumed ability to deliver is based on perceived qualifications, rather than those qualifications being based on the actual ability to deliver.

Of course, technology projects fail all the time – but because optics are paramount, there’s no willingness for the customer or vendor to admit failure. Instead, we keep sinking money into the same projects until any resolution seems palatable, or the original need is forgotten. Paradoxically, the system demands perfection, yet actual failure is shockingly acceptable – so long as the vendors are “qualified”. Because these failures are overseen by familiar faces, the vetting committee still boasts a perfect record. It’s like a dystopian version of Blackstone’s formulation: better ten credentialed companies should fail than one startup. Consequently, no one is willing to take the kind of development risks that could yield transformative discoveries. Failures that amount to sunk costs are acceptable, while the ones that could really teach us something are unthinkable.

A highly respected veteran of Congress and the Executive Branch once told me that one of the more underreported challenges of DC was that killing earmarks only removed much-needed grease from the system, predictably causing the machinery to grind to a halt. Ironically, earmarks connoted a certain honesty because everyone knew what was going on. The practice allowed for plenty of valuable give-and-take; the real problem was that in many cases the optics were just too shaky.

Since the earmark moratorium, we’ve been treated to an endless game of budgetary chicken that has certainly led to worse outcomes for taxpayers than earmarks ever did. Meanwhile, conflicts of interest haven’t gone anywhere – they’ve just reappeared in the form of more insidious slush funds and legislative blackmail techniques. Technology acquisitions and Congressional deal-making might appear to be very different beasts, but in both cases, the substance of compromise and pragmatism has been replaced by the rigid ideology of covering your backside at all costs. When optics are the primary concern, you can’t even have token cooperation, let alone the partnership needed to solve hard problems.

Bill and Melinda Gates’ recent Wall Street Journal editorial, Three Myths on the World’s Poor, exposes the tragic result of focusing on optics above everything else. Only a small percentage of foreign aid is lost to corruption, but that part always receives vastly disproportionate attention. If the absence of any perceived impropriety became the design criterion for providing aid or philanthropy, we’d only hurt the very people who need the most help. As the authors poignantly ask, “Suppose small-scale corruption amounts to a 2% tax on the cost of saving a life. We should try to cut that. But if we can't, should we stop trying to save those lives?”

The tax metaphor also helps to expose the rampant cynicism that preys on optical controversies. Almost no one would consider a small tax, or other nominal costs of doing business, a good reason to abandon an overwhelmingly profitable enterprise. Why should the criteria be impossibly strict when we stand to gain lives as opposed to dollars? Perhaps better than anything else, the humanitarian aid challenge reveals the logical conclusion of elevating optics above everything else: since a perfect solution is impossible, we’re better off doing nothing.
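The arithmetic behind the tax metaphor is worth making explicit. A toy calculation, with all figures hypothetical and chosen purely for illustration:

```python
# Toy model of the 2% "corruption tax" argument.
# All figures are hypothetical and purely illustrative.

budget = 1_000_000        # aid dollars available
cost_per_life = 2_000     # cost to save one life, before friction
corruption_tax = 0.02     # 2% of funds lost to small-scale corruption

# Lives saved if we proceed despite the friction.
effective_budget = budget * (1 - corruption_tax)
lives_saved = effective_budget / cost_per_life

# Lives saved if we walk away because the optics aren't perfect.
lives_if_we_quit = 0

# The 2% tax costs 10 lives relative to the frictionless ideal of 500;
# abandoning the effort over the optics costs all 490.
assert lives_saved == 490.0
assert lives_saved > lives_if_we_quit
```

The asymmetry is the whole point: the cost of tolerating the imperfection is marginal, while the cost of demanding perfection is total.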

Every election cycle, someone promises to run the government like a business. Setting aside whether this is desirable or feasible, the obvious challenge is that the optics become most restrictive when the government bears the risk (as businesses generally do). Yet vast opportunities exist for government to transfer risk from taxpayers to suppliers. Imagine a marketplace where vendors can only compete if they guarantee an outcome or your money back. Optics would revert to their proper place: still a factor, but far from being the first or only consideration.

By ending the charade of demanding perfection, we can stop wasting time on the fantasy of eliminating risk and instead focus on the real work of managing it. When you practice the art of the possible, paint will inevitably splatter – but to a realist, the result is infinitely more attractive than an ideal that will never be achieved.

A Lesson From the Affordable Care Act Rollout

Without commenting at all on the policy wisdom of the Affordable Care Act, it’s clear that the rollout of Healthcare.gov has been disastrous. This has been chronicled more diligently elsewhere, but can be summed up by noting that, while Healthcare.gov was plagued with bugs, crashes, and general confusion, a team of three college students replicated much of the desired functionality of the site in a few days. Of course, the alternative site, HealthSherpa, does not handle the user or data scale of Healthcare.gov or perform the most complex operations of applying for coverage, but the contrast between a site built for free and the ~$600+ million obligated for Healthcare.gov is sobering.

We can draw a few lessons from this affair. The first is that it represents a deep structural problem of government IT projects. The process used to bid out and build healthcare.gov was not, contrary to what you might have heard, particularly unusual or nefarious. On the contrary, it represents the norm for large federal IT projects: mandating that what should be straightforward products be built from scratch in many ponderous phases, replete with massive sets of requirements and a commensurately high number of billable hours.

The major difference is that this time, the users are the American people. The frustration of grappling with subpar technology is the same frustration experienced daily by some of the most vital people in our public service ranks. Soldiers, intelligence analysts, law enforcement officers, and veterans care workers, to name just a few, are routinely forced to work with tools that are barely functional and told to simply “make it work”. This is by no means meant to minimize the headaches associated with healthcare.gov – on the contrary, it points to the need for real, systemic change.

There are two fundamental flaws at work in the legacy government IT acquisitions model. The first is that the same procedures used to acquire tanks and aircraft carriers are used to build software. Yet software development is by nature a creative, generative, iterative process, not a static set of requirements that won’t change significantly over the lifecycle of the product.  And while good software is never truly finished, the essential building blocks can often be delivered right away - the key is that you’re creating a basis for iteration and creative enhancement, not obediently following the same blueprint for years at a time.

The second, and subtler, flaw is the failure to recognize that America in general, and Silicon Valley in particular, are unique in the ability to build software. Many remarkable advantages of American life have contributed, in turn, to our dominance in software development. Pondering an increasingly data-driven future, our abundance of software talent has to be considered one of America’s most strategic resources, and leveraged and fortified accordingly. Sadly, in the current IT acquisition landscape, armies of contractors are paid by the hour to produce a crude facsimile of what our best software artists could create for a tiny fraction of the cost – but ignoring such a precious asset would be a mistake at any price.

One great irony of the healthcare.gov fiasco is that a major rationale for the Affordable Care Act was the idea that Americans can do better than the legacy healthcare system – only to see what should have been a slam-dunk website rollout crippled from the beginning by the IT acquisitions machine, another legacy system. Regardless of one’s views about the law itself, though, one saving grace is made clear: if we want to do better, doing what we’re already the very best at seems like a good place to start.

Quantum Mechanics of Software

One of the most fundamental human desires is to believe that something is either A or B, and many complex endeavors are compromised from the beginning by treating the A/B split as a first principle. Binary logic may explain well-understood processes, but eventually the old rules cease to apply, as with the failure of classical physics to explain phenomena at atomic and subatomic scales. To understand quantum theory, you have to accept wave-particle duality, and even then, it turns out that no one really knows why light exhibits both wave and particle properties. We can observe, even predict, but not quite explain.

Startups are subject to similarly misunderstood dualities.  Simple minds want to know if winning depends more on doing A or B:  Should we move fast, or ship quality? Build footprint or monetize?  Optimize on breadth or depth?  The winner, however, realizes that you have to figure out a way to do both.  How this is accomplished is highly contextualized in practice, but it begins with the realization that you cannot have one without the other and hope to succeed.  If it were as simple as doing only one thing well, the success rate of venture capital would be much greater than 10%. And when you do succeed, as in quantum mechanics, recognizing that things work a certain way is more important than knowing why (for the purposes at hand, at least).

A venture also displays both continuous and discrete elements. From a wide angle, the growth curve or product lifecycle may resemble a wave function, but it’s also extremely iterative, and is most efficient when individual iterations occur at consistent intervals. Likewise, one characteristic is often expressed through the other, much as particle emissions are dependent on wave functions. The focus and abstraction needed to go broader also allows you to go deeper effectively. Similarly, in the course of developing a vertical solution, you often end up sharpening your intuition about how to slice the problem horizontally.

When striving to achieve both A and B, you often need to consciously set up opposing forces to achieve your goals.  For example, you need hackers who are relentlessly focused on solving the customer’s problems, even if they’re comparatively poor at productization and long-term code stability, and you need artists who are relentlessly focused on productization and pristine architecture even if their sense of customer urgency leaves a lot to be desired.  How you make them work together productively is an art - there is always some violence, but it starts by recognizing you need both, and accepting that their interactions only need to be productive, not harmonious.  The results of this type of particle collision are very difficult to know ex ante, so the safest bet is to find the best exemplars you can of each type – people you would want to work with individually.

The need to harness opposing forces sometimes extends beyond types of goal orientation to personality types (though these often go hand in hand).  Again, it’s up for debate why this is the case, but the anecdotal evidence is extensive.  The classic example from quantum physics is Richard Feynman and Murray Gell-Mann’s collaboration on the theory of beta decay.  Feynman was famously mischievous and irrepressible, while Gell-Mann was almost painfully serious and methodical.  While they frequently found each other exasperating, their tension was tempered by strong mutual respect – an obvious but sometimes overlooked component in organizational design.

Conventional high-tech wisdom posits that among the qualities of “better”, “faster”, and “cheaper”, you can only pick two. With the right team, you can do extraordinary and counterintuitive things. You can be better, faster, and cheaper – you just can’t be better, faster, cheaper, and also comfortable, which is the true contradiction. At the risk of resorting to truisms, doing hard things is hard – comfort is simply not part of the equation. As Feynman himself once quipped, “You don’t like it, go somewhere else!”

 

Soup Tasting

Ray Dalio’s Principles is required reading at Bridgewater, and contains plenty of wisdom that resonates well beyond its original context. Far down on the list, at #139, we find: 

... 139) “Taste the soup.” A good restaurateur constantly tastes the food that is coming out of his kitchen and judges it against his vision of what is excellent. A good manager needs to do the same.

Soup tasting is hard and requires you to pierce comfortable levels of abstraction.  Often where there are bad outcomes, there is a gross lack of soup tasting, both because of inertial unwillingness to take a bite and because of ineffective gustation.

Amazon’s Jeff Bezos is the archetypal soup taster (among many other outstanding talents). Bezos is renowned for the depth of his knowledge and the clarity of his insights (especially when making snap judgments), but equally important is his ability to get to the crux of seemingly complex matters in five questions or fewer. It’s easy to forget how many complex decisions Amazon has faced over the years, and the fact that their success is often taken for granted is largely a tribute to Bezos’ ability to ask the right questions so incisively and consistently.

The importance of soup tasting seems intuitive enough, but how you develop the ability to taste soup well is one of the more underrated challenges of leadership for a number of reasons.  To begin with, there is never just one kind of soup.  The metaphor applies equally well to the commercial success of your business and the view from inside.  At the same time, not all soup is equally important, and even the most astute taster’s capacity is limited, so you need a focal point. As Bezos has often described, “We start with the customer and we work backwards.”

More fundamentally, soup tasting is largely about overcoming bias, which is generally a very difficult process.  It needs to be about fearless inquiry, not seeking reassurances. Anyone who has done any actual cooking has probably had the experience of asking someone else if a dish tastes funny, while silently convincing himself that the answer is no. Of course, if it does taste funny, being polite does the aspiring chef no favors.  For soup tasting to have any value as an exercise, you can’t be afraid of what you might discover. 

Soup tasting is as much art as science, and as such it is hard to turn it into a predictable framework.  Still, some basic principles apply:

  • It all starts with probing.  Any time you are presented with an assertion, whether it’s a project plan, forecast, or report, review it tenaciously.  If something isn't clear to you, probe down. If something strikes you as particularly important, probe down deeper. If there are implicit assumptions, challenge them.  Think of the annoying little kid who responds to everything by simply asking “why?” It seems repetitive, but if you proceed from the right starting questions you will quickly get to the heart of the matter.
  • Get closer to the problem. Something about the soup seems off. Now you need to taste it some more. The first step in getting close to the problem is simply a more thorough probing. If that doesn’t do the trick, you need to go down two or three levels, either by homing in on the most important things in your area of credibility, or by asking someone who is credible. By the way, assessing who has credibility in what areas, beyond just being aware of their reputations, is its own important form of soup tasting.
  • Measure.  Soup-making, both literal and figurative, requires experimentation, and it’s one of the hallmarks of the Amazon approach.  Bezos places a premium on experiments that are measurable and produce hard data.  As he explained in Fast Company, “The great thing about fact-based decisions is that they overrule the hierarchy. The most junior person in the company can win an argument with the most senior person with a fact-based decision.”  At the same time, as Bezos will quickly tell you, “there’s this whole other set of decisions that you can’t ultimately boil down to a math problem” – hence you need to master the art as well as the science.

It’s also well worth considering what soup tasting is not:

  • It’s not micromanagement. Micromanagement means telling people how to do something without tasting the soup for yourself, or telling them how to do something in an area where you lack credibility.
  • It’s not distrust.  Distrust is not a productive default position, but neither is blind trust. Real trust is developed by consistent soup tasting – as the old saying goes, “trust, but verify”.  Knowing which issues to escalate as priorities, and how to escalate them as a team, is also an art form, honed through soup tasting interactions.
  • It’s not indefinite, nor is it an end in itself.  You need to find the middle ground between an excessively laissez-faire approach and never-ending inspection.

The more soup you start to taste, the more you'll want to taste, but as with anything, you can overdo it – just as when you proofread for too long, you’re bound to miss something obvious. It is critical to cultivate credible soup tasters throughout the organization, but the transition from soup taster to meta-soup taster is a tough one. It only works if your trust has been validated, and requires a great deal of intellectual honesty to avoid indulging in wishful thinking, feel-good exercises, or simply shedding responsibility.

In the end, soup tasting is how you know what is true – “overcoming bias” and “intellectual honesty” are really just fancier ways of expressing this.  And the truth matters more than anything else.  In his introduction to Principles, Dalio states,

“I also believe that those principles that are most valuable to each of us come from our own encounters with reality and our reflections on these encounters – not from being taught and simply accepting someone else’s principles…. So, when digesting each principle, please…

…ask yourself: ‘Is it true?’”

 

All soup tasting, ultimately, is a variation on this one simple yet profound question.

The Nature of Goal Orientation

I think a lot about what specific competencies are needed when starting something, but even more fundamentally, how does someone approach work (and life)? My experience is that there are goal-oriented people and there are process-oriented people.  Finding goal-oriented people is one of the most crucial determinants of startup success - no amount of expertise can substitute for goal orientation.

There is implicit bias in both orientations, but not all biases are created equal.  Goal orientation subordinates process to outcomes.  As a result, there is sometimes a tendency to ignore or undervalue the importance of frameworks, checklists, and details, though in my experience truly goal-oriented people are quite intuitive at abstracting useful and repeatable approaches from their experiences. Planning and process are also not the same thing – done right, planning is simply the division of larger goals into smaller ones.  Even so, goal orientation is a vastly preferable bias.  You can learn organization (and the most effective people are constantly re-learning it), but motivation is much harder.  By the same token, consultants can help to improve your processes, but they can’t define your goals for you.

Process orientation, on the other hand, actually subverts your goals, under the subtle guise of helping you achieve them.  Uncritical acceptance of process creates an alibi for failure.  When things go wrong, a process-oriented person thinks “I did all I could while following the process to the letter, so maybe it just wasn’t meant to be.” Without a healthy institutional skepticism, process easily becomes a goal in itself. To be fair, processes and goals can both be destructive if they are not subject to revision, but process is fundamentally tied to predictability and routine, whereas goals require constant thought and re-examination to remain effective.

The most inventive organizations are more concerned with limiting process than perfecting it. Apple’s revitalization began when they started to re-imagine a hardware problem (personal devices) as a software problem. If process had been the dominant consideration, Apple would have kept refining their old product lines until they faded into irrelevance.  By the same token, many enormous failures affecting society writ large can be attributed in part to relying on process while ignoring the substance (Enron, the subprime collapse, countless failed technology acquisitions).

Everyone claims to be goal-oriented (it’s probably one of the top resume clichés), but the norm is that people want to be told what to do.  Freedom is scary, partly because it is new and unfamiliar, but mostly because the onus will be on you to succeed once the security blanket of process is taken away.  Truly meritocratic and goal-oriented organizations are also quite rare, so it’s easy to mistake frustration with bureaucracy for a genuine drive toward self-determination.  During both Internet bubbles, countless career big-company employees decided they wanted to “join a startup”, without really asking why or realizing that they were trying to be different in the exact same way as everyone else (the word “join” isn’t an accident either).  Ironically, when asked by hiring managers what they would bring to the table, these people would typically deliver lengthy homages to their current company’s processes.

One of the most interesting things about goal and process orientation is how much of each is constitutional and how much is cultural.  Some people are natural insurgents, who will orient and achieve the goal so intuitively that they may not even appear disruptive to the casual observer.  Others have been raised in cultures that value conformity and process.  Just as many genes are only expressed when the right stressors are present, a naturally goal-oriented person may not emerge until landing in the right environment. The converse is much less common, however – process-oriented people tend to be exposed fairly quickly in truly goal-oriented environments where there is little concept of playing along.

The conflict between goal and process orientation is exceptionally relevant to planning one’s career.  We’ve all seen picture-perfect, cookie-cutter resumes that are obviously a result of process orientation.  What’s more interesting is when people try to design rules and processes to reverse-engineer a major career shift.  There are plenty of “experts” who will tell you to get experience in the private sector before doing a stint in government (or vice versa), or that you should learn “fundamentals” at a Fortune 500 company before joining an early-stage startup.  With all due respect, these people completely miss the point of having goals.  It should be more obvious with really unorthodox career arcs, but even so, many people are apt to read about Steve Jobs and think “Ok, so I should drop out of college, but take a calligraphy class, and get fired from my own company before making a triumphant comeback.”

Of course, there are plenty of perfectly good environments for process-oriented people.  The problem is when they land in the wrong place and both the person and team suffer.  It really comes down to honestly understanding your strengths and weaknesses, as an individual and as an organization. 

On the Joy of Renting

Ownership is the essence of the American Dream – or is it? The mortgage crisis certainly led many people to rethink the virtues of owning a home, but even in less dramatic markets, it’s a fair question.  There are many assumptions to be challenged and hidden costs to be considered.  Warren Buffett continues to bet heavily on housing, while Yale economist Robert Shiller contends that housing is an investment fad, with no net appreciation in the US market over 100 years.  Of course, as the author of the Shiller post points out, most of us are living in our homes, and the benefit is partly intangible.  But how much does the intangible actually depend on ownership as opposed to just being there?

Rental has always been a popular alternative for major, long-term-use assets with high maintenance costs.  Traditionally this has meant homes and cars, but they are just the beginning.  The convergence of low-friction technology, on-demand efficiencies, expanding tastes, and shrinking wallets has led to the explosion of the sharing economy, as reported by The Economist.  There are countless examples, each with its own intricacies: Rent The Runway, Amazon Web Services/EC2, ZipCar, Uber, even BlackJet. It’s about deciding not to own something you really don’t need to own yourself (and achieving better financial health as a result).  Increasingly, we have the option to spread out the long-term maintenance cost, which actually exceeds the acquisition cost for more assets than people tend to realize, while maintaining high availability.

The sharing economy ranges from necessities such as housing and transportation to luxuries such as designer dresses and private jets, but necessities quickly become luxuries when acquired carelessly.  This is especially pertinent for government, but it’s not always obvious which costs justify themselves.  Traditionally, the Forest Service, Coast Guard, police, et cetera all maintained their own helicopters, for example.  Even if they were grounded 90% of the time, no one wanted to give up ownership if they had a choice.  Now that states are going broke, sharing is a much more palatable option, but it’s not just about cutting costs – you have to re-examine the incentives.  In government, one of the major drivers of ownership is funding.  It’s easier to get large capital funds for new assets because they are assumed to be investments – and investment has a return.  It’s much harder to get operational funding because that is a cost – and costs are bad, right? (How many times have you heard that renting is throwing money away?)  But what if that helicopter fleet is just a really bad investment? It becomes a lot easier to make that case if you can get a helicopter on short notice, probably based on a retainer and/or hourly use fee (similar to ZipCar).
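The helicopter trade-off lends itself to some back-of-the-envelope arithmetic. All of the figures below are hypothetical, chosen only to illustrate how low utilization can tip the balance against ownership:

```python
# Hypothetical numbers for illustration only -- none come from the essay.
def annual_cost_to_own(capital, lifetime_years, fixed_ops):
    """Annualized cost of owning an asset: straight-line capital
    recovery plus fixed operating costs (hangar, crew, maintenance)."""
    return capital / lifetime_years + fixed_ops

def annual_cost_to_share(retainer, hourly_rate, hours_used):
    """Cost of on-demand access: a yearly retainer plus an hourly use fee."""
    return retainer + hourly_rate * hours_used

# An asset grounded 90% of the time might see ~250 hours of use per year.
own = annual_cost_to_own(capital=3_000_000, lifetime_years=15, fixed_ops=400_000)
share = annual_cost_to_share(retainer=100_000, hourly_rate=1_500, hours_used=250)

print(own)    # 600000.0
print(share)  # 475000.0
```

The point of the sketch is the structure, not the specific numbers: ownership costs are dominated by fixed terms that accrue whether or not the asset flies, while the shared model scales with actual use.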

Separating the emotional appeal of ownership (as difficult as that may be), my thesis is that it is generally a bad idea to own an asset unless you have a specific and special competency to own it.  This is the same for everything: housing, cars, servers - and especially software.

Cars are a tricky case, famously depreciating (up to 10%) the minute you drive them off the lot (a phrase so commonplace you probably finished it in your head). Many of us don’t know how to truly maintain our cars beyond the basics.  For occasional drivers, there are lighter-weight options such as ZipCar, but US infrastructure is still designed around individual drivers, and giving up your car can be very difficult if you don’t live in a city.  However, something like Sebastian Thrun's self-driving car work could someday open up a whole new world of on-demand transportation that is more efficient and safer than anything we have now.  Think about it: 97% of the time, your car is sitting around, taking up space, idle.

Servers, beyond the fixed costs, require hardware maintenance, networking, power and cooling.  Many servers require replacement after just a few years.  It’s much easier and lower overhead to simply rent the capacity you need - unless you are Google, Amazon, or the like, and have a special competency that requires you to maintain your own servers.
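As a rough sketch (with entirely hypothetical prices), the rent-versus-own comparison for servers comes down to amortized hardware plus overhead versus paying only for the hours you actually use:

```python
# All figures are hypothetical, for illustration only.
def owned_server_monthly(purchase_price, lifespan_months, power_cooling, admin):
    """Monthly cost of an owned server: hardware amortized over its
    (short) useful life, plus power/cooling and admin overhead."""
    return purchase_price / lifespan_months + power_cooling + admin

def rented_monthly(hourly_rate, hours):
    """Monthly cost of renting equivalent capacity on demand."""
    return hourly_rate * hours

owned = owned_server_monthly(purchase_price=6_000, lifespan_months=36,
                             power_cooling=120, admin=200)
# Renting only for the ~200 hours/month the workload actually runs:
rented = rented_monthly(hourly_rate=0.50, hours=200)

print(round(owned, 2))  # 486.67
print(rented)           # 100.0
```

The crossover moves, of course, with utilization: a workload running 24/7 at scale can favor ownership, which is exactly the "special competency" exception for the Googles and Amazons of the world.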

Software is often perfectly suited to on-demand delivery for predictable use cases, and software-as-a-service (SaaS) certainly qualifies as one of the major technology waves of recent years.  More and more, the prevailing sentiment is “why buy software when you can rent it?”, as reflected in Salesforce’s now-iconic logo. 

Of course, not all software needs can be satisfied by SaaS.  Then the relevant question is whether to build or buy, as opposed to rent or own, but the underlying considerations are similar (if quite a bit more complex).  My guiding principle is that you shouldn’t be building your own software unless you have a particular competency that requires it, or need to develop such a competency.

In keeping with the theme of recognizing our own biases, it’s important to separate the emotional resonance of ownership from the practical reality.  With software, the reality is that code depreciates incredibly fast, not to mention the continuous iteration and improvement required for software to stay relevant. Ownership bias is perhaps most frequent (and outsized) in government, where the idea of “owning” the code base has become hugely and irrationally popular.  In the vast majority of cases, “building” and subsequently owning your own software actually means contracting with private vendors to develop complex, bespoke systems that cost 10, even 100 times as much as an off-the-shelf product. 

There is an attractive yet perniciously false idea that once you build the software, it’s yours, free and clear.  The appeal is simple - people enjoy the feeling of ownership, and are naturally wary of being beholden to outside vendors.  But the reality is that you are paying down accrued technical debt all the time – just as you would maintain a house or car, except that a house or car isn’t expected to fundamentally change in a matter of months.  Furthermore, a bespoke project concentrates that debt with one client instead of amortizing it across all customers the way a productized solution does.  In a very cynical way, bespoke developers are smart to let the government own the source code. Not only does this prevent other customers from re-using the IP (and saving money on development), but it also makes the ongoing maintenance costs easier to justify because now, it’s their baby.
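The amortization point can be made concrete with a toy calculation. The figures are invented, but the structure is what matters: a bespoke client carries the entire maintenance burden alone, while a product vendor spreads the same burden across its customer base:

```python
# Hypothetical figures: the same yearly maintenance burden borne by a
# single bespoke client vs. amortized across a product's customers.
def per_customer_maintenance(total_yearly_maintenance, customers):
    """Yearly maintenance cost carried by each customer when the total
    engineering burden is split evenly across the customer base."""
    return total_yearly_maintenance / customers

# Bespoke system: one client owns the code, and all of the technical debt.
bespoke = per_customer_maintenance(2_000_000, customers=1)

# Productized system: the same engineering effort serves 100 customers.
product = per_customer_maintenance(2_000_000, customers=100)

print(bespoke)  # 2000000.0
print(product)  # 20000.0
```

In practice the productized total is usually higher in absolute terms (more features, more support), but the per-customer share still collapses, which is the essay's point about why "owning" bespoke code is so expensive.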

The final point is that if you are going to buy, you need to make sure that the seller has a specific competency in software.  It might seem obvious, but more than any other product, you want to buy software from a software company. Rolls-Royce can build world-class cars and jet engines alike, but there is no real analog among the aerospace companies and systems integrators that also attempt to build software.  The product lifecycle, pace of innovation, maintenance considerations, and above all the deltas between good and great all make software unique among industries.

Mo Data, Mo Problems

If you’ve spent any time in high tech the last few years, you’ve probably heard the term “big data” more than you care to recall.  It’s become a constant refrain, and the subject of plenty of breathless cheerleading, much like “the cloud”, “social media”, and countless other trends that preceded it.  This is not to say that big data is not important, but context and meaning are essential.  Big data has many roles to play, but it’s not an end in itself, as Shira Ovide explains so concisely in her recent Wall Street Journal piece.

“Data for data’s sake” is the first major weakness of the big data obsession cited by Ovide, and it’s probably the most salient.  This is a classic case of valuing inputs over outputs – the idea that if we only collect enough data, good things will happen.  This sort of magical thinking is somewhat reminiscent of past crazes for purely A.I./algorithmic approaches to data science, but at least in those cases there was some concept of outputs and programmatic attempts at sense-making.

Of course, big data also isn’t going anywhere, and many worthy analytical endeavors demand that we address it.  However, it is essential to distinguish between warehousing, searching and indexing, and actual analysis.  Focusing solely on storage and performance creates a sort of computational uncertainty principle, where the more we know, the less we understand.

As Ovide also notes, there is a critical gap in analytical talent, which big data has done more to expose than mitigate.  Computing power can go a long way towards making big data manageable and facilitating insight – if paired with a sufficient dose of human ingenuity.  Simply put, humans and computers need each other.  "Pattern recognition” is frequently cited as a benefit of a big data approach, but computers can't learn to spot patterns they've never seen.  As a result, the value of the analyst in defining the correct patterns and heuristics becomes all the more important.

Appropriately enough, the most valuable and elusive elements lurking within big datasets are often human: fast-moving targets such as terrorists, cyber criminals, rogue traders, and disease carriers who tend to slip through the cracks when algorithms are deployed as-is and left unattended.  The old playground retort that it “takes one to know one” actually applies quite well to these types of situations.

Human capital is a key part of the equation, but it’s not enough to acquire the right talent – you need to address the inevitable organizational challenges that come with retooling for a big data future.  Ovide notes that many companies are installing “Chief Analytics Officers”, and while I want to reserve judgment, the cynic in me suspects this reflects the bias of large organizations to centralize power and create new titles as a first line of defense against unfamiliar problems.  A chief analytics officer could be the catalyst to instill readiness and analytical rigor throughout the organization, but whether this reinforces or dilutes the perception that big data is everyone’s concern is a fair question.

More than anything else, I would analogize the challenges of big data to the differences between conventional warfare and counter-insurgency.  In conventional warfare, the targets are distinct and obvious.  In counter-insurgency, the enemy is hiding among the population.  Much as you can occupy an entire country without knowing what’s really going on outside the wire, you can warehouse and perhaps even index massive data stores without producing actionable insights.  Effective big data approaches, like effective counterinsurgency, require the right balance of resources, sheer power, ingenuity, and strong and constant focus on outcomes.  In the long run, the willingness to pursue a population-centric strategy may well prove to be the difference.

1776: The Ultimate Story of Entrepreneurship

David McCullough’s 1776 is, to my mind, the ultimate story of entrepreneurship.  Starting a company is challenging enough - now imagine starting a country!  Although many orders of magnitude more complex, America’s founding has much to teach entrepreneurs of all varieties.  And given this heritage, it should also come as no surprise that the United States remains the best place in the world to start something new.

One of the most valuable things 1776 imparts is an appreciation for the incredibly hard fight endured by the Continental army.  If your most recent lesson on the American Revolution came from a high school textbook, you might dimly recall a few triumphant battles and Valley Forge.  1776 paints a vivid picture of the sheer misery and constant trials of the war – trials few could have anticipated.  The Continental Army’s perseverance is even more impressive when you realize that the Treaty of Paris wasn’t signed until 1783.  For the modern reader, it’s a nuanced lesson: on one hand, you need to be realistic about the challenge ahead, but at the same time, you have no way of really knowing.

The parallels between startups and the Continental army are fascinating.  Some quick observations:

  • Chaos: Compared to the British army, the Continental army seemed completely chaotic. There were no well-defined roles and no visible hierarchy among these ragtag, shoeless countrymen who had taken up arms.  Of course, some of this chaos was real and some was perceived.  The relevant point when starting anything is not how to eliminate chaos, but rather which elements of chaos should be tackled in what order.  Do you address real organizational challenges, or just shuffle everyone’s title? This distinction escaped the British, who underestimated the strength and ability of the “rebels” simply because they looked like a mess. 
  • Meritocracy: Nathanael Greene and Henry Knox are two of the better examples.  Greene, a Rhode Island Quaker who had never been in battle before, became Washington's most trusted general due to his exceptional competence and dedication.  Knox was an obese 25-year-old who rose to the rank of Colonel.  He thought up the mission to secure artillery from Ticonderoga, without which the Continental army would have had no such capability.
  • Talent: Despite Washington’s minor experience in the French and Indian War, his principal strength was not military strategy (in fact, his advisors staved off disaster more than once by convincing him not to do something).  His real superpower was his ability to quickly determine who was talented at what.
  • Food: Food was critical to the Continental army.  Certainly there were times where they were on the move and hardly ate for days on end.  While food was always scarce, the fact that the Army was actually able to feed people with some consistency was critical. The modern startup is obviously not directly comparable, but we’ve seen time and again how providing food pays for itself many times over in terms of focus, productivity and commitment.

But more than simple observations and parallels, there are some real takeaways and strategies for anyone who aspires to start something extraordinary:   

Be Ruthless.

I was shocked by how many times during the course of battle the British would halt their movement to rest or make porridge or something completely non-essential.  There were countless occasions where the side with the advantage could have ended the war, had they only pressed on.  Their reasons should sound a cautionary note even now - stop because it is getting dark?  Stop because that was the plan (despite the ground truth)?  Worst of all: stop because we can finish the job more comfortably tomorrow. 

After routing the Americans and forcing them across a bridge, British General Cornwallis decided to rest.  The Americans retreated brilliantly and swiftly into the night. This was not the Continental Army's first such retreat, so it’s hard to imagine how Cornwallis did not realize the significant risk they posed. Why didn't he send out patrols? Most likely, he thought he would win tomorrow regardless, and preferred not to win under uncomfortable circumstances.  After the fact, he said that he would have kept going, whatever the risks, no matter the orders, if he had only known he would have caught Washington.  The lesson:  Be ruthless as a default setting, not just because victory is seemingly at hand.

Don't Get Overconfident.

Nearly every major mistake by either side in the 1776 campaign was a result of overconfidence.  Minor victories would lead commanders to discard their hard-won knowledge, resulting in terrible decisions.  The tendency to let encouraging signs override our better judgment is actually a fundamental human cognitive bias.  If you’re interested in learning how to recognize and defeat all manner of non-rational thinking, make it a point to read Overcoming Bias.

Don't Waste Time Politicking.

General Charles Lee felt slighted that the less experienced George Washington was given command of the Continental army, and constantly sought to undermine him.  When Washington ordered Lee to bring his forces to New Jersey, Lee dawdled, and was captured by the British while seeking some female companionship in a tavern.  Lee was marched to New York in his nightgown, and soon defected.  Much more devastating, however, was a series of letters to Lee from Washington's close advisor and friend Joseph Reed, detailing Reed’s disappointment with Washington.  Why couldn’t Reed have an honest, face to face conversation with his brother in arms to sort through the issues?  In any vital endeavor, there is too much at stake to have closed communications or privately nurse resentments.

It ain't over 'til it's over.

Time after time, each side thought a specific battle was going to be decisive.  In retrospect, it is amazing how incredibly wrong they were, and how often.  So how do you respond? There is a fine line between being jaded and being realistic. Starting something invariably requires commitment in the face of uncertainty.  For this reason, I’d argue that it’s better to be optimistic (even if slightly naïve) than completely cynical, but again, the key is to be aware of our biases.

 

Business Schools and Employability

According to a recent Wall Street Journal article, business schools are placing increased emphasis on the employability of their students prior to admission.  I won’t speculate to what extent this is motivated by the need to protect their job placement statistics in a grim economy, but it’s worth considering the true consequences of this trend.  As the article notes, business schools have always considered the goals of the applicant – but to what extent are they curating these goals on the front end?  Even if we assume good intentions, the effect is to reinforce the status quo, making business school populations even more risk-averse and less entrepreneurial.

Ironically, this seems to be at least partly motivated by the banking collapse: “when the financial crisis upended the banking sector and sure-thing jobs on Wall Street disappeared, schools began formally tying input (applicants) to output (graduates).”  Why “ironically”?  Regardless of how much blame you want to assign to federal housing and lending policy as opposed to private sector recklessness, the financial crisis wasn’t brought on by entrepreneurial, non-linear thinking. Legions of conventionally smart people who had done everything right, rigorously following twenty-year plans including name-brand firms and business schools, managed to get the biggest bets horribly wrong.  This is not meant to be flippant – current market conditions and job statistics are stubborn things that must be acknowledged.  However, if the lesson of the financial crisis is that we should double down on conventional wisdom, regardless of whether anything of value is created, then we’ve indeed learned nothing from the past five years.

As someone who frequently uses the frame of inputs vs. outputs, I took immediate notice of the wording above.  It would be encouraging to see an extremely input-focused sector more concerned with outputs, but I suspect they have confused the two in this case, merely trading one set of inputs for another (the addition of an MBA).  You can also think of this as commoditizing human capital, and this calls the entire purpose of an MBA into question.  Is business school meant to help develop leaders, or serve as a finishing process on a prestigious kind of assembly line?

The article goes on to state that “making employability too weighty a factor in admissions can backfire.”  According to Graham Richmond, a former admissions officer at University of Pennsylvania's Wharton School, “Looking at applicants through a narrow vocational lens may deter schools from accepting riskier candidates, such as entrepreneurs or career-switchers, in favor of more sure things, such as aspiring management consultants.”  The fact that aspiring management consultants are considered “sure things” is evidence of how much MBA culture values process over invention.  Candidates and schools understandably want assurances, especially in the wake of 2008.  The world is a chaotic place, even more so since the financial crisis (though I contend that it has always been so, and that the banking industry simply managed to insulate itself unusually well for as long as it could).  Obviously, you have to adapt to the current reality.  Yet I can’t help but wonder if by focusing on doing obvious, “safe” things, to the exclusion of risk-taking and creativity, the MBA community isn’t just constructing an elaborate playpen in which nothing new ever happens.