Innovation needs Customers, not Capital.

In this era of renewed great power competition, it is clear that we must continue to innovate in order to deter conflict. That requires military superiority and continuous technological innovation. When Palantir was founded, there was no path to working with the Defense Department. Not a difficult path; no path. And there was exactly one path to working with the Intelligence Community: In-Q-Tel.

In-Q-Tel is a venture capital firm that invests in commercial technologies relevant to national security. But that headline obscures the actual value proposition. America has plenty of capital for quality companies; a startup that can't raise money is probably not a very good startup. A deep and wide venture ecosystem is one of America's strengths. What In-Q-Tel provides that is a game changer is customers. The capital it invests is often de minimis to the companies, even if it is extremely valuable as validation. The customer contracts, clearances, and commitments that In-Q-Tel furnishes are the fuel for innovation.

Not enough is said about the US government as customer, and too much has been said, and overstated, about the US government as R&D financier. It is broadly acknowledged that in the present era the USG cannot outspend industry on R&D, though it very much did in the early Cold War. But perhaps that spending was never the key enabler outside of basic research.

Chris Miller’s Chip War covers the history of the semiconductor industry. In 1965, military and space applications were 95% of the chip market. Bob Noyce, then co-founder and head of Fairchild Semiconductor and future co-founder/CEO of Intel, always envisioned a broader market than the military, which meant he had to manage his own R&D priorities. (He was of course right: the first integrated circuit in a consumer product went into a Zenith hearing aid, in a design initially created for a NASA satellite.) He declined most military R&D contracts, despite those customers representing 95% of his revenue, so he could stay in control of his R&D roadmap. He never let more than 4% of his R&D budget come from government contracts; the other 96% was self-financed from investors and profits.

In Noyce’s own words: “…there are very few research directors anywhere in the world who are really adequate to the job at assessing Fairchild’s work and they are not often career officers.” Noyce also complained about the time spent writing progress reports for the bureaucracy whenever R&D was government funded. Fairchild had the unique luxury of being funded by the 1950s equivalent of a billionaire, so from day one it could treat the military as the customer for its products rather than as the boss of its R&D.

Because of the difficulty of achieving scaled procurement in the US government, it is unfortunately still true that a uniquely hard-headed and patient sort of mission-driven capital is required today. SpaceX, Palantir, and Anduril all have billionaire founders whose commitment to the cause was required to survive the various valleys of death and incredibly long acquisition cycles. Notably, two of the three had to sue the government to ensure it didn’t de facto compete against industry.

Even in Noyce’s era, the Pentagon was more comfortable working with big bureaucracies than nimble startups, and as a result underestimated the speed with which Fairchild would transform the industry. A DoD assessment praised RCA for having the best microelectronics miniaturization program while dismissively noting that Fairchild had only two scientists working on the problem internally while Lockheed had fifty, implying that Lockheed was far ahead. Of course, it was Fairchild’s R&D team that made the breakthroughs.

Defense contractors thought of chips as the final product. Noyce and Moore were already dreaming of computers and phones. Noyce slashed prices to reach that broader vision and market. Cost-plus government contracting would never have produced the price-performance the USG needed to build its formidable military deterrent. And of course all that commercial innovation led to a semiconductor revolution that created vast American prosperity, the underpinning of our national security.

We see a very different story with drones. General Atomics invented the modern drone in the 1990s with the Predator. A Noycian figure would have seen the vast potential of not only commercial drone applications but also the consumer market. Instead, for decades the R&D focus of these platforms was locked up in government programs. And to great effect. But the American prosperity that should have followed never materialized: General Atomics never built or spun out a commercial subsidiary that could have become the DJI of America. Instead DJI filled the vacuum, serving the CCP’s civil-military fusion aims. Now the hobbyist’s consumer drone purchase funds CCP R&D against America.

The root problem is that the acquisition system is unintentionally communist (Bill Greenwalt has observed the same). Profits on contracts are capped at a very low percentage. And while the law says the government should favor fixed-price contracts, in practice these acquisitions happen cost-plus. That means the only way to make more money as a contractor is to find a way to spend more money in your cost base. That incentive is exactly backwards. Noyce made MORE money as he lowered prices, with even greater profit margins. He had the commercial incentive to do so, and it benefited every customer, including Uncle Sam. Noyce could not rationally have done that as a government contractor: lowering the price would have meant lower profit under government-regulated margins. The reason we lost the drone market is that China executed with true capitalism through DJI while, ironically, America pursued communism.
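
A toy calculation makes the perverse incentive concrete. The margin and dollar figures below are purely hypothetical, chosen only for illustration: under cost-plus, profit scales with spending, while for a price-setting seller, profit scales with efficiency.

```python
# Illustrative only: hypothetical margin and dollar figures, not from any actual contract.

def cost_plus_profit(cost: float, regulated_margin: float = 0.08) -> float:
    """Profit on a cost-plus contract: a fixed percentage of whatever is spent."""
    return cost * regulated_margin

def commercial_profit(price: float, cost: float) -> float:
    """Profit for a price-setting seller: whatever remains after cost."""
    return price - cost

# Under cost-plus, spending twice as much doubles your profit.
print(cost_plus_profit(100.0))          # 8.0
print(cost_plus_profit(200.0))          # 16.0

# Commercially, halving unit cost at the same price triples profit,
# leaving room to cut prices and grow the market, as Fairchild did.
print(commercial_profit(100.0, 80.0))   # 20.0
print(commercial_profit(100.0, 40.0))   # 60.0
```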

“Selling R&D to the government is like taking your venture capital and putting it into a savings account. Venturing is venturing. You want to take the risk.” - Robert Noyce

Noyce ultimately left Fairchild, a company he co-founded and ran, because Fairchild’s billionaire financier didn’t think anyone but himself should have equity in the company. Ironically, at the time equity for employees was viewed as creeping socialism. So the talent left: Noyce and Moore departed to found Intel. Remember that Moore’s Law is not actually a law at all. It is a goal. To the extent Intel achieved it, and it did for many decades, that achievement was the consequence of incredible hard work in search of equally incredible reward, monetary and metaphysical.
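
To appreciate how audacious that goal is, a back-of-the-envelope compounding calculation helps; the sketch below assumes the canonical statement of the law, a doubling every two years.

```python
# Back-of-the-envelope: what "double every two years" compounds to.

def moores_law_multiple(years: float, doubling_period: float = 2.0) -> float:
    """Growth multiple implied by a fixed doubling period."""
    return 2.0 ** (years / doubling_period)

print(moores_law_multiple(10))   # 32.0      -> 32x in a decade
print(moores_law_multiple(40))   # 1048576.0 -> roughly a million-fold in forty years
```

Sustaining that curve for decades was not physics working itself out; it was the reward-seeking hard work described above.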

The problem with defense contracting is not the popular narrative that contractors make too much money; it is that they make too little. The sums are large, but the current system only incentivizes spending more money to drive up the fixed percentage of profit that communist procurement policies deem reasonable. Fairchild’s financier thought it reasonable that Noyce earned a salary and held no equity. So Fairchild lost, and Intel and a family of ex-Fairchild companies flourished. The USG should focus on price, not profit. If innovators can provide capability for less money, why does the government care what the profit margin is? Innovators need outsized profits to motivate progress.

Bill Perry’s Assault Breaker program was only possible because of the commercial success of chips. He needed a 100-fold improvement in performance to deliver that capability. Perry complained that his critics were Luddites, favoring DARPA’s continued spending on its own advanced chips... but it was the commercial chips that delivered the capability.

And in the modern era, for today’s great power competition, commercial technologies will be what delivers the future of warfare. We need look no further than the battlefield in Ukraine to see that.  


You have to engage with the world to change it

Technology is a wonderfully levered thing, but there is a dangerous temptation to believe that you can change the world without engaging with its institutions, in all their sprawling entropy (setting aside Silicon Valley's frequent delusions about what qualifies as changing the world). When considering how and whether to engage with national and global institutions, I've found that people gravitate toward one of two philosophies:

  1. The first holds that the world's institutions are fundamentally evil, and people subconsciously ascribe to them a certain malicious competence, e.g. the repressive government or the ruthless corporate polluter. If that is true, your only choices are revolution/anarchy or, much likelier, doing nothing. I disagree, but people are free to believe this.
  2. The second holds that the world's institutions are not fundamentally evil, but need help to become great - help that must be renewed as institutions change and people cycle through. Trying to be fair-minded, people often attribute institutional problems to incompetence as opposed to malice (though in my experience the incompetence is usually overstated).

 What I've learned from my struggles in the second camp is that there are no easy answers - but the goal is better, not perfect. These are the convictions that have helped me to contend with all the complexity that follows.

  • My first principle is to get off the sidelines. Complaining is a dead end. The question needs to be “who will help them?” The answer may be you.
  • If so, the second principle is to engage with the world as it is: messy and gray. 
  • Overnight change is a myth. The goal is to make today better than yesterday and tomorrow better than today.

In order to bring about change, you need to cultivate some intellectual humility. But early on, it helps to not have excess amounts. Happily, most people lack humility in youth (not a judgment of millennials; it's just human nature). And this allows you to attempt epic things. Along the way, with any luck, some wisdom seeps in. As a student, I thought everything was dumb. Surely I knew a better way. As I got out into the world, I slowly realized I'd been asking the wrong question. I dwelled on the first order: why is this so broken? Eventually, I realized there was a more meaningful question: what must be true for this to make sense?  From there, you keep pulling the string until you find the underlying condition that requires fixing. 

Politics is a dirty word for idealists and engineers alike (doubly true if you happen to be both!). But you can't address the most pressing problems without also grappling with the common good, which necessitates political considerations. The reality is that politics and engineering each have roles to play:

  • Politics is about accepting the tradeoffs. We live on an efficient frontier (or so we hope), forcing us to examine the tradeoffs between X and Y. In the political sphere, preference for X or Y, roughly speaking, defines left and right.
  • Engineering is about innovations that push out the efficient frontier – we can have more of X and Y, and the political viewpoint merely serves up false tradeoffs that rip us apart.
  • One problem with politics is that in a democracy, both sides are always right, or at least deadlocked. X is important. Y is important, too. Structurally the only good resolution is to invest in more of both. At its best, engineering can render a painful tradeoff false - think power vs. efficiency, human vs. computational acumen, or the former truism that you can only pick two among better/faster/cheaper.
  • Beware thinking that engineering is a cure-all for political ills. In reality, engineers cannot recuse themselves from the painful organizational aspects of a problem they're trying to solve. Software is fairly unique as something you can open source for the public interest, but it's exceedingly rare that you get to be Johnny Appleseed, sprinkling free technology on grateful soil. The hard work happens at the intersection of products, problems, and people.

It's also worth unpacking some assumptions about the nature of institutions more generally. We tend to think of them as static, almost by definition. This is understandable, especially in the political realm. If you're graduating college this year, you have probably never known a time when bipartisanship and compromise were considered positives, not accusations to be hurled by primary opponents. In reality, though, institutions require people, and the people themselves are only as static as their motivations permit them to be.

On the flipside, one of the most important questions you can ask is whether you are working for a specific person/administration, or for the historic, enduring ideals of that institution.  This is a core principle of the US military, and regardless of one's feelings about war, it's a tradition worth emulating. In a democratic society, the failure mode is less likely a cult of personality, and more likely an institution that becomes insidiously focused on consolidating power and resources. Sometimes, institutions bloat past the point of no return, and must be broken up or declawed. Often, though, an infusion of new blood can restore focus to the real mission.

While it's clear that governments should advance causes beyond their own prosperity, I would argue this is not only possible but essential in the commercial realm. We don't remember Steve Jobs for making shareholders a boatload of money; we remember him for giving us new ways to communicate, learn, and experience art. There are many forms of value to be created, and striving for the perfect should not preclude exploring different forms of good. But forms aside, an institution's ability to do more than just enrich itself is also a great indicator of its prospects for enduring - and its worthiness of your involvement. 

On a related note, people often discount the ability to work through an institution, not just for it. I don't mean capitalizing on existing channels or infrastructure, but advancing a larger goal. You can think in terms of ceilings vs. floors. Tesla must meet certain safety and efficiency standards to qualify for tax rebates and remain street legal, but these requirements are the floor, not the ceiling. They're catalysts for more aggressive innovations, even those that don't yet have a market.

There is also the truism that if your work is black or white, it probably isn't that important - or worse, you are lying to yourself. The most pervasive problems are subtle, complex, and require deep engagement with both established institutions and emerging forces. It's easy to point to noble aims consumed by unintended consequences and be scared of getting involved. The Arab Spring fueled the rise of theocrats, not democracy. Even seemingly non-controversial developments can force hard choices. Let's say you develop a promising cancer treatment in a startup or research lab. Do you try to take it to market yourself, or through a huge pharma company? Do you optimize for keeping it affordable, or raise prices to fund expanding your reach? Easy answers remain elusive, but to me, these examples are all arguments for more thoughtful engagement, not abdication. You can get off the sideline without getting on the soapbox.

Lastly, when pondering which institutions to join or serve, instead of asking “Why Institution X”, I would first ask, “Why me?” What gifts do I have to offer the world, and what kind of platform would allow me to answer the first question in the most meaningful way possible?


Position and Portfolio

One of the most enduring values of Western civilization, lying at the heart of the American dream, is the concept of ownership. But in the context of material things, the virtues of ownership too often go unexamined, and the same holds true for ownership in the context of your work.

As an attempt at a definition, the only time you will ever truly own something is as an individual contributor. No one will edit the code out from under you. But as soon as you enter any sort of organizational context, you start to realize that ownership is largely a myth. The more responsibility you have, paradoxically, the more that responsibility is shared with others. From every direction, people will be editing your supposed creations.

That's not a bug - it's just the nature of the thing. As a practical matter, you need to be comfortable with people submitting pull requests against your ideas; in fact, this is part and parcel of growth. Increased responsibility brings both the need to drive more and the need to thrive in shared ownership. The ability to handle this reality is a massive determinant of leadership potential, organizational health, and success as an enterprise. The inability to handle it manifests as stunted growth and pathological distrust.

Even if complete, individual ownership is largely a myth, it's still an exceptionally useful one in terms of understanding organizations and leadership. The traditional model is that you have a defined position with a defined portfolio. If this sounds stifling, it is. Its chief virtues are predictability and legibility - all other things being equal. And this, more than the existence of the hierarchy itself, is why conventional, status-driven organizations are so often ruthless and dysfunctional: things change. You can only be really aggressive and formal about promoting leaders if you are equally aggressive about firing leaders. It simply isn't possible that you are always right. This is how most organizations avoid the alluring, yet messy, challenge of decoupling position and portfolio. They become factories, and as they grow, they become fractals of the old org chart.

The alternative model is the artist colony, in which position and portfolio are not only decoupled, but the position itself is an arbitrary construct. There is only the artist and the work. We see this influence in Facebook's engineering teams, where everyone from new grads to eminent veterans gets the title of “Software Engineer”. We've done the same at Palantir - with the added wrinkle that BD is all engineers as well! As a variation on the theme, recall that the Dalai Lama famously describes himself as just a simple monk.

Within the artist colony, there are leaders - and in fact, they are the rule and not the exception. Their portfolios can and will change radically over time. This decoupling allows the colony to seize more aggressively on the innumerable opportunities for leaders to drive things, without creating a corrupting permanence to whatever is being driven. In fact, I would argue that leadership can only be taken in times of flux. Otherwise, it's just more skillful maintenance. The architect of the palace does not mow its lawns, however necessary a task that may be.

The fundamental fluidity of portfolio and position constitutes the artist colony's greatest inherent advantage (and challenge). In a traditional organization, it's expected that taking away a portfolio implies the loss of position, and vice versa. In the artist colony, it's understood that this would end poorly for all, not just the artist in question. Broadly, the result would be creeping risk aversion and status-seeking. And, more often than not in my experience, just around the corner lies the perfect thing for the artist to create - while also meeting an existential business need.

Meeting these needs is not just a happy side effect of individual fulfillment, but a strategic recognition of the nature of progress. It's a constant struggle and dialogue. We need to force ourselves to the front when the things we are driving push us back, but true leaders are not content to blindly take the next hill. Each success creates the imperative to ask “now what?”. At a certain stage, the logical next step might well be an oversight role. But what's much more interesting, to me, is how often I've seen leaders ease their developmental angst and do great things for the company by pivoting from oversight to owning and executing something meaningful.

On a similar note, we're all familiar with stories of leaders not scaling because they couldn't delegate. But we should be more worried about leaders losing their ownership/execution muscle entirely. In many of these cautionary tales, I'd wager that the real takeaway should have been that Leader X didn't own/drive the right things, not that he/she didn't delegate effectively.

Another benefit of decoupling position and portfolio derives from still another paradox: despite the necessity of joint ownership (or perhaps because of it), some things simply require a dictator to get done. This is especially true in product development, where not only does halving the team often double the pace of progress, but the fulfillment of a coherent vision usually requires one actual visionary. Dictatorship as a set position would never work in an artist colony because it's antithetical to the nature of art and artists. But dictatorship can not only work, but thrive, in the context of portfolio.

It should be acknowledged that the artist colony, like the artist, is a fundamentally restless entity, and this approach is not a panacea of any kind. It requires constant engagement and examination, and betrayals of artistic principles within this world will sting infinitely worse, because so much more is at stake than status and money. Owning your work, then, is the opposite of owning your home: it's only true ownership for as long as you're actively paying it off.

The Case Against Work-Life Balance: Owning Your Future

Given my journey, you can imagine my first reaction to questions of work-life balance is fairly unsympathetic. I want to protest that, by legitimizing such a false dichotomy, you’re pre-empting a much more meaningful conversation. But I suspect that conversation is closer to the heart of this anxiety than most people realize. 

If you’re worrying about work-life balance at the beginning of your career, and you’re reading this, I’m guessing you’re not lazy. You’re not looking for an easy life (even if this seems like an appealing concept right after midterms). I’m willing to bet that what you’re really worried about is someone else owning your most precious possession: your future. 

Staring into the abyss of companies that glorify triple-digit hours (never mind the substance of the work), this makes intuitive sense. But having surveyed the landscape of high-tech hiring, I’m convinced you should be just as concerned about jobs that promise high stimulation and total comfort. When you let yourself be sold on easy hours, outrageous perks, and glib assurances about the project you’ll join and the technologies you’ll get to play with, you’ve just agreed to let your future become someone else’s.

I hate the construct of work-life balance for the same reason I love engineering: the reality is dynamic and generative, not zero-sum. It’s about transcending the constraints of simplistic calculations. Creating the life and the work you want are by no means easy challenges, but they are absolutely attainable. What’s not realistic is thinking you can own your future and be comfortable at the same time. Grit, not virtuosity, will be the biggest determinant of your success, for reasons I’ll explore in a bit. 

At the same time, grit and discipline aren’t enough. You need purpose. And I can state categorically that the purpose you discover, with all the sacrifice that entails, will be more motivating and meaningful than the one handed to you in the form of some glamorous project that, realistically, will succeed or fail regardless of your involvement. 

The catch, of course, is that true purpose doesn’t sit around waiting to be discovered. It requires constant pursuit. Here’s what I’ve learned from a decade and a half of sprinting.

There’s no time like now. As learning animals, we’re subject to various ages of cognitive potency. As a young child, your aptitude for acquiring a language or learning an instrument is at its peak. Accordingly, as a professional, your early 20s are the most formative stage. It is absolutely critical to make the most of this time because the pace of learning grows slower and more incremental as you age, whether we care to admit it or not. Of course, you can always learn new things, but most often the wisdom of experience is largely the result of earlier realizations having the time to compound into something richer.

The place of maximal learning is often at the point of significant pain. It’s not just about having a more pliable mind - grit, and its close cousin, resilience, are essential for taking your intelligence further than it can get on its own. And while intelligence compounds, grit degrades in the vast majority of cases. Regardless, grit isn’t something you can suddenly develop after a life of leisure. For these reasons, owning your future means choosing grit over the allure of a predictable pace.

Of course, you still need to hold a pace. Studies show that marathoners/endurance runners do tons of self-talk to push past the pain. “It’s a marathon, not a sprint” is a well-worn cliché, but it’s striking how often it’s invoked to rationalize comfort as opposed to promoting sustained excellence. Don’t think for a second that elite marathoners have trained to the point that a sub-six-minute mile pace is comfortable. It’s incredibly painful. What separates the truly elite is having found a purpose that makes the sacrifice acceptable.

At the same time, complete self-motivation is incredibly rare. It’s probably not a realistic goal, and that’s fine. Find the people who will sharpen your resolve as well as your ideas. Again, your first step matters. If you choose a job for work-life balance, chances are, so did everyone who came before. Talent is one thing when evaluating your future teammates, but ask yourself this: when you need models and inspiration to be more than you are, will you be able to find them?  Where will your gamma radiation come from?

You can find your zen in stressful, chaotic times. In fact, I’d argue this is the norm, even the ideal, for 20-somethings. Some adrenaline is good for your performance. Not having time to waste requires you to focus on the essentials and develop an innate sense of direction. That way, when you do eventually get to let your mind wander, it will be in rewarding directions. These days, I build in calendar blocks for “brain space”. That wouldn’t have made sense 10 or even 5 years ago – not because I have more free time now, but because, early in your career, you learn much more by doing than reflecting. And this can be the difference between creating your future and receiving it in a fancy envelope. 

At the limit, you probably should care about work-life balance – it’s not going to remain a static thing your whole life. But at the margin, as a new grad, you should focus on the most important problem. Find the thing that motivates you, work your ass off, learn as much as you can, and trust that today’s gains will compound well into the future – your future.

Working your ass off isn’t bleak – it’s quite the opposite. Provided there’s a purpose, sprinting at an unsustainable pace is an act of tremendous optimism. A mindset of premature retirement might sound rosy, but in truth it’s deeply cynical and extraordinarily insidious – much more so than being overpaid or overpraised, and much harder to correct.

But back to the concept of caring about work-life balance at the limit, how do you know where the limit is? Isn’t life fundamentally uncertain? Here’s what I’ve come to realize: you can’t pre-emptively retire without doing the work that makes you appreciate the chance to rest. Maybe you can, but assuming you have something to contribute, it’s going to be an empty reward. Sacrificing your potential to comfort isn’t a hedge against an early death – it IS an early death. As Emerson wrote in Self-Reliance, "Life only avails, not the having lived. Power ceases in the instant of repose; it resides in the moment of transition from a past to a new state, in the shooting of the gulf, in the darting to an aim.” 

We’ve been told over and over to choose life over work in order to achieve balance. I’m urging you, especially at the dawn of your career, to instead choose life over balance, and make the work so meaningful that you wouldn’t want it to exist as a distinct concept. This is how you ensure that your future remains yours.

Gamma Radiation: The Incredible Hulk As a Model for Personal Growth

If I've learned one thing from observing great individuals (and great companies), it's that greatness is inherently asymmetric. If that sounds dangerous, it is. Any scholar of counterterrorism or cyber war will tell you that asymmetric threats require asymmetric countermeasures, but more fundamentally, they require asymmetric people. When forming a team, I don't want to assemble a polite roster of cross-functional professionals. I want the X-Men: a medley of mutants united for good.

The Incredible Hulk, in particular, embodies the growth model I've come to believe is necessary for achieving greatness. For those of you who were popular in junior high school: the Hulk began as the mild-mannered though brilliant physicist Bruce Banner, who was transformed into the Hulk after exposure to gamma radiation from a nuclear explosion. From then on, Bruce Banner would morph into the Hulk during times of extreme stress or exigency. While the Hulk’s ability to retain Banner’s intelligence evolved over the series, it’s safe to say he was never the same again.

So what does growth for greatness look like? It begins with accepting unevenness, and reaches its potential through a conscious nurturing of extremes. But introspection and diligence are not enough.  Real growth is scary, hard, periodic, and responsive to your environment. The gamma ray might seem like an extreme metaphor for catalyzing growth, but if you want to truly achieve greatness, it’s much closer to the reality than the safe, comfortable models we're taught to accept. You need periodic radiation, not lifting a little more weight every day. In the short term, linear development predictably leads to linear results, and in the long term, factoring in drag and the insidious effects of growing comfortable, the result is decline, as Stephen Cohen eloquently described in his conversation with Peter Thiel and Max Levchin. Intelligence is compounding all the time, and correspondingly, so are complacency and missed opportunity.

In practice, it's usually not so straightforward to go looking for gamma rays out of the gate, but there are some obvious pitfalls you can avoid along the way. One of the most important: don’t fall prey to the illusion of growth promoted by the corporate ladder. It’s a crutch as much as a way up (and tech roles/companies are NOT immune - if you see Software Engineer I, Software Engineer II, etc., that’s a ladder). The ladder can be partially explained by convenience, or convention, but ultimately it’s there to assuage your fears – not only of not reaching your potential, but of incubating a potential that doesn’t fit the bounds. While on the ladder, you can only fall so low or climb so high. It's a false frame, not only because hierarchy is such a poor proxy for impact, but especially because it lulls you into thinking achievement falls within a normal distribution.

It would be disingenuous not to acknowledge that becoming a mutant is not all upside. Make no mistake, gamma radiation can hurt you. There is always the risk of failure, and win or lose, there will be scar tissue. In that sense the ladder is also a safety net. As an aspiring mutant, you shouldn’t let false bravado obscure this realization – just recognize that in choosing the ladder you’re explicitly shorting your potential and putting the protection of your ego ahead of your outcome. As an aspiring Professor X, accept that there will be failures, and that you’ll need to make highly imperfect tradeoffs on false positives vs. false negatives when hiring and developing talent.

Mentorship is likewise critical when directing mutant powers towards the greatest possible good. The X-Men would not have become X-Men without Professor X’s School for Gifted Youngsters. But again, the standard model doesn't apply. To begin with, you need mutants to mentor mutants, and in many cases, to provide the initial dose of radiation. Otherwise, even the best institution of higher learning will predictably devolve into a lemming academy. 

Once mutation is in process, one of the greatest aspects of mentorship is, paradoxically, autonomy. This is especially important because extreme growth doesn't happen on schedule, but is subject to periods of intense activity. As a mentor of mutants, you need to be attuned to these periods, and when they come, confer even more autonomy. Above all, fight your instinct to handhold (hard to do when both hands are always clenched in a fist anyway!).

The final part of the equation is to seek out the greatest challenges you can, both in terms of meaning and difficulty. And this is perhaps the greatest beauty of the gamma radiation metaphor. It's not just about unimaginable intensity. It's about an external reality leaving an indelible imprint on your internal reality. There are some gifts that are only fully formed through creative destruction, and it’s these gifts, in turn, that allow you to create new external realities - in other words, to change the world.

Don't Let Techno-Hedonism Waste Your Potential

This post is about the current insanity in Silicon Valley, but I don't mean the valuations - at least not the ones everyone is talking about. Instead, I want to talk about how you value something much more important than common stock: yourself. 

Over the course of thousands of overwhelmingly positive interactions with top CS students over the past few years, what's scared me the most is the tendency to think of your future job primarily as a vehicle for certain types of projects. This is, in fact, one of the worst possible reasons to take any job.

In many ways this line of thinking isn't so surprising. Perhaps because the long-theorized tech crash hasn't happened, and most companies (even relatively innovative ones) think of hiring as filling slots, our economy continues to promote skills over aptitude and ability. And even the best schools are much more effective at teaching subjects than synthesis. As a result, even in an age when software engineers are starting to be properly valued, there is a real risk of being commoditized - ironically, by yourself. 

Apart from an earnest desire to cultivate "valuable" skills, however, is something I'll call techno-hedonism. Besides just thinking of your job in terms of projects, this means evaluating projects by how pleasurable they are to you versus how much good you're creating in the world. As a result, topics that could be invaluable as part of a greater whole - especially things like machine learning - become playthings. And this is how young people who honestly thought they were going to change the world end up being paid too much to quit to serve ads more effectively.  In the degenerate case your employer becomes something to be agnostic about, merely a vehicle to work on a specific project of hedonistic desire.

Rather than deciding based principally on the project, I would suggest there are two questions that should inform everything else: Do you believe in the institution? And do you believe in yourself?

Evaluating the institution involves many more questions, but I'd argue these few are most important: Is there a real opportunity to make a positive impact? If so, is the team equal to the challenge, or (more likely) on the path to getting there? Is there a possibility of surviving as a standalone entity? This is almost impossible to know ex ante, but if the stated goal is to get acquired, that should tell you something. Do they have a real mission and culture, or just hedonism and homogeneity? Do they invest in an individual's growth, or just increased productivity?

By believing in yourself I don't mean projecting an arbitrary level of confidence - it requires a willingness to critically assess your strengths and weaknesses and reconcile them with an emerging and constantly evolving sense of purpose. This cannot happen overnight. If you're betting on your ability to do something important, you'll learn - piece by piece - to intuitively subordinate the process to the goal, and separate the act of discovery from the procedural. By contrast, if you're betting on your ability to stay fulfilled by repeatedly doing a series of tasks, however pleasurable, you're actually shorting yourself.

It's not so difficult to see the surface characteristics of an institution for what they are - when you become enamored of a slick office space, at least you know you're being shallow. Becoming enamored of projects, on the other hand, feels like investing in your most important assets when in fact you may be stunting them.

I want to emphasize that this is not happy talk. It is unbelievably hard work. Having it all figured out now is the unrealistic part - and if you actually do succeed in your design, that's when the reality often proves to be bleakest.

Engineering is fundamentally generative. Specific implementations may be highly deterministic, but the defining character of the work is possibility. It's understandable to want to cling to certainties, especially after hearing what a dark and chaotic world it is for most of your conscious life. I say: embrace conscious ambiguity. The alternative is a predetermined future - one that truly belongs to the robots. You are not a lottery ticket - but neither are you an algorithm.

 

Startup Dharma

“Do important things” is often invoked as a rallying cry in these pages, but this time I want to talk about something more important than innovation, invention, entrepreneurship, and all the rest. I want to talk about dharma. More specifically, I want to talk about your dharma.

Classically speaking, dharma represents both cosmic law and order - our universal duty - as well as reality itself. Upholding your dharma, then, means both fulfilling your ultimate responsibility and upholding the truth. It is no accident that I say your dharma. The truth, while in one sense absolute, is also deeply personal, and rooted in the enduring power of the individual.

With commitment to the truth as the first principle, your code of conduct is simple: When you see something that's broken or bad, you have to say something about it or fix it yourself. Just as importantly, when you hear something, listen. It’s not just about the success of the organization, but also a moral imperative not to let anyone you care about fly off a cliff.

In practice, this is extremely painful. Honest, unadulterated feedback is as emotionally alien as it is intellectually obvious, whether giving or receiving. Confronting the truth together is a social endeavor, yet it flies in the face of all social convention and pleasantries. Unlike you or me, the truth doesn’t have feelings – but that is precisely why it’s the truth.

Of course, it’s easier to face hard truths when we talk about collective failures. These are important to address, and can be invaluable object lessons for the organization writ large. Individual failures, however, are the ones you, and only you, can control. Accordingly, the most painful and most vital incarnation of the truth is individual feedback – all in the service of discovering and fulfilling your dharma.

This matters on multiple levels. In practical terms, nothing happens unless you make it happen. Day to day, the bias towards action is one of the most valuable things you can institute.  Without your concerted action, things like planning, analysis, strategy, et cetera are just distractions from an empty center.

However, dharma is also about unlocking the essence of the individual. Facing your dharma means stripping away the pretense, delusion, and distractions to reveal who you are and what you are meant to be doing. You uphold your dharma in the service of both the individual and the collective. For the whole to be greater than the sum of its parts, the parts cannot seek anonymity and cover in the whole.

Likewise, true feedback comes from a foundation of investment in the individual. The underlying intentions need to include the opportunity to grow from mistakes and the willingness to help someone get there. We all like to talk about investing in people, but it’s important to internalize that hiring isn’t the end of the road. The hard part starts after - especially for the most innately talented individuals. If you don’t give them feedback, you’re just as guilty of coasting on their talent as they are, and you will inevitably reap the consequences.

As many a wise master has observed, there are countless paths to dharma – indeed, there are as many forms of dharma as there are seekers. Everyone arrives at the truth in a different way, as evidenced by leaders as diverse as Ray Dalio, Prof. Carole Robin, and Peter Thiel.

Ray Dalio’s Principles is more than required reading at Bridgewater, and Bridgewater’s culture of “radical transparency” is almost infamous for the degree to which honest feedback is emphasized. Dalio’s most basic principle states:

“Truth - more precisely, an accurate understanding of reality - is the essential foundation for producing good outcomes.”

It seems simple enough, but the real genius of Principles is how he mediates between the truth as an absolute and the individual experience: 

“Above all else, I want you to think for yourself - to decide 1) what you want, 2) what is true and 3) what to do about it.” 

Dalio also caveats that “you can probably get what you want out of life if you can suspend your ego”, and the same can be said of feedback. For most of us, this will be the hardest battle.

One of Peter Thiel’s great maxims is “Listen carefully to smart people with whom you disagree.” Thiel is a renowned contrarian, but he didn’t hone his worldview in a vacuum. One of his greatest strengths has been assembling teams with the built-in structural tension needed to confront bias and complacency head-on and do transformative things. To be frank, this includes the ability to pre-select for thick skin. No one who was at PayPal in the early days would describe it as a touchy-feely place – but factoring in the type of talent it attracted, that was part of the genius of the design. Pre-eBay PayPal practiced a form of directness that probably wouldn’t have flown at most other companies – but look at the record of the PayPal mafia versus any other group of corporate alumni.

Professor Carole Robin of Stanford’s Graduate School of Business is best known for her popular “Interpersonal Dynamics” course, affectionately nicknamed “Touchy Feely”. As Professor Robin describes it, “It's about learning how to create productive professional relationships,” and feedback is a key ingredient. Robin’s approach may seem like a high-empathy yin to the low-empathy yang of radical transparency or the PayPal model, but many of the basics are the same. Robin advises doing it early, and above all practicing often. She also emphasizes the need to avoid shaming and to “stay on your side of the net” by not making the critique personal – in other words, don’t aim for the ego. Finally, listening is crucial – in Touchy-Feely speak, “It takes two to know one.”

Recognizing there are many paths to dharma, where do you start? The most important thing is to take that first step, practicing feedback early and often, and making it a non-negotiable component of every consequential effort. To have any chance of sticking, it has to become the new normal. 

One of the great tragedies of working life is the tendency to treat feedback like taxes: a necessary evil to be addressed annually or quarterly. Too often, feedback is also synonymous with either punitive or back-patting exercises. You need to inoculate people against these associations by starting early, before there’s a crisis. Of course, as new people arrive, you will be forced to begin the acclimation process from scratch, because organizations that practice truthful feedback as a way of life are rare, and individuals for whom it comes naturally are rarer still.  

Another complication is that people tend to be lopsided in their feedback. Those with lower empathy have the easiest time giving feedback. It’s intuitive, even reflexive, but these people tend to be terrible at giving feedback in a diplomatic way.  This is your opportunity to suspend the ego, assume it’s not a personal attack, and consider the substance of what is being said. Eventually, you realize that seemingly low-empathy individuals are often just carrying out their dharma. Make no mistake, it is a gift.

On the other hand, those with high empathy are best suited to give feedback diplomatically, but struggle to make it appropriately critical because the very thought of doing so causes pain. An empathetic style can also be a gift, but only when personal sensitivity is complemented by the courage to overcome the inertial bias against criticism. Above all, recall that this is the real world. There is no perfect Goldilocks balance. The key is to get started with the ingredients you already have.

You should also consider the source – except when you shouldn’t. Remember Peter Thiel’s smart people who disagree with you. With any luck, you will have colleagues who possess deep credibility in areas you don’t, and you should make extra effort to listen to them. On the other hand, sometimes incisive and true feedback will come from people with no apparent legitimacy. When your ego cries out “who the hell are you?”, turn the other way and focus on the substance of the criticism.

What if you’re wrong? This is always a possibility, giving or receiving, but because you are already thinking critically, it’s not a meaningful risk. If there is any possibility in your mind that something is wrong, confront it together. Either you avert disaster, or you discover why it was in fact right. Both are preferred outcomes.

Feedback is especially hard at any meaningful scale. The larger you get, the tougher it is to guarantee a high standard of intellectual honesty, while cracks in the foundation become increasingly subtle and imperceptible. In many ways, it’s good to maintain a healthy reserve of fear of what you might become - look no further than our political system to see what happens when the truth is focus-grouped beyond all recognition.

As with almost any worthy endeavor, the pursuit of your dharma involves constantly raising the bar. It is never easy to ask people to be more than they have been, and to address when something has stopped working, or never did. It is doubly hard because these realizations often come when people are working their absolute hardest. As painful as it is to admit that someone’s best isn’t good enough, it doesn’t make it any less true. In fact, it becomes that much more important.

It’s fine to say failure is not an option in moments of bravado, but you know inside that abolishing failure – at least the lower-case kind – is not only unrealistic, but leads to denial and paralysis. It’s entirely reasonable, on the other hand, to insist that you won’t accept failure without feedback. Only by confronting the day-to-day truth can you hope to unlock the greater truth of your highest potential, as an organization and as individuals. That is good karma. 

 

Optics and the Suppression of Innovation

One of the more pernicious, and also subtler, difficulties of governance is something I’ll call the tyranny of optics. Across the organizational spectrum, you find systems that are designed to appear transparent, fair, and free of conflicts of interest. Yet all too often, the result is gridlock and bad outcomes for the honest actors, while actual corruption is only pushed deeper underground. It’s the ultimate bitter irony: instead of functional compromise, you get institutionalized disaster.

The legacy government acquisitions system is a perfect example. The driving force is typically not a desired outcome, but rather a long list of requirements established to pass the eye test. The unintended consequences of these requirements, combined with their tendency to stifle innovation, result in the worst of all possible worlds - for the mission, the taxpayer, and the many people doing their best to both produce and acquire high-quality technology.

One of the greatest pitfalls is contracting on a cost-plus basis. This is largely a function of optics, as well as the inherent difficulty of placing value on high-tech innovation (and the age-old confusion of cost with value). The problem is that a fixed profit margin means you can only make money by increasing revenue – there’s no incentive to increase efficiency, even though efficiency is the whole basis of Moore’s Law. In essence, you substitute accounting for accountability, and the effect is that the true value of technology, and the true potential for innovation, are obscured by the very mechanism meant to ensure transparency. It’s also worth emphasizing that for the vendor, it’s about simple math, not corruption. When you can only make money on the top line, a rational actor has no choice but to conform or find a different business.

Furthermore, the system is designed to evaluate the surface qualifications of a vendor to perform work at the government’s risk – have they done something like this before for similar clientele? When building massive hardware investments such as aircraft, this might seem like a reasonable question (though the success of SpaceX has chipped away significantly at the conventional wisdom). When applied to information technology, it’s much more obvious what an arbitrary standard this is - imagine if Larry Page and Sergey Brin had been subjected to these considerations when they were raising capital. The consequence is that the number of “qualified” contenders remains flat over time. This, in turn, creates an anti-competitive vicious cycle in which the presumed ability to deliver is based on perceived qualifications, rather than those qualifications being based on the actual ability to deliver.

Of course, technology projects fail all the time – but because optics are paramount, there’s no willingness for the customer or vendor to admit failure. Instead, we keep sinking money into the same projects until any resolution seems palatable, or the original need is forgotten. Paradoxically, the system demands perfection, yet actual failure is shockingly acceptable – so long as the vendors are “qualified”. Because these failures are overseen by familiar faces, the vetting committee still boasts a perfect record. It’s like a dystopian version of Blackstone’s formulation: better ten credentialed companies should fail than one startup. Consequently, no one is willing to take the kind of development risks that could yield transformative discoveries. Failures that amount to sunk costs are acceptable, while the ones that could really teach us something are unthinkable.

A highly respected veteran of Congress and the Executive Branch once told me that one of the more underreported challenges of DC was that killing earmarks only removed much-needed grease from the system, predictably causing the machinery to grind to a halt. Ironically, earmarks connoted a certain honesty because everyone knew what was going on. The practice allowed for plenty of valuable give-and-take; the real problem was that in many cases the optics were just too shaky.

Since the earmark moratorium, we’ve been treated to an endless game of budgetary chicken that has certainly led to worse outcomes for taxpayers than earmarks ever did. Meanwhile, conflicts of interest haven’t gone anywhere – they’ve just reappeared in the form of more insidious slush funds and legislative blackmail techniques. Technology acquisitions and Congressional deal-making might appear to be very different beasts, but in both cases, the substance of compromise and pragmatism has been replaced by the rigid ideology of covering your backside at all costs. When optics are the primary concern, you can’t even have token cooperation, let alone the partnership needed to solve hard problems.

Bill and Melinda Gates’ recent Wall Street Journal editorial, Three Myths on the World’s Poor, exposes the tragic result of focusing on optics above everything else. Only a small percentage of foreign aid is lost to corruption, but that part always receives vastly disproportionate attention. If the absence of any perceived impropriety became the design criteria for providing aid or philanthropy, we’d only hurt the very people who need the most help. As the authors poignantly ask, “Suppose small-scale corruption amounts to a 2% tax on the cost of saving a life. We should try to cut that. But if we can't, should we stop trying to save those lives?”

The tax metaphor also helps to expose the rampant cynicism that preys on optical controversies. Almost no one would consider a small tax, or other nominal costs of doing business, a good reason to abandon an overwhelmingly profitable enterprise. Why should the criteria be impossibly strict when we stand to gain lives as opposed to dollars? Perhaps better than anything else, the humanitarian aid challenge reveals the logical conclusion of elevating optics above everything else: since a perfect solution is impossible, we’re better off doing nothing.

Every election cycle, someone promises to run the government like a business. Setting aside whether this is desirable or feasible, the obvious challenge is that the optics become most restrictive when the government bears the risk (as businesses generally do). Yet vast opportunities exist for government to transfer risk from taxpayers to suppliers. Imagine a marketplace where vendors can only compete if they guarantee an outcome or your money back. Optics would revert to their proper place: still a factor, but far from being the first or only consideration.

By ending the charade of demanding perfection, we can stop wasting time on the fantasy of eliminating risk and instead focus on the real work of managing it. When you practice the art of the possible, paint will inevitably splatter – but to a realist, the result is infinitely more attractive than an ideal that will never be achieved.

A Lesson From the Affordable Care Act Rollout

Without commenting at all on the policy wisdom of the Affordable Care Act, it’s clear that the rollout of Healthcare.gov has been disastrous. This has been chronicled more diligently elsewhere, but can be summed up by noting that, while Healthcare.gov was plagued with bugs, crashes, and general confusion, a team of three college students replicated much of the desired functionality of the site in a few days. Of course, the alternative site, HealthSherpa, does not handle the user or data scale of healthcare.gov or perform the most complex operations of applying for coverage, but the contrast between a site built for free and the ~$600+ million obligated for healthcare.gov is sobering.

We can draw a few lessons from this affair.  The first is that it represents a deep structural problem of government IT projects. The process used to bid out and build healthcare.gov was not, contrary to what you might have heard, especially unique or nefarious.  On the contrary, it represents the norm for large federal IT projects: mandating what should be straightforward products to be built from scratch in many ponderous phases, replete with massive sets of requirements and a commensurately high number of billable hours. 

The major difference is that this time, the users are the American people. The frustration of grappling with subpar technology is the same experienced daily by some of the most vital people in our public service ranks. Soldiers, intelligence analysts, law enforcement officers, and veterans’ care workers, to name just a few, are routinely handed tools that are barely functional and told to simply “make it work”. This is by no means meant to minimize the headaches associated with healthcare.gov – on the contrary, it points to the need for real, systemic change.

There are two fundamental flaws at work in the legacy government IT acquisitions model. The first is that the same procedures used to acquire tanks and aircraft carriers are used to build software. Yet software development is by nature a creative, generative, iterative process, not a static set of requirements that won’t change significantly over the lifecycle of the product.  And while good software is never truly finished, the essential building blocks can often be delivered right away - the key is that you’re creating a basis for iteration and creative enhancement, not obediently following the same blueprint for years at a time.

The second, and subtler, flaw is the failure to recognize that America in general, and Silicon Valley in particular, is unique in its ability to build software.  Many remarkable advantages of American life have contributed, in turn, to our dominance in software development. Pondering an increasingly data-driven future, our abundance of software talent has to be considered one of America’s most strategic resources, and leveraged and fortified accordingly. Sadly, in the current IT acquisition landscape, armies of contractors are paid by the hour to produce a crude facsimile of what our best software artists could create for a tiny fraction of the cost - but ignoring such a precious asset would be a mistake at any price.

One great irony of the healthcare.gov fiasco is that a major rationale for the Affordable Care Act was the idea that Americans can do better than the legacy healthcare system – only to see what should have been a slam-dunk website rollout crippled from the beginning by the IT acquisitions machine, another legacy system. Regardless of one’s views about the law itself, though, one saving grace is made clear: if we want to do better, doing what we’re already the very best at seems like a good place to start.

]]>
Shyam Sankar
tag:shyamsankar.com,2013:Post/606601 2013-10-06T02:30:21Z 2018-02-08T06:55:55Z Quantum Mechanics of Software

One of the most fundamental human desires is to believe that something is either A or B, and many complex endeavors are compromised from the beginning by treating the A/B split as a first principle. Binary logic may explain well-understood processes, but eventually the old rules cease to apply, as with the failure of classical physics to explain phenomena at atomic and subatomic scales.  To understand quantum theory, you have to accept the wave-particle duality, and even then, it turns out that no one really knows why light exhibits both wave and particle properties.  We can observe, even predict, but not quite explain.

Startups are subject to similarly misunderstood dualities.  Simple minds want to know if winning depends more on doing A or B:  Should we move fast, or ship quality? Build footprint or monetize?  Optimize on breadth or depth?  The winner, however, realizes that you have to figure out a way to do both.  How this is accomplished is highly contextualized in practice, but it begins with the realization that you cannot have one without the other and hope to succeed.  If it were as simple as doing only one thing well, the success rate of venture capital would be much greater than 10%. And when you do succeed, as in quantum mechanics, recognizing that things work a certain way is more important than knowing why (for the purposes at hand, at least).

A venture also displays both continuous and discrete elements.  From a wide angle, the growth curve or product lifecycle may resemble a wave function, but it’s also extremely iterative, and is most efficient when individual iterations occur at consistent intervals.  Likewise, one characteristic is often expressed through the other, much as particle emissions are dependent on wave functions. The focus and abstraction needed to go broader also allows you to go deeper effectively.  Similarly, in the course of developing a vertical solution, you often end up sharpening your intuition about how to slice the problem horizontally.

When striving to achieve both A and B, you often need to consciously set up opposing forces to achieve your goals.  For example, you need hackers who are relentlessly focused on solving the customer’s problems, even if they’re comparatively poor at productization and long-term code stability, and you need artists who are relentlessly focused on productization and pristine architecture even if their sense of customer urgency leaves a lot to be desired.  How you make them work together productively is an art - there is always some violence, but it starts by recognizing you need both, and accepting that their interactions only need to be productive, not harmonious.  The results of this type of particle collision are very difficult to know ex ante, so the safest bet is to find the best exemplars you can of each type – people you would want to work with individually.

The need to harness opposing forces sometimes extends beyond types of goal orientation to personality types (though these often go hand in hand).  Again, it’s up for debate why this is the case, but the anecdotal evidence is extensive.  The classic example from quantum physics is Richard Feynman and Murray Gell-Mann’s collaboration on the theory of beta decay.  Feynman was famously mischievous and irrepressible, while Gell-Mann was almost painfully serious and methodical.  While they frequently found each other exasperating, their tension was tempered by strong mutual respect – an obvious but sometimes overlooked component in organizational design.

Conventional high-tech wisdom posits that among the qualities of “better”, “faster”, and “cheaper” you can only pick two.  With the right team, you can do extraordinary and counterintuitive things. You can be better, faster, and cheaper – you just can’t be better, faster, cheaper, and also comfortable, which is the true contradiction. At the risk of resorting to truisms, doing hard things is hard - comfort is simply not part of the equation.  As Feynman himself once quipped, “You don’t like it, go somewhere else!”


]]>
Shyam Sankar
tag:shyamsankar.com,2013:Post/579671 2013-05-18T23:02:59Z 2023-07-05T06:05:32Z Soup Tasting

Ray Dalio’s Principles is required reading at Bridgewater, and contains plenty of wisdom that resonates well beyond its original context. Far down on the list, at #139, we find: 

... 139) “Taste the soup.” A good restaurateur constantly tastes the food that is coming out of his kitchen and judges it against his vision of what is excellent. A good manager needs to do the same.

Soup tasting is hard and requires you to pierce comfortable levels of abstraction.  Often where there are bad outcomes, there is a gross lack of soup tasting, both because of inertial unwillingness to take a bite and because of ineffective gustation.

Amazon’s Jeff Bezos is the archetypal soup taster (among many other outstanding talents).  Bezos is renowned for the depth of his knowledge and the clarity of his insights (especially when making snap judgments), but equally important is his ability to get to the crux of seemingly complex matters in five questions or fewer.  It’s easy to forget how many complex decisions Amazon has faced over the years, and the fact that their success is often taken for granted is largely a tribute to Bezos’ ability to ask the right questions so incisively and consistently.

The importance of soup tasting seems intuitive enough, but how you develop the ability to taste soup well is one of the more underrated challenges of leadership for a number of reasons.  To begin with, there is never just one kind of soup.  The metaphor applies equally well to the commercial success of your business and the view from inside.  At the same time, not all soup is equally important, and even the most astute taster’s capacity is limited, so you need a focal point. As Bezos has often described, “We start with the customer and we work backwards.”

More fundamentally, soup tasting is largely about overcoming bias, which is generally a very difficult process.  It needs to be about fearless inquiry, not seeking reassurances. Anyone who has done any actual cooking has probably had the experience of asking someone else if a dish tastes funny, while silently convincing himself that the answer is no. Of course, if it does taste funny, being polite does the aspiring chef no favors.  For soup tasting to have any value as an exercise, you can’t be afraid of what you might discover. 

Soup tasting is as much art as science, and as such it is hard to turn it into a predictable framework.  Still, some basic principles apply:

  • It all starts with probing.  Any time you are presented with an assertion, whether it’s a project plan, forecast, or report, review it tenaciously.  If something isn't clear to you, probe down. If something strikes you as particularly important, probe down deeper. If there are implicit assumptions, challenge them.  Think of the annoying little kid who responds to everything by simply asking “why?” It seems repetitive, but if you proceed from the right starting questions you will quickly get to the heart of the matter.
  • Get closer to the problem.  Something about the soup seems off.  Now you need to taste it some more.  The first step in getting close to the problem is simply a more thorough probing.  If that doesn’t do the trick, you need to go down two or three levels, either by homing in on the most important things in your area of credibility, or by asking someone who is credible.  By the way, assessing who has credibility in what areas, beyond just being aware of their reputations, is its own important form of soup tasting.
  • Measure.  Soup-making, both literal and figurative, requires experimentation, and it’s one of the hallmarks of the Amazon approach.  Bezos places a premium on experiments that are measurable and produce hard data.  As he explained in Fast Company, “The great thing about fact-based decisions is that they overrule the hierarchy. The most junior person in the company can win an argument with the most senior person with a fact-based decision.”  At the same time, as Bezos will quickly tell you, “there’s this whole other set of decisions that you can’t ultimately boil down to a math problem” – hence you need to master the art as well as the science.

It’s also well worth considering what soup tasting is not:

  • It’s not micromanagement.  Micromanagement means telling people how to do something without having tasted the soup yourself, or dictating how to do something in an area where you lack credibility.
  • It’s not distrust.  Distrust is not a productive default position, but neither is blind trust. Real trust is developed by consistent soup tasting – as the old saying goes, “trust, but verify”.  Knowing which issues to escalate as priorities, and how to escalate them as a team, is also an art form, honed through soup tasting interactions.
  • It’s not indefinite, nor is it an end in itself.  You need to find the middle ground between an excessively laissez-faire approach and never-ending inspection.

The more soup you start to taste, the more you'll want to taste, but as with anything, you can overdo it – just as when you proofread for too long, you’re bound to miss something obvious. It is critical to cultivate credible soup tasters throughout the organization, but the transition from soup taster to meta-soup taster is a tough one. It only works if your trust has been validated, and it requires a great deal of intellectual honesty to avoid indulging in wishful thinking, feel-good exercises, or simply shedding responsibility. 

In the end, soup tasting is how you know what is true – “overcoming bias” and “intellectual honesty” are really just fancier ways of expressing this.  And the truth matters more than anything else.  In his introduction to Principles, Dalio states,

“I also believe that those principles that are most valuable to each of us come from our own encounters with reality and our reflections on these encounters – not from being taught and simply accepting someone else’s principles…. So, when digesting each principle, please…

…ask yourself: “Is it true?”


All soup tasting, ultimately, is a variation on this one simple yet profound question.

]]>
Shyam Sankar
tag:shyamsankar.com,2013:Post/518505 2013-04-27T18:37:28Z 2023-07-05T06:05:11Z The Nature of Goal Orientation

I think a lot about what specific competencies are needed when starting something, but even more fundamental is how someone approaches work (and life).  My experience is that there are goal-oriented people and there are process-oriented people.  Finding goal-oriented people is one of the most crucial determinants of startup success - no amount of expertise can substitute for goal orientation.

There is implicit bias in both orientations, but not all biases are created equal.  Goal orientation subordinates process to outcomes.  As a result, there is sometimes a tendency to ignore or undervalue the importance of frameworks, checklists, and details, though in my experience truly goal-oriented people are quite intuitive at abstracting useful and repeatable approaches from their experiences. Planning and process are also not the same thing – done right, planning is simply the division of larger goals into smaller ones.  Even so, goal orientation is a vastly preferable bias.  You can learn organization (and the most effective people are constantly re-learning it), but motivation is much harder.  By the same token, consultants can help to improve your processes, but they can’t define your goals for you.

Process orientation, on the other hand, actually subverts your goals, under the subtle guise of helping you achieve them.  Uncritical acceptance of process creates an alibi for failure.  When things go wrong, a process-oriented person thinks “I did all I could while following the process to the letter, so maybe it just wasn’t meant to be.” Without a healthy institutional skepticism, process easily becomes a goal in itself. To be fair, processes and goals can both be destructive if they are not subject to revision, but process is fundamentally tied to predictability and routine, whereas goals require constant thought and re-examination to remain effective.

The most inventive organizations are more concerned with limiting process than perfecting it. Apple’s revitalization began when they started to re-imagine a hardware problem (personal devices) as a software problem. If process had been the dominant consideration, Apple would have kept refining their old product lines until they faded into irrelevance.  By the same token, many enormous failures affecting society writ large can be attributed in part to relying on process while ignoring the substance (Enron, the subprime collapse, countless failed technology acquisitions).

Everyone claims to be goal-oriented (it’s probably one of the top resume clichés), but the norm is that people want to be told what to do.  Freedom is scary, partly because it is new and unfamiliar, but mostly because the onus will be on you to succeed once the security blanket of process is taken away.  Truly meritocratic and goal-oriented organizations are also quite rare, so it’s easy to mistake boredom and frustration with bureaucracy for real self-determination.  During both Internet bubbles, countless career big-company employees decided they wanted to “join a startup”, without really asking why or realizing that they were trying to be different in the exact same way as everyone else (the word “join” isn’t an accident either).   Ironically, when asked by hiring managers what they would bring to the table, these people would typically deliver lengthy homages to their current company’s processes. 

One of the most interesting things about goal and process orientation is which part is constitutional and which is cultural.  Some people are natural insurgents, who will orient and achieve the goal so intuitively that they may not even appear disruptive to the casual observer.  Others have been raised in cultures that value conformity and process.  Just as many genes are only expressed when the right stressors are present, a naturally goal-oriented person may not emerge until landing in the right environment. The converse is much less common, however – process-oriented people tend to be exposed fairly quickly in truly goal-oriented environments where there is little concept of playing along.

The conflict between goal and process orientation is exceptionally relevant to planning one’s career.  We’ve all seen picture-perfect, cookie-cutter resumes that are obviously a result of process orientation.  What’s more interesting is when people try to design rules and processes to reverse-engineer a major career shift.  There are plenty of “experts” who will tell you to get experience in the private sector before doing a stint in government (or vice versa), or that you should learn “fundamentals” at a Fortune 500 company before joining an early-stage startup.  With all due respect, these people completely miss the point of having goals.  It should be more obvious with really unorthodox career arcs, but even so, many people are apt to read about Steve Jobs and think “Ok, so I should drop out of college, but take a calligraphy class, and get fired from my own company before making a triumphant comeback.”

Of course, there are plenty of perfectly good environments for process-oriented people.  The problem is when they land in the wrong place and both the person and team suffer.  It really comes down to honestly understanding your strengths and weaknesses, as an individual and as an organization. 

]]>
Shyam Sankar
tag:shyamsankar.com,2013:Post/384947 2013-04-12T03:53:53Z 2022-10-23T02:57:03Z On the Joy of Renting

Ownership is the essence of the American Dream – or is it? The mortgage crisis certainly led many people to rethink the virtues of owning a home, but even in less dramatic markets, it’s a fair question.  There are many assumptions to be challenged and hidden costs to be considered.  Warren Buffett continues to bet heavily on housing, while Yale economist Robert Shiller contends that housing is an investment fad, with no net appreciation in the US market over 100 years.  Of course, as the author of the Shiller post points out, most of us are living in our homes, and the benefit is partly intangible.  But how much does the intangible actually depend on ownership as opposed to just being there?

Rental has always been a popular alternative for major, long-term-use assets with high maintenance costs.   Traditionally this has meant homes and cars, but they are just the beginning.  The convergence of low-friction technology, on-demand efficiencies, expanding tastes, and shrinking wallets has led to the explosion of the sharing economy, as reported by The Economist.  There are countless examples, each with its own intricacies: Rent The Runway, Amazon Web Services/EC2, ZipCar, Uber, even BlackJet. It’s about deciding not to own something you really don’t need to own yourself (and achieving better financial health as a result).  Increasingly, we have the option to spread out the long-term maintenance cost, which actually exceeds the acquisition cost for more assets than people tend to realize, while maintaining high availability. 

The sharing economy ranges from necessities such as housing and transportation to luxuries such as designer dresses and private jets, but necessities quickly become luxuries when acquired carelessly.  This is especially pertinent for government, but it’s not always obvious which costs justify themselves.  Traditionally, the Forest Service, Coast Guard, police, et cetera all maintained their own helicopters, for example.  Even if they were grounded 90% of the time, no one wanted to give up ownership if they had a choice.  Now that states are going broke, sharing is a much more palatable option, but it’s not just about cutting costs – you have to re-examine the incentives.  In government, one of the major drivers of ownership is funding.  It’s easier to get large capital funds for new assets because they are assumed to be investments – and investment has a return.  It’s much harder to get operational funding because that is a cost - and costs are bad, right? (How many times have you heard that renting is throwing money away?)  But what if that helicopter fleet is just a really bad investment? It becomes a lot easier to make that case if you can get a helicopter on short notice, probably based on a retainer and/or hourly use fee (similar to ZipCar).  A back-of-the-envelope sketch of that comparison appears below.
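To make the incentive question concrete, here is a minimal break-even sketch in Python.  Every figure is a hypothetical placeholder chosen to show the shape of the calculation, not real procurement data:

    # Back-of-the-envelope: own a helicopter outright vs. rent on demand.
    # All figures are hypothetical placeholders, not real procurement data.

    ACQUISITION_COST = 10_000_000    # purchase price, amortized over service life
    SERVICE_LIFE_YEARS = 20
    ANNUAL_FIXED_COST = 800_000      # hangar, insurance, crew, scheduled maintenance
    FLIGHT_HOURS_PER_YEAR = 300      # a mostly idle asset

    RETAINER_PER_YEAR = 250_000      # guaranteed-availability fee (ZipCar-style)
    RENTAL_RATE_PER_HOUR = 3_000

    own_per_year = ACQUISITION_COST / SERVICE_LIFE_YEARS + ANNUAL_FIXED_COST
    rent_per_year = RETAINER_PER_YEAR + RENTAL_RATE_PER_HOUR * FLIGHT_HOURS_PER_YEAR

    # Utilization above which owning becomes the cheaper option:
    break_even_hours = (own_per_year - RETAINER_PER_YEAR) / RENTAL_RATE_PER_HOUR

    print(f"Owning:  ${own_per_year:,.0f}/year")    # $1,300,000/year
    print(f"Renting: ${rent_per_year:,.0f}/year")   # $1,150,000/year
    print(f"Break-even: {break_even_hours:,.0f} flight hours/year")  # 350

Under these toy numbers, ownership only pays for itself above roughly 350 flight hours a year; a fleet that sits grounded most of the time never gets close, which is exactly why the capital-versus-operational funding bias matters.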


Setting aside the emotional appeal of ownership (as difficult as that may be), my thesis is that it is generally a bad idea to own an asset unless you have a specific and special competency in owning it.  This is the same for everything: housing, cars, servers - and especially software.

Cars are a tricky case, famously depreciating (up to 10%) the minute you drive them off the lot (a phrase so commonplace you probably finished it in your head). Many of us don’t know how to truly maintain our cars beyond the basics.  For occasional drivers, there are lighter options, such as ZipCar, but US infrastructure is still designed around individual drivers, and giving up your car can be very difficult if you don’t live in a city.  However, something like Sebastian Thrun's self-driving car work could someday open up a whole new world of on-demand transportation that is more efficient and safer than anything we have now.  Think about it: 97% of the time, your car is sitting around, taking up space, idle.

Servers, beyond the fixed costs, require hardware maintenance, networking, power and cooling.  Many servers require replacement after just a few years.  It’s much easier and lower overhead to simply rent the capacity you need - unless you are Google, Amazon, or the like, and have a special competency that requires you to maintain your own servers.

Software is often perfectly suited to on-demand delivery for predictable use cases, and software-as-a-service (SaaS) certainly qualifies as one of the major technology waves of recent years.  More and more, the prevailing sentiment is “why buy software when you can rent it?”, as reflected in Salesforce’s now-iconic logo. 

Of course, not all software needs can be satisfied by SaaS.  Then the relevant question is whether to build or buy, as opposed to rent or own, but the underlying considerations are similar (if quite a bit more complex).  My guiding principle is that you shouldn’t be building your own software unless you have a particular competency that requires it, or need to develop such a competency.

In keeping with the theme of recognizing our own biases, it’s important to separate the emotional resonance of ownership from the practical reality.  With software, the reality is that code depreciates incredibly fast, not to mention the continuous iteration and improvement required for software to stay relevant. Ownership bias is perhaps most frequent (and outsized) in government, where the idea of “owning” the code base has become hugely and irrationally popular.  In the vast majority of cases, “building” and subsequently owning your own software actually means contracting with private vendors to develop complex, bespoke systems that cost 10, even 100 times as much as an off-the-shelf product. 

There is an attractive yet perniciously false idea that once you build the software, it’s yours, free and clear.  The appeal is simple - people enjoy the feeling of ownership, and are naturally wary of being beholden to outside vendors.  But the reality is that you are paying down accrued technical debt all the time – just as you would maintain a house or car, except that a house or car isn’t expected to fundamentally change in a matter of months.  Furthermore, a bespoke project concentrates that debt with one client instead of amortizing it across all customers the way a productized solution does.  In a very cynical way, bespoke developers are smart to let the government own the source code. Not only does this prevent other customers from re-using the IP (and saving money on development), but it also makes the ongoing maintenance costs easier to justify because now, it’s their baby.
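The amortization point is worth a line of arithmetic.  A minimal sketch, with purely hypothetical figures:

    # Maintenance ("technical debt service") concentrated on one bespoke client
    # vs. amortized across a product's customer base. Figures are hypothetical.

    ANNUAL_MAINTENANCE = 5_000_000   # yearly cost of keeping the codebase current

    def cost_per_customer(n_customers):
        """Share of the maintenance burden borne by each customer."""
        return ANNUAL_MAINTENANCE / n_customers

    print(f"Bespoke client pays:   ${cost_per_customer(1):,.0f}/year")   # $5,000,000
    print(f"Product customer pays: ${cost_per_customer(50):,.0f}/year")  # $100,000

The same engineering work costs the bespoke client fifty times as much per year, and that gap widens every year the product’s customer base grows.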

The final point is that if you are going to buy, you need to make sure that the seller has a specific competency in software.  It might seem obvious, but more than any other product, you want to buy software from a software company. Rolls-Royce can build world-class cars and jet engines alike, but there isn’t really an analog in the world of aerospace companies and systems integrators that also attempt to build software.  The product lifecycle, pace of innovation, maintenance considerations, and above all the deltas between good and great all make software unique among industries.

]]>
Shyam Sankar
tag:shyamsankar.com,2013:Post/327554 2013-04-03T23:24:02Z 2013-10-08T16:31:39Z Mo Data, Mo Problems

If you’ve spent any time in high tech the last few years, you’ve probably heard the term “big data” more than you care to recall.  It’s become a constant refrain, and the subject of plenty of breathless cheerleading, much like “the cloud”, “social media”, and countless other trends that preceded it.  This is not to say that big data is not important, but context and meaning are essential.  Big data has many roles to play, but it’s not an end in itself, as Shira Ovide explains so concisely in her recent Wall Street Journal piece.

“Data for data’s sake” is the first major weakness of the big data obsession cited by Ovide, and it’s probably the most salient.  This is a classic case of valuing inputs over outputs – the idea that if we only collect enough data, good things will happen.  This sort of magical thinking is somewhat reminiscent of past crazes for purely A.I./algorithmic approaches to data science, but at least in those cases there was some concept of outputs and programmatic attempts at sense-making. 

Of course, big data also isn’t going anywhere, and many worthy analytical endeavors demand that we address it.  However, it is essential to distinguish between warehousing, searching and indexing, and actual analysis.  Focusing solely on storage and performance creates a sort of computational uncertainty principle, where the more we know, the less we understand.

As Ovide also notes, there is also a critical gap in analytical talent, which big data has done more to expose than mitigate.   Computing power can go a long way towards making big data manageable and facilitating insight – if paired with a sufficient dose of human ingenuity.  Simply put, humans and computers need each other.  "Pattern recognition” is frequently cited as a benefit of a big data approach, but computers can't learn to spot patterns they've never seen.  As a result, the value of the analyst in defining the correct patterns and heuristics becomes all the more important. 
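To make that concrete, here is a toy sketch of what an analyst-authored heuristic looks like in code.  The scenario, names, and threshold are all hypothetical; the point is only that a human had to articulate the pattern before any machine could scan for it:

    # Toy analyst-defined heuristic: flag wire transfers kept just under a
    # reporting threshold ("structuring"). All names and values are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Transfer:
        account: str
        amount: float

    REPORTING_THRESHOLD = 10_000.0

    def flag_structuring(transfers):
        """Return transfers that sit suspiciously close below the threshold."""
        return [t for t in transfers
                if 0.9 * REPORTING_THRESHOLD <= t.amount < REPORTING_THRESHOLD]

    suspicious = flag_structuring([
        Transfer("A-100", 9_800.0),   # just under the threshold: flagged
        Transfer("B-200", 14_500.0),  # over the threshold: reported anyway
        Transfer("C-300", 2_400.0),   # ordinary activity: ignored
    ])
    print([t.account for t in suspicious])  # ['A-100']

No algorithm discovers this rule unaided; an analyst who has seen structuring before encodes it, and the machine then applies it at a scale no human could.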

Appropriately enough, the most valuable and elusive elements lurking within big datasets are often human: fast-moving targets such as terrorists, cyber criminals, rogue traders, and disease carriers who tend to slip through the cracks when algorithms are deployed as-is and left unattended.  The old playground retort that it “takes one to know one” actually applies quite well to these types of situations.

Human capital is a key part of the equation, but it’s not enough to acquire the right talent – you need to address the inevitable organizational challenges that come with retooling for a big data future.  Ovide notes that many companies are installing “Chief Analytics Officers”, and while I want to reserve judgment, the cynic in me suspects this reflects the bias of large organizations to centralize power and create new titles as a first line of defense against unfamiliar problems.  A chief analytics officer could be the catalyst to instill readiness and analytical rigor throughout the organization, but whether this reinforces or dilutes the perception that big data is everyone’s concern is a fair question.

More than anything else, I would analogize the challenges of big data to the differences between conventional warfare and counter-insurgency.  In conventional warfare, the targets are distinct and obvious.  In counter-insurgency, the enemy is hiding among the population.  Much as you can occupy an entire country without knowing what’s really going on outside the wire, you can warehouse and perhaps even index massive data stores without producing actionable insights.  Effective big data approaches, like effective counterinsurgency, require the right balance of resources, sheer power, ingenuity, and strong and constant focus on outcomes.  In the long run, the willingness to pursue a population-centric strategy may well prove to be the difference.

]]>
Shyam Sankar
tag:shyamsankar.com,2013:Post/243840 2013-03-27T23:46:35Z 2013-10-08T16:13:37Z 1776 The Ultimate Story of Entrepreneurship

David McCullough’s 1776 is, to my mind, the ultimate story of entrepreneurship.  Starting a company is challenging enough - now imagine starting a country!  Although many orders of magnitude more complex, America’s founding has much to teach entrepreneurs of all varieties.  And given this heritage, it should also come as no surprise that the United States remains the best place in the world to start something new. 

One of the most valuable things 1776 imparts is an appreciation for the incredibly hard fight endured by the Continental army.  If your most recent lesson on the American Revolution came from a high school textbook, you might dimly recall a few triumphant battles and Valley Forge.  1776 paints a vivid picture of the sheer misery and constant trials of the war – trials few could have anticipated.  The Continental Army’s perseverance is even more impressive when you realize that the Treaty of Paris wasn’t signed until 1783.  For the modern reader, it’s a nuanced lesson: on one hand, you need to be realistic about the challenge ahead, but at the same time, you have no way of really knowing.

The parallels between startups and the Continental army are fascinating.  Some quick observations:

  • Chaos: Compared to the British army, the Continental army seemed completely chaotic. There were no well-defined roles and no visible hierarchy among these ragtag, shoeless countrymen who had taken up arms.  Of course, some of this chaos was real and some was perceived.  The relevant point when starting anything is not how to eliminate chaos, but rather which elements of chaos should be tackled in what order.  Do you address real organizational challenges, or just shuffle everyone’s title? This distinction escaped the British, who underestimated the strength and ability of the “rebels” simply because they looked like a mess. 
  • Meritocracy.  Nathanael Greene and Henry Knox are two of the better examples.  Greene, a Rhode Island Quaker who had never been in battle before, became Washington's most trusted general due to his exceptional competence and dedication.  Knox was an obese 25-year-old who rose to the rank of Colonel.  He thought up the mission to secure artillery from Ticonderoga, without which the Continental army would have had no such capability. 
  • Talent: Despite Washington’s minor experience in the French and Indian War, his principal strength was not military strategy (in fact, his advisors staved off disaster more than once by convincing him not to do something).  His real superpower was his ability to quickly determine who was talented at what. 
  • Food: Food was critical to the Continental army.  Certainly there were times where they were on the move and hardly ate for days on end.  While food was always scarce, the fact that the Army was actually able to feed people with some consistency was critical. The modern startup is obviously not directly comparable, but we’ve seen time and again how providing food pays for itself many times over in terms of focus, productivity and commitment.

But more than simple observations and parallels, there are some real takeaways and strategies for anyone who aspires to start something extraordinary:   

Be Ruthless.

I was shocked by how many times during the course of battle the British would halt their movement to rest or make porridge or something completely non-essential.  There were countless occasions where the side with the advantage could have ended the war, had they only pressed on.  Their reasons should sound a cautionary note even now - stop because it is getting dark?  Stop because that was the plan (despite the ground truth)?  Worst of all: stop because we can finish the job more comfortably tomorrow. 

After routing the Americans and forcing them across a bridge, British General Cornwallis decided to rest.  The Americans retreated brilliantly and swiftly into the night. This was not the Continental Army's first such retreat, so it’s hard to imagine how Cornwallis did not realize the significant risk they posed. Why didn't he send out patrols? Most likely, he thought he would win tomorrow regardless, and preferred not to win under uncomfortable circumstances.  After the fact, he said that he would have kept going, whatever the risks, no matter the orders, if he had only known he would have caught Washington.  The lesson:  Be ruthless as a default setting, not just because victory is seemingly at hand.

Don't Get Overconfident.

Nearly every major mistake by either side in the 1776 campaign was a result of overconfidence.  Minor victories would lead commanders to discard their hard-won knowledge, resulting in terrible decisions.  The tendency to let encouraging signs override our better judgment is actually a fundamental human cognitive bias.  If you’re interested in learning how to recognize and defeat all manner of non-rational thinking, make it a point to read Overcoming Bias.

Don't Waste Time Politicking.

General Charles Lee felt slighted that the less experienced George Washington was given command of the Continental army, and constantly sought to undermine him.  When Washington ordered Lee to bring his forces to New Jersey, Lee dawdled, and was captured by the British while seeking some female companionship in a tavern.  Lee was marched to New York in his nightgown, and soon defected.  Much more devastating, however, was a series of letters to Lee from Washington's close advisor and friend Joseph Reed, detailing Reed’s disappointment with Washington.  Why couldn’t Reed have an honest, face-to-face conversation with his brother in arms to sort through the issues?  In any vital endeavor, there is too much at stake to have closed communications or privately nurse resentments.

It ain't over 'til it's over.

Time after time, each side thought a specific battle was going to be decisive.  In retrospect, it is amazing how incredibly wrong they were, and how often.  So how do you respond? There is a fine line between being jaded and being realistic. Starting something invariably requires commitment in the face of uncertainty.  For this reason, I’d argue that it’s better to be optimistic (even if slightly naïve) than completely cynical, but again, the key is to be aware of our biases.


]]>
Shyam Sankar
tag:shyamsankar.com,2013:Post/193578 2013-03-20T23:30:48Z 2015-05-11T01:32:09Z Business Schools and Employability

According to a recent Wall Street Journal article, business schools are placing increased emphasis on the employability of their students prior to admission.  I won’t speculate to what extent this is motivated by the need to protect their job placement statistics in a grim economy, but it’s worth considering the true consequences of this trend.  As the article notes, business schools have always considered the goals of the applicant – but to what extent are they curating these goals on the front end?  Even if we assume good intentions, the effect is to reinforce the status quo, making business school populations even more risk-averse and less entrepreneurial.

Ironically, this seems to be at least partly motivated by the banking collapse: “when the financial crisis upended the banking sector and sure-thing jobs on Wall Street disappeared, schools began formally tying input (applicants) to output (graduates).”  Why “ironically”?  Regardless of how much blame you want to assign to federal housing and lending policy as opposed to private sector recklessness, the financial crisis wasn’t brought on by entrepreneurial, non-linear thinking. Legions of conventionally smart people who had done everything right, rigorously following twenty-year plans including name-brand firms and business schools, managed to get the biggest bets horribly wrong.  This is not meant to be flippant – current market conditions and job statistics are stubborn things that must be acknowledged.  However, if the lesson of the financial crisis is that we should double down on conventional wisdom, regardless of whether anything of value is created, then we’ve indeed learned nothing from the past five years.

As someone who frequently uses the frame of inputs vs. outputs, I took immediate notice of the wording above.  It would be encouraging to see an extremely input-focused sector more concerned with outputs, but I suspect they have confused the two in this case, merely trading one set of inputs for another (the addition of an MBA).  You can also think of this as commoditizing human capital, which calls the entire purpose of an MBA into question.  Is business school meant to help develop leaders, or to serve as a finishing process on a prestigious kind of assembly line? 

The article goes on to state that “making employability too weighty a factor in admissions can backfire.”  According to Graham Richmond, a former admissions officer at the University of Pennsylvania's Wharton School, “Looking at applicants through a narrow vocational lens may deter schools from accepting riskier candidates, such as entrepreneurs or career-switchers, in favor of more sure things, such as aspiring management consultants.”  The fact that aspiring management consultants are considered “sure things” is evidence of how much MBA culture values process over invention.  Candidates and schools understandably want assurances, especially in the wake of 2008.  The world is a chaotic place, even more so since the financial crisis (though I contend that it has always been so, and that the banking industry simply managed to insulate itself unusually well for as long as it could).  Obviously, you have to adapt to the current reality.  Yet I can’t help but wonder if by focusing on doing obvious, “safe” things, to the exclusion of risk-taking and creativity, the MBA community isn’t just constructing an elaborate playpen in which nothing new ever happens.

]]>
Shyam Sankar
tag:shyamsankar.com,2013:Post/74476 2013-02-22T00:12:00Z 2013-10-08T15:37:34Z Calling All Computer Scientists in Southern Europe

One of the most startling yet largely under-reported facets of the European financial crisis is the rate of youth unemployment, especially in Southern Europe.  If you are a young person in Greece (58%), Spain (56%), Portugal (39%), Italy (37%), or France (27%), you are likely looking elsewhere already. There are certainly nearby places with a shortage of qualified workers (such as Germany), and when any job is scarce, it may seem a strange time to be seeking your ideal job.  

Yet, for those of you who studied engineering (especially computer science), that is exactly what I am suggesting.  Palantir is hiring aggressively in Palo Alto, New York, Washington, Los Angeles, London, Australia, New Zealand, Singapore, and beyond.  If you are not only technical, but also passionate about using technology to address the problems that matter most in the world, Palantir (and I personally) would love to hear from you. Why Palantir?

Meritocracy: Silicon Valley has the highest concentration of great computer scientists anywhere in the world.  If you are a gifted young computer scientist, you belong with a Silicon Valley company if not in the Valley itself.  Of all the great things about Silicon Valley, meritocracy may be the greatest differentiator.  There are no long apprenticeship or trainee programs at Palantir (though we are always learning).  Everyone is equipped to begin working on real problems within weeks. Good ideas don’t have to pass through a massive hierarchy - the best idea wins, regardless of whose idea it is.

Save The World: Palantir is focused on solving the most important problems for the world’s most important institutions, and we are always exploring new uses for our platforms. Some of our major areas of application include financial oversight, disease control, narco-trafficking, protection of children, cyber security, protection of civil liberties, and most recently, disaster response. In the face of global austerity, we are helping governments to get the most out of limited resources, and working with financial regulators to prevent the next financial crisis before it happens.

These are uncertain and volatile times, especially for Europe, yet there has also never been a better time to be part of something extraordinary. 

Apply Here (the path to a better tomorrow): https://www.palantir.com/challenge/ 


]]>
Shyam Sankar
tag:shyamsankar.com,2013:Post/74477 2013-01-13T01:31:00Z 2014-07-13T00:15:13Z The Soft Underbelly of Technology Services

I spend a lot of time thinking about delivery models for technology, especially in an age of shrinking budgets and growing complexity.  So I was struck to read that Avanade, a joint custom software venture between Accenture and Microsoft, had been sued by a customer for major cost overruns.  The key part:

The lawsuit said a software project estimated to cost $17 million and take 11 months instead mushroomed to $37 million over three years, and ScanSource said it still doesn’t have a Dynamics software up and running. Accenture has estimated it will cost $29 million more to complete the ERP project, according to ScanSource’s lawsuit.

What can be learned from this? There are quite a few things.  The cynics among us might point out that an overrun of $20 million and 2+ years is considered a bargain in some areas of government.  That is of course an outrage, but the important takeaway goes beyond the numbers, to the fundamental nature of the delivery model.  Let’s assume for this conversation that all actors here are acting in good faith and are highly competent.  I think that, even so, the model leads to these sorts of outcomes.

Not surprisingly, Avanade turns out to be in the business of renting labor.  Services is exactly the wrong model – a catastrophically incorrect model, the more you think about it. These sorts of incidents are really a lagging indicator of the weakness in the model, but it’s taking a whole lot of innocent (and some not-so-innocent) bystanders with it.  More on them in a few.

There are many shortcomings to the services model, but most fundamentally it’s the wrong incentive structure.  When you’re renting labor and other nebulous inputs, it’s almost a truism to point out that the longer it takes, the more the company prospers, and the bigger the project, the more room for abuse.  A contractor doing a bathroom remodel might employ a similar cost structure, but could never get away with overruns on a tenth the scale of those alleged in the Avanade lawsuit.  Of course, even if you have reliable cost safeguards in place, custom software development is inefficient, as I’ve often railed about in these pages.  It takes an army of consultants to deliver, and another army of consultants to maintain.  

It’s not all the services company’s fault, though – not even primarily.  In a sense everyone is complicit, from the services company, to the customer who doesn’t demand something better or structure payment as a premium contingent on success, to the tech giants who aren’t working to productize services.  Of course, if product companies dared to do so, the services companies of the world would throw a fit, and professional courtesy runs deeper than you might think in a theoretically competitive marketplace.  

When the world changes, you don’t always get a say in the matter, and evolution has a funny way of sneaking up on those who get too comfortable.  The first indications may just be bubbling to the surface, but two things are clear: services companies are under tremendous pressure, and product companies need to productize services. 

The first point makes sense from a valuation standpoint.  Mature tech companies such as Oracle and Microsoft have market caps of ~5-6x annual revenue, while the multiple is often less than 2x for services firms, even the upper tier.  Yet it’s still not obvious to all that services companies are living in the past (partly because many services companies are so good at convincing people they’re really technology companies).  Mostly, though, it’s because services companies still generate a lot of money.  It’s a dying model that’s still making people rich, so it’s easy to ignore the warning signs even if you see them.  And for an exponential trend, by the time you are 1% there, it is almost done.  You could almost analogize it to the SUV craze: consumers couldn’t get enough gas-guzzling SUVs, and American auto makers happily served them up for several years.  Suddenly (but not all that surprisingly), $3-4/gallon gasoline was a fact of life and those same automakers were all teetering on bankruptcy for giving the customers exactly what they wanted.

In terms of multiplying complexity and data problems, we’re entering an era of $10/gallon gas.  Even if you’re in the product business, if you’re not increasing your productivity per person, you are dying – in some cases more quickly and dramatically than the services dinosaurs.  And for this reason, product companies can’t just deliver products any more – they need to productize services on a continuous basis.  In short, they need to deliver outcomes.  Mere capabilities only work against well-understood problems.  They won’t be sufficient for the types of challenges that grow appreciably bigger in the time it takes to read this blog post.  

If that sounds smug, it needs to be acknowledged that building a business based on outcome delivery, as opposed to a static product, is still extraordinarily hard.  Not only are the prevailing incentive and cost structures far behind, but technically speaking it’s a very rugged frontier.  This is perhaps best illustrated by software, where performance at scale, processing, security, stability, and interoperability are often much bigger challenges than achieving the desired functionality.  On the other hand, though, successful technology has always productized services of some kind, dating back as far as the cotton gin or even the wheel.  The entropy of the present and future data landscape adds an enormous degree of difficulty, but along with Moore’s Law, the single biggest lever of the knowledge economy is the ability to repackage experience and lessons learned into a better, more responsive product.  It may take years or even decades, and it’s entirely possible that the first mover will end up being a sacrificial lamb.  Sooner or later, though, the company that gets productization right will eat the legacy companies’ lunch.

]]>
Shyam Sankar
tag:shyamsankar.com,2013:Post/74479 2012-12-01T23:49:11Z 2013-10-08T15:37:34Z Inputs vs Outputs

A defining difference between Silicon Valley and the Old World is that Silicon Valley is intensely focused on outputs as opposed to inputs.  While the shift to an outcome-based economy remains a work in progress, the high-tech world tends to focus on tangible results, not ingredients.  It’s not just about a different way of thinking about business – it’s a matter of different societies and what they value.

One of the original inputs is ancestry.  No one in Silicon Valley will ask you who your parents were or what they did, whereas people absolutely will in the Old World and East Coast.  At some point in American history, having ancestors who came over on the Mayflower became an indicator of New England aristocracy – funny when you consider that the Pilgrims themselves were people of no social standing, building something from scratch.

Input bias is easy to observe in classically process-oriented companies (and societies).  Fixation on research and development is a prime example: the value of the final product is judged by the input (“it cost us $500 million to develop this”) more so than the results. Spending in general is frequently touted as an absolute good or evil ipso facto, but it’s actually one of the least relevant data points on its own.  When we talk about confusing cost with value, we’re really talking about confusing inputs with outputs.

Wall Street is extremely focused on inputs, even though its efforts are ostensibly measured by outputs, and fairly straightforward ones at that.  On Wall Street, input doesn’t just refer to assets under management – it’s about name-brand firm experience, having an MBA from the right school, who designed your suit, even your business cards.  Ironically, Goldman Sachs, the biggest name on Wall Street, transformed itself from a struggling, undistinguished firm to the world’s top investment bank under the leadership of Sidney Weinberg, a junior high school dropout. Weinberg was originally hired at Goldman as a janitor’s assistant making three dollars a week – an anonymous and menial job, certainly, but a job at the firm judged solely on output.  

Where you went to school is an obvious input, but outputs matter for the endurance and success of the school itself, especially young schools.  How did Stanford, which opened its doors in 1891, achieve equal footing with the Ivies?  Money certainly helped, but intermingling with Silicon Valley and entrepreneurial culture played a much greater role than simply having wealthy donors.  From legendary engineering dean Frederick Terman, who mentored (and invested in) Hewlett and Packard, to the founding of Yahoo! and Google by Stanford grad students, to Peter Thiel’s recent course on entrepreneurship, Stanford and Silicon Valley have enjoyed a unique symbiosis.  In terms of clear outputs, a recent study found that companies founded by Stanford alumni create $2.7 trillion in annual revenue.  Beyond pure productivity, Stanford arguably introduced the concept of great entrepreneurs as a tangible output of a university, mentioned in the same sentence as Nobel laureates and world leaders.  The willingness of many of these great entrepreneurs to reinvest not only their money but also their wisdom and mentorship into the university is one of the great virtuous cycles in education.

Perhaps the ultimate input is age, and when a society values something simply for being old, it speaks volumes – especially when that something is itself.  The output that matters is enduring impact and relevance.  For the Old World, the danger is that reverence for the merely old is so deeply ingrained that by the time a society realizes it’s stagnating, it is exponentially harder to reverse the tide – witness the number of once-great empires of Europe struggling to stay afloat.  The United States is an obvious counterpoint (not that we can take that for granted), and I’ve often reflected that Silicon Valley values are really American values writ large, but there are new revolutions happening all the time, even in very old societies.  China and India were home to ancient and storied cultures, though neither was a world power as recently as the mid-20th century.  Today, in a post-imperial, post-Soviet world, they are major players, buoyed by high-tech explosions that would have been unimaginable fifty years ago.  Yet I would argue that such transformation only became possible when China and India collectively decided that only outputs, not the systems that produce them, are truly sacred.  


]]>
Shyam Sankar
tag:shyamsankar.com,2013:Post/74480 2012-11-16T02:32:00Z 2013-10-08T15:37:34Z Focus on the First Derivative

In a fast-growing company, everyone has less experience than they need for their roles, by definition.  This remains true as the company scales, with one's role changing in a fundamental way every 3-6 months, especially when the company continues to defy expectations for months and years.  Ultimately, that’s all irrelevant.  In Silicon Valley, we like to talk about visionary leaders making momentous decisions amid great uncertainty, but what really matters is the first derivative of understanding: how are you and your team learning from the experience as it unfolds?  There are many considerations nested in this question – here are some of the most important:

How quickly are you learning? When you are operating within a tornado, speed counts for a great deal.  It’s often been said that even the right decision is wrong when taken too late, and this begins with learning. If the second and third order effects of your original challenge are already on an irreversible course by the time you’ve grasped the nature of the challenge, it’s no longer the same challenge.

Are people taking the same things away from failures? In an ideal world, everyone would not only draw the same conclusions from the experience, but they would also be the correct ones.  More often, the process is a lot messier, but that’s just reality – you learn together through give and take, not some mystical collective unconscious.  The key is that you are unified about your next move.  

Are you making meaningful abstractions, or just reacting to your immediate circumstances? Even when execution is everything, there is such a thing as being too tactical, and morale plummets when people can’t make abstractions (or they aren’t taken seriously).  It’s a delicate line, because your abstractions have to be actionable and part of a continuous cycle of learning and responding.

When dissent occurs, is it productive? Just because you eventually arrive at the same takeaways doesn’t mean there is no room for disagreement.  The question is whether it’s healthy and constructive, or pointed and personal.  The “team of rivals” concept has gained many adherents in recent years, but it’s important to remember that it’s above all a team.  Ideally, iron sharpens iron.

Two strikes and you’re out.  In certain areas, such as distribution, you don’t get many chances to course-correct when one approach fails, so extracting the right lessons from the first failure is paramount.  This is not to say that you should impose needless anxiety on these kinds of decisions, but be aware of what the stakes are.

Can you reform your model? Models can be extremely useful and necessary to consolidate your understanding of a complex world and plan accordingly.  However, they can also be an especially insidious kind of blindfold.  Adjusting your model, or abandoning it when necessary, can be incredibly difficult, because it requires you to first recognize and confront your inherent biases, and resist the tendency to rationalize away the model’s shortcomings.

In a hyper-growth environment, you will never have enough information, experience, or foresight.  The first derivative will be the only thing that matters.  We became the ultimate learning animals through many unforgiving eons of natural selection.  This new evolutionary challenge of warp-speed learning and adaptation may feel significantly more abstract, but once again, it all comes down to survival.


]]>
Shyam Sankar
tag:shyamsankar.com,2013:Post/74481 2012-10-27T22:28:00Z 2023-07-18T01:23:47Z Calculus vs. Statistics

If you have more than a passing interest in the future – be it yours, your venture’s, or humanity’s writ large - Peter Thiel’s CS183 lecture #13, “You Are Not A Lottery Ticket,” is a feast for thought.  Thiel interrogates the underpinnings and consequences of determinate and indeterminate worldviews in numerous contexts, including as they apply to startups. 

For the aspiring tech entrepreneur, one of the most useful frameworks Thiel invokes is that of calculus (determinate) vs. statistics (indeterminate).  In calculus, you make precise determinations, often concerning discrete futures.  You can figure out exactly how long it will take to drain even the most irregularly shaped swimming pool.  And this enables you to do things of vital importance.  As Thiel notes, when you send a rocket to the moon, you need to know where it is at all times – you can’t just figure it out as you go.  In statistics, on the other hand, there are no certainties.  It’s about bell curves, random walks, and drawing an often uncomfortable line of best fit between limited data points.  Thiel furthermore notes a powerful societal shift towards the belief that statistical ways of thinking will (and should) drive the future.   
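The swimming pool is worth making precise, because it shows what the determinate frame actually buys you.  A sketch, assuming an ideal fluid so that Torricelli's law applies: water standing at height $h$ exits a drain of area $a$ at speed $v = \sqrt{2gh}$, so conservation of volume gives

$$A(h)\,\frac{dh}{dt} = -a\sqrt{2gh} \quad\Longrightarrow\quad T = \frac{1}{a}\int_0^{h_0} \frac{A(h)}{\sqrt{2gh}}\,dh,$$

where $A(h)$ is the pool's horizontal cross-section at height $h$ and $h_0$ is the initial depth.  However irregular the pool, $A(h)$ is knowable, and the integral returns one definite number $T$ - not a confidence interval.  That is the calculus worldview in miniature.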

The example of landing a rocket on the moon is probably no accident. The 1950s and 1960s (coincidentally the first golden age of Silicon Valley) were a time of widespread American optimism.  The moon landing was a fundamentally optimistic venture that captured the American imagination (and quite literally would not have happened without calculus).  It only makes sense, then, that statistics would be the dominant modality of the cynical world we now inhabit.  If you look at the natural disasters, economic collapses, terrorist attacks, and disease outbreaks of the 21st century, some might seem more or less predictable by conventional wisdom, but the popular perception is that humanity was caught napping, apart from a few obscure Cassandras.  Especially in light of the truism that we’re usually planning for the crisis that just happened, it’s easy to see the appeal of the indeterminate/statistical model.  Statistics couldn’t have predicted exactly which bad things would happen, only that some bad things would happen.  

It’s enough to make you throw up your hands, yet this is exactly what Thiel is not arguing for.  This should come as no surprise. Thiel is a renowned contrarian, and many of his greatest interests reflect a healthy disregard for statistical/indeterminate thinking, life extension being a prime example.  The conclusion of the lecture begins with an acknowledgment that as we embrace the statistical worldview, society is sliding into pessimism, and without indulging in too much pop psychology, it’s easy to see how such thinking becomes self-fulfilling.  The lecture ends with an appeal to “definite optimism”, and posits that computer science offers the best hope.  CS is not only a great way to solve problems, but as Thiel observes, its fundamental determinism may have something to teach startup culture, which is widely presumed to be governed by indeterminacy.

Of course, software itself is greatly misunderstood, and this is one of the primary challenges computer scientists face as entrepreneurs.  People who don’t understand software assume that its value is statistical by nature, and fundamentally unknowable (in contrast to hardware, for example).  If you’re math-phobic, single-variable calculus and E = mc² are just two things you don’t understand, and the differences and relative complexities are immaterial.  To make matters worse, people who truly understand software are relatively rare, especially among those with purchasing authority, and this unknowable fallacy leads to a sort of permanent agnosticism about software.  Within the statistical frame, it’s assumed that two competing software packages lie in the same general area of the bell curve, and therefore the differences are negligible or at least unknowable.  You know that the value of software follows power laws – that the differences between good and great span orders of magnitude, not percentage points – but the statistical frame ignores all of this.

One consequence is extreme risk aversion: if you believe that the relative merit of one kind of software isn’t calculable, you stick with what you already have, and this has plagued many otherwise forward-thinking institutions.  There is also the simple matter of what’s tangible.  To the layman, hardware seems straightforward, whereas software doesn’t (even if hardware may owe much of its performance to superior software).  As a result, hardware is often seen as a reasonable expenditure, whereas software isn’t.  No one blinks at a $50 million aircraft, even if that aircraft is agreed to be 1980s technology, whereas $50 million for software is not only unthinkable to many, but being newer and better may very well work against you, due to the unknowable fallacy.  

For the aspiring software entrepreneur, there are a few takeaways.  It’s a fact of life that software is misunderstood and undervalued.  However, that doesn’t mean quality doesn’t matter.  In fact, it matters more than ever.  The challenge is that when you are up against a heavy incumbent, it’s not enough to be 10% better – you have to be 10X better, because ultimately your success is dependent on enough people feeling strongly enough about your product to risk rocking the boat.  Earlier we discussed that the idea of any complex product being great enough to sell itself is a myth, and again, concluding that being great is unimportant is absolutely the wrong lesson.  Put another way, if you want to bring people around to viewing software through a calculus frame, you have to make their daily existence demonstrably better.  But wasn’t this always the goal?

This brings up a final point about determinacy: some things are worth doing regardless.  In the last CS 183 class, “Stagnation or Singularity?”, Thiel is joined by several guests, including Dr. Aubrey de Grey, gerontology expert and Chief Science Officer at the SENS Foundation.  De Grey makes the point that while we may have a fair idea what technologies will be developed, the timeline for development is much more tenuous and subject to various externalities.  However, he concludes (paraphrased), “In a sense, none of this matters. The uncertainty of the timeline should not affect prioritization. We should be doing the same things regardless.”

Once again, it all comes down to doing important things, and when this is the stated goal, the inherent pessimism of the statistical approach becomes apparent.  This applies to your own life as much as it does to building a company.  If you wanted to take the statistical view to its logical extreme and hedge against all possible uncertainties, you’d become a jack of all trades/master of none, and consciously choose not to go long on any particular superpower or world-changing problem.  If the goal is to live an inoffensive, comfortable life, this might make sense.  If you want to do anything of lasting value, this is crazy.  In some ways, it’s easier to grasp this concept when designing new technology or building a company – although it’s easy to suffer from too many features or too many business models, most entrepreneurs accept that trying to be all things to all people is a recipe for failure (as software development illustrates so neatly).  Technology needs a problem to solve.  You, on the other hand, are not a problem to be solved – yet what to do with your time and gifts is perhaps the most worthwhile problem of all.

]]>
Shyam Sankar
tag:shyamsankar.com,2013:Post/74483 2012-10-15T00:21:00Z 2013-10-08T15:37:34Z Distribution

Execution is hard, and distribution is one of the hardest (and not surprisingly, least understood) aspects of execution.  Peter Thiel gives the subject of distribution an extremely thorough treatment in CS183 Lecture 9, “If You Build It, Will They Come?”, including mathematics, psychology, and market-specific models.  Rather than trying to summarize the extensive substance of the lecture, I’d like to focus on how you might think of the distribution challenge as an engineer, in the context of the Your Future series.

Thiel begins by addressing the most basic question – what is distribution? Surprisingly, many people can’t give you a coherent answer, and if they can, there’s a very good chance they underestimate its importance.   If we agree to define distribution as how you get a product out to customers, it becomes a bit more concrete why there’s so much misunderstanding around the topic.  It’s especially difficult when you’re creating software or other technologies that require meaningful user engagement.  If you think of distribution as just getting a product into users’ hands, you’ll likely fail – either because you assume that a product will get used just by virtue of being available, or that the product will remain in users’ hands once it’s reached them.

If you look at two of Thiel’s biggest success stories to date, PayPal and Facebook, you’ll find two companies that nailed distribution, and in very different ways.  It’s worth noting that online payments and social networking sites were both extremely noisy spaces when PayPal and Facebook joined the fray, and neither company had first mover advantage (though as Thiel discusses elsewhere, this may not be such an advantage after all).  Also significant is the fact that online payment processing and social networking sites are both fairly easy to prototype and hack away at.  Of course both PayPal and Facebook hired outstanding engineers and eventually encountered (and overcame) serious technical hurdles – security/fraud in the case of PayPal and scale in the case of Facebook – but I’d argue that those problems only emerged because they got distribution right first.  

As Thiel calls out early in the lecture, engineering bias works against you when it comes to distribution.  As engineers, we are conditioned to think that great products will just reach consumers by virtue of being great (and there’s a dangerous tendency to assume that your idea of “great” is representative).  The concept of a product being “so good it sells itself” is universally appealing - and universally incorrect.  It just doesn’t happen.  It is possible to create an environment where the best idea wins within the confines of your own company, and I urge you to retain this form of idealism, but any market is a fundamentally irrational place, and you need to make peace with that fact.  

Another major difficulty is that so many young engineers in Silicon Valley have been spoon-fed a massive user base, either because they joined a company that already had one, or they piggybacked on one.  Of course, this is a valid distribution channel – the path of least resistance is by no means the wrong approach.  The problem is that it skews the way you think about design and innovation.  Most engineers in non-entrepreneurial roles haven’t had to think about distribution at all.  And that’s fine, as long as you realize that you started on 3rd base and didn’t hit a home run – not for the sake of your ego, but for the sake of your next venture.  You have to approach the mighty distribution challenge with the humility she deserves, or suffer at her hands.

Whether distribution can really be “engineered” is a topic for another day, but it’s worth thinking about what makes engineering different from sales, and for the aspiring founder, this is one of the biggest takeaways from the lecture.  I’m not so much concerned with the merits of different distribution approaches as with recognizing how the skill of distribution (to include sales) lines up with your and your team’s strengths and weaknesses.  It’s no secret that I’m a huge fan of engineering-driven companies, but it’s not enough to focus on your strengths – you also have to even out the competencies you lack, and chances are sales/distribution is among these.

Why is this? As Thiel notes, sales is a fundamentally irrational enterprise, and engineers are concerned with rationality and truth-telling.  However, their general discomfort with and lack of aptitude for sales isn’t just about purity of spirit, but also about knowing what to look for.  In many cases, it’s not clear what quantifiable skills are actually involved in “sales” (hint: Crazy Ernie ain’t it).  If you convince yourself that these skills aren’t important, or don’t have a place in the kind of utopian company you want to create, you not only ignore one of the central aspects of distribution, but also create a huge talent gap, because you need at least a few folks with these skills.  Think of it as a special sort of project management skill – the ability to get a distribution project across the finish line.  A crude model, but useful in framing the challenge for us engineers.

There are many risks inherent in the worthy goal of starting a company: team risk, innovation risk, technical execution risk, and business execution/distribution risk.  Like the first three, distribution is something you need to be thinking about in the foundational stage, not something to be revisited at an undefined point in the future.  Importantly and subtly, distribution risk affects innovation and technical risk in turn – and every form of risk is ultimately a team risk.  Feedback from the field and from your customers becomes the fuel (to your creative mind’s spark) for iterating and conquering – you will be on an empty tank without distribution.  If you’re trying to start something, it’s almost more important to ask who on your team is credible in each of these areas than how you’ll specifically get there.


]]>
Shyam Sankar
tag:shyamsankar.com,2013:Post/74485 2012-09-27T14:27:00Z 2020-10-11T23:10:10Z Think Again About Sticking Around for a Masters Degree

Mark Twain is said to have remarked, “I have never let my schooling interfere with my education”, and whether or not the quote is apocryphal, the sentiment should be applauded.  True education is a beautiful thing.  A master’s degree, on the other hand, is not only a waste of time (with a few exceptions I’ll get to), but often epitomizes that proverbial interference.

Because I spend a lot of time talking to college students, I encounter many who are signing up for the increasingly popular one-year master’s degrees.  I understand the appeal from the student’s perspective, and I understand the appeal from a parent’s perspective.  In fact, I pursued one myself.  I had just finished undergrad, I didn’t have a compelling opportunity, and more importantly, I had somewhere to get to (Silicon Valley).  For many of today’s brightest engineers, I don’t think a master’s degree makes any sense, and that was exactly the advice I gave my brother, who just graduated.  In order to appreciate why a master’s might not be such a wise idea after all, it’s worth considering what makes education meaningful in the first place.

To begin with, education creates opportunity.  This has probably been drilled into your head from an early age, and for good reason.   If your parents were the first generation in your family to attend college (or you are) this needs no explanation, and it’s a tragedy that education is taken for granted by anyone, let alone so many.

There is also much to learn – much more than most people would ever guess.  When I went off to college, I certainly thought I knew more than I did.  Being disabused of this notion may seem like the first step in one’s education, and it often is, but it’s really a lesson worth relearning at any stage.  Then there’s the process of learning how to learn, and this is one of the primary reasons you go to school, independent of your field of study.  There are countless dimensions: learning to make abstractions and conceptualize, to interrogate a problem, to work inductively and deductively, to separate first principles from careless assumptions.  You need to experience breadth, both to strengthen your foundation, and to find subjects worthy of exploring in depth.

Education also provides a unique platform to gain impactful life experience in a low-risk environment.  School is a place to build formative relationships, explore different paths, be in charge of your own time and activities, even start something if you are so inclined.  Perhaps most importantly, it’s a time to learn about your strengths and weaknesses with high upside and low downside.  Of course, college is costly, and time itself is far from trivial, but it’s much easier to avoid loss aversion and do something truly experimental when you’re not deep into your career.   The phrase “do something” is the key here – “finding yourself” has become sort of a cliché of indolence, but it’s while moving forward that you truly find yourself, and college can be the perfect place to do this.

Finally, education validates that your best years really are ahead of you.  High school is certainly a valuable experience, and in the best case can lay the groundwork for the level of exploration that college makes possible.  At the same time, it’s a small pond both socially and in terms of what you’re asked to do.  However triumphant or painful it is, it’s not a place to remain.  College may feel like a time for reinvention, but it’s really a time for original discovery.  

 To understand why master’s degrees are superfluous and even counterproductive for engineering students, I like to use the framework of getting somewhere versus getting something.  You can also think of this as a means to an end as opposed to an end in itself.  Education provides many things of intrinsic value, but much as nine months, give or take, is enough to prepare you for the outside world, so is one degree.

The exceptions tend to fall into the category of getting somewhere: for example to Silicon Valley or to the United States.  A master’s degree can also be a good way to test the waters of academia.  You can take classes with doctoral students and get a feel for the academic life without having to commit to a dissertation or taking on a teaching schedule.  For some friends, a master’s has been an informative gateway to a promising academic career, whereas others consider it worth the price of admission to have been persuaded that doctoral studies aren’t for them after only a year.  I am not coming down on one side or the other, only advocating for informed choices.

On the other hand, if you’re trying to obtain something – experience, distinction, deeper cultivation of your superpower – it’s usually better to just get that thing in its pure form.  If you want entrepreneurial experience, go out and get some – don’t learn about it in an MBA course.  If you want to be a better software engineer, don’t sign on for an extra year of TAing - work on challenging, real-world problems in a production environment, with peers who force you to raise your game.  

I’ve said before that learning for its own sake is not necessarily valuable, and this is especially true of master’s degrees in the workplace, particularly degrees in a single pure subject (as opposed to MBAs and other first professional degrees).  There is a lot of misleading and unsubstantiated chatter that a master’s degree makes you a more valuable employee ipso facto, and this is just not the case, as many people find out the hard way.

It’s also worth acknowledging that there is often a socio-cultural bias towards more education, especially among generations who experienced firsthand the power of education to achieve an objectively better life.  This is not a perspective to be discounted, but at the same time you need to recognize when the preference for more schooling is no longer contextualized and ignores the question of somewhere versus something.

Finally, and most importantly, don’t do a master’s because college is fun and it will never get better than this.  If you care about your future as much as I imagine you do, that simply won’t be true (regardless of whatever sentimental projections people offer you).  The fifth year isn’t like the fourth.  Everyone is gone, and you realize that what made college meaningful was the people who went through the experience with you, not the buildings and the campus.  Moving on is not always easy, especially when you strip away the structure and predictability of school, but it’s simply time to forge new experiences.  If there’s one thing I’ve learned since leaving school, it’s that they can all be the best years of your life when you get out there and Do Important Things.

]]>
Shyam Sankar
tag:shyamsankar.com,2013:Post/74488 2012-08-21T21:34:47Z 2013-10-08T15:37:34Z David is not Goliath, and should not be

When you are doing something truly disruptive, you are in a David versus Goliath situation (and this is especially true for technology).  The story of David is highly instructive for anyone who aspires to do world-changing things, and its lessons go much deeper than an inspirational tale of the little guy beating the big guy.  

Let’s begin with the obvious: David wins by not playing by Goliath’s rules.  He doesn’t out-muscle Goliath, instead fighting a lightweight, guerrilla-style insurgency.  David is Exhibit A for the theory that speed, wits, and the ability to adapt can trump size, resources, and heavy armament.  After felling Goliath with his slingshot, he beheads him with the giant’s own massive sword (a gory but potent bit of symbolism often left out of the retelling).

However, David’s selection as champion of the Israelites and his rise to field commander were unconventional, even revolutionary acts in themselves.  In fact, almost every key event of David’s ascendancy was highly unlikely.  It began when the prophet Samuel sought out Jesse of Bethlehem, believing that one of his sons would become king of the Israelites.  Samuel rejected each of Jesse’s grown sons in turn before Jesse reluctantly presented David, his youngest son and a mere shepherd.  Anointed by Samuel, David went to the court of King Saul, initially as his armor-bearer.  Yet it was as a musician that David made himself indispensable to Saul, healing his afflictions with his sublime harping.  When war broke out with the Philistines, David was not even asked to fight at first, instead going home to tend his father’s sheep.   When David arrived at the front to answer the call, he faced fierce opposition from within the Israelite ranks, chiefly from his own brothers.  I suspect that when Saul, not renowned for his piety, gave David permission to face Goliath, he was not 100% faithful, but instead thinking “this is so crazy it just might work.”

After numerous trials, including his betrayal by Saul, David was crowned King David.  He ruled unconventionally and brilliantly, true to his essence, and in doing so established the House of David and the true throne of Israel.  Famed as a warrior, he never forgot that he was also an artist, and crafted psalms as powerful in their own way as his armies.  This is not to say that David’s reign was a wholly peaceful one, or that his better judgment always prevailed.  He made his share of prideful mistakes, and suffered no shortage of tragedies, none more painful than the deaths of two of his sons.  However, David proved willing to build on his failings, and never stopped bucking convention.  When the time came to choose a successor, he passed over his heir apparent for Solomon (originally the product of adultery with the wife of one of David’s commanders).  Solomon, of course, built the great temple of Jerusalem, composed the Song of Songs, and became synonymous with surpassing wisdom.  Ultimately, the line of David exemplifies the divine ascendancy of the unlikely.

For technology entrepreneurs, the story of David is a highly attractive one, and the modern-day parallels are striking.  You can think of David’s slingshot as one of the original disruptive technologies – it’s lightweight, requires minimal training, and utilizes off-the-ground commodity hardware.  It is likewise fitting that the term “Philistine” has come to mean someone without any appreciation for art and learning, and this is especially true concerning the perception of software, perhaps the most misunderstood and underappreciated form of technology at the institutional level.  Of course, David himself is the most inspiring part of the story, a young, fearless, brash, but supremely talented leader who emerges from the least likely of places with the most counterintuitive blend of skills.  

However, those who would follow in David’s footsteps must beware the catastrophic, yet often subtle pitfalls along the path.  It is paramount that as David wins, he doesn't become Goliath.  For leaders who emerge from the tornado of the hyper-growth phase, this is deceptively easy to do, and the annals of technology are piled with the cautionary examples of companies born from innovation that faded into irrelevance by allowing themselves to become the hated establishment.  David must be true to who he is, not by consciously choosing to remain small and irrelevant, but by resisting Goliath’s arrogance and vulnerabilities - even while embracing growth and influence. 

I spend a lot of time working with large and important institutions to help them solve their biggest problems.  This is tremendously rewarding, and as the sense of partnership and investment in their mission develops, it is tempting to want to be of them as well as work with them.  Yet you can only help them if you are true to David, and this requires you to maintain the unique identity and vantage point of the constant outsider.  And this is why massive institutions need the help of entrepreneurs, even if they don’t realize it at first.  This is inevitably a bumpy process, because the cultural bias is to keep David in a limited role, away from the front.  Eventually, though, it becomes clear that in order to do radically different things, they need radically different competencies and perspectives.  If it were simply a matter of finding better top-down management, they could promote from within.  To enlist a warrior psalmist is a different thing entirely.

Of course, embracing unconventional wisdom is only the first step.   The far greater hurdle is how to institutionalize agile and independent thinking without becoming doctrinaire and inflexible about it – an ironic but all too common mistake.  Interestingly, this applies to both the century-old brand name that seeks to embrace entrepreneurial culture and the scrappy startup that suddenly finds itself with thousands of employees.  Once again, David rides to the rescue.  Consider the fundamental challenge faced by US Special Operations Command, a four-star headquarters with almost 60,000 personnel, charged with maintaining supremacy in lightweight, unconventional warfare.  Former commander General Bryan Brown, who enlisted as an infantry private and retired as one of the great visionaries of special operations, once remarked that USSOCOM needs its poets too. David knew it all along.


]]>
Shyam Sankar
tag:shyamsankar.com,2013:Post/74489 2012-08-07T19:00:00Z 2013-10-08T15:37:34Z Vertical vs Horizontal Approaches

In light of delivering outcomes, we should consider the respective merits of horizontal and vertical approaches.  The more common (and not necessarily incorrect) approach is to slice a problem horizontally and build up a stack of layers based on objectives with clear left and right boundaries.  This is not meant to imply that there is a direct correlation between inefficient, services-based businesses and horizontal integration – indeed, many companies can (and do) achieve considerable efficiency through sophisticated slicing of problems and applying focus and discipline to a specific layer.

However, this scenario presupposes that horizontal slicing actually makes sense for the problem at hand.  More often than not, this won’t work for truly world-changing problems – thorny, costly messes that have always defied incremental or partial solutions.   For these kinds of problems, you generally need a vertical slicing methodology, wherein the problem writ large is solved end to end.  

Complex problems are a lot like the classic peg-and-hammer game: knock one peg down and another one almost instantly pops up.  Focusing on one problem may cause another one to become more pronounced, or it may result in an entirely new problem emerging (the law of unintended consequences).  Once you begin to recognize that the underlying structure defies incremental approaches, the revolutionary no longer seems implausible – in fact, it often proves to be essential.  

One aspect that makes vertical approaches so hard is the amount of abstraction required.  Many business leaders pride themselves on seeing the bigger picture, but paradoxically, successful abstraction requires a concurrent grasp of concrete problems throughout the stack.  You can code in Java or C, but if you want to truly push the boundaries of performance, you need to understand what the computer is doing at the CPU level.  Conversely, nuts-and-bolts problem solving does not amount to a transformative business without some broader vision, and this requires abstraction.  In the vertical methodology, there are two main tiers of abstraction: that required to solve a specific problem end to end, and that required to generalize this solution to whole classes of problems.  Finally, it is worth noting that the abstraction challenge is as crucial to management as it is to technology, and morale tends to be worst when people can’t make meaningful abstractions – or feel they can’t.
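
As a toy illustration of that Java-or-C point, here is a minimal sketch (my own, not from the original essay, and the exact numbers will vary by machine): the two loops below do identical arithmetic, but the first walks memory contiguously while the second strides across it, so on typical hardware the first finishes markedly faster.

    import time
    import numpy as np

    a = np.random.rand(4000, 4000)  # C-ordered: each row sits contiguously in memory

    start = time.perf_counter()
    row_sum = sum(float(a[i, :].sum()) for i in range(a.shape[0]))  # contiguous reads
    row_time = time.perf_counter() - start

    start = time.perf_counter()
    col_sum = sum(float(a[:, j].sum()) for j in range(a.shape[1]))  # strided reads
    col_time = time.perf_counter() - start

    # Same answer, very different speeds, purely because of memory layout.
    print(f"rows: {row_time:.3f}s  cols: {col_time:.3f}s  delta: {abs(row_sum - col_sum):.6f}")

The high-level code never mentions caches, yet the cache line is doing the real work – exactly the kind of concrete detail that a useful abstraction has to respect rather than hide.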

Apple is a great example of the success of the vertical approach, not only for its mastery of end to end considerations, but for rethinking the whole model of personal computing – a deft blend of granular and abstract problem solving.  In contrast to the Microsoft approach of focusing on operating systems and desktop software and letting hardware manufacturers fight it out, Apple chose to take responsibility for the entire experience, from the hardware to the user.  This required not only solving all the normal granular problems associated with each layer of the hardware, OS, and software stacks, but actually creating a continuous whole.  It’s about more than just a pleasing design – the idea that the user interface encompasses the device itself is central to Apple’s identity.  For Apple, the medium is the message, in three dimensions.  By slicing multiple user needs vertically, Apple ended up creating a whole new vertical, one it continues to dominate.

Where most companies saw problems they’d rather not touch, Apple saw opportunities – in device design, in platforms, and in challenging a monolithic company that conventional wisdom would suggest was unbeatable.   For years, Microsoft succeeded because they had created the most effective walled garden.   Although Apple’s OS had always had loyal adherents, Microsoft’s wall arguably started to erode with the introduction of the iMac and MacBook, which caused consumers to reconsider the fundamental relationship between form and function.  With the creation of the iOS developer platform and app store, the walled garden concept was turned inside out.  Yet I would argue that the platform, for all its merits, would have been exponentially less attractive if people were not already loyal to the device and the entire experience it represents – a shining validation of the power of vertical problem solving.

There is one quite interesting twist to all of the above: in the course of developing a vertical solution, you will actually develop strong intuition about how the problem can be sliced horizontally.  This is the entropy of the human approach – the way we naturally organize to solve problems in teams (frontend and backend, to use a very broad example).  Within each category, you can further refine the dynamics of the cross-functional teams that enable each horizontal slice.

As a final note, it’s worth mentioning that while the vertical approach at its most successful may create the impression of an all-encompassing technology, an end-to-end solution is almost always achieved through scrupulous attention to each layer of the stack.  However unified the whole may appear, the parts each play a discrete role that is necessary but not sufficient.  Developing vertical solutions is not a matter of finding a silver bullet, but rather the most effective combinations and permutations of thousands of lead ones.

]]>
Shyam Sankar
tag:shyamsankar.com,2013:Post/74491 2012-08-06T01:32:00Z 2022-06-28T22:38:28Z Only in America

The remarkable ascendancy of China and India as high-tech powerhouses has illuminated a non-obvious but profound truth: The United States remains the best place in the world for high-end software development.   Despite widespread economic pessimism and rumors of a global power shift, talk of across the board decline is ill-founded.  When it comes to software, American exceptionalism is alive and well.  The first secret to our dominance is our engineers.  To anyone familiar with the basic mechanisms of software development, this is really no secret at all.  What is more surprising is how much our engineering culture owes to the heartland.  While our software industry may be concentrated among the coastal elites, the ethos that makes it possible was largely forged in the wheat fields and corn rows of the Midwest.  Properly applied, it is this ethos that will see us through current uncertainties and solidify our place at the top for generations to come.

 “Global” does not equate to “world-class”, and there is no better illustration of this than the booming software industries of China and India.  This is a period of unprecedented growth and opportunity - certainly compared to the landscape my own father faced when he left Tamil Nadu a generation ago.  At the same time, per-capita income in India is less than half of China’s, itself a seventh of that enjoyed in the United States.  Yet income disparity is merely a symptom of a larger trend: for all the high-tech jobs created in the past decade, China, India (and the rest of the developing world, for that matter) have yet to produce their own Microsoft, Oracle, or Google.  Instead, “body shops” offering outsourced IT services and low-end development are the rule.  On the value spectrum, such businesses are the equivalent of the call centers that now symbolize the tremendous growth and equally tremendous limitations of the new global economy.

When software development is a process of manufacturing, not invention, the resulting products are commodities, and the same holds true for software engineers.  This is the crux of the issue: great software is not a commodity, but a highly evolutionary thing, and great engineers are irreplaceable, not interchangeable.  The relative talents of software engineers, and the elegance of their output, lie on an exponential, not linear, scale.  The difference between the very top and the median is not 30%, not 300%, but rather 10,000%.  Of course, this phenomenon is not only illustrated by the IT factories of Bangalore and Shanghai.  Plenty of American tech companies persist in believing that an arbitrary number of decent software engineers equals one savant.

However, the creation of a high-end software industry requires more than just top technical talent.  In the absence of the right environment, innately gifted engineers will either flee or toil in obscurity.  This environment must be supportive of personal aspirations, while remaining a firm meritocracy in which the best idea wins.  A sense of fairness must prevail, especially as it relates to competitive marketplaces, respect for intellectual property, and recognizing top individual contributors while promoting a spirit of teamwork and cooperation.  Most importantly, new talent is attracted to this environment by the promise of just opportunity, not a pre-ordained lot in life.  These are the essential Midwestern values that drew the first homesteaders inland from the American colonies, sustained a young Thomas Edison through countless early failures, and, in our own century, have given rise to the greatest icons of the knowledge economy.

All of this helps to explain why, despite tremendous advances in technical education throughout Asia (and comparative stagnation in our own system), the United States continues to be the epicenter of world-class software development.  This is not meant to suggest that the recent educational achievements of Asia are anything less than remarkable.  However, the best measure of these achievements is not the number (or lack) of new Googles springing up across the Pacific, but rather the influx of highly qualified Asian engineers and students seeking opportunity in top American companies and universities.  China and India have succeeded admirably in creating cultures of productivity, yet a culture of invention remains elusive.  The upshot, for American software leaders, is a workforce that is much more diverse than it often gets credit for.  Ironically, it is the native Midwesterner who is often absent from the popular perception of what a software company looks like – but we should remember that the first Midwestern settlers were primarily immigrants themselves.

While panic and pessimism are not appropriate reactions to our changing world, neither is taking our dominance and innovation for granted.  Instead, we should consider what our values have to teach us in light of certain austerity.  Our forebears, facing similar challenges, learned to do more with less.  Happily, this concept applies better to software than to almost anything else.  Moore’s Law observes that transistor density – and with it, computing power – doubles roughly every two years.  Given the ingenuity of our software developers, we should absolutely demand twice the performance for half the cost when it comes to IT infrastructure for healthcare, defense, treasury, and other mission-critical departments.  It is not enough to slash underperforming programs (and they are legion) without making concurrent investments in world-class technology.  This may seem counter-intuitive given our shrinking budgets, but we can only achieve the efficiency and productivity gains required by thinking in terms of value as well as cost.  We must reward our top producers, but only while demanding their very best efforts.

The unique software engineering culture of Silicon Valley may simply be the most recent manifestation of America’s pioneer values. However, it is these enduring values, more than any technical achievement, which will ensure America’s continuing dominance of the software world in a time of global change. 


]]>
Shyam Sankar
tag:shyamsankar.com,2013:Post/74494 2012-07-14T19:00:00Z 2016-09-13T22:34:22Z Time For a New Type of Product Company

Even a cursory glance at the worlds of technology and services (and the murky realm that lies between them) makes one thing clear: it is time for a new type of product company.  

What’s wrong with the old one?  Take enterprise resource planning as an example.  ERP/supply chain management was once a world-changing problem, yet even after decades in which to evolve, the typical ERP solution still costs tens or hundreds of millions of dollars and requires substantial time to implement – sometimes as long as or longer than the first versions did.  Why, then, does this model of “solution” persist so stubbornly?

One explanation is outdated reference points.  The previous generation of product companies grew up in a world in which bespoke software was the only viable option.  To solve a problem by conventional standards, they either had to build everything from scratch, or glue a bunch of existing products together into an uneasy coexistence.  Given the obvious shortcomings of the latter approach, the former takes hold in many cases.  

Of course, the bespoke approach has numerous shortcomings of its own.   It is largely inefficient, and discounts the extent to which large classes of solutions can be productized.  Interoperability with existing systems and future products invariably suffers.   Many customers will commission products that should be reusable elsewhere, yet impose IP restrictions that prevent wider application.  The largest companies may succeed in creating a continuum of seemingly complementary offerings (end-user software, administrative software, databases, mainframes, hardware, etc) - yet behind the kimono they are actually services businesses.  Each additional piece of hardware or software exists to sell more services, and in doing so, consolidate ownership of customer environments.  

However, the deeper (and much more destructive) phenomenon at work here is the matter of structural incentives.  From the vendor’s perspective, the bespoke development model usually means each product has been developed on a specific customer’s dime, under proprietary terms (indeed, it is hard to imagine a purely bespoke model working without rigid exclusivity clauses).   The bespoke developer bills for the engineers’ hours, not the final product, and inevitably requires a legion of consultants to make the product work and provide further customization ad infinitum.  Conveniently, the longer the project takes to complete, the more money the company makes.  Even if there is no gross corruption, there is no incentive on the margin to make anything more efficient.

On the customer side, structural incentives should be better aligned, but loss aversion tends to create resistance to change, especially in big companies.  The larger an organization’s technology purchasing infrastructure (and the more layers between it and the end-user), the more the incentive structure is influenced by the practical need to defend oneself to the outside world.  Because appearance is everything, the relative downside of being associated with an unsuccessful technology acquisition is fairly high, while the relative upside of a successful new implementation is surprisingly low.  As a result, “safe” decisions quite often win out over the best decisions, which are usually non-linear and difficult for everyone to easily recognize, even with the purest of motivations.  The effects of loss aversion reverberate far beyond technical functionality – they limit the vision of the whole organization, even among those who ordinarily might embrace innovation.   As consumers and as a society, we have not been as perceptive as we probably ought to be about the efficacy of the traditional approach to building products - if we were, we would be more willing to learn and iterate with a long-term outlook.  

Left alone, the traditional approach, for all its faults, seems destined to persevere.  Hence, I maintain that we need a new type of product company: one that actively takes responsibility for the outcome the product is supposed to deliver.  This may sound intuitive enough, but it actually represents a revolutionary departure from traditional companies that deliver technologies or capabilities at best.   This company must have both the skills to productize new technologies and the incentive to continue iterating until its products work out of the box (and working out of the box must sooner or later be seen as a firm requirement, not a nice-to-have).   The tricky part, however, is that this incentive must be created alongside the product itself.

Let’s first consider the incentives driving the traditional model.  Creeping development schedules and requirements reliably create larger and broader revenue opportunities.  This results in an obvious disincentive to stay on schedule and under budget, but it is a commonly accepted model that many large customers are well-prepared to accommodate.  Increased dependency on services creates more revenue opportunities, and has the additional benefit of locking in customers, who may not have the desire to administer their own technical systems (or even believe it to be possible).  Finally, a subtle but profound incentive is that companies are rewarded for convincing customers that each of their problems is a unique snowflake and requires an equally unique solution, built from the ground up, at bespoke speeds.  

The new model is a study in contrasts.  There is no incentive to delay, partly because the company does not rent labor by the hour, but largely because working out of the box is a major part of the value proposition.  The outcome-based model prizes efficiency for both customer and vendor, and as a result, the focus is on solving immediate problems with the subsequent goal of generalizing and productizing solutions to whole classes of problems.  However, the desired outcome must be achieved without compromise, so there is also a strong incentive to create a product that is adaptable to very specific requirements.  The overriding incentive, however, is to do something truly extraordinary, and realize substantially more value based on a substantially better outcome.   If you can save a customer $1 billion, it is not unreasonable to charge $250 million for the privilege.  By the same token, a solution that is duct-taped together over months and years and thrown over a wall, leaving the customer to struggle to realize a fraction of the savings themselves, should be rewarded proportionally.

This is the big bet driving the outcome concept: outsized rewards for outsized (and demonstrable) value.  Of course, it will not happen this way overnight.  No doubt the outcome model is much riskier business in the short term.  The company must assume the risk of development costs, since the new model requires them to create value first before they can charge for it.  Delivering a meaningful outcome also requires top engineering talent, which cannot simply be purchased, and certainly not as quickly as one would like.  Finally, lest we forget, the structural incentives to avoid change are alive and well.   In order to have any chance at all of making it, the entrepreneur must take on the lion’s share of the risk, betting that a fair price for a guaranteed outcome will be a lagging indicator in the best case scenario.  In the short term, true outcomes will almost inevitably be undervalued, but the reality is that the burden of proof is on the innovator.  (To be fair, there is risk for the customer as well, even if their initial monetary outlay is low.  Ideal outcomes are not usually achieved all at once – early on, they will require the customer to invest their time, provide access to their hardest problems, and be willing to iterate without real certainty that the new company will necessarily succeed where previous vendors fell short).

A business model based purely on delivering outcomes would amount to a seismic shift – a wholesale reinvention of what it means to be a product company.  So be it.  This is unquestionably a difficult road, but if the past has taught us anything, it is that the only way for entrepreneurs to achieve the changes they desire is to see them through from inspiration to outcome – as high-tech companies everywhere like to say, end-to-end.   More often than not, this means solving problems (and even conceptualizing them) vertically rather than horizontally.  A truly vertical approach represents a major distinction and radical departure from the norm, and will be covered in depth in the next installment.


]]>
Shyam Sankar
tag:shyamsankar.com,2013:Post/74496 2012-07-14T02:37:00Z 2013-10-08T15:37:34Z The Difference Between High Achievers and Leaders


Those selected for development have one universal trait in common: They are by definition high achievers. But there is a difference between those superstar achievers that can make the leap to CEO and those that will implode: To what degree do they feel invigorated by the success and talent of others, and to what degree does the success of others cause an involuntary pinch of insecurity about their own personal inadequacies? Only an individual who feels genuinely invigorated by the growth, development, and success of others can become an effective leader of an enterprise. And it remains the most common obstacle of success for those trying to make that leap.

from Narcissism: The Difference Between High Achievers and Leaders

]]>
Shyam Sankar
tag:shyamsankar.com,2013:Post/74497 2012-06-26T13:45:00Z 2013-10-08T15:37:34Z Jason Silva is AWE

]]>
Shyam Sankar
tag:shyamsankar.com,2013:Post/74498 2012-05-11T16:22:40Z 2022-01-18T06:04:00Z Know Thyself

Doing what you’re good at is an obvious prescription – perhaps a little too obvious for the aspiring entrepreneur who most likely takes pride in discovering the hidden dimensions of everything!  It can’t be that simple, can it? I would argue that it really is simple - just not easy.  Discovering what you’re good at takes time, intellectual honesty, and an even greater awareness of what you’re not so good at.  Usually, some failure is required.  It’s also not a binary system – not everything fits into neat categories of what you should and shouldn’t be doing, and as discussed, starting something substantive inevitably requires you to do a little bit of everything.  That said, ascending to the next level will require you to leverage your strengths as never before.  


You have certain obvious strengths you already know about.  Getting into (and through) college requires a broad range of competencies that we often take for granted, and it’s easy to confuse competencies with real strengths, especially if you have the kind of work ethic that can compensate for subtle weaknesses.  Even so, you probably have a good idea where your major strengths lie, and they can all serve as a clue as to what you should be doing with your life.  


Within these strengths there is something you are so good at that it seems effortless – and because it’s so intuitive, you may assume others can do it too.  This sometimes manifests itself when working in teams - when something comes so easily it can be hard to appreciate how it could be a real struggle for someone else.  As a result, you may even undervalue this strength, which would be a shame, because this is your superpower - the key to making your greatest impact.  The common factor among every great entrepreneur I know is having a superpower and knowing how to use it (despite often being below average in many other facets).  “Well-balanced” individuals, by contrast, tend to be a hit at management consulting firms and other places where job titles actually include the word “generalist”.


Then, there are the things you aren’t good at, but would really like to be.  The problem is that early on, you define yourself in part by being good at these things, and this can be really hard on the ego.  It is a massive distraction from your true strength.  But it is really important to emotionally and intellectually learn to let go here.  And context matters as well – you might be perfectly adequate at one of these aspirational strengths in most people’s eyes, yet exist in an arena where you need to be among the best.  I eventually had to accept that I would never be the greatest programmer (as much as I wish that weren’t true!).   I could have continued to program for a living, but not if I wanted to someday work with the top software developers in the world (as I now do).  I don’t regret having tried, but this was just part of my journey to truly understanding my strengths.


This is not to say that struggle isn’t valuable, but as with learning, people overestimate the value of struggle for its own sake.  It’s largely a matter of recognizing what’s worth struggling for and what isn’t.  Achieving your maximum impact isn’t just about identifying your talent and riding it to greatness.  Almost everyone has weaknesses that blunt the impact of their strengths, and while these weaknesses might never be banished, you can absolutely learn to control them.  


The final thing to remember is that this is largely a process of self-discovery.  Your mentors and colleagues can help you get there, and they will definitely have insights that only an outside vantage point can provide.  Ultimately, however, no one can do it for you – and that realization should excite and inspire you even more.



]]>
Shyam Sankar