Mindful Vendor Assessment – Counteracting Cognitive Bias When Choosing Third-Party Solutions


As software engineers, we are tasked daily with choosing third-party libraries, tools, and services to integrate into our company’s products. Unless we make a special effort, those choices are consistently influenced by various subconscious biases. Some stem from the society we live in and its cultural norms. Others depend on personal experience and on how effectively social media and advertisers target us.

I’ll identify several biases that influence us to make less-than-rational choices when we aren’t mindful of them:

  • We are often prejudiced, either for or against third-party solutions. 
  • Our consumer mentality affects the way we trust.
  • We assume a level of governance by association.
  • We equate popularity with quality.
  • We associate cost with quality.
  • We attribute our motivations to our service providers.

Mindful Vendor Assessment

Imagine we have a box of playing cards in front of us. How many kings are in the box?

Would we be willing to bet money on our answer? $10, $100, $1,000,000? Does it matter if we know the person betting against us or who provided the deck of cards? Does it look like a new deck or maybe it’s an old deck, missing cards? Could cards have been added, replaced, or is it a sealed deck? Could the deck have been opened and resealed without us noticing? Does every deck leave the factory perfect or are there brand new decks with mistakes out there?

Initially, we assume the most common scenario, the perfect deck of cards. Only once some probing questions prime us to think more critically, do we consider what mistakes we might have made in that assumption. There are only so many factors to consider about a deck of cards but software vendor risk, on the other hand, is much more complex, even emergent.

Research shows that complex situations “create an environment where employees are more susceptible to creative interpretation, social pressure, and incentives.” In other words, we are more likely to let our many cognitive biases influence us toward irrational choices.

Consciously adopting a critical mindset allows us to become cognizant of our biases and evaluate risks objectively rather than by instinct. This leads not only to more rational decisions but also to a decision-making process that is easier to share, justify, and review.

Specifically, when identifying the risks involved with choosing a third-party provider, there are several prevalent biases to confront:

Not Invented Here Syndrome

Engineers cannot discuss the question of buy vs build without bringing up the Not Invented Here Syndrome. Theoretically, NIH syndrome describes the reluctance to use third-party software for no other reason than it is third-party software, but that is a practical impossibility. There are myriad differences between any third-party product and the alternative we would develop on our own. Depending on the specific issues triggering our reluctance, we can define several phenotypes of NIH:

Shoemaker’s NIH
The adage says “The shoemaker’s son goes barefoot.” For many people, this is describing a work-life balance problem. Creators and engineers, however, know that this is a case of NIH. We know we can do a better job than any off-the-shelf solution so we refuse to compromise.
FUD NIH
Fear, uncertainty, and doubt are often at the heart of NIH. We can’t know on paper, and many times even in production, how exactly a third-party product will behave, be operated, or be secured, so we hesitate to relinquish control.
Job Security NIH
One of the, possibly unfortunate, consequences of companies implementing Forter’s solutions, is the replacement of human resources with our automated services. Situations like these are not uncommon and can often bias engineers and other employees against solutions that jeopardize their job security.
Neo-NIH aka Proudly Found Elsewhere (PFE)
Traditional forms of NIH, like those above, seem much less common than they used to be. Developers with less than 14 years of experience these days have never worked in a world without GitHub and XaaS. Possibly as a result, they tend to the opposite extreme, assigning near-blind preference to the integration of third-party services and code over new development.

The reality of build vs buy is that there are pros and cons either way. We should neither prefer nor avoid a solution purely based on its provenance. Instead, we need to weigh the risks against the benefits in each case.

Acknowledge our consumer mentality

Our society and economy depend on specialists from different fields trading on the relative value of their expertise. The vast majority of products and services we consume, e.g. utilities, mass transportation, information, tools, staple ingredients, etc., are completely beyond our individual capabilities to produce. 

The result is a society where, each day, we make consumption choices orders of magnitude more often than production choices. Our culture is predisposed, if not politely held hostage, to trusting professionals and merchants even when that trust is pretty arbitrary. We teach our children not to take candy from a stranger, but we’ll likely be fine taking one from an unattended bowl left in a lobby, and we’ll certainly buy one in a store with no knowledge or guarantee of how it got there.

It wouldn’t hurt to be a little more careful about our personal consumption habits but, as those responsible for the quality of the products we provide our consumers, we must take a more critical approach to our professional decisions.

Be conscious of governance, or lack thereof, in trusting goods and services

Our faith in providers and our cultural norms are not entirely misplaced. Most products we purchase have been around in some form or another for centuries and, over that time, many forms of governance and regulation have evolved to protect us as consumers. 

Professional licensing boards govern the training, testing, and review of practitioners. Food and drugs are regulated at every step of their supply chains, from harvesting, manufacturing, packaging, and delivering, to selling and even returning expired goods. Health codes cover restaurants. Building codes cover construction. All these types of governance provide a safety net that we largely take for granted. The comparatively new realms of software and software services have no such safety net.

Experience shows that even within regulated industries, software is still the wild west. One needs a license to practice medicine or fly a plane, but not to write the software which treats cancer, or keeps a plane in the air. Using third-party services or software in unregulated, data-rich, industries like eCommerce, social media, gaming, and advertising is even riskier.

Until society’s governance of software matures, adopting a critical mindset will:

  • Help us avoid assumptions about the professionalism or quality of the code and services provided to us
  • Help us analyze the risks objectively and implement our own, independent controls, to mitigate risks to the confidentiality, integrity, and availability of our business.

Don’t mistake popularity for quality

Because of something known in social psychology as conformity bias, we tend to attribute quality to what we consider popular. A type of informational conformity, popularity alleviates the discomfort and uncertainty we feel from not having a strong opinion of our own. As such, the fewer objective facts we have when making a decision, the more likely we are to rely on weak social proofs like social media, hype, and buzzwords.

A classic example of this is the “customers” page, a wall of logos designed to convince us that we’ll be in the good company of giants. Marketing departments know that this information will play on our conformity bias, priming our subconscious to feel good about their product. It’s up to us to focus on objective facts. If we want to learn something meaningful from a provider’s customers we need to:

  • Speak directly to the customers involved
  • Identify how closely our use-cases match
  • Understand what else they evaluated and what factors contributed to their decisions
  • Ask about any difficulties they had or risks they mitigated

On a side note, popularity also invites more attacks. Morgan Stanley, Flagstar Bank, multiple health insurance companies, and various universities all relied on Accellion File Transfer Appliances. Once attackers found a vulnerability in these appliances, specifically designed to securely share documents with customers, they gained access to hundreds of these “treasure troves” and millions of users’ personal information.

Extra caution is required when considering the popularity of products that don’t have clear costs acting as a gatekeeper. The temptation and risks of downloading a popular utility or project to a work laptop without a proper risk assessment are real. 

Services with free tiers and free/open-source software are more easily adopted by other projects and organizations, especially when the adopting projects themselves have fewer security or reliability requirements. 

Popularity counters like GitHub stars often signify no more than friendship with a developer, a passing appreciation for an idea, or a reminder to look into a project later. Developers actively campaign for stars because GitHub rewards starred repositories with various forms of publicity. Some fraudulently star repositories for personal gain, while others do it maliciously.

Studies show that package downloads are a similarly poor metric for security. Recent events were a no less brutal teacher: a disgruntled developer sabotaged his very popular (over 20M weekly downloads) library and, with it, no less than 18,925 public packages that depended on it.

Amongst the dependent projects were Karma, a common test runner originally developed by Google’s AngularJS team, and AWS-CDK, AWS’ official cloud development kit, each with millions of weekly downloads themselves. Both had chosen to trust the colors npm package despite it being someone’s pet project. If GitHub and npm had not taken controversial steps to revert the developer’s changes, the recovery could have been much more painful for everyone.

It’s hard to estimate how much damage was done in wasted compute cycles and development hours to handle the backlash from this incident. That said, with this rogue code running unexpectedly in countless CI/CD pipelines and infrastructure deployments, the ramifications could have been much worse.

In short, place very limited value on popularity metrics or hearsay when making decisions about the reliability of a product. Beware of letting conformity bias and peer pressure blind us to red flags:

  • Poor stewardship (personal repository, history of bad temperament, lack of LTS commitment)
  • Poor branch protections, self-approved merges, or lack of required code review
  • Insecure CI/CD
  • Poor test coverage
  • Permissive dependency management
  • Poor dependency choices
  • Large footprint to utility ratio
  • Unresponsive to, or overwhelmed by, issues 
  • Unpatched security vulnerabilities
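To make the red flags above concrete, here is a minimal Python sketch of screening a dependency’s repository metadata mechanically. The field names, thresholds, and the example candidate are illustrative assumptions, not a real API response shape or an agreed standard; a real assessment would pull this data from the hosting platform and tune the thresholds to your risk appetite.

```python
# Hypothetical sketch: screen repo metadata for some of the red flags above.
# All field names and thresholds are illustrative assumptions.

def red_flags(repo: dict) -> list[str]:
    """Return the red flags found in a repo-metadata dict."""
    flags = []
    if repo.get("owner_type") == "personal":
        flags.append("poor stewardship: personal repository")
    if not repo.get("branch_protection", False):
        flags.append("poor branch protections")
    if repo.get("test_coverage", 0.0) < 0.5:  # arbitrary coverage floor
        flags.append("poor test coverage")
    if repo.get("open_security_advisories", 0) > 0:
        flags.append("unpatched security vulnerabilities")
    # Far more issues opened than resolved suggests an overwhelmed maintainer
    if repo.get("open_issues", 0) > 10 * max(repo.get("closed_issues_last_year", 0), 1):
        flags.append("unresponsive to, or overwhelmed by, issues")
    return flags

# Imaginary candidate: a well-tested but personal, unprotected repository
candidate = {
    "owner_type": "personal",
    "branch_protection": False,
    "test_coverage": 0.8,
    "open_security_advisories": 0,
    "open_issues": 5,
    "closed_issues_last_year": 40,
}
for flag in red_flags(candidate):
    print("RED FLAG:", flag)
```

The point of the sketch is not the specific thresholds but that each red flag becomes an explicit, reviewable check rather than a gut feeling.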

The relationship between cost and quality, or lack thereof

We all grow up hearing “You get what you pay for” and “You don’t get something for nothing.” In the light of our consumer mentality, this makes perfect sense. We wouldn’t easily trade valuable skills or resources for free and conversely, when we pay for something with our hard-earned money, we expect a commensurate return. Unconsciously, these feelings bias us towards placing more value on more expensive products.

In actuality, researchers have long investigated the complicated relationship between cost and quality and empirical studies have shown only a weak correlation between the two. The relationship is incredibly sensitive to variables such as competition, advertising, inflation, and the amount of information available to consumers. Realistically, cost is a poor signal of quality.

When we choose to be mindful of this, it’s obvious that marketers use this bias against us constantly. Not only do we expect to be charged more for a brand-name product, but finding such a product for less than we expect raises suspicions of counterfeiting. For another example, consider how retailers increase the perceived value of a purchase by raising the non-sale price of a product. Raising prices effectively becomes essential to building a valued brand.

Software’s intangible nature makes pricing an even less reliable quality indicator. Unlike tangible products, raw material availability and caliber play no role in pricing. Once developed, software costs nothing to reproduce or distribute, and once stable, requires only minor updates in response to external changes. Because development is essentially a one-time investment, vendors often price at a loss, focusing on frictionless onboarding to gain adoption, and vendor lock-in or inertia to upsell and recoup their costs in the future.

In summary, price is a generally poor indicator of quality and, in the software industry, all the more so. When comparing options, keep this in mind and avoid letting assumed value influence our choices.

Examine each provider and their motivations

One of the main rationalizations for outsourcing components is the inaccurate assertion that a company specializing in a product will certainly build a more reliable, scalable, and secure version since it is their core business. While this sounds like infallible logic, the truth is much more nuanced.

Businesses optimize for return on investment, for the solution customers are willing to pay for. Young products, especially, can’t afford to prioritize non-functional requirements unless they are blocking the business. Depending on their motivations, companies may even sell you a service one day and shut it down a couple of months later, for example: 

  • Google’s famously large list of dead services and products
  • Joyent, HP, and Verizon closed their public clouds with only 5, 3, and 2 months’ notice respectively

To determine if a commercial solution’s features come with an illusion of quality or true quality, it’s imperative to understand the motivations driving the provider.

  • Do they provide services as-is, without warranty, or with other liability-limiting disclaimers? When push comes to shove, these indicate the provider’s bottom line.
  • Do they have the manpower and resources to provide a quality solution?
  • Are performance, availability, and/or security as critical for most of their users as they are for us?
  • Have they committed to similar organizational certifications, standards, and controls as we have?
  • Will they commit to meaningful SLAs, long-term support, and penalties in case they breach those commitments?
  • Are they transparent enough for us to hold them to their commitments? If we can’t prove they’ve broken a commitment, the commitment itself is meaningless. 
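The questions above can feed a simple weighted scorecard, which turns the evaluation into something explicit, shareable, and reviewable rather than a gut call. A minimal Python sketch follows; the criteria names, weights, and the imaginary vendor’s answers are all hypothetical assumptions for illustration.

```python
# Hypothetical sketch: a weighted scorecard over the provider questions above.
# Criteria, weights, and answers are illustrative assumptions.

CRITERIA = {  # weight: how much each answer matters to our business
    "warranty_and_liability": 3,
    "staffing_and_resources": 2,
    "shared_criticality": 3,
    "certifications_and_controls": 2,
    "meaningful_slas": 3,
    "transparency": 3,
}

def score_vendor(answers: dict) -> float:
    """Weighted average of 0-5 answers, normalized to the 0-1 range."""
    total_weight = sum(CRITERIA.values())
    earned = sum(CRITERIA[c] * answers.get(c, 0) for c in CRITERIA)
    return earned / (5 * total_weight)

vendor_a = {  # an imaginary vendor, scored 0 (worst) to 5 (best)
    "warranty_and_liability": 2,   # broad as-is disclaimers
    "staffing_and_resources": 4,
    "shared_criticality": 5,       # their flagship customers need uptime too
    "certifications_and_controls": 3,
    "meaningful_slas": 1,          # best-effort only, no penalties
    "transparency": 2,
}
print(f"vendor A: {score_vendor(vendor_a):.2f}")  # 0-1, higher is better
```

The weights force us to state, up front, which commitments matter most, and the per-criterion comments leave a record a reviewer can challenge later.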

Open-source components have different tradeoffs than commercial ones, but the motivations behind them are no less important. Commercial backers might expect open source adoption to bring paid customers for important features or support. Alternatively, they might hope for additional data from users or external development resources. Many developers expect projects to attract consulting contracts or employers. Projects backed by foundations might be safer than those maintained purely out of benevolence, but without contracts there are no actual obligations, and active development could end, disappear, or be sabotaged at any time.


Choosing to depend on third-party software is a larger decision than we realize. Societally, we’re conditioned to trust providers but, pragmatically, the software industry is relatively young, lacks the protections we usually take for granted, and therefore requires more caution.

Even while trying to be cautious and attentive to the risks, we tend to allow unreliable indicators of quality like popularity and other social proofs to influence our decisions. Acknowledging this, we can consciously shift our focus to objective metrics and investigation upon which to base our decisions.

Lastly, the adage says, if you want something done right, do it yourself. The deeper meaning is that others have different considerations, driving them to make different decisions. Understanding each provider’s motivations helps us tell a win-win partnership from a ticking time bomb. 

Additional Resources

Cognitive Bias

Software Engineering Risk in Regulated Industries

Software Engineering Risk in Unregulated Industries

Lack of Correlation between Popularity and Quality

Relationship Between Price and Quality

Conflicting Motivations of Customers and Providers