Ethics and Tech Entrepreneurship: Why ‘Don’t Be Evil’ Isn’t Good Enough
For many years, ethics was a background question in tech, if it came up at all. Software developers and tech entrepreneurs focused on engineering and coding problems, building functional solutions to whatever the project specification demanded.
Solving the problem at hand was the goal. Concerns about the ethical implications of the solution were academic at best.
That landscape is changing, however. In the past few years, “tech coverage has grown more skeptical, investigative, and serious — a shift from treating Silicon Valley as a novelty to seeing it as the power center it has become,” Anna Wiener writes at The New Yorker.
Since Wiener wrote those words, two other important developments have altered the landscape further: a global pandemic that has forced millions of people to embrace remote work, and an antitrust lawsuit the U.S. government has filed against Google.
Amid all these shifts is an increasing demand for tech companies to consider the ethical implications of their offerings and to take responsibility for missteps.
Here’s the thing: Many tech companies have operated for years on ethical systems that aren’t sufficiently developed to address the myriad roles technology plays in our lives.
‘Don’t Be Evil’ ≠ ‘Do Good’
Central to the issue of ethics in tech entrepreneurship is the question “What makes software good?”
To a developer or a tech startup, good software is seamless, efficient and intuitive for users. Yet software that meets these conditions may be good in its creators’ eyes without doing good for its users, Wiener writes.
Individual software developers may see problems as they arise, but developers don’t have many sources of guidance outside the direction given by the tech companies they serve.
And the guidance given by tech companies can be questionable at best. For instance, Google’s now-famous ethical guideline, “Don’t Be Evil,” which the company enshrined in its 2004 IPO prospectus, ran into trouble almost from the start.
While the company clearly had high hopes for maintaining its ethical purity, “the problem was that purity requires a business model to support it and in 2000 the venture capitalists who had invested in Google pointed out to [Sergey Brin and Larry Page, Google’s founders] that they didn’t have one,” John Naughton writes in The Guardian.
Google’s response — selling advertising based on aggregated user data — raises ethics concerns of its own. It’s unclear whether targeted advertising actually helps customers or the companies that buy it, but it has spurred a race to acquire user data, which may then be used for nefarious purposes, David Dayen writes in The New Republic.
Ultimately, Google hasn’t even been able to cling to its goal of avoiding evil, says Ross LaJeunesse, the company’s former head of international relations. LaJeunesse left Google in 2019 after 11 years, citing concerns about the company’s willingness to cooperate with censorship demands in China and his suspicions that it was actively pursuing deals with the Saudi government.
Before leaving, LaJeunesse says, he pushed for a company-wide Human Rights Program that would give the company’s teams more ethical guidance than “don’t be evil.” Every attempt to launch the program, however, was met with resistance.
“I then realized that the company had never intended to incorporate human rights principles into its business and product decisions,” he writes. “Just when Google needed to double down on a commitment to human rights, it decided to instead chase bigger profits and an even higher stock price.”
Compliant ≠ Good, Either
It’s worth talking about the Google antitrust suit here.
The U.S. Department of Justice, in its suit, alleges that Google acts as “the gatekeeper of the Internet.”
“Google was accused in the long-expected lawsuit of harming competition in internet search and search advertising through distribution agreements – contracts in which Google pays other companies millions of dollars to prioritize its search engine in their products – and other restrictions that put its search tool front and center whenever consumers browsed the web,” Guardian reporter Kari Paul writes.
Whatever the outcome of that suit, it’s important to remember that even if the U.S. government compels Google or any other tech company into a less-monopolistic position, that doesn’t wipe the slate clean or hold that company accountable for its previous ethical breaches.
Attempts to Codify ‘Good’ in Software Development
Some efforts have been made to provide ethical guidance for developers within the professional field, rather than leaving developers to the whims of individual employers.
For instance, the Software Engineering Code of Ethics and Professional Practice, promulgated jointly by the IEEE Computer Society and the ACM as an international standard for ethical responsibility in software development, recognizes that “software engineers have significant opportunities to do good or cause harm, to enable others to do good or cause harm, or to influence others to do good or cause harm.”
The code attempts to give broad guidance across eight principles, though it acknowledges that it can provide that guidance only in the abstract.
To solve tech’s ethics problems, however, ethics boards and advisory documents won’t be enough, says Andrew Maynard, director of the Arizona State University Risk Innovation Lab: “Businesses also need to focus on outcomes, the often tortuous pathway between aspirational goals and what happens when the rubber hits the road.”
When businesses fail to consider outcomes broadly, nightmare scenarios can result.
A Superhuman Error
In 2019, startup Superhuman made headlines when tech executive Mike Davidson discovered a concerning feature in the company’s email software: the use of embedded pixels to track whether, when and where recipients opened their emails. The feature ran by default, and recipients of emails from a Superhuman inbox could not opt out of telegraphing this information to the email’s sender.
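To see how little machinery this kind of tracking requires, here is a minimal sketch of how a read-receipt pixel works. This is illustrative only, not Superhuman’s actual code; every name and URL in it is hypothetical. The sender embeds an invisible 1x1 image with a unique URL in the outgoing message, and the server logs every request for that image.

```python
# Minimal sketch of an email tracking pixel (hypothetical, illustrative only).
from datetime import datetime, timezone

from flask import Flask, Response, request

app = Flask(__name__)

# A 1x1 transparent GIF: the classic "tracking pixel" payload.
PIXEL = (
    b"GIF89a\x01\x00\x01\x00\x80\x00\x00"     # header + screen descriptor
    b"\x00\x00\x00\xff\xff\xff"               # two-color palette
    b"!\xf9\x04\x01\x00\x00\x00\x00"          # graphic control extension
    b",\x00\x00\x00\x00\x01\x00\x01\x00\x00"  # image descriptor
    b"\x02\x02D\x01\x00;"                     # image data + trailer
)

@app.route("/open/<message_id>.gif")
def log_open(message_id):
    # The recipient's mail client fetches this image automatically when the
    # message is rendered -- that request *is* the read receipt. The server
    # learns when the message was opened and the reader's IP address, which
    # can be coarsely geolocated. The recipient is never asked.
    opened_at = datetime.now(timezone.utc).isoformat()
    print(f"message {message_id} opened at {opened_at} from {request.remote_addr}")
    return Response(PIXEL, mimetype="image/gif")

if __name__ == "__main__":
    # The sender's HTML email would reference this server, e.g.:
    #   <img src="https://tracker.example.com/open/abc123.gif" width="1" height="1">
    app.run(port=8000)
```

A few dozen lines suffice. The hard part was never technical, which is exactly why the ethical checkpoint has to happen at design time.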
Davidson emphasized the privacy and security concerns this feature created: “Ask yourself if you expect this information to be collected on you and relayed back to your parent, your child, your spouse, a co-worker, a salesperson, an ex, a random stranger, or a stalker every time you read your email.”
How did this feature make it into the hands of consumers? In short: failure to consider the ethical implications of read receipts from multiple perspectives.
“I have come to understand that there are indeed nightmare scenarios involving location tracking,” says Rahul Vohra, founder and CEO of Superhuman. “… When we built Superhuman, we focused only on the needs of our customers. We did not consider potential bad actors.”
Little Decisions, Big Impacts
Even small decisions matter when it comes to building a company’s ethics, Davidson says. Those small decisions accrete over time, becoming habits that define the way a company does business.
Distributed to enough customers, those small decisions begin to change the way customers think, as well. Customers who see unethical or questionable features in software may assume that the feature is legal (or it wouldn’t be present). They may also fail to consider whether the feature’s uses are ethical, says Davidson.
Features that do an end-run around privacy or security concerns, for instance, may cause customers to skip the ethics question entirely if those features are presented as defaults. Customers don’t stop to ask, “Wait, should I use this?” when read receipts are on by default or when no opt-out is offered.
“Superhuman teaches its user to surveil by default,” Davidson writes. “I imagine many users sign up for this, see the feature, and say to themselves ‘Cool! Read receipts! I guess that’s one of the things my $30 a month buys me.’”
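In code, the difference between an opt-out and an opt-in design often comes down to a single default value. Here is a minimal sketch of the two choices, using hypothetical settings objects rather than any real product’s API:

```python
from dataclasses import dataclass

@dataclass
class TrackingPrefs:
    """Hypothetical opt-out design: tracking ships enabled."""
    # Because most users never change defaults, "on by default"
    # effectively means "on for everyone".
    read_receipts: bool = True

@dataclass
class ConsentfulTrackingPrefs:
    """Hypothetical opt-in design: the same feature, opposite default."""
    # The feature stays off until the sender deliberately enables it,
    # and recipients can refuse regardless of the sender's choice.
    read_receipts: bool = False
    honor_recipient_opt_out: bool = True
```

Nothing about the feature’s implementation changes between the two designs; what changes is who must act, and who bears the consequences of inaction.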
Building an Ethical Future for Tech Entrepreneurship
Corporate ethics matter to today’s job-seekers. Increasingly, they matter to customers, as well.
Tech startups and other businesses find themselves held to standards regarding fair dealing, sustainability, data security, privacy, diversity and inclusion, Larry Alton writes at Forbes. Tech companies can no longer let these issues slide, or hope that a pat answer like “Don’t be evil” will cover them.
“At first sight, it is easy to think that the technical part of development is not directly related to people’s lives,” Daniel Alcanja writes at Simple Programmer. “After all, it is the business practices that really affect users.”
Software developers are the people closest to software’s creation and functionality, so they know the software best. That makes them uniquely positioned to address key ethical concerns — which means they also have an ethical responsibility to do so, says Alcanja.
But what exactly does that ethical responsibility entail?
“The difference between what you, as a software developer, can do legally, a floor, and what you should do, a ceiling, is often vast, confusing, and not always intuitive,” Olga V. Mack writes at VentureBeat.
The problem is that the companies these developers work for can be the source of unethical demands rather than a source of guidance on the proper course of action. Further, upholding company ethics isn’t a burden individual coders can shoulder alone; it requires the buy-in and effort of people much higher up the chain of command.
There are people and organizations trying to push the tech sector in the right direction. For example, Ethics in Entrepreneurship, founded by Theranos whistleblower Erika Cheung, is working to build and nurture stronger ethical foundations across the tech sector.
Internally, managers and executives need their own sources of ethical guidance, as do the companies they work for — and the investors those companies are beholden to.
That’s a decades-long project. As a starting point, SAP’s Max Wessel and Nicole Helmer propose a three-point framework for companies to guide their innovations toward more ethical outcomes:
- Assume your innovation becomes the standard bearer. “Assume you become dominant,” Wessel and Helmer write. “Then ask what is most likely to break, what can be done to prevent breaks, and how to handle them when they occur.”
- Identify and document the safeguards present in the market your innovation disrupts, then imagine how the efficiencies your product has created could be used to circumvent those safeguards were your tech to fall into careless hands. As an example, think about Airbnb and its house-party problem.
- Clearly and explicitly define what people or teams are responsible for re-introducing those safeguards.
As Wessel and Helmer put it, this is a path toward both tech success and “[taking] responsibility for the future that will be created” when you achieve that success.