The Trust-Profit Paradox
Platform companies live and die by the trust of their users—but it seems you can't have your users' trust and generate shareholder value, too.
Sean flipped his iPad around to show me something. It was a picture of Pope Francis wearing a big puffy white coat with the undeniable aura of street-style luxury. My first thought was, "That's hilarious." My second thought was, "Huh, I thought Pope Francis was all about dialing back the papal finery."
The thought I never had was, "That's not real."
But it wasn't real. Pope Francis really is all about that vow of poverty, after all.
The Pope coat pic was an AI-generated image. It wasn't perfect—but it was good enough to fool people (like me) who didn't look closer.
The image came up in a recent conversation on Decoder between The Verge's Nilay Patel and Brian Chesky, CEO of Airbnb.
Chesky brings up the Pope coat pic in response to Patel asking about the potential "tsunami of garbage AI content" on user-generated content platforms like Airbnb. Patel then says that one of his favorite philosophical questions to ask interviewees is: "What is a photo?" He poses this question to Chesky, specifically asking about the line between a photo that's been run through filters to "enhance" it and a photo that's been created through a generative AI tool.
The Airbnb CEO is stumped.
It seems Chesky had never considered that someone might type "mid-century modern living room" into Midjourney and use the result to boost bookings of their Airbnb listing.
Chesky pivots:
There’s going to be really subjective things where even if we could know where the line is, how would we ever enforce it if we don’t go and physically inspect the property? And so, therefore, what we need to do is put it back on the community.
…
But you’re probably asking a question we haven’t reckoned with yet. Where exactly is that line between mildly embellishing to make something look great and actual misrepresentation?
I don’t know the answer to that.
He says that instead of trying to draw that line as a company, "the community" will make sure that kind of thing doesn't happen. He cites the high rate of reviews that listings receive after a stay.
In order for "the community" to stop misleading listings using AI-generated photos, "the community" will need to book and stay at the misleading listings. "The community" may arrive at their vacation destination and realize they've been had. They may arrive and realize their destination doesn't meet their needs or even jeopardizes their safety. In order for "the community" to moderate bad behavior, "the community" will suffer.
Chesky’s offhand suggestion that the only way Airbnb could ensure the authenticity of its listings would be to visit every property is presented as completely absurd. How could they ever manage those logistics? How could they ever pay for it? It just doesn't make sense.
But why?¹ Why shouldn't Airbnb ensure that someone who pays a few thousand dollars for a week-long stay actually gets what they paid for? How is that not the company's business? By the time someone leaves a negative review, the damage is done. Chesky would prefer not to take any responsibility for that damage. Imagine the logistics!
Profit seems to hinge on avoiding responsibility in the internet age.
Social media platforms aren't responsible for hate speech posted on their sites. Marketplaces aren't responsible for misleading listings. Gig work companies aren't responsible for meeting minimum wage requirements or fair labor standards. Influencers aren't responsible for the products they shill. Course creators aren't responsible for the results they promise.
The goal appears to be building a business that uses the law, market forces, behavioral psychology, and social capital to transfer the risk of use completely from company to consumer. It's understandable, really.
Responsibility is expensive. It requires labor. It takes time to build systems—both operational and moral. It necessitates saying no to people who might really want to give you money. Responsibility is political in a way that seems dangerous today.
But responsibility is the cost of capital when your capital is primarily trust. And if a company can't pay the cost of capital, then is it really working as a company?
Profit must cover the cost of trust.
In an essay originally published in the Wall Street Journal, management theorist Peter Drucker categorically declares that there is "no such thing as profit":
...businessmen [sic] owe it to themselves and owe it to society to hammer home that there is no such thing as profit. There are only costs: costs of doing business and costs of staying in business; costs of labor and raw materials, and costs of capital; costs of today's jobs and costs of tomorrow's jobs and tomorrow's pensions.
The cost architecture of a company like Airbnb is very different from that of the kinds of companies Drucker studied between the 1940s and 1975, when this essay was first published. So how could we think about the costs that Airbnb should be accounting for? Traditionally, "the factors of production" have been "labor, land (e.g., physical resources), and capital."
For a platform company like Airbnb, labor accounts for both the paid work that goes into building and maintaining software and operations and the unpaid work that goes into creating, advertising, and updating property listings. Airbnb, despite not paying for those listings, does incur costs on that unpaid work through community support and marketing.
Airbnb doesn't require land or factories to make its product. But it does require technological infrastructure. There are costs to hosting, networking, and securing the code the platform is built on.
Finally, there's the cost of capital. Airbnb is a publicly traded company, listed on the NASDAQ—which means all kinds of investors can contribute capital with the hope of a future return. But before it went public, it was a venture-backed startup soliciting financial capital to fuel growth. Some of that capital was used to secure labor and technological infrastructure. But the riskier investment—what made the cost of capital dramatically higher—was in trust. Only by generating trust in the Airbnb platform could the company generate the return required to create capital for investors.
Niko Matouschek, a professor at the Kellogg School of Management, put it this way:
If you look at the sharing economy, for instance—to a large extent, their success depends on their ability to create trust between third parties, trust between somebody who wants to rent out their apartment and trust between somebody who wants to rent that apartment.
Venture capital creates the runway required for a startup to build trust.
Matouschek goes so far as to say that trust is an "economic asset on which [firms] can earn a return." Trust as a form of capital, though, is easily destroyed.
Since a company's chief function in capitalism is to generate new capital, the function of Airbnb's paid and unpaid labor plus the technological infrastructure is to increase trust—which is converted to financial capital. Without trust, the company can’t generate new capital. It’s effectively worthless.
Enter the Trust-Profit Paradox
It would make sense for companies to focus on maintaining (and increasing) their supply of trust. But instead, we find companies routinely self-sabotaging their trust reserves for short-term gain. Examples abound: Facebook, Twitter, Uber, Etsy, etc.
Maintaining the trust of the people who make your business model work should be business management 101. In fact, both Peter Drucker and management thinker Roger Martin make exactly that case. Martin lays out the evidence in a lengthy article for the Harvard Business Review: optimizing a company for shareholder value delivers no greater returns to those shareholders than managing a company based on customer needs and trust. He writes:
Why is it that companies that don’t focus on maximizing shareholder value deliver such impressive returns? Because their CEOs are free to concentrate on building the real business, rather than on managing shareholder expectations.
Platform companies must maintain existing and generate new trust capital in order to maintain market share, aka relevance. But to ensure the supply of financial capital, platform companies must maximize profit to grow shareholder value. While trust capital and shareholder value should be positively correlated, they are, in fact, at odds in the current market environment.
I'm starting to think of this as the Trust-Profit Paradox.
Today's platform companies, like Airbnb, talk a good game about caring about their customers. But at the end of the day, their chief concern is shareholder value. Martin also makes it clear that you can't optimize for both customer satisfaction and shareholder value. Optimization, he says, is linear—one must follow the other in terms of priority.
Niko Matouschek again explains that it's rational to distrust:
...firms in which employees are being rewarded very strongly for short-term performance—for quarterly earnings or quarterly performance—because, again, decisions are then made by employees who care a lot about the present profits, and they care much less about future profits.
Because companies choose to optimize for shareholder value, which hinges on quarterly expectations, they will inevitably destroy trust capital over the long term. If companies were to optimize for trust capital instead, they would likely lose access to the financial capital they need to cover the cost of trust. Hence, the Trust-Profit Paradox.
Maintaining trust requires accountability.
Companies must take responsibility for the ways their platforms interact with the wider world in order to maintain or increase their supply of trust capital. Sure, Facebook couldn't have known in 2007 that, by 2016, bad actors would use its data to meddle in US politics. But it does have a responsibility to safeguard against that now, given what it knows (or could have easily surmised pre-2016) about how the proliferation of misinformation erodes the trust of a large segment of its (former) user base. I'm sure that building the operational and technological security to bolster trust in this environment is expensive—but that's the cost of doing business.
Profit, Drucker argues, should be an insurance premium against future risks—including the jobs that will need to be created, the technology that needs to be upgraded, and changes to the cost of capital. A company that views profit in this way would be more prepared for changing conditions that jeopardize trust. It would make sure that its people and technology could respond to new threats. Even if they didn't have the response teams waiting for action, they'd have a plan to execute that would bring in the people and tech required to maintain trust. That is what a responsible company would do.
But the incentives to act responsibly in the long term simply aren’t there.
Responsibility requires care and maintenance work (i.e., "operations" in business-speak).
And while I believe that Chesky cares whether his platform becomes overrun with AI-generated images, he doesn't seem to care enough to build the operational infrastructure to prevent that from happening. If he and the Airbnb team paused to consider how they could implement more responsible systems when it comes to AI, well, they'd need to pause. Pausing is antithetical to the "move fast and break things" ethos that lingers in Silicon Valley. Pausing jeopardizes profit and shareholder value.
What does it look like for an organization, entrepreneur, or creator to take trust and responsibility and build it into an operational strategy?
First, responsibility requires wrestling with the way things could go wrong (sometimes very wrong).
Any new project or idea will likely seem to be all upside. But no project or idea ever is. There are always opportunity costs and trade-offs, at the very least. And there are typically unintended consequences that only become visible when you look down the timeline. It might be all good at launch, for instance, but what about three years from now?
What was so troubling to me about Chesky's non-answer to the question of AI-generated images was that it indicated the Airbnb team wasn't wrestling with the way things could go wrong. Chesky had all sorts of ideas about how it could go right. He and his team imagine Airbnb as "one of the most personalized AI layers on the web." They're dreaming up ways that the AI-powered system will get to know guests so well that it serves up only the perfect properties for them. They see the potential for a multi-modal interface that interacts with guests in different ways depending on the task.
But they didn't see people gaming the system by creating listings with AI-generated images? They didn't wrestle with whether being able to tell that a listing is authentic is make-or-break for the platform? Chesky deflects:
...authenticity of information and verification of information is now a whole new problem that we have to re-solve on the internet. So I think that we need to develop new technology to reauthenticate photos, to reauthenticate people’s identity, reauthenticate people’s information.
The problem, of course, is that the image-generating tech is already here. It's being deployed. And I could be wrong, but I don't think the "we" Chesky uses here is Airbnb. I think that "we" is the collective "we" of market-driven techno-solutionists that "we" assume will save the world from the ravages of those who use technology for evil.
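To make that concrete: here's a minimal sketch, in Python with the Pillow imaging library, of the kind of first-pass screen a platform could run on listing photos. This is a hypothetical illustration, not Airbnb's actual pipeline, and its heuristic (the absence of camera EXIF metadata) is a weak signal at best, since AI generators rarely write camera metadata but legitimate editing tools often strip it.

```python
# Hypothetical first-pass screen for listing photos; a sketch, not
# Airbnb's actual pipeline. Assumes Pillow is installed (pip install Pillow).
from PIL import Image
from PIL.ExifTags import TAGS


def readable_exif(path: str) -> dict:
    """Return EXIF metadata keyed by human-readable tag names."""
    with Image.open(path) as img:
        exif = img.getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}


def needs_human_review(path: str) -> bool:
    """Flag photos that carry no camera provenance (Make/Model tags).

    A weak heuristic: AI generators rarely write camera EXIF,
    legitimate editors often strip it, and EXIF is trivially forged.
    It can route a photo to a human reviewer; it can't render a verdict.
    """
    meta = readable_exif(path)
    return not ({"Make", "Model"} & set(meta.keys()))


if __name__ == "__main__":
    # "listing_photo.jpg" is a placeholder filename for this sketch.
    print(needs_human_review("listing_photo.jpg"))
```

Notice how little this buys you. Because metadata can be stripped or forged, even a screen like this only gets you to "flag for human review," which implies provenance standards (like the emerging C2PA content credentials) plus actual review labor. That is to say: operational infrastructure that costs real money.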
Only by wrestling with the ways things can go wrong can we build systems and safeguards to (hopefully) avoid them. If we can't avoid unintended consequences, we need to ask ourselves whether the upside is worth the downside. And if it is, we should have plans in place to deal with things going wrong when they inevitably do.
Second, taking responsible action requires knowing what a company is and is not responsible for.
What a company takes responsibility for is a statement of its true values. It demonstrates trustworthiness, perhaps more than any other aspect of operations. More companies and entrepreneurs should take responsibility for the value they create and the way that value is experienced by customers. But that doesn't mean that companies and entrepreneurs are responsible for everything.
When I ran a social network for small business owners, we decided to take responsibility for the fact that you wouldn't know whether our community was right for you until you joined and gave it a try. We built an approach to welcoming new members, helping them figure out whether they were in the right place, and making it easy for them to quit (and get their money back) if it wasn't. We realized that we couldn't live up to our values if our business model was built on people remaining members simply because we didn't want to take responsibility for a key component of their experience.
But once a member had committed to the community and started to use the platform, we needed to let go of responsibility for how they used the resources we provided. One person might binge our entire library or hang out and post every day. Another person might only show up for live events. Those were both good ways to interact. As long as we offered encouragement and opportunities to realize the value they invested in, we had to let them decide the particular way they'd do that.
When Substack co-founder Chris Best rightly received pushback on his bad responses to questions (also from The Verge's Patel) about moderating hate speech on the platform, it was a signal that Best's interpretation of the company's values wasn't shared by many Substackers. They made clear that the platform was running the risk of losing their trust—an existential threat. And while the cleanup wasn't perfect, the company did make moves to demonstrate that it was listening and considering where its responsibility would start and stop.
The Trust-Profit Paradox creates an environment in which questions of responsibility are sometimes the last questions to get asked.
Consumers find themselves in a responsibility vacuum of management's making.
When everything can be reduced to dollars and cents, lines of code, or behavioral psychology, no one needs to think of the complex and unpredictable systems that create and maintain trust. We're only responsible for making the numbers work. Every user for themselves.
Modern political economy shifts the burden of responsibility away from the large public and private institutions best positioned to accept it. First, responsibility was privatized—turning a social concern into a matter for markets. Now, companies shift the burden again by making consumers and workers absorb the risk.
In the end, we're the ones left with the bill.
If you’re self-employed or a small business owner:
When have you decided to jeopardize profit in order to secure trust? When have you done the opposite?
How do you think about building and maintaining trust over the long term? In what ways do you invest in long-term trust?
If you’re traditionally or contractually employed:
What role does building trust play in operational decisions at work?
Have you ever been asked to compromise trust in order to secure profit?
¹ I’m under no illusions that visiting every property, given Airbnb’s business model, is in any way feasible. However, instead of questioning whether they could take on that responsibility, I think it’s worth asking whether a business model that avoids that responsibility should be able to succeed.