AI Is Testing the Limits of Corporate Governance
Author: Roberto Tallarita

Few doubt that artificial intelligence (AI) is going to be disruptive for society, and governments are beginning to devise regulatory strategies to control its social cost. In the meantime, however, AI is being developed by private firms, run by executives, supervised by boards of directors, and funded by investors. In other words, what is likely to prove the most important technological innovation of our lifetime is currently overseen by corporate governance—the set of rules, mostly of private creation, that allocate power and manage conflicts within a corporation.
The recent boardroom war at OpenAI, the company that developed ChatGPT, has put a spotlight on the role of corporate governance in AI safety. OpenAI Global LLC, the Delaware company in which Microsoft and other investors have invested billions of dollars, is controlled by a nonprofit, OpenAI. On November 17, 2023, the board of directors fired OpenAI cofounder and CEO Sam Altman, on the grounds that “he was not consistently candid … with the board.” Investors protested, but the board stood by its decision.
Shortly thereafter, on November 20, Microsoft announced that it had hired Altman and Greg Brockman, another cofounder of OpenAI, to continue their work on AI development within Microsoft. Hundreds of OpenAI employees threatened to join Microsoft as well. On November 22, less than a week after his ousting, Sam Altman was back as CEO of OpenAI, and all but one director of OpenAI resigned.
Microsoft could not fire the board and reinstate Altman as CEO, but it could hire Altman and hundreds of other employees. It could essentially “buy” OpenAI without paying a price to the company’s shareholders. The legal entity OpenAI was constrained by its governance structure, but the knowledge developed by the company (its main asset) could be acquired and redeployed free from these constraints.
Such events bring questions to the fore: Can AI safety research shed any light on old corporate governance problems? And can the law and economics of corporate governance help us frame the new problems of AI safety? I identify five lessons—and one dire warning—on the corporate governance of AI that the corporate turmoil at OpenAI has made vivid.
1. Companies cannot rely on traditional corporate governance to protect the social good
At least, this is what OpenAI and Anthropic—two of the most advanced players in AI development—believed when they were set up.
At OpenAI, unlike in a conventional corporation, investors cannot hire or fire board members, and neither the investors nor the CEO controls the board. The company charter warns investors that OpenAI’s mission is “to ensure that artificial general intelligence (AGI) … benefits all of humanity,” and that the company’s “primary fiduciary duty is to humanity.” In other words, that duty takes precedence over any obligation to generate a profit.
Anthropic is organized as a public benefit corporation (PBC), with the specific mission to “responsibly develop and maintain advanced AI for the long-term benefit of humanity.” In a powerful tweak to the standard PBC structure, a common law trust with the same social goal as the company is entitled to elect an increasing number of directors over time, a number that will become a majority once a set period has elapsed or certain fundraising milestones have been reached.
Both structures are highly unusual for cutting-edge tech companies. Their purpose is to isolate corporate governance from the pressure of profit maximization and to constrain the power of the CEO. If the company chooses safety over profits, investors and executives can protest but they cannot compel the board to make a different choice.
Compare and contrast this approach with the recent wave of support for stakeholder governance. In 2019 the Business Roundtable, a prominent association of CEOs of leading companies, issued a statement in which many of its members pledged to deliver value not only to shareholders but also to employees, customers, and society at large. Similar stakeholder governance manifestos—by the World Economic Forum, corporate governance experts, and major asset managers—insist on the need for corporations to consider social goals alongside profit maximization.
But as fellow law professor Lucian Bebchuk and I have documented, the companies that have embraced the rhetoric of stakeholder protection have not changed their governance. In fact, both the Business Roundtable and other vocal supporters of stakeholder governance have argued that there are no significant trade-offs between profit maximization and social purpose. Most of the stakeholder governance movement is therefore predicated on the ability of conventional corporate governance to pursue both profits and social goals.
The governance structures of OpenAI and Anthropic suggest otherwise. Whatever one may think of the decision to fire Altman or of the effectiveness of the governance structures of OpenAI or Anthropic, these governance experiments teach us an important lesson: If a company wants to get serious about social purpose and stakeholder welfare, it cannot rely on traditional corporate governance, but it must constrain the power of both investors and executives.
2. Even creative governance structures will struggle to tame the profit motive
In an influential paper, economists Oliver Hart and Luigi Zingales argued that, in an unrestricted market for corporate control, a profit-driven buyer can easily hijack the social mission of a firm. They called this phenomenon “amoral drift.” Hart and Zingales were writing about corporate takeovers, but something similar happened at OpenAI. Right or wrong, the board of OpenAI had strong nonfinancial reasons to fire Altman, but it eventually capitulated to profit-maximizing pressure.
But what if OpenAI could effectively stop Altman and other employees from working for a for-profit organization? A bit of backward induction can help us guess the answer. Would Microsoft have invested $13 billion in OpenAI if the company’s governance had been more effective at committing that capital to OpenAI’s social goals rather than to investor returns? More generally, would investors fund new AI startups if their socially oriented governance were more effective at taming the profit motive? And if investors could choose between investor-friendly AI companies and socially committed AI companies, which ones would they choose?
It is easy to imagine the answers to these questions. AI companies with more effective mechanisms for staying committed to AI safety, even at the expense of investor returns, might struggle to get funding. In equilibrium, this might mean that investor-friendly companies would win out and socially oriented companies would be wiped out. Perhaps it is possible to design watertight solutions to avoid the amoral drift. So far, however, no corporate planner has come up with one.
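To make the backward-induction logic concrete, here is a minimal sketch in Python. The two firm types, the payoff numbers, and the functions are hypothetical assumptions invented for illustration; nothing here models OpenAI, Microsoft, or any real investment. The only premise is that a binding safety commitment lowers the return investors expect.

```python
# Purely illustrative sketch with made-up payoffs; not a model of any real firm.
# Stage 2: a firm's governance determines whether it prioritizes safety
# (lower investor return) or profit when the two conflict.
# Stage 1: a profit-driven investor reasons backward from that later choice
# and funds whichever firm promises the higher anticipated return.

from dataclasses import dataclass

RETURN_IF_PROFIT_FIRST = 0.30  # hypothetical return when profit is prioritized
RETURN_IF_SAFETY_FIRST = 0.05  # hypothetical return when safety is prioritized

@dataclass
class Firm:
    name: str
    binding_safety_governance: bool  # can the board credibly resist profit pressure?

def stage2_return(firm: Firm) -> float:
    """Investor payoff implied by the firm's later safety-versus-profit choice."""
    return RETURN_IF_SAFETY_FIRST if firm.binding_safety_governance else RETURN_IF_PROFIT_FIRST

def stage1_funding_choice(firms: list[Firm]) -> Firm:
    """The investor anticipates stage 2 and funds the higher-return firm."""
    return max(firms, key=stage2_return)

firms = [
    Firm("investor-friendly AI company", binding_safety_governance=False),
    Firm("safety-committed AI company", binding_safety_governance=True),
]
print("Capital flows to:", stage1_funding_choice(firms).name)
```

Under these assumed payoffs, capital flows to the investor-friendly company, which is the equilibrium selection described above; different assumptions would, of course, yield a different outcome.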
3. Independence and social responsibility do not necessarily converge
An important concept in AI safety is the so-called orthogonality thesis, which holds that an AI’s level of intelligence and its final goals are independent of each other. We can have unintelligent machines that serve us well and superintelligent machines that harm us. Intelligence alone is no guarantee against harmful behavior.
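A toy sketch in Python may help visualize the orthogonality idea. The Agent class, the numbers, and the outcome function are invented for illustration and describe no real AI system; the only point is that capability and goal are set independently, and greater capability simply magnifies whatever goal the agent happens to have.

```python
# Toy illustration of the orthogonality thesis with made-up quantities.
# "Capability" and "goal" vary independently; higher capability just means
# the agent achieves more of whatever goal it has, helpful or harmful.

from dataclasses import dataclass

@dataclass
class Agent:
    capability: float              # how effectively the agent pursues its goal (arbitrary scale)
    goal_benefit_to_humans: float  # +1.0 for a helpful goal, -1.0 for a harmful one

def outcome_for_humans(agent: Agent) -> float:
    """Impact on humans scales with capability, in whatever direction the goal points."""
    return agent.capability * agent.goal_benefit_to_humans

weak_helpful = Agent(capability=1.0, goal_benefit_to_humans=+1.0)
strong_harmful = Agent(capability=100.0, goal_benefit_to_humans=-1.0)

print(outcome_for_humans(weak_helpful))    # small benefit
print(outcome_for_humans(strong_harmful))  # large harm: intelligence alone guarantees nothing
```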
Corporate governance experts should borrow this helpful concept. Textbook corporate governance prescribes that companies appoint independent directors, who are freer from the influence of CEOs and are supposed to be loyal to shareholders. But independence from management and loyalty to shareholders are orthogonal: The former does not necessarily result in the latter. An independent director might well choose not to pay attention, to pursue their own interests, or to follow personal convictions that are harmful to shareholders. We cannot presume that independent directors will automatically do the right thing.
By the same token, we cannot presume that insulation from investor and CEO pressure, of the kind sought by OpenAI and Anthropic, will automatically result in socially desirable decisions. Directors who cannot be fired by investors are less likely to follow investor preferences, but are they more likely to choose what is best for society?
Socially oriented governance structures should not be content with independence from executives and investors. They should also set up mechanisms that encourage directors to pursue social goals and hold them to account. Corporate planners should experiment with methods that allow outside scrutiny of board decisions, with incentives for socially oriented decision-making, and with creative forms of accountability for board members.
4. Corporate governance should try to solve for the alignment of profit and safety
One crucial problem in AI safety is the so-called “alignment problem”: Superintelligent AI might have values and goals that are incompatible with human well-being. This may sound like science fiction, but many AI researchers believe that human-level AI could arrive in the foreseeable future and that the alignment problem is real.
We can program a superintelligent AI to pursue socially desirable goals, but we cannot rule out that, in pursuing those terminal goals, the AI will adopt harmful instrumental goals. The problem is that we do not yet know how to teach an AI to behave in ways that are always compatible with human values. We can list dozens or hundreds of human-compatible behaviors, but the list will never be exhaustive.
The AI alignment problem is quite similar to the central problem of corporate governance. In a corporation, investors entrust their money to corporate managers, and they want to make sure that managers do what is best for investors. Investors can write down some rules, but just like AI programmers, they cannot specify all the possible rules applicable to all the possible situations. The contract with the managers is, as economists like to say, an incomplete contract.
Corporate governance tries hard to solve this problem. Companies give managers incentives, like stock options, that align their interests with the interests of investors. They appoint independent directors. They disclose material information so that investors can monitor how companies are run. They give investors voting rights and other control devices so that they can step in when necessary and remove unfaithful managers.
The whole machinery of corporate law and governance is preoccupied with what experts call the managerial “agency problem”: how to reduce the risk that managers deviate from the preferences of investors. It does not solve the problem entirely, but it considerably alleviates it.
OpenAI’s and Anthropic’s alternative governance structures try to shield AI safety from the profit-seeking drive of managers and investors. But, as we have seen, the profit motive is a powerful force that can find ways to upset governance designs.
An alternative route is to try to make AI safety profitable. The best hope for the private governance of AI safety (if such a thing is achievable at all) is to strike an alliance with the profit motive.
This is easier said than done. However, the history of liberal societies suggests that investing more talent and energy in this inquiry is worthwhile. Our most successful institutional designs, from liberal constitutions to capitalist institutions, do not depend on suppressing greed and ambition. Instead, they focus on harnessing these passions for the greater good.
The alignment of profit and safety is perhaps as hard a problem to solve as the alignment of AI and human values, but among the possible strategies for the corporate governance of AI, it has the largest potential upside. More creative experiments should focus on this project.
5. AI companies’ boards must maintain a delicate balance in cognitive distance
AI safety is a niche field. While many businesspeople are now learning about AI and some of its risks, the real experts are often outsiders with little or no experience in the corporate world.
More importantly, AI safety experts and mainstream businesspeople often have very different competences, backgrounds, and beliefs about how fast AI will develop and how dangerous it could be. What is a highly probable and imminent development to many AI safety experts, such as human-level or superintelligent AI, is a wild speculation to many outsiders; what is a small but concrete risk to many AI safety experts, such as an uncontrollable AI, is a nonsensical sci-fi fantasy to many outsiders.
This difference between how AI safety experts and outsiders interpret and understand the world is what some scholars have termed cognitive distance. Cognitive distance may be beneficial to collective decision-making, especially in innovative firms. Indeed, to develop novel knowledge, decision-makers must be exposed to new ideas and points of view.
But finding the optimal degree of cognitive distance is hard. Too little cognitive distance may result in groupthink and echo chambers; too much may prevent mutual understanding and any kind of meaningful cooperation.
Was the drastic and sudden decision to fire Altman, with little or no warning to major investors and no explanation to the public, the product of too little cognitive distance? OpenAI’s board members, beyond Altman and Brockman, were the company’s chief scientist, an academic AI safety expert, a RAND Corporation scientist focused on AI governance, and a tech CEO. It is possible that their beliefs on AI safety were strongly aligned, and that their decision-making process did not benefit from outside and discordant points of view. In all likelihood, they did not have to convince any outsiders that firing the CEO was the right thing to do.
But can AI companies’ social mission be effectively pursued if board members do not have a strong safety mindset or intuitively reject the bleakest scenarios? The makeup of OpenAI’s interim board is now more aligned with the business establishment. It has no AI “geeks,” and it includes former Treasury secretary and Harvard president Larry Summers and big-tech veteran Bret Taylor.
It is possible that, on the new board, the level of cognitive distance has remained unchanged, but the shared beliefs have simply become more mainstream. In other words, while the previous board might have been cohesive in sharing the AI safety experts’ beliefs on the risks of human-level AI, the new board might be equally cohesive in sharing a more conventional business view of the world. In both setups, the cognitive distance in the boardroom might be too little.
Corporate boards are complex social systems. The ideal decision-making dynamic in the boardroom should be one in which directors with different backgrounds, competences, and points of view discuss vigorously and intelligently, willing to contribute their insights but also to learn and change their minds when appropriate. Real-world boardrooms often fail to live up to this standard.
Considering the significant risks associated with AI safety and the substantial differences in viewpoints and expertise, board composition should become a top priority for AI companies. These companies should strive for greater cognitive distance than more conventional companies, and their boardroom norms should aggressively reward time commitment and robust, open-minded discussion. Though often underrated in discussions of corporate governance, boardroom social and cognitive dynamics are crucial. If there is any business sector where they should become a central concern, it is AI development.
A warning: Corporate governance cannot handle catastrophic risk
Many risks posed by AI are serious but not fatal. Job displacement, misinformation, the rise of online scams, copyright infringement, and privacy violations might prove seriously harmful, but they will not irreparably damage our civilization.
Many AI experts, however, believe that there is a small but nonnegligible chance that AI will be catastrophic for humanity. In a 2022 survey of AI experts, the median respondent put the probability that AI will lead to “something extremely bad, for example, human extinction” at 5%. Almost half of the respondents (48%) gave at least a 10% chance of a disastrous outcome.
While corporate governance might help mitigate serious risks, it is not good at handling existential risk, even when corporate decision-makers have the strongest commitment to the common good. To understand why, we should go back to the problem of incomplete contracts.
An incomplete contract is a contract that does not contain rules for all possible future scenarios. All real-world contracts are incomplete, and firms often accept this problem as an inevitable cost of doing business.
When the costs of incompleteness are too high, however, firms can choose another strategy: They can integrate their contractual counterparty within their own organization. This way, the firm will retain the “residual rights of control” over the relevant assets and can therefore regulate unexpected situations if and when they occur.
Consider a contract between a carmaker and a supplier of auto parts. The contract will specify the supplier’s obligations under many, but not all, circumstances. What happens in a circumstance the contract does not regulate? The supplier is free to refuse an order, and the carmaker might not get the auto parts it wants.
To avoid this problem, the carmaker can acquire the supplier and integrate it within the company. This means that even in a circumstance the contract does not regulate, the carmaker can still get the auto parts if it wants to.
Now translate this problem to AI safety. In this setting, the residual rights of control amount to the AI company’s ability to turn off the machine. In any unexpected circumstance in which the AI is behaving in harmful ways, the AI company can decide that the risks outweigh the benefits and pull the plug. A corporate governance system geared toward AI safety can do precisely that.
But what happens if the AI becomes uncontrollable? In that scenario, the residual rights of control are of little help. As anyone who has watched a few sci-fi movies knows, the “owner” of the rogue AI cannot turn off the machine that easily.
When it comes to catastrophic risks, our legal system typically gives up on ordinary legal controls—such as property rights, contracts, or lawsuits—and focuses on extraordinary legal controls of the kind used to regulate nuclear proliferation or biohazards. The pursuit of AI safety warrants this kind of extraordinary effort.
Top AI experts and commentators have already invoked a Manhattan Project for AI, in which the U.S. government would mobilize thousands of scientists and private actors, fund research that would be uneconomic for business firms, and make safety an absolute priority. Even the most creative corporate governance innovations cannot be a long-term substitute for the public governance of catastrophic risks. While good corporate governance can help in the transitional phase, the government should quickly recognize its inevitable role in AI safety and step up to the historic task.
TAKEAWAYS
Can questions around AI safety and governance shed any light on old corporate governance problems? And can the law and economics of corporate governance help us frame the new problems of AI safety? The events surrounding the boardroom turmoil at OpenAI reveal five lessons—and one warning—on the corporate governance of socially sensitive technologies:
✓ Companies cannot rely on traditional corporate governance to protect the social good.
✓ Even creative governance structures will struggle to tame the profit motive.
✓ Independence and social responsibility do not necessarily converge.
✓ Corporate governance should try to solve for the alignment of profit and safety.
✓ AI companies’ boards must maintain a delicate balance in cognitive distance.
✓ Still, the emergence of AI carries unlikely but nonnegligible extreme risks for humanity; even the most creative corporate governance innovations cannot be a long-term substitute for the public governance of catastrophic risks.