
The expanding presence of artificial intelligence (AI) in modern society has permeated nearly every facet of human existence, including the corporate landscape. Examining the legal challenges of this new reality, a faculty member from De La Salle University’s Department of Commercial Law pushes for the creation of a general legal framework for AI in corporate governance.
The earliest concept of artificial intelligence traces back to groundwork laid by the British mathematician Alan Turing in the mid-20th century, whose formalization of the “algorithm” ultimately paved the way for modern computers. Today, Turing’s research and philosophy serve as the foundation of artificial intelligence.
“In our world right now, artificial intelligence has crept into a lot of the many aspects of our daily lives, one of which is its integration in the corporate setting,” says De La Salle University faculty member Atty. James Keith Heffron from the Department of Commercial Law. His curiosity about robots and AI has led him to pursue research about AI’s role in Philippine Corporate Governance.
In a paper he wrote, he noted that the efficacy of AI within a corporate framework was first observed in 2014, when a Hong Kong venture capital firm on the brink of bankruptcy commissioned a team of big data analysts to assist its board of directors in resolving the firm’s management issues. The team created an artificial intelligence system called VITAL (Validating Investment Tool for Advancing Life Sciences), which proved so successful that it was made part of the management team and appointed as a director on “observer status.”
Citing the success of VITAL, Heffron notes the advantages that could lead other corporate entities to adopt artificial intelligence. “The advantage of using AI really is all about performance, consistent quality, and productivity. Its decisions are also based on pure scientific methods that are devoid of any emotional bias and fallacious logic, which means there is less potential for human error and a higher cost-benefit ratio of efficiency.”
Despite these conveniences, however, Heffron also points out a number of disadvantages in using artificial intelligence for decision-making. “The main benefit of artificial intelligence is that its decisions are free from any biased human emotion, but there are some decisions that demand human empathy and a moral compass that it cannot provide.” He also points to AI’s susceptibility to malicious outside influence, such as data breaches and hacking. “At the end of the day, a robot is still a machine—a machine created by humans and therefore not immune from malicious tampering.”

Encompassing effort
Heffron also raises a fundamental concern about the liability issues a company may encounter in using artificial intelligence, which has no legal personality. When management decisions result in damages, losses, or even injuries to stakeholders, who, then, should be made liable?
In his paper, Heffron references Israeli Professor of Criminal Law Gabriel Hallevy on traditional forms of criminal punishment vis-à-vis artificial intelligence. “The positivist theory under criminal law dictates essentially that punishment should be rehabilitative—the idea is to correct the behavior of an erring individual until he is taken back by society again. The thing about robots or artificial intelligence is that you cannot have some kind of punishment that could rehabilitate it. Because how can you punish somebody who’s not self-aware?”
In response to the revolutionary impact of artificial intelligence on the corporate world, Heffron proposes a theoretical framework that is based on the “ASEAN Guide on AI Governance and Ethics,” a living document that provides a set of guidelines for governments and businesses in the use of artificial intelligence. The guidelines offer seven key principles, which are transparency and explainability, fairness and equity, security and safety, robustness and reliability, human centricity, privacy and data governance, and lastly, accountability and integrity.
Heffron proposes that there must be at least one human director who is well-versed in the AI system and tasked with explaining its processes and functions to all stakeholders, who in turn should exercise their own due diligence and research before affirming any decisions made using the AI system. Should they fail in this duty, the director and the approving stakeholders would be held accountable for any harmful decision executed by the AI system. He adds that the framework does not apply only to corporate governance. “The framework should be applicable to all. If you analyze it further, the basic principle or value here really is integrity and security or accountability, so it essentially is universal.”
Envisioning what lies ahead, Heffron expresses optimism as he refers to the law of accelerating returns and the exponential development of technology: “One hundred years ago, we really didn’t have any computers yet, but now we have already created machines that can learn for themselves and decide on their own. I also hope that articles such as mine could help contribute to that ongoing conversation, so that our policymakers could eventually develop a good legal and ethical framework to address this emerging technology.”
The post How can we prepare for AI in corporate governance? appeared first on De La Salle University.