
Exploring the Role of AI as an Independent Director: Legal Implications and Foreseeable Consequences

[Ananya Tripathi & Atharva Shukla are 4th year law students at Maharashtra National Law University, Nagpur]


Introduction


Artificial Intelligence (‘AI’) has made a remarkable presence in every sphere of human life, and the corporate world is no exception: businesses are finding numerous ways to reap its benefits, and the introduction of AI into the corporate boardroom has become a worldwide phenomenon. For instance, corporations outside India have developed AI models such as ‘VITAL’ in Hong Kong and ‘Aiden Insight’ in the UAE, which have been appointed to their boardrooms. Since India has not yet explored this possibility, this article examines the scope for integrating AI into the boardroom, specifically in the role of an ‘Independent Director’ (‘ID’), in light of the Companies Act, 2013 (‘Act’). It considers possible impediments, such as a legal framework that recognizes only a natural person as a director, and other unresolved concerns relating to attribution of liability, data confidentiality, AI hallucination, etc. The purpose is to evaluate the viability of this proposition in the Indian context and to offer workable solutions to traverse the highlighted impediments.


AI in Shoes of an Independent Director: Potential Benefits


As per the Companies Act, 2013, a company’s management is vested mainly in its Board of Directors (“BOD”), comprising both executive and non-executive directors. Good corporate governance ensues when the rights of every stakeholder are considered in the decision-making process. To safeguard these rights, the Act provides for the position of an Independent Director, who should not have any direct or indirect relations or pecuniary ties with the company and must possess the requisite skills in relation to its business.


The position of an ID has traditionally been associated with natural persons, but today it appears plausible to envisage AI models in this role. Supporting this shift are the AI models ‘Vital’ by Deep Knowledge Ventures (Hong Kong) and ‘Aiden Insight’ by International Holding Company (UAE), both serving as observer members on the BODs of their respective companies. These are augmented intelligence models, since they are not fully independent and require active collaboration with humans. Technological advancements have also led to the development of autonomous AI models that function without human agency, though these are yet to be introduced in the corporate realm. Such a model has the capability to efficiently discharge the duties of an ID, such as ensuring compliance with corporate governance norms, protecting the interests of all stakeholders, and detecting fraud and financial irregularities.


The appointment of AI in Indian corporate boardrooms holds relevance because there have been practical instances wherein IDs could not truly perform their expected functions, thereby defeating the intent of the law. A pertinent example is the dispute between the Tata Group and Cyrus Mistry, wherein Mr. Nusli Wadia was sought to be removed by the Tata Group because of his support for Mistry. The removal of Mr. Wadia influenced and deterred other IDs who were supposed to act in accordance with their fiduciary duties and uphold the best interests of the company. This raises questions over the true ‘independence’ of IDs, who have onerous responsibilities cast on them and are expected to exercise independent judgment, but are swayed by external pressure.


Unlike a human ID, an AI model remains immune to external pressure or perks, as it works in accordance with a fixed algorithmic framework. For instance, ‘Vital’, appointed as an observer member on the BOD of Deep Knowledge Ventures, has been credited with contributing to board-level decision-making with enhanced objectivity and transparency. An extensive analysis of quantifiable survival probabilities of portfolio companies, made possible by Vital’s algorithmic framework, aided the group in its investment decisions and shielded the decision-making process from unwanted external influences. This example is crucial for understanding the potential of AI to bring an uncompromised, data-driven perspective into a corporation’s governance and working. In this context, it can be said that AI is better suited to perform the role of an ID in both letter and spirit, unlike natural persons who, at times, are prone to conflicts of interest. By incorporating the company’s goals and legal standards into its algorithmic framework, AI can minimize deviations from the rules and regulations and help reduce the imposition of penalties.


Major Hurdles in AI’s Appointment as an Independent Director


The appointment of AI as an ID can be a transformative step in the corporate sector. However, this has not yet become a reality due to limitations in the present legal framework, which has been specifically tailored for natural persons. A perusal of the relevant provisions substantiates this point: for example, the terms ‘individual’ and ‘person’ used in Section 149 of the Act to refer to directors have been interpreted in Tristar Consultants v. V Customer Services India (P) Ltd as applying exclusively to natural persons, thereby excluding AI from their ambit. Moreover, the grounds for disqualification of directors under Section 164 of the Act, such as insolvency, unsound mind, criminal conviction, etc., are inapplicable to an AI model. A major hurdle in reforming this framework to accommodate AI will be the attribution of liability. Since AI has no legal personality of its own, a question arises as to who should be held accountable when any damage or harm is caused by its usage.


There are also other significant challenges associated with the integration of AI into corporate boardrooms. One of these is the issue of confidentiality: because AI functions on algorithms programmed by external experts, confidential data is exposed to third parties, giving rise to privacy-related concerns. AI hallucination may be another roadblock, as AI can generate inaccurate or nonsensical output (hallucinated responses) due to insufficient or biased programming. Furthermore, the inability of AI, at times, to give a rationale for its outcomes, known as the ‘black box phenomenon’, raises questions about transparency in its working as an independent director. Lastly, the lack of emotional intelligence and prior subjective experience, owing to AI’s functioning on pre-incorporated algorithms, can hinder decision-making, because emotional intelligence is at times crucial for holistic and equitable decisions.


Traversing the Hurdles with Plausible Measures 


While there are several hurdles associated with incorporating AI as an ID, some plausible measures can help in effectively mitigating them. One of the most pressing concerns, as highlighted above, is the attribution of liability, which can nevertheless be imposed if one understands the nature and origin of the harm that AI can cause. Since AI is incapable of possessing ill intention, any harm is likely the outcome of a defect, a malfunction, or third-party interference with the software, whether intentional or unintentional.


When the harm is a consequence of a programming-related error caused intentionally by the programmer or a third party, liability can be attributed to them. Moreover, when an ‘Expert Committee’ or an ‘AI agent’ tasked with the periodic review of the AI model’s working mechanism is negligent, it can be held liable. Further, in cases of unintentional and unanticipated harm, the company can be held strictly liable, since the appointment of AI as an ID implies acknowledgment of the risks associated with its usage.


Additionally, concerns about the susceptibility of confidential information can be quelled by developing in-house infrastructure within the company, so that data need not be outsourced to third parties for programming. To tackle the ‘black box phenomenon’, Explainable AI (XAI) techniques can be used: a set of processes and methodologies that enable an AI system to disclose the rationale underlying its decisions or outputs, demystifying its inner workings and elucidating the path it takes to arrive at those decisions. The effects of AI hallucination can be mitigated by using high-quality training datasets and retrieval-augmented generation (RAG), under which an AI model first retrieves relevant information from external sources or databases and then uses that information to generate accurate and context-aware responses. Further, AI models should be programmed to flag uncertainty when unsure about the veracity of their output. Given that these AI models are intended to work synergistically with humans, implementing these measures, along with periodic expert inspections, will ensure accurate and reliable outputs.
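For readers unfamiliar with the mechanics, the retrieve-then-generate flow and the uncertainty flag described above can be illustrated with a minimal, purely illustrative Python sketch. The toy document store, the keyword-overlap scoring, and the function names here are assumptions made for demonstration only, not any vendor’s actual implementation:

```python
# Toy sketch of retrieval-augmented generation (RAG):
# 1) retrieve the most relevant passage from a trusted document store,
# 2) ground the response in that passage,
# 3) flag uncertainty when no supporting source is found, instead of hallucinating.

def retrieve(query, documents, top_k=1):
    """Rank documents by naive keyword overlap with the query (a stand-in
    for the semantic search a real RAG system would use)."""
    q_terms = set(query.lower().split())
    scored = [(len(q_terms & set(doc.lower().split())), doc) for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def answer(query, documents):
    sources = retrieve(query, documents)
    if not sources:
        # No relevant source: the model flags uncertainty rather than guessing.
        return "Uncertain: no supporting source found."
    # A real system would pass the retrieved text to a language model;
    # here we simply echo the grounding passage.
    return f"Based on: {sources[0]}"

board_minutes = [
    "Related-party transactions require independent director approval.",
    "Quarterly audit reports are reviewed by the audit committee.",
]
print(answer("Who must approve related-party transactions?", board_minutes))
print(answer("dividend policy", board_minutes))
```

The point of the sketch is the control flow, not the scoring: answers are generated only from retrieved material, and a query with no supporting source yields an explicit uncertainty flag rather than a fabricated response.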


Moreover, as AI works on pre-incorporated algorithms, it is unrealistic to expect it to anticipate every conceivable circumstance or eventuality. This limitation, however, applies equally to humans, who, despite their experience and intelligence, cannot foresee all possible contingencies. Therefore, the argument that AI works on pre-incorporated algorithms and lacks prior subjective experience should not form the basis for discouraging its inclusion in boardrooms. Lastly, it is recommended that provision be made for the establishment of an expert committee for the timely evaluation and monitoring of the AI software. The State can also play a crucial role by formulating regulations for the use of AI and establishing a licensing authority at the national level, which could test AI models and grant them a licence to be appointed as independent directors. By incorporating these suggestions, along with regular updates and inspection of the AI’s software, the integration of AI as an ID can be smoothly facilitated in corporate boardrooms.


Way Forward & Conclusion


The idea of appointing AI in corporate boardrooms has gained traction all around the world, as it offers the potential to remodel how boardrooms work. As observed by Former Chief Justice of India Shri B.R. Gavai, “Technology must complement, not replace, the human mind in judicial decision-making.” In a similar manner, AI, acting as a complementary partner to humans, can be a viable tool in corporate boardrooms for effective decision-making. However, realising this potential fully requires addressing major hurdles such as AI hallucination, the black box phenomenon, and concerns about data confidentiality, which could be resolved with the aforementioned suggestions. Above all, the need of the hour is to revamp the current legal framework by taking into account the plausible issues that may arise from the introduction of AI as an ID, the most prominent being the attribution of liability.

While the integration of AI may come with its own set of challenges, corporate entities should not fall behind in exploiting its potential, especially when AI has become an omnipresent subject of the current technological discourse. Therefore, what is required is a balanced approach to make an efficacious and responsible use of this groundbreaking technology.

©2020 by The Competition and Commercial Law Review.
