LOCHANA HEGDE
CHRIST UNIVERSITY, LAVASSA

How does Artificial Intelligence influence Corporate Governance?
Introduction
When applied to corporate governance, AI brings benefits ranging from better-informed decision-making to the optimization of many areas of a company’s operations. However, this technological advance introduces legal considerations that must be examined closely. The first major legal issue is algorithmic bias. AI algorithms are trained on data, and if that data reflects existing societal biases, the AI system will reproduce those biases in its decision-making and generate biased results. This raises questions of equity, equal rights, and compliance with non-discrimination legislation. Another critical legal issue is data privacy and security: as organizations accumulate ever larger volumes of data, the risk of unauthorized access grows. AI systems consume large amounts of data, much of it personal or confidential corporate information.
The protection of this information must therefore comply with data protection laws such as the General Data Protection Regulation (GDPR). Thirdly, the accountability and transparency of AI decision-making pose a legal issue. Attributing responsibility for decisions made by a failing AI system is usually challenging. To mitigate legal risk and build trust, accountability for the measures undertaken must be clearly assigned and AI decision-making must be transparent. Last but not least, intellectual property rights raise issues around AI algorithms and the data used to train them. Protecting ownership of these assets while avoiding practices that undermine AI’s value to corporate governance is a further legal consideration. The goal, then, is to harness the opportunities of artificial intelligence while minimizing its legal risks and applying it responsibly and fairly.
Fiduciary Duties –

Board decision-making is one application of artificial intelligence that raises questions in corporate governance. Modern AI technologies can collect enormous quantities of data and analyze them to reach conclusions that may inform board decisions. But this inevitably raises the question of where human judgment is exercised and whether the AI system introduces biases or errors of its own. Directors owe a legal duty to the company and its shareholders, which requires them to act reasonably in carrying out their functions and to follow sound decision-making processes. When directors leverage AI in board decisions, care must be taken to recognize when AI is not capable of making a decision and when directors are effectively outsourcing their responsibilities to machines. If decisions driven by algorithms produce negative consequences for the corporation’s operations, directors can be held liable for a breach of the duties imposed on them by corporate law. This raises the question of whether directors are legally responsible for AI-assisted decisions even when they exercised the due care owed to the company in overseeing the AI. AI algorithms may involve opaque, intertwined processes that prevent assessment of an outcome’s validity. As a result, directors may find it difficult to evaluate the credibility of AI-recommended solutions or to explain their actions to shareholders and other stakeholders.
What about Compliance?

AI can be a powerful asset for managing and preventing risks, both internal and external, including fraud, cyber threats, and reputational damage. However, proper planning, deployment, and oversight of these systems are essential so that they do not cause harm. Many AI systems depend on big data, which often contains individuals’ personal information. When using such technology in corporate governance, legal regulations such as the General Data Protection Regulation (GDPR) must be observed to protect data subjects’ rights. AI algorithms can also be biased, reflecting the biased data on which they were trained. This can produce prejudicial outcomes, including discrimination in employment or credit decisions. It is incumbent upon companies to examine where their algorithms operate unfairly and to design AI systems that maintain fairness. AI systems may also be hacked, putting critical data or business processes at an attacker’s mercy. Because of the sensitivity of the information AI uses, businesses need to establish strong defensive measures to safeguard it.
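One way companies can screen AI-driven employment or credit decisions for the kind of discrimination described above is the "four-fifths" (80%) rule, a screen long used by regulators such as the US EEOC for adverse impact. The sketch below is illustrative only — the numbers and function names are assumptions, not data from any real system:

```python
# Hypothetical sketch: the four-fifths (80%) rule as a screen for disparate
# impact in AI-driven selection decisions. All figures are illustrative.

def selection_rate(selected, total):
    """Fraction of applicants in a group who were selected."""
    return selected / total

def disparate_impact_ratio(rate_protected, rate_reference):
    """Ratio of the protected group's selection rate to the reference
    group's. A ratio below 0.8 is commonly treated as evidence of
    adverse impact and a trigger for closer review."""
    return rate_protected / rate_reference

# Illustrative outcomes of an AI hiring screen
rate_a = selection_rate(selected=30, total=100)  # group A: 30% selected
rate_b = selection_rate(selected=60, total=100)  # group B: 60% selected

ratio = disparate_impact_ratio(rate_a, rate_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50, below the 0.8 threshold
print("Potential adverse impact" if ratio < 0.8 else "Passes four-fifths screen")
```

A check like this is only a first-pass screen, not a legal conclusion; a ratio below 0.8 signals that the company should audit the model and its training data more closely.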
What about other Ethical Considerations?

Besides the legal concerns, there are ethical concerns associated with the use of AI in corporate governance. These cover the principles governing how AI is used and incorporated into the company, and whether that use is morally sound. Company managers must disclose the use of artificial intelligence and take responsibility for the actions carried out by AI tools. This matters because there is often little information about how AI systems function or how they are used in corporate decision-making. AI can be of great use in practice, but people should always oversee it. This includes guaranteeing that humans remain ultimately accountable for decisions made by AI and are in a position to override or reverse AI’s recommendations when they are undesirable. In this way, companies can both ensure that their uses of AI are socially beneficial and prevent uses that harm society — in part by preventing the misuse of AI for discrimination or harm, and by ensuring that AI is used for good, for equality, and for the environment.
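The human-oversight principle above — that a person remains accountable and can override the AI — can be made concrete in system design. The following is a minimal sketch under assumed names (`AIRecommendation`, `review`, etc. are hypothetical, not a real governance API): no AI recommendation becomes final until a named human reviewer approves or overrides it, and the reason is logged for transparency.

```python
# Hypothetical sketch of human-in-the-loop oversight: an AI recommendation
# is never final until a named human reviewer approves or overrides it.
# Class and field names are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AIRecommendation:
    subject: str    # e.g. "credit application #123"
    decision: str   # the AI's proposed decision
    rationale: str  # explanation logged for transparency

@dataclass
class ReviewedDecision:
    recommendation: AIRecommendation
    reviewer: str                      # the accountable human
    approved: bool
    final_decision: str
    override_reason: Optional[str] = None

def review(rec: AIRecommendation, reviewer: str,
           override: Optional[str] = None,
           reason: Optional[str] = None) -> ReviewedDecision:
    """The human either accepts the AI's decision or substitutes their own,
    recording who decided and why -- preserving accountability."""
    if override is not None:
        return ReviewedDecision(rec, reviewer, approved=False,
                                final_decision=override,
                                override_reason=reason)
    return ReviewedDecision(rec, reviewer, approved=True,
                            final_decision=rec.decision)

rec = AIRecommendation("credit application #123", "deny",
                       "low score on model feature X")
out = review(rec, reviewer="J. Doe", override="approve",
             reason="model penalized thin credit file; manual check passed")
print(out.final_decision, "by", out.reviewer)
```

The design choice here is that accountability is structural: the record always names a human decision-maker, so responsibility for the outcome cannot silently default to the algorithm.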
Conclusion –
AI has recently emerged as a development that is already reshaping the general landscape of corporate governance, bringing both potential opportunities and potential risks. As this paper argues, it is advantageous for companies to adopt AI, since it is helpful for business and many of its risks can be minimized. This includes using AI responsibly and transparently, keeping people able to oversee the system’s operations, and complying with all legal requirements. This paper has shown that as AI advances, there is a need for organizations to future-proof their AI governance.