Patrick Henz, Head Governance & Compliance US, Regional Compliance Officer Americas, Primetals Technologies
Intelligent algorithms disrupt all industries, and the compliance function is of course not excluded. According to “Inside E-Discovery and Beyond,” a survey by the international accounting network BDO, approximately 40 percent of in-house counsels currently use technology-assisted review (TAR). Such software reads and understands contracts to highlight important parts and propose changes (for example, to include adequate compliance and audit provisions), based on local law and company policies. Similar to today’s automated customer service, the software is the first level of review and control. Only if it identifies special risk factors does it send the document to the second level: the human contract manager.
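The two-level review described above can be sketched in a few lines. This is a minimal illustration only: the risk terms, required clauses, and function names are hypothetical, and real TAR tools use trained models over much richer features rather than keyword lists.

```python
# Hypothetical risk terms and required clauses; real TAR systems use
# machine-learned models, not simple keyword matching.
RISK_TERMS = {"indemnification", "penalty", "exclusivity"}
REQUIRED_CLAUSES = {"audit", "compliance"}

def first_level_review(contract_text: str) -> dict:
    """First-level automated review: approve routine contracts,
    escalate anything with risk factors to the human contract manager."""
    text = contract_text.lower()
    flagged = sorted(t for t in RISK_TERMS if t in text)
    missing = sorted(c for c in REQUIRED_CLAUSES if c not in text)
    return {
        "escalate_to_human": bool(flagged or missing),
        "flagged_terms": flagged,
        "missing_clauses": missing,
    }

# A contract with a penalty term and no audit provision is escalated.
result = first_level_review(
    "Supplier agrees to compliance with local law. Penalty applies on delay."
)
```

Only documents where `escalate_to_human` is true would reach the second, human level; everything else is handled automatically.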
Furthermore, artificial intelligence (AI) powers the continuous monitoring of vendors and other business partners. By connecting the internal vendor master database with external sources, AI can detect red flags and minimize potential legal or reputational risks, especially if it also connects to the payment process and detects suspicious patterns, which are then automatically routed to compliance for approval.
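As a rough sketch of such monitoring, the check below joins vendor master data with a payment record and returns red flags. The watchlist, thresholds, and field names are invented for illustration; a production system would draw on sanctions lists, beneficial-ownership data, and statistical anomaly detection.

```python
# Hypothetical watchlist and threshold, for illustration only.
WATCHLISTED_COUNTRIES = {"XX"}
LARGE_AMOUNT_THRESHOLD = 10_000  # large round sums are a classic pattern

def payment_red_flags(vendor: dict, payment: dict) -> list:
    """Cross-check a payment against vendor master data and flag
    patterns that should be held for compliance approval."""
    flags = []
    if vendor.get("country") in WATCHLISTED_COUNTRIES:
        flags.append("vendor in watchlisted country")
    if vendor.get("country") != payment.get("destination_country"):
        flags.append("payment routed to third country")
    amount = payment.get("amount", 0)
    if amount >= LARGE_AMOUNT_THRESHOLD and amount % 1_000 == 0:
        flags.append("large round-sum payment")
    return flags

# A German vendor paid a large round sum into a Swiss account raises two flags.
vendor = {"name": "Acme Ltd", "country": "DE"}
payment = {"amount": 50_000, "destination_country": "CH"}
flags = payment_red_flags(vendor, payment)
```

Any non-empty result would hold the payment and send it to compliance for approval, as the article describes.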
The U.S. Securities and Exchange Commission (SEC) expects a company to take adequate measures to minimize compliance risks. Depending on the particular organization, this may include the use of AI. On the other hand, the SEC has also decided that a “check-the-box” compliance system is not sufficient to protect a company against corruption. The core compliance function not only informs employees about guidelines and executes efficient controls; it also implements an adequate, positive, and sustainable corporate culture. After all, compliance is about humans.
As long as organizations hire human employees, they require a human ethics and compliance officer. The value of this function lies not in the performance of standard processes, but in systematically creating situations of disruption, such as workshops to discuss cognitive biases. Only a human can facilitate such discussions with insight gained from personal observation and experience.
Furthermore, the human ethics and compliance officer has to include AI within his or her responsibilities. Software and humans both have to act based on laws and guidelines. To support responsible behavior by software, the university and business workgroup “Fairness, Accountability, and Transparency in Machine Learning” (FATML) identified five areas of responsible decision-making by AI software: responsibility, explainability, accuracy, auditability, and fairness. An open corporate culture, where employees feel free to discuss their differing opinions, is an efficient protection against potential future shocks. The ethics and compliance function can evolve into something new, first to GRC: a holistic approach that handles governance, risk, and compliance together.
This philosophy is based on W. Edwards Deming’s “system of profound knowledge” and its four pillars: appreciation for a system, knowledge about variation, theory of knowledge, and psychology.
Here, Deming concluded that “a bad system will beat a good person every time.” In Industry 4.0, the IT environment adapts to the employee. As the human being is the key element of the organizational structure, the same must apply to processes, regardless of whether they are manual or automated.
Evolving to GRC, the compliance officer has to adapt. Besides expertise in anti-corruption and psychology, an understanding of systems is now also relevant. This is especially true as processes and humans influence each other. Processes have to mitigate an identified business risk, and that risk is often balanced against how strict and bureaucratic the process is. If a process is overly bureaucratic while the risk is minimal, it will trigger defense and avoidance mechanisms; if it is not strict enough, it may not be sufficient to address the risk.
“An open corporate culture, where employees feel free to discuss their different opinions, is an efficient protection against potential future shocks”
GRC will not be the final step of this evolution. Rather, it prepares organizations for the implementation of AI, which can include automating its own processes. Resources can then focus on additional tasks that require human empathy, such as being the corporate futurist and storyteller who anticipates new developments. This prepares the organization for the things to come, as new technologies are not only business challenges but may also raise new legal questions and ethical dilemmas: data protection, cognitive hacking, drones, machine learning (ML), chatbots, neuroprotection, cobots, and autonomous vehicles, for example. Such a new job function needs not only technical understanding, but also a creative and philosophical mind.