
US vs EU. Fairness, accuracy, and more transparency. Why is the explainability of AI models so important, and why might it not always be achievable?


Transparency as a "hard" requirement for machine and deep learning models is still relatively rare; where it exists, it is usually derived from consumer protection regulations and sectoral provisions. Few authorities and institutions have (yet) decided to impose a hard requirement for clarity (and there is an ongoing debate about the differences between transparency, explainability, and interpretability). The US Congress and the European Union institutions are currently working on acts - the Algorithmic Accountability Act and the Artificial Intelligence Act, respectively - that will (or should) impose such requirements on particular AI applications.

One reason for this passive approach is the lack of a well-defined standard for explainability, together with concerns about the effect explainability may have on the accuracy and fairness of a model. Indeed, many researchers believe that more explainability may come at the cost of accuracy, while at the same time agreeing that some degree of explainability is necessary to achieve ethical and responsible AI. D. Martens indicates that "[t]here is a need for transparency in the data used, the logic of the prediction model, what we consider to be potential sensitive groups, the predictive performance of the prediction model (including misclassification costs and misclassification rate over the different sensitive groups), what is the most appropriate measure for fairness of the prediction model, and how the model is being applied".
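To make one element of Martens's list concrete - the misclassification rate over different sensitive groups - here is a minimal sketch of how such a per-group check could be computed. The data, group labels, and function name are illustrative, not taken from any particular framework or regulation.

```python
# Minimal sketch: misclassification rate per sensitive group (illustrative data).
import pandas as pd

def misclassification_by_group(y_true, y_pred, sensitive):
    """Return the misclassification rate for each sensitive group."""
    df = pd.DataFrame({"true": y_true, "pred": y_pred, "group": sensitive})
    df["error"] = (df["true"] != df["pred"]).astype(int)
    return df.groupby("group")["error"].mean()

# Toy labels, predictions, and group membership - for illustration only.
y_true    = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred    = [1, 0, 0, 1, 0, 1, 1, 0]
sensitive = ["A", "A", "A", "B", "B", "B", "B", "A"]

print(misclassification_by_group(y_true, y_pred, sensitive))
# A large gap between groups is a signal to look more closely at the model and its data.
```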

Achieving "full" transparency and explainability may not always be possible. One reason is that more advanced models are difficult to explain (the decision-making process, features applied, data used) and even though they are explainable such explanations may not be apparent to stakeholders. The second thing is that ethics (and more transparency and explainability is a part of the ethical approach) is always a compromise. You have to sacrifice "something" (e.g. less accuracy or less fairness) to get "something else" (more accuracy or a more fair model).  Therefore, it is essential to find the equilibrium that fits the organization and particular #ai applications. It will require a great effort and multistakeholder meetings and discussions to find the right "answer". Now, explainability and transparency is not always "binding" but this will change in the (near) future.

The US proposal includes the following requirement: for any automated decision system or augmented critical decision process, a covered entity is required to submit, to the extent possible, "documentation of whether and how the covered entity implements any transparency or explainability measures, including:

(i) which categories of third-party decision recipients receive a copy of or have access to the results of any decision or judgment that results from such system or process; and

(ii) any mechanism by which a consumer may contest, correct, or appeal a decision or opt out of such system or process, including the corresponding website for such mechanism, where applicable."

Consequently, any covered entity will be obliged to implement appropriate AI & Data Governance measures, including technical ones, to ensure a sufficient level of transparency and explainability. The European Commission has proposed a less detailed approach in Article 13.1 of the AI Act, under which "[h]igh-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable users to interpret the system's output and use it appropriately". This will require organizational and technical solutions to be put in place. In addition, an annex to the AI Act sets out requirements for the technical documentation that must accompany every high-risk #AI system. Therefore, the same result as under the US proposal will be achieved - more transparency for users.

Transparency and explainability aim to ensure that the user knows the grounds for a particular decision, which features had an impact, and what data has been used in applying the AI model. Such clarity is essential, especially for models that process personal data and may have adverse effects on humans. The degree of explainability will vary depending on the particular use case, the user's expectations, and legal requirements, if applicable. Importantly, explainability does not require revealing trade secrets or confidential information - the aim is to give adequate feedback to the relevant stakeholders, including data scientists.
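As a hedged illustration of giving such feedback on which features had an impact without revealing a model's internals, the sketch below uses permutation importance from scikit-learn on a toy model. The dataset and model are illustrative, and real systems may rely on other techniques (for example, SHAP values or counterfactual explanations).

```python
# Sketch: estimate which features drive a trained model's predictions using
# permutation importance - one way to give stakeholders feedback on feature
# impact without exposing the model's internals. All choices are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(
    zip(data.feature_names, result.importances_mean),
    key=lambda pair: pair[1],
    reverse=True,
)
for feature, importance in ranked[:5]:
    print(f"{feature}: {importance:.3f}")
```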

An organization that is obliged (or wishes, for ethical reasons) to provide more clarity about its AI systems should identify the models that may require more transparency and explainability, as well as the content (data) used for training those models. The company should also ensure that appropriate technical solutions are available to the developers and users interacting with a particular AI system. This can be challenging, since not many tools are available on the market, and a decision to sacrifice some accuracy in exchange for more clarity may be difficult for any manager. At the same time, more transparency and explainability can benefit the business: more reliable and precise information and feedback translate into customers' trust.

Becoming a more transparent and reliable organization is not a one-off exercise. It will require sustained effort and constant adaptation to change. It is not only about policies and procedures but also about education, promoting ethical values, and checking the data and features that a model is using. A gap analysis should be one of the organization's first steps, especially if explainability (and transparency) has not been a priority so far. Without a doubt, the new US and EU proposals will positively impact the development of ethical artificial intelligence applications. What will be challenging is finding an "individual" approach to the new requirements and ethical practices, one based on values that the organization has genuinely adopted.