
Risk responsibly.


Any activity that touches a person and their environment can generate risks. These risks can be more or less visible and affect the people concerned to varying degrees; understood as the probability that certain events materialize, they can have negative consequences for a wide range of "actors", not only the risk subject. Risks can be internal or external (also in terms of their source) and may or may not be easy to manage and mitigate. Risk management is particularly important in regulated sectors and wherever personal data is processed, especially in sensitive areas such as race, gender, and sexuality, where the controller (and processor) has legal and/or regulatory obligations. In practice, however, proper risk management is relevant to any entity that interacts with the outside world.

The digitization of relationships, including legal ones, brings more and more challenges that we previously ignored, or at least downplayed as unlikely to occur. Increasingly, we live in a world where individuals and corporations make extensive use of more or less advanced solutions built on automated learning techniques, such as machine learning or deep learning, which we usually label "artificial intelligence." In reality, this "artificial intelligence" is nothing more than a technique (or technology) for processing data, with a human being behind it whose responsibility is to manage and oversee it and to ensure that everything works smoothly and without harming others.

Artificial intelligence systems - within the meaning of the Artificial Intelligence Act - since we often cede a fair amount of "autonomy" to them, and since they largely reflect our own mistakes (such systems process historical, biased data), can be the source of many risks that are not "known" to classical IT, or at least are far less common there. The ISO/IEC 38507:2022 standard offers a sample catalog of risks associated with the application of artificial intelligence systems, which may include (a minimal sketch of how such a catalog can be kept as a risk register follows the list):

1. Lack of explainability of machine learning and similar models.

2. Lack of expertise in the "AI" field.

3. Problems related to access to data (of good quality, in sufficient quantity).

4. Challenges associated with "multi-threaded" access to various services connected with such systems, including outsourcing.

5. Unclear technical specifications or instructions.

6. Bias, including algorithmic bias, that may turn into discrimination in many fields.

7. Threats from the area of cybersecurity, such as data poisoning.
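To make this catalog a bit more tangible, below is a minimal sketch in Python of how such risks could be recorded in a simple register and prioritized with a likelihood-times-impact score. The class, fields, scores, and mitigations are purely illustrative assumptions, not a format prescribed by the standard or the draft regulation.

```python
from dataclasses import dataclass, field
from enum import Enum


class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class AIRisk:
    """A single entry in an illustrative AI risk register."""
    name: str
    description: str
    likelihood: Severity
    impact: Severity
    mitigations: list[str] = field(default_factory=list)

    @property
    def priority(self) -> int:
        # Simple likelihood x impact scoring, as in many risk matrices.
        return self.likelihood.value * self.impact.value


# Hypothetical entries based on the catalog above.
register = [
    AIRisk(
        name="Lack of explainability",
        description="Model decisions cannot be traced back to understandable factors.",
        likelihood=Severity.HIGH,
        impact=Severity.MEDIUM,
        mitigations=["Prefer interpretable models", "Maintain model documentation"],
    ),
    AIRisk(
        name="Data poisoning",
        description="Training data is manipulated by an attacker.",
        likelihood=Severity.LOW,
        impact=Severity.HIGH,
        mitigations=["Validate data provenance", "Monitor input distributions"],
    ),
]

# Review the register ordered by priority.
for risk in sorted(register, key=lambda r: r.priority, reverse=True):
    print(f"{risk.name}: priority {risk.priority}")
```

Even such a basic structure makes it easier to check whether the most severe risks actually have mitigations assigned and to revisit the scores regularly.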


This is not a closed list, of course, and we could also supplement it with issues from the broader area of artificial intelligence ethics (which should be part of wider data science processes) or data management, but that is not the intention of today's column. What matters is that the decision to use artificial intelligence systems that affect humans and their environment in any way should be preceded by proper identification of risks, as well as regular monitoring of their possible materialization. The proposal for the Artificial Intelligence Act includes a list of prohibited practices, and therefore each and every external application of such systems will require a deep dive into its inherent risks.

The materialization of these risks can have far-reaching (and adverse) consequences both for the operator or provider of such solutions and for the "recipients" of the final product. The output of a system may be, for example, a recommendation, prediction, or decision, sometimes of a crucial or significant nature (e.g., regarding treatment), such as a health diagnosis or a recommendation to take a particular decision.

Thus, the lack of an appropriate framework for managing these risks can be a source of many problems, and unfortunately, we often ignore the risks that seem unlikely to us. It is a pity that we treat risk management as nothing more than an obligation or a legal and regulatory requirement (if any), rather than as responsibility for other human beings. A human-centered approach should be the absolute principle in times of large-scale data processing and automation.

At the same time, however, let us have no illusions that we will always be able to protect ourselves from the materialization of risks. They are an inherent part of an uncertain world, and even the best-designed, implemented, and operated risk management system may prove ineffective against "something" or "someone" - all the more so since it is often a human who is the source of such problems, consciously or unconsciously. Such solutions, which we can describe as organizational, technical, and "human", serve only to minimize the likelihood of risks occurring and, if they do occur, to enable their rapid elimination and the restoration of normal operation. And to draw appropriate conclusions and implement corrective actions, of course.


The risk management system for the "AI" area will always be individualized. Just as organizations have different business models, these systems can be built in different ways and thus create different opportunities, but also generate different risks. A risk-based approach and the principle of proportionality are not only recommended but necessary in a dynamic real world. There is no point in wasting resources where they are not needed. Therefore, always apply solutions that are adequate to the individual circumstances.

Undoubtedly, however, such a risk management system for "AI" must be based on certain principles that may not be entirely intuitive for the "classical" IT area. The use of machine learning or deep learning algorithms and models often comes with the need to use diverse data (including personal data), to ensure diversity and non-discrimination, and to constantly monitor changes in the system, from a legal and regulatory perspective but also from a "customer" perspective. This is especially true if we are creating solutions intended to serve human beings and provide them with the best possible user experience.
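As an illustration of what "ensuring non-discrimination" can mean in day-to-day monitoring, here is a minimal sketch (plain Python, with hypothetical group labels and decisions) that compares the selection rates of a model's positive decisions across two groups. The metric and the 0.8 threshold loosely follow the informal "four-fifths" rule of thumb; they are assumptions for illustration, not a legal test of discrimination.

```python
def selection_rate(decisions: list[int]) -> float:
    """Share of positive (1) decisions in a group."""
    return sum(decisions) / len(decisions) if decisions else 0.0


def disparate_impact_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    high, low = max(rate_a, rate_b), min(rate_a, rate_b)
    return low / high if high > 0 else 1.0


# Hypothetical decisions (1 = approved, 0 = rejected) for two groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]
group_b = [1, 0, 0, 1, 0, 0, 1, 0]

ratio = disparate_impact_ratio(group_a, group_b)
# The 0.8 threshold mirrors the informal "four-fifths" rule; the right
# metric and threshold depend on context and applicable law.
if ratio < 0.8:
    print(f"Potential bias signal: disparate impact ratio = {ratio:.2f}")
else:
    print(f"No alert: disparate impact ratio = {ratio:.2f}")
```

The value of such a check lies not in the specific formula but in running it regularly, so that drift toward discriminatory outcomes is caught before it harms anyone.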

The risk management system itself can be defined, following Article 9(2) of the draft Artificial Intelligence Regulation, as a continuous, iterative process running throughout the life cycle of an artificial intelligence system [a high-risk one, although the same logic applies to any system], requiring regular, systematic updates. It should include at least the following stages (a rough sketch of the loop follows the list):

1. Identification and analysis of the risks associated with the use of "AI".

2. Assessment and evaluation of these risks.

3. Assessment of the risks that may arise during the use of the system.

4. The adoption of appropriate solutions or tools to manage the risks, particularly in the event of their materialization.

5. [In addition,] the development of "defense" mechanisms that ensure the restoration of the system's operation after the materialization of risks, as well as the introduction of corrective actions.
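The sketch below (Python, with hypothetical helper functions standing in for real organizational activities) illustrates how these stages can be wired into an iterative loop repeated over the system's life cycle; it is one possible reading of Article 9(2), not an implementation of it.

```python
from dataclasses import dataclass


@dataclass
class Risk:
    name: str
    score: float           # combined likelihood/impact estimate, 0..1
    mitigated: bool = False


def identify_risks() -> list[Risk]:
    # In practice: workshops, incident data, threat modelling, audits.
    return [Risk("algorithmic bias", 0.7), Risk("data poisoning", 0.4)]


def assess(risk: Risk) -> float:
    # In practice: structured evaluation against predefined criteria.
    return risk.score


def mitigate(risk: Risk) -> None:
    # In practice: organizational, technical, and "human" measures.
    risk.mitigated = True


def risk_management_cycle(iterations: int, acceptance_threshold: float = 0.5) -> None:
    """A simplified pass of the stages, repeated throughout the life cycle."""
    for i in range(iterations):
        risks = identify_risks()              # stage 1: identification and analysis
        for risk in risks:
            score = assess(risk)              # stages 2-3: assessment and evaluation
            if score >= acceptance_threshold:
                mitigate(risk)                # stage 4: manage risks above acceptance
        mitigated = sum(r.mitigated for r in risks)
        # Stage 5: corrective actions and lessons learned feed the next iteration.
        print(f"Cycle {i + 1}: {mitigated} of {len(risks)} identified risk(s) mitigated")


risk_management_cycle(iterations=2)
```

The point is less the code itself and more the loop: none of the stages is a one-off exercise, and each cycle should take into account what the previous one revealed.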

The task of developing such a system requires the involvement not only of those in the risk management unit [or a designated person] but of everyone who knows and understands these systems and uses them (and, of course, the risks they carry). "Artificial intelligence" systems can be a source of human-related risks, which requires a move away from siloed management toward cooperation. Involving people from "outside" IT and data science therefore seems crucial here. Interdisciplinary teams must be formed and put into action.

I will end here today, but we will of course return to the topic, because it is extremely important from the perspective of both the private and public sectors - regardless of the requirements that the proposed regulation on artificial intelligence may, perhaps, eventually impose. It is our responsibility to ensure that "AI" is used for good and not for harm.